**arXiv:** 2304.12869
**Authors:** Gabriel Navarro, Lucas Ruhstorfer, Pham Huu Tiep, Carolina Vallejo
**Published:** 2023-04-25T14:40:17Z
**Link:** http://arxiv.org/abs/2304.12869v1

# The field of values of the height zero characters

###### Abstract.

We determine what are the fields of values of the irreducible \(p\)-height zero characters of all finite groups for \(p=2\); we conjecture what they should be for odd primes, and reduce this statement to a problem on blocks of quasi-simple groups.

2010 Mathematics Subject Classification: Primary 20D20; Secondary 20C15

The research of the first author is supported by Ministerio de Ciencia e Innovacion PID2019-103854GB-I00. The third author gratefully acknowledges the support of the NSF (grants DMS-1840702 and DMS-2200850), the Simons Foundation, and the Joshua Barlaz Chair in Mathematics. The fourth author acknowledges support from the Rita Levi Montalcini Program (bando 2019) and from the INdAM-GNSAGA. Part of this work was done when the third author visited Princeton University and MIT. It is a pleasure to thank Princeton University and MIT for their generous hospitality and stimulating environment.

irreducible character of odd degree, but it is easy to find many 2-height zero characters having this field of values. (For instance, in a double cover of \(\mathsf{S}_{5}\).) However, \(\mathbb{Q}(\sqrt{2})\) or \(\mathbb{Q}(\sqrt{-2})\), say, do not appear to be the field of values of any 2-height zero character.

When studying fields of values of characters, character conductors are a fundamental invariant. If \(\chi\) is a character of a group \(G\) and \(\mathbb{Q}(\chi)\) is the smallest field extension of \(\mathbb{Q}\) containing the values of \(\chi\), then we define \(c(\chi)\), the conductor of \(\chi\), to be the smallest integer \(n\) such that \(\mathbb{Q}(\chi)\) is contained in the \(n\)-th cyclotomic field \(\mathbb{Q}_{n}=\mathbb{Q}(e^{2\pi i/n})\). If \(F\) is any subfield of \(\mathbb{C}\), then we write \(F(\chi)=\langle F,\mathbb{Q}(\chi)\rangle\).

The following is the main result of this paper. Its proof uses the Classification of Finite Simple Groups, together with the work of [BoDR], and its refinement by [KL].

**Theorem A**.: _Let \(G\) be a finite group, and let \(\chi\in\operatorname{Irr}(G)\) be of 2-height zero. Write \(c(\chi)=2^{a}m\), where \(m\) is odd and \(a\geq 0\). Then \(\mathbb{Q}_{2^{a}}\subseteq\mathbb{Q}_{m}(\chi)\)._

In fact, the fields of values of the 2-height zero irreducible characters can be characterized in the following way. Let \(\mathcal{F}_{2}\) be the set of Abelian number fields \(F\) such that \(\mathbb{Q}_{n}=\langle\mathbb{Q}_{m},F\rangle\), where \(n\) is the conductor of the field \(F\) and \(n=2^{a}m\) for some odd number \(m\).

**Theorem B**.: _The set consisting of the fields of values of the 2-height zero characters of finite groups is exactly \(\mathcal{F}_{2}\)._

As a consequence of Theorem B we obtain that the following are the quadratic number fields that appear as fields of values of 2-height zero characters.

**Corollary C**.: _Let \(F\) be a quadratic number field. Then \(F=\mathbb{Q}(\chi)\) for some 2-height zero character \(\chi\) if and only if \(F=\mathbb{Q}(\sqrt{d})\), where \(d\neq 1\) is an odd square-free integer._

In this context, it is natural to wonder what happens for odd primes. In [NT, Conjecture C], the fields of values of the characters of degree not divisible by \(p\) are conjectured to be precisely the Abelian number fields \(F\) such that \([\mathbb{Q}_{p^{a}}:\mathbb{Q}_{p^{a}}\cap F]\) is not divisible by \(p\), where \(p^{a}\) is the \(p\)-part of the conductor of \(F\). This conjecture does not seem to follow from the McKay-Navarro conjecture [Nav1, Conjecture A].

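To illustrate the definition of \(\mathcal{F}_{2}\) and Theorem B with the fields mentioned above: \(\mathbb{Q}(i)\) has conductor \(4=2^{2}\cdot 1\) and \(\langle\mathbb{Q}_{1},\mathbb{Q}(i)\rangle=\mathbb{Q}(i)=\mathbb{Q}_{4}\), so \(\mathbb{Q}(i)\in\mathcal{F}_{2}\); on the other hand, \(\mathbb{Q}(\sqrt{2})\) has conductor \(8\) and \(\langle\mathbb{Q}_{1},\mathbb{Q}(\sqrt{2})\rangle=\mathbb{Q}(\sqrt{2})\neq\mathbb{Q}_{8}\), so \(\mathbb{Q}(\sqrt{2})\notin\mathcal{F}_{2}\), and the same computation applies to \(\mathbb{Q}(\sqrt{-2})\).
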
As happens in the case of characters of degree not divisible by \(p\), we can only conjecture what the fields of values of the \(p\)-height zero characters should be for odd primes. This is Conjecture D below. The novelty is that we can show that the statement of Conjecture D follows from the statement of the Alperin-McKay-Navarro conjecture. For any prime \(p\), let \(\mathcal{F}_{p}\) be the set of Abelian number fields \(F\) with conductor \(n=p^{a}m\), where \(p\) does not divide \(m\), such that the degree \(|\mathbb{Q}_{n}:\langle\mathbb{Q}_{m},F\rangle|\) is not divisible by \(p\). Notice that the fields \(F\) with \(p\)-part of the conductor \(p^{a}\) such that \([\mathbb{Q}_{p^{a}}:\mathbb{Q}_{p^{a}}\cap F]\) is not divisible by \(p\) are a subclass contained in \(\mathcal{F}_{p}\).

**Conjecture D**.: _The set of fields of values of the \(p\)-height zero characters of finite groups is exactly \(\mathcal{F}_{p}\)._

We show that any field in \(\mathcal{F}_{p}\) is the field of values of a \(p\)-height zero character in Theorem 5.1. This settles one of the containments in Conjecture D (and also reduces the proof of Theorem B to proving Theorem A). We reduce the verification of the other containment to a problem on quasi-simple groups in Theorem 6.3. We show that the statement of Conjecture D follows from the statement of the Alperin-McKay-Navarro conjecture [11, Conjecture B] in Theorem 5.3. A fundamental part of our work is devoted to showing that Theorem A is true for quasi-simple groups. We believe that this will be useful in the final verification of the Alperin-McKay-Navarro conjecture.

## 2. Conductors

Let us start by recording some elementary results on characters and conductors that we will frequently use. Recall that if \(\psi\) is a character of a finite group, then \(\psi(g)\) is a sum of \(o(g)\)-th roots of unity for \(g\in G\), and therefore the field of values \(\mathbb{Q}(\psi)\), which is the smallest field containing \(\psi(g)\) for all \(g\in G\), is contained in \(\mathbb{Q}_{|G|}=\mathbb{Q}(e^{2\pi i/|G|})\). The conductor \(c(\psi)\) is the smallest \(n\) such that \(\mathbb{Q}(\psi)\subseteq\mathbb{Q}_{n}=\mathbb{Q}(e^{2\pi i/n})\). Therefore \(c(\psi)\) divides \(|G|\). Moreover, \(\mathbb{Q}(\psi)\subseteq\mathbb{Q}_{m}\) if and only if \(c(\psi)\) divides \(m\). If \(F\) is an Abelian number field, that is, \(F\subseteq\mathbb{C}\) and \(F/\mathbb{Q}\) is a Galois extension with \(\operatorname{Gal}(F/\mathbb{Q})\) abelian, then the Kronecker-Weber theorem implies that \(F\subseteq\mathbb{Q}_{n}\) for some \(n\), and \(c(F)\), the conductor of \(F\), is the smallest such \(n\). By elementary Galois theory, recall that \(c(\langle F_{1},F_{2}\rangle)\) is the least common multiple of \(c(F_{1})\) and \(c(F_{2})\).

In this paper, if \(p\) is a prime and \(n\geq 1\) is an integer, then \(n_{p}\) is the largest power of \(p\) dividing \(n\), and \(n_{p^{\prime}}=n/n_{p}\). We call \(n_{p}\) the \(p\)-part of \(n\) and \(n_{p^{\prime}}\) the \(p^{\prime}\)-part of \(n\). For a fixed prime \(p\), we are interested in the \(p\)-parts of conductors. If \(\psi\) is a character and \(c(\psi)_{p}=1\), then \(\psi\) is called \(p\)-rational. If \(p=2\), \(\psi\) is either \(2\)-rational or \(c(\psi)_{2}\geq 4\). Notice that if \(\lambda\) is a linear character, then \(c(\lambda)=o(\lambda)\) unless \(o(\lambda)_{2}=2\), in which case \(c(\lambda)=o(\lambda)/2\).

**Lemma 2.1**.: _Let \(p\) be a prime.
Suppose that \(\chi\in\operatorname{Irr}(G)\), and write \(c(\chi)=p^{a}m\), where \(m\) is not divisible by \(p\). If \(n\) is a natural number not divisible by \(p\) and \(f\geq 0\) is an integer with \(\mathbb{Q}_{p^{f}}\subseteq\mathbb{Q}_{pn}(\chi)\), then \(\mathbb{Q}_{p^{f}}\subseteq\mathbb{Q}_{pm}(\chi)\). Moreover \(f\leq a\) unless possibly when \(f=1\) and \(a=0\)._

Proof.: By replacing \(n\) by \(mn\), we may assume that \(m\) divides \(n\). If \(a=0\) and \(f=1\) then \(\mathbb{Q}_{p}\subseteq\mathbb{Q}_{pm}(\chi)\). Hence we may assume that \(a\geq 1\). Then \(a\geq 2\) if \(p=2\). In either case \(\mathbb{Q}_{p^{f}}\subseteq\mathbb{Q}_{pn}(\chi)\subseteq\mathbb{Q}_{p^{a}n}(\chi)=\mathbb{Q}_{p^{a}n}\) because \(m\) divides \(n\), so \(f\leq a\). If \(p=2\), we may also assume that \(f\geq 2\), because otherwise the result is trivial. Write \(F=\mathbb{Q}_{n}\), \(K=\mathbb{Q}_{m}\), \(L=\mathbb{Q}_{p^{a}m}\) and \(E=\langle F,L\rangle=\mathbb{Q}_{p^{a}n}\). We have that \(F\cap L=K\). Let \(J=\mathbb{Q}_{pm}(\chi)\), so that \(K\subseteq J\subseteq L\). Let \(M=\mathbb{Q}_{pn}(\chi)=\langle F,J\rangle\). Since \(\mathbb{Q}_{p^{f}}\subseteq M\subseteq\mathbb{Q}_{p^{a}n}\), we have that \(f\leq a\). Now, \(\mathbb{Q}_{p^{f}}\subseteq M\cap L=J\), by Lemma 2.6(i) of [11], for instance.

**Lemma 2.2**.: _Let \(p\) be a prime. Suppose that \(\chi\) and \(\psi\) are characters of groups \(G\) and \(H\). Suppose that \(\mathbb{Q}_{pn}(\chi)=\mathbb{Q}_{pn}(\psi)\) for some \(n\) not divisible by \(p\)._

(i) _If_ \(n\) _divides_ \(m\)_, then_ \(\mathbb{Q}_{pm}(\chi)=\mathbb{Q}_{pm}(\psi)\)_._

(ii) _If_ \(p=2\)_, or_ \(p\) _is odd and_ \(c(\chi)_{p},c(\psi)_{p}\geq p\)_, then_ \(c(\chi)_{p}=c(\psi)_{p}\)_._

Proof.: To prove part (i) just notice that

\[\mathbb{Q}_{pm}(\chi)=\mathbb{Q}(\zeta_{p},\zeta_{m},\chi)=\mathbb{Q}(\zeta_{p},\zeta_{n},\zeta_{m},\chi)=\mathbb{Q}_{np}(\chi)(\zeta_{m})=\mathbb{Q}_{np}(\psi)(\zeta_{m})=\mathbb{Q}_{pm}(\psi).\]

To prove part (ii) notice that \(\mathbb{Q}(\psi)\subseteq\mathbb{Q}_{pn}(\chi)\subseteq\mathbb{Q}_{c(\chi)_{p}m}\) with \(m=nc(\chi)_{p^{\prime}}\). In particular \(c(\psi)_{p}\) divides \(c(\chi)_{p}\). By reversing the roles played by \(\chi\) and \(\psi\) we obtain the result.

## 3. Fields and Height Zero Characters

Our notation for blocks follows [11]. We will frequently use the following facts on height zero characters.

**Theorem 3.1**.: _Let \(B\) be a \(p\)-block of a finite group \(G\), and let \(\chi\in\operatorname{Irr}(B)\) with height zero._

(i) _If_ \(\psi^{G}=\chi\)_, where_ \(\psi\in\operatorname{Irr}(H)\) _of some subgroup_ \(H\) _of_ \(G\)_, then_ \(\psi\) _has height zero in its_ \(p\)_-block, and any defect group of the block of_ \(\psi\) _is a defect group of_ \(B\)_._

(ii) _If_ \(N\triangleleft G\) _and_ \(\theta\in\operatorname{Irr}(N)\) _is under_ \(\chi\)_, then_ \(\theta\) _has height zero._

Proof.: Part (i) is Proposition 2.5(e) of [20]. Part (ii) is due to M. Murai, and is Proposition 2.5(a) of [20].

**Theorem 3.2**.: _Let \(p\) be a prime, and suppose that \(\chi\in\operatorname{Irr}(G)\) has height zero in its \(p\)-block. Let \(N\triangleleft G\), let \(\theta\in\operatorname{Irr}(N)\) be under \(\chi\), and let \(\psi\in\operatorname{Irr}(T|\theta)\) be the Clifford correspondent of \(\chi\) over \(\theta\). Then \(\psi\) and \(\theta\) have height zero. Also, \(\mathbb{Q}_{pn}(\chi)=\mathbb{Q}_{pn}(\psi)\), where \(n=|G|_{p^{\prime}}\).
Therefore, if \(p=2\) or \(p\) is odd and \(c(\chi)_{p}\geq p\), then \(c(\chi)_{p}=c(\psi)_{p}\)._

Proof.: We have that \(\psi\) and \(\theta\) have height zero by Theorem 3.1. We argue by induction on \(|G:N|\). Since \(\psi^{G}=\chi\), we have that \(\mathbb{Q}(\chi)\subseteq\mathbb{Q}(\psi)\). Let \(T^{*}\) be the semi-inertia group of \(\theta\) in \(G\) consisting of the elements \(g\in G\) for which there is some \(\sigma\in\operatorname{Gal}(\mathbb{Q}(\theta)/\mathbb{Q})\) such that \(\theta^{g}=\theta^{\sigma}\), as in Problem 3.9 of [11]. Let \(\eta=\psi^{T^{*}}\). Since \(\eta^{G}=\chi\), we have that \(\eta\) has height zero. Also, \(\mathbb{Q}(\chi)=\mathbb{Q}(\eta)\), \(T\triangleleft T^{*}\) and \(T^{*}/T\) is abelian. If \(T^{*}<G\), by induction, we have that \(\mathbb{Q}_{p|T^{*}|_{p^{\prime}}}(\eta)=\mathbb{Q}_{p|T^{*}|_{p^{\prime}}}(\psi)\). By Lemma 2.2(i), we have that \(\mathbb{Q}_{pn}(\psi)=\mathbb{Q}_{pn}(\eta)=\mathbb{Q}_{pn}(\chi)\), and we are done. Thus we may assume that \(T^{*}=G\). Then \(T\triangleleft G\) and \(G/T\) is abelian. By induction, we may assume that \(T=N\), and \(\psi=\theta\). If \(M\) is a maximal normal subgroup of \(G\) with \(N<M<G\), then again by induction (and using Lemma 2.2(i)), \(\mathbb{Q}_{pn}(\theta^{M})=\mathbb{Q}_{pn}(\theta)\). Hence it is enough to prove the statement in the case where \(G/N\) has prime order.

Assume now that \(G/N\) has prime order. Let \(\sigma\in\operatorname{Gal}(\mathbb{Q}_{pn}(\theta)/\mathbb{Q}_{pn}(\chi))\). We want to show that \(\sigma\) is trivial. Assume that \(\sigma\neq 1\). Notice that \(\sigma\) is a \(p\)-element, since it is the restriction of an element of \(\operatorname{Gal}(\mathbb{Q}_{|G|}/\mathbb{Q}_{pn}(\chi))\leq\operatorname{Gal}(\mathbb{Q}_{|G|}/\mathbb{Q}_{pn})\cong\operatorname{Gal}(\mathbb{Q}_{|G|_{p}}/\mathbb{Q}_{p})\), which is a \(p\)-group. By Clifford's theorem, we have that \(\theta^{\sigma}=\theta^{g}\) for some \(g\in G\). Also \(\theta^{\sigma}\neq\theta\) because \(\sigma\) is not trivial. In particular \(\langle gN\rangle=G/N\) is a group of order \(p\). Let \(b\) be the block of \(\theta\). Since \(\sigma\) fixes \(p^{\prime}\)-roots of unity, it follows that \(b^{\sigma}=b\). (Use, for instance, Theorem 3.19 of [13].) Then \(b^{g}=b\) and \(b\) is \(G\)-invariant. Then we apply Corollary 9.6 and Corollary 9.18 of [13], and conclude that \(\theta\) is \(G\)-invariant, a contradiction.

The second part of the statement follows from Lemma 2.2. Notice that if \(p\) is odd, then \(c(\chi)_{p}\geq p\) implies that \(c(\psi)_{p}\geq p\), because if \(c(\psi)_{p}=1\), then \(\psi\) is \(p\)-rational, and \(\chi=\psi^{G}\) is also \(p\)-rational.

Notice that the hypothesis in the odd case of the second statement of the above theorem is necessary: if \(p=3\), \(\chi\in\operatorname{Irr}(\mathsf{S}_{3})\) has degree 2 and \(\psi\in\operatorname{Irr}(N)\) is under \(\chi\) with \(|N|=3\), then \(c(\chi)_{3}=1\) but \(c(\psi)_{3}=3\).

**Corollary 3.3**.: _Let \(N\triangleleft G\), let \(\chi\in\operatorname{Irr}(G)\) be of height zero in its \(p\)-block, and let \(\theta\) be an irreducible constituent of \(\chi_{N}\). If \(p\) is odd, assume that \(c(\chi)_{p}\geq p\). Then \(c(\theta)_{p}\leq c(\chi)_{p}\). In particular, if \(p=2\) and \(\chi\) is 2-rational, then \(\theta\) is \(2\)-rational._

Proof.: By Theorem 3.2, we may assume that \(\theta\) is \(G\)-invariant. Then \(\chi_{N}=e\theta\), \(\mathbb{Q}(\theta)\subseteq\mathbb{Q}(\chi)\), and the statement is clear.

Of course, Corollary 3.3 is about height zero characters, and it fails without that hypothesis.
(Consider, for instance, \(G=\mathsf{D}_{8}\), \(\chi\in\operatorname{Irr}(G)\) of degree 2, and \(N\) a cyclic subgroup of \(G\) of order 4.)

Next we prove the normal defect case of Theorem A. We first need a lemma. Suppose that \(\chi\in\operatorname{Irr}(G)\) lies in a block \(B\) with defect group \(D\triangleleft G\). Let \(C=\mathbf{C}_{G}(D)\). Let \(b\) be a block of \(CD\) covered by \(B\). By Corollary 9.21 of [13], we have that \(B=b^{G}\) is the only block of \(G\) covering \(b\). By Theorem 9.26 of [13], we have that \(b\) has defect group \(D\). By Theorem 9.12 of [13], there is a unique irreducible character \(\theta\in\operatorname{Irr}(b)\) such that \(D\subseteq\ker(\theta)\). This character has defect zero, viewed as a character of \(CD/D\), and it is called the canonical character of \(b\), which is uniquely defined up to \(G\)-conjugacy. The irreducible characters of \(b\) are described in Theorem 9.12 of [13].

**Lemma 3.4**.: _Suppose that \(\chi\in\operatorname{Irr}(G)\) has height zero and belongs to a \(p\)-block \(B\) with a normal defect group \(D\). Let \(C=\mathbf{C}_{G}(D)\) and \(Z=\mathbf{Z}(D)\). If \(\chi_{CD}\) is homogeneous, then \(\chi_{D}\) is homogeneous, the canonical character \(\theta\in\operatorname{Irr}(CD/D)\) of \(B\) is \(G\)-invariant, and \(G/CD\) is a \(p^{\prime}\)-group. If \(\lambda\) is the irreducible constituent of \(\chi_{D}\), then \(\lambda\) is linear and \(c(\chi)_{p}=c(\lambda)\)._

Proof.: Let \(\eta\in\operatorname{Irr}(CD)\) be under \(\chi\). By Theorem 3.1, we have that \(\eta\) has height zero. By Theorem 9.12 of [13], we know that we can write \(\eta=\theta_{\lambda}\), where \(\lambda\in\operatorname{Irr}(D)\) is linear and \(\theta\in\operatorname{Irr}(CD/D)\) has defect zero. Moreover, \(\eta(x)=0\) if \(x_{p}\not\in D\), and \(\eta(x)=\theta(x_{p^{\prime}})\lambda(x_{p})\) if \(x_{p}\in D\). Thus \(\eta_{D}=\theta(1)\lambda\). Since \(\chi_{CD}\) is homogeneous, it follows that \(\chi_{D}\) is homogeneous, and then \(\lambda\) and \(\eta\) are \(G\)-invariant.

We claim that \(\theta\in\operatorname{Irr}(CD/D)\) is \(G\)-invariant. View \(\theta\) as a character of \(C/Z\). Let \(x\in C\) and \(g\in G\). Since \(\theta\in\operatorname{Irr}(C/Z)\) has \(p\)-defect zero, if \(xZ\) is \(p\)-singular, then \(\theta(x)=0=\theta(x^{g})\) because \(x^{g}Z\) is also \(p\)-singular. If \(xZ\) is \(p\)-regular, then \(x^{g}Z\) is also \(p\)-regular, \(xZ=x_{p^{\prime}}Z\) and \(x^{g}Z=(x_{p^{\prime}})^{g}Z\). Since \(\eta\) is \(G\)-invariant, we have that \(\theta(x_{p^{\prime}})\lambda(x_{p})=\eta(x)=\eta(x^{g})=\theta((x_{p^{\prime}})^{g})\lambda((x_{p})^{g})\). Since \(\lambda\) is \(G\)-invariant and linear, we deduce that \(\theta(x_{p^{\prime}})=\theta((x_{p^{\prime}})^{g})\). Now \(\theta(x)=\theta(x_{p^{\prime}})=\theta((x_{p^{\prime}})^{g})=\theta(x^{g})\) because \(Z\) is contained in the kernel of \(\theta\), and we deduce that \(\theta\) is \(G\)-invariant. Therefore, we have that \(G/CD\) has order coprime to \(p\) by Theorem 9.22 of [13].

Since \(\chi_{CD}=e\eta\) and \(\eta_{D}=\theta(1)\lambda\), we have that \(\mathbb{Q}_{c(\lambda)}\subseteq\mathbb{Q}(\eta)\). Now, \(\theta\) is \(p\)-rational, because it is a defect zero character, and therefore, \(\mathbb{Q}(\theta)\subseteq\mathbb{Q}_{m}\), where \(m\) is a \(p^{\prime}\)-number. Then, using the formula for the values of \(\eta\), we have that \(\mathbb{Q}_{c(\lambda)}\subseteq\mathbb{Q}(\eta)\subseteq\mathbb{Q}_{c(\lambda)m}\).
Therefore, \(c(\lambda)\) divides \(c(\eta)\) which divides \(c(\lambda)m\), implying that \(c(\eta)_{p}=c(\lambda)\). We apply Lemma 4.2(ii) of [NT], and we get that \(c(\chi)_{2}=c(\eta)_{2}\) if \(p=2\) and \(c(\chi)_{p}=c(\eta)_{p}\) if \(p\) is odd and \(c(\lambda)=c(\eta)_{p}>1\). If \(p\) is odd and \(c(\lambda)=c(\eta)_{p}=1\), then \(\lambda=1_{D}\). Therefore \(\eta=\theta\), and \(\chi\) has \(p\)-defect zero, so \(\chi\) is \(p\)-rational and \(c(\chi)_{p}=1\). In any case, we conclude that \(c(\chi)_{p}=c(\lambda)\).

Next we prove Theorem A, and one of the containments of Conjecture D, in the case of blocks with a normal defect group.

**Lemma 3.5**.: _Let \(\chi\in\operatorname{Irr}(B)\) be of height zero, where \(B\) is a \(p\)-block with a normal defect group \(D\). Write \(c(\chi)=p^{a}m\), where \(m\) is not divisible by \(p\). Then \(\mathbb{Q}_{p^{a}}\subseteq\mathbb{Q}_{pm}(\chi)\)._

Proof.: First notice that we may assume that \(a\geq 2\), as otherwise the result trivially holds. We argue by induction on \(|G|\). Write \(C=\mathbf{C}_{G}(D)\). Let \(\eta\in\operatorname{Irr}(CD)\) be an irreducible constituent of \(\chi_{CD}\). Let \(T=G_{\eta}\) be the stabilizer of \(\eta\) in \(G\) and \(\psi\in\operatorname{Irr}(T|\eta)\) be the Clifford correspondent of \(\chi\) over \(\eta\). By Theorem 3.1, we know that \(\psi\) has height zero and that \(D\) is a defect group of its block. By Theorem 3.2, we have that \(\mathbb{Q}_{pn}(\chi)=\mathbb{Q}_{pn}(\psi)\) where \(n=|G|_{p^{\prime}}\) and \(c(\chi)_{p}=c(\psi)_{p}\). Assume that \(T<G\). By induction \(\mathbb{Q}_{p^{a}}\subseteq\mathbb{Q}_{pc(\psi)_{p^{\prime}}}(\psi)\subseteq\mathbb{Q}_{pn}(\psi)=\mathbb{Q}_{pn}(\chi)\). By Lemma 2.1 we conclude that \(\mathbb{Q}_{p^{a}}\subseteq\mathbb{Q}_{pm}(\chi)\). Hence \(T=G\) and we are under the hypotheses of Lemma 3.4. If \(\lambda\in\operatorname{Irr}(D)\) lies under \(\chi\), then \(p^{a}=c(\chi)_{p}=c(\lambda)\). Notice that \(\chi_{D}=f\lambda\) and hence \(\mathbb{Q}_{p^{a}}\subseteq\mathbb{Q}(\chi)\subseteq\mathbb{Q}_{pm}(\chi)\).

The next results are key to understanding the statement of Conjecture D when the group possesses a normal subgroup of index \(p\) (a fundamental step in our reduction theorem).

**Lemma 3.6**.: _Suppose that \(G/N\) is a \(p\)-group, and \(b\) is a \(G\)-invariant \(p\)-block of \(N\) covered by a block \(B\) of \(G\) with defect group \(D\). Suppose that \(D_{0}=D\cap N\triangleleft G\). Then \(G=DN\) and \(b\) has a \(G\)-invariant height zero \(p\)-rational irreducible character._

Proof.: By Corollary 9.6 of [12], \(B\) is the unique \(p\)-block covering \(b\). Then Theorem 9.17 of [12] implies that \(G=ND\) and \(D_{0}\) is the unique defect group of \(b\). Let \(C=\mathbf{C}_{N}(D_{0})\). Notice that \(CD_{0}\triangleleft G\). By the Fong-Reynolds correspondence, Theorem 9.14 of [12], we can find a block \(e\) of \(CD_{0}\) covered by \(b\) such that the block \(b_{T}\) of \(T=G_{e}\), the stabilizer of \(e\) in \(G\), inducing \(B\) and covering \(e\) has defect group \(D\). Notice that \(e\) has defect group \(D_{0}\). Since \(b\) is \(G\)-invariant, notice that \(TN=G\), and \(b_{T}\) covers a unique block \(f\) of \(N_{e}=N\cap T\) that induces \(b\) and covers \(e\). By induction and the Fong-Reynolds correspondence, we may assume that \(e\) is \(G\)-invariant. Then we have that \(N/CD_{0}\) is a \(p^{\prime}\)-group, by Theorem 9.22 of [12].
Since \(e\) is \(G\)-invariant, we have that the canonical character \(\theta\in\operatorname{Irr}(CD_{0}/D_{0})\) of \(e\) is \(G\)-invariant. By Theorem 13.31 of [12], some irreducible constituent \(\xi\) of \(\theta^{N}\) is \(D\)-invariant. Thus \(\xi\) is \(G\)-invariant. Since \(b\) is the only block of \(N\) that covers \(e\), we have that \(\xi\in\operatorname{Irr}(b)\). Also, \(\xi\) is \(p\)-rational, because it has defect zero considered as a character of \(N/D_{0}\). It also has height zero because \(\theta\) has height zero and \(N/CD_{0}\) is a \(p^{\prime}\)-group.

**Lemma 3.7**.: _Suppose that \(G/N\) is a cyclic \(p\)-group, and \(\theta\in\operatorname{Irr}(N)\) is \(G\)-invariant of \(p\)-height zero. Then every \(\chi\in\operatorname{Irr}(G|\theta)\) has \(p\)-height zero. Also, if \(D\) is a defect group of the block of \(\chi\), then \(DN=G\) and \(D\cap N\) is a defect group of the block of \(\theta\)._

Proof.: Let \(b\) be the block of \(\theta\). Let \(B\) be the unique \(p\)-block of \(G\) covering \(b\) by Corollary 9.6 of [12]. Let \(\chi\in\operatorname{Irr}(G|\theta)\), so that \(\chi\in\operatorname{Irr}(B)\). Since \(b\) is \(G\)-invariant, we have that \(G=DN\), where \(D\) is a defect group of \(B\), and \(D_{0}=D\cap N\) is a defect group of \(b\) by Theorem 9.17 of [12]. We have that \(\chi_{N}=\theta\) because \(G/N\) is cyclic and \(\theta\) is \(G\)-invariant (using Theorem 5.1 of [12] and the Gallagher correspondence, Corollary 1.23 of [12]). Then \(\chi(1)_{p}=\theta(1)_{p}=|N:D_{0}|_{p}=|G:D|_{p}\). Thus \(\chi\) has height zero.

**Lemma 3.8**.: _Suppose that \(G/N\) is a \(p\)-group. Let \(\theta\in\operatorname{Irr}(N)\) be of \(p\)-height zero and \(G\)-invariant. Let \(n=|G|_{p^{\prime}}\). Let \(D_{0}\) be a defect group of the block of \(\theta\), and let \(H=\mathbf{N}_{G}(D_{0})\). Then there exists an \(H\)-invariant \(\varphi\in\operatorname{Irr}(N\cap H)\) of \(p\)-height zero such that \([\theta_{H\cap N},\varphi]\not\equiv 0\bmod p\), and \(\mathbb{Q}_{pn}(\varphi)\subseteq\mathbb{Q}_{pn}(\theta)\)._

Proof.: Let \(b\) be the \(p\)-block of \(\theta\) and let \(B\) be the only \(p\)-block of \(G\) covering \(b\). Since \(b\) is \(G\)-invariant, we have that \(G=DN\), where \(D\) is a defect group of \(B\), and \(D_{0}=D\cap N\) is a defect group of \(b\), by Theorem 9.17 of [12]. Since \(D\subseteq H\), note that \(G=HN\). Let \(M=H\cap N=\mathbf{N}_{N}(D_{0})\). Then \(H=MD\). Let \(e\) be the Brauer correspondent block of \(M\) (with defect group \(D_{0}\)) inducing \(b\). By the Harris-Knorr Theorem [12, Theorem 9.28], there is a unique block \(E\) of \(H\) covering \(e\) that induces \(B\). This block \(E\) has defect group \(D\).

Let \(\mathcal{U}=\operatorname{Gal}(\mathbb{Q}_{|G|}/\mathbb{Q}_{pn}(\theta))\). Notice that \(\mathcal{U}\leq\operatorname{Gal}(\mathbb{Q}_{|G|}/\mathbb{Q}_{pn})\), which is a \(p\)-group. We must then work to show that \(e\) has a \(D\times\mathcal{U}\)-invariant height zero character \(\varphi\) with \([\theta_{M},\varphi]\not\equiv 0\bmod p\). We will use the \(\tilde{\ }\) construction as in [20, page 27] and the fact that \(e\) possesses an irreducible character \(\psi\) which is \(H\)-invariant and \(p\)-rational by Lemma 3.6. Let \(\delta=\theta_{M}\). Then \(\tilde{\delta}\), defined as \(\tilde{\delta}(x)=|M|_{p}\delta(x)\) if \(x\in M\) is \(p\)-regular and \(\tilde{\delta}(x)=0\) otherwise, is a generalized character by Lemma 2.15 of [20].
We can write

\[\tilde{\delta}=\sum_{\xi\in\operatorname{Irr}(M)}[\delta,\xi]\tilde{\xi}\,.\]

By Lemma 6.5(b) of [20], we have that

\[\frac{[\tilde{\delta},\psi]}{\psi(1)}\not\equiv 0\operatorname{mod}\mathcal{P}\,,\]

where \(\mathcal{P}\) is the maximal ideal of \(\mathbf{R}_{M}\), the localization of the ring of algebraic integers \(\mathbf{R}\) at a maximal ideal containing \(p\) (see [20, page 16]). Therefore

\[\Lambda=\sum_{\xi\in\operatorname{Irr}(M)}[\delta,\xi]\frac{[\tilde{\xi},\psi]}{\psi(1)}\not\equiv 0\operatorname{mod}\mathcal{P}\,.\]

Note that \([\tilde{\xi},\psi]=[\xi,\tilde{\psi}]\) whenever \(\xi\in\operatorname{Irr}(M)\). By Lemma 3.20 of [20], recall that \([\xi,\tilde{\psi}]=0\) if \(\xi\) is not in \(e\), so that

\[\Lambda=\sum_{\xi\in\operatorname{Irr}(e)}[\delta,\xi]\frac{[\tilde{\xi},\psi]}{\psi(1)}\not\equiv 0\operatorname{mod}\mathcal{P}\,.\]

By Lemma 3.22(a) of [20], \(\frac{[\tilde{\xi},\psi]}{\psi(1)}\in\mathbf{R}_{M}\cap\mathbb{Q}\) for every \(\xi\in\operatorname{Irr}(e)\), so \(\nu(\frac{[\tilde{\xi},\psi]}{\psi(1)})\geq 0\), where \(\nu\) is the valuation function defined in [20, page 64]. By Theorem 3.24 of [20], we have that \(\nu(\frac{[\tilde{\xi},\psi]}{\psi(1)})=0\) if, and only if, \(\xi\) has height zero in the \(p\)-block \(e\) (\(\xi\in\operatorname{Irr}_{0}(e)\)). By Lemma 3.21 of [20] we have that \(\frac{[\tilde{\xi},\psi]}{\psi(1)}\in\mathcal{P}\) whenever \(\xi\) does not have height zero in \(e\). Hence

\[\Lambda\equiv\sum_{\xi\in\operatorname{Irr}_{0}(e)}[\delta,\xi]\frac{[\tilde{\xi},\psi]}{\psi(1)}\not\equiv 0\operatorname{mod}\mathcal{P}\,.\]

Consider \(\Omega=\{\xi\in\operatorname{Irr}_{0}(e)\ |\ [\delta,\xi]\not\equiv 0\bmod p\}\). We have that

\[\Lambda\equiv\sum_{\xi\in\Omega}[\delta,\xi]\frac{[\tilde{\xi},\psi]}{\psi(1)}\not\equiv 0\operatorname{mod}\mathcal{P}\,.\]

The \(p\)-group \(D\times\mathcal{U}\) acts on \(\Omega\). Let \(\Omega=\Omega_{\xi_{1}}\cup\ldots\cup\Omega_{\xi_{r}}\) be the orbit decomposition of \(\Omega\). Then, given that \(\delta\) and \(\psi\) are \(D\times\mathcal{U}\)-invariant, we have that

\[\Lambda\equiv\sum_{i=1}^{r}|\Omega_{\xi_{i}}|[\delta,\xi_{i}]\frac{[\tilde{\xi}_{i},\psi]}{\psi(1)}\not\equiv 0\,\mathrm{mod}\,\mathcal{P}\,.\]

In particular there is some \(D\times\mathcal{U}\)-invariant \(\varphi\in\Omega\). The \(D\)-invariance of \(\varphi\) implies that \(\varphi\) is \(H\)-invariant, and the \(\mathcal{U}\)-invariance of \(\varphi\) implies that \(\mathbb{Q}_{pn}(\varphi)\subseteq\mathbb{Q}_{pn}(\theta)\).

The following is a consequence of an argument of J. Thompson; we refer the reader to Theorem 6.9 of [11].

**Lemma 3.9**.: _Suppose that \(G/N\) is a \(p\)-group, and let \(H\leq G\) be such that \(G=NH\). Write \(M=N\cap H\). Let \(\theta\in\mathrm{Irr}(N)\) be \(G\)-invariant, and let \(\varphi\in\mathrm{Irr}(M)\) be \(H\)-invariant such that \([\theta_{M},\varphi]\not\equiv 0\) mod \(p\)._

(a) _Suppose that_ \(\xi\in\mathrm{Irr}(H)\) _extends_ \(\varphi\)_. Then there is an extension_ \(\chi\in\mathrm{Irr}(G)\) _of_ \(\theta\) _such that_ \(\mathbb{Q}(\chi)\subseteq\mathbb{Q}(\xi,\theta)\)_._

(b) _Suppose that_ \(\chi\in\mathrm{Irr}(G)\) _extends_ \(\theta\)_. Then there is an extension_ \(\xi\in\mathrm{Irr}(H)\) _of_ \(\varphi\) _such that_ \(\mathbb{Q}(\xi)\subseteq\mathbb{Q}(\chi,\varphi)\)_._

(c) _Suppose that_ \(G/N\) _has order_ \(p\)_.
Then_ \(\mathbb{Q}_{p}(\chi,\varphi)=\mathbb{Q}_{p}(\theta,\xi)\) _for every_ \(\chi\in\mathrm{Irr}(G|\theta)\) _and_ \(\xi\in\mathrm{Irr}(H|\varphi)\)_._

Proof.: Write \(m=|G|\). In order to prove part (a), let \(\sigma\in\mathrm{Gal}(\mathbb{Q}_{m}/\mathbb{Q}(\xi,\theta))\). Since \(\xi^{\sigma}=\xi\), in particular \(\varphi^{\sigma}=\varphi\). By Theorem 6.9 of [11], there is a unique extension \(\chi\in\mathrm{Irr}(G)\) of \(\theta\) such that \(\chi_{H}=\Psi\xi+\Delta\), where \(\Delta\) is a character of \(H\) or zero, with \([\Delta_{M},\varphi]=0\), and \(\Psi\) is a character of \(H/M\) with trivial determinant. Now, \(\chi^{\sigma}\) is an extension of \(\theta=\theta^{\sigma}\) and \((\chi^{\sigma})_{H}=\Psi^{\sigma}\xi+\Delta^{\sigma}\), where \([(\Delta^{\sigma})_{M},\varphi]=0\) and \(\Psi^{\sigma}\) is a character of \(H/M\) with trivial determinant. By the uniqueness of \(\chi\), we get that \(\chi=\chi^{\sigma}\). Part (b) is proved in the same way.

We prove part (c). We have that \(\theta\) extends to \(G\) by Theorem 5.1 of [11]. In fact, every character in \(\mathrm{Irr}(G|\theta)\) is an extension of \(\theta\) by the Gallagher correspondence. Let \(\chi\in\mathrm{Irr}(G|\theta)\). By part (b), there is an extension \(\xi\in\mathrm{Irr}(H)\) of \(\varphi\) such that \(\mathbb{Q}(\xi)\subseteq\mathbb{Q}(\chi,\varphi)\). By part (a), there is an extension \(\chi^{\prime}\in\mathrm{Irr}(G)\) of \(\theta\) such that \(\mathbb{Q}(\chi^{\prime})\subseteq\mathbb{Q}(\xi,\theta)\). Since \(\chi^{\prime}=\lambda\chi\) for some \(\lambda\in\mathrm{Irr}(G/N)\), we have that \(\mathbb{Q}_{p}(\chi)=\mathbb{Q}_{p}(\chi^{\prime})\). Since \(\mathbb{Q}(\theta)\subseteq\mathbb{Q}(\chi)\) and \(\mathbb{Q}(\varphi)\subseteq\mathbb{Q}(\xi)\), part (c) follows.

In order to treat later the case where \(N\) is a normal subgroup of \(G\) of index \(p\), we need to extend Lemma 3.5, and prove the statement of Conjecture D in a slightly more general case than the normal defect group case.

**Lemma 3.10**.: _Suppose that \(G/N\) has order \(p\). Let \(n=|G|_{p^{\prime}}\). Let \(\chi\in\mathrm{Irr}(G)\) have \(p\)-height zero. Suppose that \(\chi_{N}=\theta\in\mathrm{Irr}(N)\) and the defect group \(D_{0}\) of the block of \(\theta\) is normal in \(G\). Then \(\mathbb{Q}_{c(\chi)_{p}}\subseteq\mathbb{Q}_{pn}(\chi)\)._

Proof.: Write \(p^{a}=c(\chi)_{p}\). We may assume that \(a\geq 2\), as the statement is trivially satisfied otherwise. We argue by induction on \(|G|\). Write \(K=\mathbf{C}_{N}(D_{0})D_{0}\triangleleft G\). Let \(\eta\) be an irreducible constituent of \(\chi_{K}\) and let \(\psi\in\mathrm{Irr}(G_{\eta}|\eta)\) be the Clifford correspondent of \(\chi\) over \(\eta\). Using Theorem 3.2 and induction, we may assume that \(\eta\) is \(G\)-invariant. In particular, \(\theta_{K}=\chi_{K}\) is homogeneous. By Lemma 3.4, \(\chi_{D_{0}}=\theta_{D_{0}}\) is homogeneous. Write \(\theta_{D_{0}}=\theta(1)\lambda\), where \(\lambda\in\mathrm{Irr}(D_{0})\) is linear and \(G\)-invariant. Let \(D\) be a defect group of the block of \(\chi\). We have that \(G=ND\) and \(D_{0}=N\cap D\), using for instance Lemma 3.7. Let \(H=\mathbf{N}_{G}(D)\) and \(C=\mathbf{N}_{N}(D)=N\cap H\). If \(H=G\) then \(\chi\) lies in a block of normal defect, and we are done by Lemma 3.5. Hence we may assume that \(H<G\). By Theorem A of [NS], there are a block \(b^{\prime}\) of \(C\) with defect group \(D_{0}\) and a \(D\)-invariant character \(\theta^{\prime}\) in \(b^{\prime}\) satisfying \([\theta_{C},\theta^{\prime}]\equiv\pm 1\bmod p\).
Since \(\lambda\) is \(G\)-invariant, \(\theta^{\prime}\) is the unique irreducible constituent of \(\theta_{C}\) such that \(\theta^{\prime}(1)_{p}=|C:D_{0}|_{p}\), and \([\theta_{C},\theta^{\prime}]\not\equiv 0\bmod p\). The block \(B^{\prime}\) of \(H=CD\) covering \(\theta^{\prime}\) has defect group \(D\), the block of \(\theta^{\prime}\) has defect group \(D_{0}\), and \(\theta^{\prime}\) has height zero. Given \(\sigma\in\mathrm{Gal}(\mathbb{Q}_{|G|}/\mathbb{Q}_{n})\), we have that \(b^{\sigma}=b\) and \((b^{\prime})^{\sigma}=b^{\prime}\) because \(\sigma\) fixes \(p^{\prime}\)-roots of unity. Moreover \((\theta^{\sigma})^{\prime}=(\theta^{\prime})^{\sigma}\) under the canonical correspondence given by Theorem A of [NS] (because of part (c) in that statement, noticing that \(\lambda^{\sigma}\) is also \(G\)-invariant). This implies that \(\mathbb{Q}_{n}(\theta^{\prime})=\mathbb{Q}_{n}(\theta)\) by elementary Galois theory. Let \(\xi\in\mathrm{Irr}(H|\theta^{\prime})\). Then \(\xi\) extends \(\theta^{\prime}\) and, by Lemma 3.7, \(\xi\) has height zero. By Lemma 3.9(c), we have that \(\mathbb{Q}_{p}(\chi,\theta^{\prime})=\mathbb{Q}_{p}(\theta,\xi)\). Since \(\mathbb{Q}_{n}(\theta)=\mathbb{Q}_{n}(\theta^{\prime})\) we have that \(\mathbb{Q}_{pn}(\chi,\theta)=\mathbb{Q}_{pn}(\chi,\theta^{\prime})=\mathbb{Q}_{pn}(\xi,\theta)=\mathbb{Q}_{pn}(\xi,\theta^{\prime})\). We easily deduce that \(\mathbb{Q}_{pn}(\chi)=\mathbb{Q}_{pn}(\xi)\) using that \(\mathbb{Q}(\theta)\subseteq\mathbb{Q}(\chi)\) and \(\mathbb{Q}(\theta^{\prime})\subseteq\mathbb{Q}(\xi)\). We want to apply Lemma 2.2(ii). If \(c(\xi)_{p}=1\) then \(\mathbb{Q}(\xi)\subseteq\mathbb{Q}_{n}\) and consequently \(\mathbb{Q}(\chi)\subseteq\mathbb{Q}_{pn}(\chi)\subseteq\mathbb{Q}_{pn}\), but this is impossible as \(c(\chi)_{p}\geq p^{2}\). Hence \(c(\xi)_{p}\geq p\), and by Lemma 2.2(ii) \(c(\chi)_{p}=c(\xi)_{p}\). Recall that \(H<G\). Write \(k=|H|_{p^{\prime}}\). By induction, \(\mathbb{Q}_{c(\chi)_{p}}=\mathbb{Q}_{c(\xi)_{p}}\subseteq\mathbb{Q}_{pk}(\xi)\subseteq\mathbb{Q}_{pn}(\xi)=\mathbb{Q}_{pn}(\chi)\), and we are done.

## 4. Character Triples and Fields

If \(\chi\in\mathrm{Irr}(G)\) lies in a \(p\)-block \(B\), we denote by \(h(\chi)\) the \(p\)-height of \(\chi\) (we will sometimes just refer to \(h(\chi)\) as the height of \(\chi\)). We remind the reader that if \(N\subseteq\mathrm{ker}(\chi)\), then the height of \(\chi\) as a character of \(G\) and as a character of \(G/N\) can be different. For instance, the character of degree \(2\) of \(\mathsf{S}_{3}\) has \(2\)-height zero, but as a character of \(\mathsf{S}_{4}\) it has \(2\)-height \(1\). The next result clarifies this situation.

**Lemma 4.1**.: _Suppose that \(\chi\in\mathrm{Irr}(B)\), where \(B\) is a \(p\)-block of \(G\) with defect group \(P\). Suppose that \(K\subseteq\mathrm{ker}(\chi)\) and let \(\bar{\chi}\in\mathrm{Irr}(G/K)\) be the character \(\chi\) viewed as a character of \(G/K\). Let \(\bar{B}\) be the \(p\)-block of \(G/K\) containing \(\bar{\chi}\)._

(i) _There is a defect group_ \(\bar{D}\) _of_ \(\bar{B}\) _such that_ \(\bar{D}\leq PK/K\)_._

(ii) _We have that_

\[p^{h(\chi)}=\frac{|PK/K|}{|\bar{D}|}p^{h(\bar{\chi})}\,.\]

_In particular,_ \(h(\chi)\geq h(\bar{\chi})\)_, and if_ \(h(\chi)=0\) _then_ \(PK/K=\bar{D}\) _and_ \(h(\bar{\chi})=0\)_._

(iii) _If_ \(K\subseteq\mathbf{Z}(G)\)_, then_ \(PK/K=\bar{D}\) _and_ \(h(\chi)=h(\bar{\chi})\)_._

Proof.: The first part is Theorem 9.9 of [23].
Since \(\chi\) lies over \(1_{K}\), it follows that \(B\) covers the principal block of \(K\), by Theorem 9.2 of [23]. By Theorem 9.26 of [23], we have that \(P\cap K\in\operatorname{Syl}_{p}(K)\), so \(|K:K\cap P|_{p}=1\). Now,

\[\chi(1)_{p}=\bar{\chi}(1)_{p}=\frac{|G/K|_{p}}{|PK/K|}\frac{|PK/K|}{|\bar{D}|}p^{h(\bar{\chi})}=\frac{|G|_{p}}{|P|}\frac{|PK/K|}{|\bar{D}|}p^{h(\bar{\chi})}\,,\]

and we use the definition of \(h(\chi)\). The third part follows from Lemma 2.2 of [13].

**Lemma 4.2**.: _Suppose that \(G^{*}\) is a finite group, \(N,Z\triangleleft G^{*}\) such that \(N\cap Z=1\), where \(Z\subseteq\mathbf{Z}(G^{*})\). Let \(N^{*}=N\times Z\). Let \(\theta\in\operatorname{Irr}(N)\) be \(G^{*}\)-invariant, and \(\lambda\in\operatorname{Irr}(Z)\). Let \(\theta^{*}=\theta\times 1_{Z}\), \(\lambda^{*}=1_{N}\times\lambda\), and assume that \((\lambda^{*})^{-1}\theta^{*}\) extends to some \(\tau\in\operatorname{Irr}(G^{*})\). Then the map \(\chi^{*}\mapsto\chi^{*}\tau\) defines a character triple isomorphism \((G^{*},N^{*},\lambda^{*})\to(G^{*},N^{*},\theta^{*})\). Let \(\chi^{*}\in\operatorname{Irr}(G^{*}|\lambda^{*})\). If \(\chi=\chi^{*}\tau\) has height zero in \(G^{*}\), then \(\chi^{*}\) has height zero in \(G^{*}/N\)._

Proof.: We have that \(\tau_{N}=\theta\). The fact that \(\chi^{*}\mapsto\chi^{*}\tau\) defines a character triple isomorphism follows from Lemma 11.27 of [14]. Let \(\chi^{*}\in\operatorname{Irr}(G^{*}|\lambda^{*})\). Then \(N\subseteq\ker(\chi^{*})\) and we can see \(\chi^{*}\) as a character of \(G^{*}/N\) lying over \(\lambda\) (identified with a character of \(N^{*}/N\)). Assume that \(\chi=\chi^{*}\tau\in\operatorname{Irr}(G^{*}|\theta^{*})\) has height zero in \(G^{*}\); we want to prove that \(\chi^{*}\) has height zero in \(G^{*}/N\). Let \(B\) be the \(p\)-block of \(G^{*}/N\) that contains \(\chi^{*}\), and let \(D^{*}/N\) be a defect group of \(B\). By Proposition 2.5(b) of [23], \(D^{*}/N\) is contained in \(DN/N\), where \(D\) is a defect group of the block of \(\chi\). Since \(\theta\) is an irreducible constituent of \(\chi_{N}\) and \(\chi\) has height zero, by Theorem 3.1, we know that \(\theta\) has height zero. By Theorem 9.26 of [23], we know that \(D\cap N\) is a defect group of the block of \(\theta\). Therefore:

\[\chi^{*}(1)_{p}=(\chi(1)/\theta(1))_{p}=|G^{*}:DN|_{p}\,.\]

By definition,

\[\chi^{*}(1)_{p}=|G^{*}/N:D^{*}/N|_{p}p^{h(\chi^{*})}\geq|G^{*}/N:DN/N|_{p}p^{h(\chi^{*})}=|G^{*}:DN|_{p}p^{h(\chi^{*})}\,.\]

We conclude that \(p^{h(\chi^{*})}=1\), as wanted.

Next we use the theory of character triples, as developed in [14, Chapter 11]. Recall that if \(G/N\) is a group, by [14, Theorem 11.17] there exists a finite central extension \((\Gamma,\pi)\) of \(G/N\) such that \(A=\ker(\pi)\cong M(G/N)\) and the standard map \(\eta\colon\operatorname{Irr}(A)\to M(G/N)\) is an isomorphism. In particular, by [14, Theorem 11.19], if \(G/N\) is perfect then \(\Gamma\) is perfect. We will also make use of the results contained in [13, Section 3].

**Theorem 4.3**.: _Suppose that \((G,N,\theta)\) is a character triple, where \(\theta\in\operatorname{Irr}(N)\) and \(G/N\) is perfect. Then there exists a character triple isomorphism_

\[(G,N,\theta)\to(\Gamma,A,\lambda),\]

_where \(\Gamma\) is perfect, \(A=\mathbf{Z}(\Gamma)\), \(\mathbb{Q}(\lambda)\subseteq\mathbb{Q}(\theta)\) and \(\mathbb{Q}(\chi)=\mathbb{Q}(\chi^{*},\theta)\) for every \(\chi\in\operatorname{Irr}(G|\theta)\), where \(\chi^{*}\) corresponds to \(\chi\) under the character triple isomorphism.
Furthermore, if \(\chi\) has height zero in \(G\), then \(\chi^{*}\) has height zero in \(\Gamma\)._

Proof.: We consider a _canonically constructed_ character triple \((\Gamma,A,\lambda)\) isomorphic to \((G,N,\theta)\) in the sense of [GP]. Notice that the values of any \(\psi\in\operatorname{Irr}(\Gamma|\lambda)\) are in \(\mathbb{Q}_{|G|}\). (See the paragraph before Corollary 3.3 of [GP].) By Theorem 3.6 of [GP], we have that \((\chi^{*})^{\sigma}=(\chi^{\sigma})^{*}\) whenever \(\sigma\in\operatorname{Gal}(\mathbb{Q}_{|G|}/\mathbb{Q}(\theta))\). Hence \(\mathbb{Q}(\chi)=\mathbb{Q}(\chi,\theta)=\mathbb{Q}(\chi^{*},\theta)\), as wanted. We notice that \(\Gamma\) is perfect using Theorem 11.19 of [Isa].

For the second part, we notice that the construction of the character triple isomorphism in [GP] follows the construction in Theorem 11.28 of [Isa]. We have that \((G,N,\theta)\) is isomorphic to \((G^{*},N^{*},\theta^{*})\) where \(G^{*}\subseteq G\times\Gamma\), \(N^{*}=N\times A\), \(A\subseteq\mathbf{Z}(G^{*})\) and \(\theta^{*}=\theta\times 1_{A}\); in fact \(G\cong G^{*}/A\). Also \((G^{*},N^{*},\theta^{*})\) is isomorphic to \((G^{*},N^{*},\lambda^{*})\) where \(\lambda^{*}=1_{N}\times\lambda\) (here \(\theta^{*}(\lambda^{*})^{-1}\) extends to \(\tau\in\operatorname{Irr}(G^{*})\)). Finally \((G^{*},N^{*},\lambda^{*})\) is isomorphic to \((\Gamma,A,\lambda)\) using that \(\Gamma\cong G^{*}/N\). Given \(\chi\in\operatorname{Irr}(G|\theta)\) of height zero in \(G\cong G^{*}/A\), the first character triple isomorphism just sends \(\chi\) to \(\chi\) viewed as a character of \(G^{*}\). By Lemma 4.1(iii), we have that \(\chi\) has height zero as a character of \(G^{*}\). By Lemma 4.2 we have that \(\chi=\chi^{*}\tau\), and \(\chi^{*}\in\operatorname{Irr}(G^{*}|\lambda^{*})\) has height zero in \(G^{*}/N\). Since the last character triple isomorphism just sends \(\chi^{*}\) to \(\chi^{*}\) seen as a character of \(\Gamma\cong G^{*}/N\), we have that \(\chi^{*}\) has height zero as a character of \(\Gamma\), and the second part of the statement follows.

We believe that the following result might be useful in the future. (It can be used, together with the ideas of the proof of Theorem 5.1 of [NTT], as an alternative to Theorem 4.3 in the proof of our main result.)

**Theorem 4.4**.: _Suppose that \(\chi\in\operatorname{Irr}(G)\) has 2-height zero. Let \(n=|G|_{2^{\prime}}\), \(F=\mathbb{Q}_{n}(\chi)\). Then \(\chi\) can be afforded by an absolutely irreducible \(F\)-representation._

Proof.: We want to show that \(m_{F}(\chi)=1\). We know by Corollary 10.13 of [Isa] that \(m_{F}(\chi)\leq 2\). Suppose that \(D\) is a defect group of the block of \(\chi\), and let \(H=\mathbf{N}_{G}(D)\). Let \(C=\mathbf{C}_{G}(D)\) and \(Z=\mathbf{Z}(D)\). By Lemma 3.8, we have that \(\chi_{H}\) contains some \(\psi\in\operatorname{Irr}(H)\) with 2-height zero, \(\mathbb{Q}(\psi)\subseteq F\) and \([\chi_{H},\psi]\) is odd. If we can show that \(\psi\) is afforded by an \(F\)-representation, then so is \(\psi^{G}\), and by Corollary 10.2(c) of [Isa] we have that \(m_{F}(\chi)\) divides \([\psi^{G},\chi]=[\psi,\chi_{H}]\), which is odd. We thus work to show that \(\psi\) can be afforded by an \(F\)-representation. By Theorem 3.2, and arguing by induction on \(|G|\), we may assume that \(\psi\) is quasiprimitive. In particular, as \(DC\triangleleft H\), we have that \(\psi_{DC}\) is homogeneous. By Lemma 3.4 we have that \(H/CD\) is an odd-order group, and the canonical character \(\theta\in\operatorname{Irr}(C/Z)\) of the block of \(\psi\) is \(H\)-invariant.
Then \(\psi_{DC}=e\eta\) is an \(F\)-valued character, where \(\eta\in\operatorname{Irr}(CD)\). Notice that by Corollary 11.29 of [Isa] \(e\) is odd. Using again Lemma 3.4, we have that \(\eta=\theta_{\lambda}\), where \(\lambda\in\operatorname{Irr}(D)\) is linear. On the other hand, as \(CD\) is a central product of \(C\) and \(D\), we have that \(\eta=\nu\cdot\lambda\), where \(\nu\in\operatorname{Irr}(C|\mu)\) and \(\lambda\in\operatorname{Irr}(D|\mu)\), with \(\mu=\lambda_{Z}\). In particular, \(\eta_{C}=\nu\). We claim that \(\eta\) can be afforded by an \(F\)-representation. Let \(P\) be a Sylow \(p\)-subgroup of \(C\). Let \(x\in P-Z\). As \(\nu=\eta_{C}=(\theta_{\lambda})_{C}\), using the values of \(\theta_{\lambda}\) we have that \(\nu(x)=0\). If \(x\in Z\), then \(\nu(x)=\theta(1)\lambda(x)\). Therefore

\[\nu_{P}=\frac{\theta(1)}{|P:Z|}\mu^{P}\,.\]

Notice that \(\frac{\theta(1)}{|P:Z|}\) is not divisible by \(2\) because \(\theta\in\operatorname{Irr}(C/Z)\) has \(2\)-defect zero. Finally, since \(\psi_{DC}=e\eta\), where \(e\) is odd, then \(\psi_{C}=e\nu\) and thus \([\psi_{P},\mu^{P}]\) is odd. Hence \([\psi,\mu^{H}]\) is odd. Since \(\mu\) is afforded by an \(F\)-representation, we have that \(m_{F}(\psi)\) is odd, by Corollary 10.2(c) of [Isa]. But Corollary 10.13 of [Isa] implies that \(m_{F}(\psi)\leq 2\). Therefore \(m_{F}(\psi)=1\), and this completes the proof.

According to M. Geline, Theorem 4.4 can also be proved by using the main theorem of [GG] and some number theoretical arguments. We notice that the case of Theorem 4.4 where \(\chi\) has odd degree follows from a theorem of Fong [Isa, Corollary 10.13] using [Isa, Corollary 10.2(h)].

## 5. Fields of Characters in \(\mathcal{F}_{p}\)

We briefly pause in our journey to the proof of Theorem A and the reduction theorem for Conjecture D to show the easy containments in Theorem B and Conjecture D. In other words, we show that every number field in \(\mathcal{F}_{p}\) is the field of values of some irreducible character of \(p\)-height zero. We will also show that the statement of Conjecture D is implied by the statement of [Nav1, Conjecture B].

**Theorem 5.1**.: _Suppose that \(\mathbb{Q}\subseteq F\subseteq\mathbb{Q}_{n}\), where \(n\) is the conductor of \(F\). Write \(n=p^{a}m\), where \(m\) is not divisible by \(p\). If \(|\mathbb{Q}_{n}:\langle F,\mathbb{Q}_{m}\rangle|\) is not divisible by \(p\), then there is a solvable group \(G\) and \(\chi\in\operatorname{Irr}(G)\) of \(p\)-height zero such that \(\mathbb{Q}(\chi)=F\)._

Proof.: Let \(\zeta_{n}\) be a primitive \(n\)-th root of unity, and let \(C_{n}=\langle\zeta_{n}\rangle\) be the cyclic group of order \(n\), which is acted on faithfully by \(\mathcal{G}=\operatorname{Gal}(\mathbb{Q}_{n}/\mathbb{Q})\). Let \(G\) be the semidirect product of \(C_{n}\) with \(H=\operatorname{Gal}(\mathbb{Q}_{n}/F)\leq\mathcal{G}\). If \(\lambda\in\operatorname{Irr}(C_{n})\) has order \(n\), then \(\lambda^{G}=\chi\in\operatorname{Irr}(G)\) has field of values \(F\). Let \(\nu=\lambda_{C_{m}}\), where \(C_{m}\leq C_{n}\) has order \(m\). Then \(\nu\) has order \(m\). Notice that \(H_{\nu}=\operatorname{Gal}(\mathbb{Q}_{n}/\langle F,\mathbb{Q}_{m}\rangle)\) has order not divisible by \(p\) by hypothesis. Thus \(G_{\nu}=C_{n}H_{\nu}\). By the Fong-Reynolds Theorem 9.14 of [Nav2], it follows that \(\chi\) has height zero if and only if \(\lambda^{G_{\nu}}\) has height zero, which it has, because it has degree not divisible by \(p\).

We can also prove one of the implications in Corollary C.

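To illustrate the construction in the proof of Theorem 5.1, take \(p=2\) and \(F=\mathbb{Q}(\sqrt{5})\), which has conductor \(n=5=2^{0}\cdot 5\) and satisfies \(|\mathbb{Q}_{5}:\langle F,\mathbb{Q}_{5}\rangle|=1\). In this case \(G=C_{5}\rtimes\operatorname{Gal}(\mathbb{Q}_{5}/\mathbb{Q}(\sqrt{5}))\) is the dihedral group of order \(10\), and \(\chi=\lambda^{G}\) is one of its two irreducible characters of degree \(2\); these characters have \(2\)-defect zero, hence \(2\)-height zero, and their field of values is \(\mathbb{Q}(\sqrt{5})\). This is an instance of the next corollary with \(d=5\).
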
**Corollary 5.2**.: _Suppose that \(d\neq 1\) is an odd square-free integer. Then there is a group \(G\) and a \(2\)-height zero character \(\chi\in\operatorname{Irr}(G)\) such that \(\mathbb{Q}(\chi)=\mathbb{Q}(\sqrt{d})\)._

Proof.: By considering the cyclic group of order \(4\), we may assume that \(d\neq\pm 1\) is an odd square-free integer. Let \(F=\mathbb{Q}(\sqrt{d})\). If \(d\equiv 1\) mod \(4\), then \(F\subseteq\mathbb{Q}_{|d|}\), \(|d|\) is the conductor of \(F\), and we are done by Theorem 5.1. Suppose that \(d\equiv 3\) mod \(4\). Let \(n=2^{2}|d|\). By Theorem 5.1, we only need to show that \(\langle\mathbb{Q}_{|d|},\mathbb{Q}(\sqrt{d})\rangle=\mathbb{Q}_{n}\). Since \(|\mathbb{Q}_{n}:\mathbb{Q}_{|d|}|=2\), this can only fail if \(\mathbb{Q}(\sqrt{d})\subseteq\mathbb{Q}_{|d|}\), which is not possible because the conductor of \(\mathbb{Q}(\sqrt{d})\) is \(n\). \(\square\)

We finish this section by showing that Conjecture D follows from the Alperin-McKay-Navarro conjecture [11, Conjecture B]. We recall that the Alperin-McKay-Navarro conjecture predicts, for a \(p\)-block \(B\) of a finite group \(G\) and its Brauer first main correspondent \(b\), that there is a bijection between the sets of height zero characters of \(B\) and \(b\) such that, if \(\chi\) corresponds to \(\chi^{*}\), then \(\mathbb{Q}_{n}(\chi)=\mathbb{Q}_{n}(\chi^{*})\), where \(n=|G|_{p^{\prime}}\). We care to mention that the currently accepted and most studied form of [11, Conjecture C] is more general, predicting the existence of an \(\mathcal{H}\)-equivariant bijection between height zero characters of \(B\) and \(b\), where the Galois group \(\mathcal{H}\), which contains \(\operatorname{Gal}(\mathbb{Q}_{|G|}/\mathbb{Q}_{n})\), is defined in [11, Section 2]. When we write _the Alperin-McKay-Navarro conjecture_ we refer to the statement predicting \(\mathcal{H}\)-equivariant character bijections.

**Theorem 5.3**.: _Conjecture D follows from the Alperin-McKay-Navarro conjecture._

Proof.: Let \(\chi\in\operatorname{Irr}(B)\) be of \(p\)-height zero, where \(B\) has defect group \(D\). By the Alperin-McKay-Navarro conjecture, there is \(\tau\in\operatorname{Irr}(\mathbf{N}_{G}(D))\) of height zero, in the Brauer correspondent block \(b\) of \(\mathbf{N}_{G}(D)\), such that \(\mathbb{Q}_{n}(\chi)=\mathbb{Q}_{n}(\tau)\), where \(n=|G|_{p^{\prime}}\). In particular \(c(\chi)_{p}=c(\tau)_{p}\), reasoning as in Lemma 2.2. Hence, it is enough to prove that \(\mathbb{Q}_{c(\tau)_{p}}\subseteq\mathbb{Q}_{pm}(\tau)\) with \(m=|\mathbf{N}_{G}(D)|_{p^{\prime}}\). We may then assume that \(D\triangleleft G\). In this case, we can apply Lemma 3.5. \(\square\)

## 6. The Reduction

There is one more issue that we have to solve before proving a reduction of Conjecture D to quasi-simple groups. If \(G/N\) is a \(p^{\prime}\)-group and \(\theta\in\operatorname{Irr}(N)\) is \(G\)-invariant and \(p\)-rational, it is not necessarily true that the characters of \(G\) over \(\theta\) are \(p\)-rational (even if they extend \(\theta\) and \(p\) is odd), as shown by \(p=3\) and \(\mathtt{SmallGroup}(24,4)\). We need the following.

**Theorem 6.1**.: _Suppose that \(G/N\) is a simple group of order coprime to \(p\). Let \(\theta\in\operatorname{Irr}(N)\) be \(G\)-invariant and \(p\)-rational. If \(\chi\in\operatorname{Irr}(G|\theta)\), then \(c(\chi)_{p}\leq p\)._

Proof.: Suppose first that \(G/N\) has prime order \(q\). Then this follows from Theorem B of [11]. Suppose now that \(G/N\) is perfect.
By Theorem 4.3, there is an isomorphic character triple \((G^{*},N^{*},\theta^{*})\) such that \(N^{*}=\mathbf{Z}(G^{*})\), \(\mathbb{Q}(\theta^{*})\subseteq\mathbb{Q}(\theta)\) and \(\mathbb{Q}(\chi)=\)
\(\mathbb{Q}(\chi^{*},\theta^{*})\) whenever \(\chi\in\operatorname{Irr}(G|\theta)\) and \(\chi^{*}\) corresponds to \(\chi\) under the character triple isomorphism. In particular, we have that \(G^{*}/N^{*}\) is a simple non-abelian \(p^{\prime}\)-group, and \(\theta^{*}\) is \(p\)-rational. Let \(x\in G^{*}\), and let \(\delta\in\operatorname{Irr}(N^{*}\langle x\rangle)\) be over \(\theta^{*}\). Since \(N^{*}\langle x\rangle/N^{*}\) is cyclic and \(\theta^{*}\) is \(G^{*}\)-invariant, \(\delta_{N^{*}}=\theta^{*}\). Since \(x_{p}\in N^{*}\) and \(\delta\) is linear, we have that \(\delta(x)=\delta(x_{p})\delta(x_{p^{\prime}})=\theta^{*}(x_{p})\delta(x_{p^{\prime}})\in\mathbb{Q}_{|G^{*}|_{p^{\prime}}}\). Then every \(\delta\in\operatorname{Irr}(N^{*}\langle x\rangle|\theta^{*})\) is \(p\)-rational. Since \(\chi_{N^{*}\langle x\rangle}\) is a sum of irreducible characters lying over \(\theta^{*}\) and the choice of \(x\in G^{*}\) was arbitrary, we are done.

**Conjecture 6.2**.: _Let \(\chi\in\operatorname{Irr}(G)\) be of \(p\)-height zero, where \(G\) is a quasi-simple group. Assume in addition that the \(p\)-block \(B\) containing \(\chi\) is not (virtual) Morita equivalent over an absolutely unramified complete discrete valuation ring to a \(p\)-block of any group \(H\) with \(|H:\mathbf{Z}(H)|<|G:\mathbf{Z}(G)|\). Write \(c(\chi)=p^{a}m\). Then \(\mathbb{Q}_{p^{a}}\subseteq\mathbb{Q}_{pm}(\chi)\)._

Notice that if the \(p\)-block \(B\) of \(\chi\) is virtual Morita equivalent over an absolutely unramified complete discrete valuation ring to a \(p\)-block \(\tilde{B}\), by Theorem 1.6 of [KL], there is a height zero character \(\tilde{\chi}\) in \(\tilde{B}\) and an integer \(l\) not divisible by \(p\) such that \(\mathbb{Q}_{l}(\chi)=\mathbb{Q}_{l}(\tilde{\chi})\). By Lemma 2.2, we have that \(c(\chi)_{p}=c(\tilde{\chi})_{p}=p^{a}\), and therefore \(\mathbb{Q}_{p^{a}}\subseteq\mathbb{Q}_{pl}(\chi)\) if and only if \(\mathbb{Q}_{p^{a}}\subseteq\mathbb{Q}_{pl}(\tilde{\chi})\). Hence, if \(c(\chi)_{p^{\prime}}=m\) and \(c(\tilde{\chi})_{p^{\prime}}=m_{1}\), then \(\mathbb{Q}_{p^{a}}\subseteq\mathbb{Q}_{pm}(\chi)\) if, and only if, \(\mathbb{Q}_{p^{a}}\subseteq\mathbb{Q}_{pm_{1}}(\tilde{\chi})\), by Lemma 2.1. We will use this argument below.

**Theorem 6.3**.: _Let \(G\) be a finite group, and let \(p\) be a prime. Let \(\chi\in\operatorname{Irr}(G)\) be of \(p\)-height zero, and write \(c(\chi)=p^{a}m\), where \(a\geq 0\) and \(p\) does not divide \(m\). If Conjecture 6.2 is true, then \(\mathbb{Q}_{p^{a}}\subseteq\mathbb{Q}_{pm}(\chi)\)._

Proof.: We argue by induction on \(|G:\mathbf{Z}(G)|\). We may assume that \(a\geq 2\), because otherwise the statement is trivially satisfied. Let \(n=|G|_{p^{\prime}}\). By Lemma 2.1, it is enough to show that \(\mathbb{Q}_{p^{a}}\subseteq\mathbb{Q}_{pn}(\chi)\).

Step 1. If \(N\triangleleft G\) is a proper normal subgroup, then we may assume that \(\chi_{N}=e\theta\) for some \(\theta\in\operatorname{Irr}(N)\), with \(c(\theta)_{p}<c(\chi)_{p}\).

Let \(T\) be the stabilizer of \(\theta\) in \(G\), let \(\psi\in\operatorname{Irr}(T|\theta)\) be the Clifford correspondent of \(\chi\) over \(\theta\). By Theorem 3.2, we have that \(\mathbb{Q}_{pn}(\psi)=\mathbb{Q}_{pn}(\chi)\) and \(c(\chi)_{p}=p^{a}=c(\psi)_{p}\). Assume that \(T<G\).
Then \(|T:\mathbf{Z}(T)|<|G:\mathbf{Z}(G)|\), and by induction, if \(m_{1}\) is the \(p^{\prime}\)-part of the conductor of \(\psi\), we have that \(\mathbb{Q}_{p^{a}}\subseteq\mathbb{Q}_{pm_{1}}(\psi)\subseteq\mathbb{Q}_{pn}(\psi)=\mathbb{Q}_{pn}(\chi)\). By Lemma 2.1, we deduce that \(\mathbb{Q}_{p^{a}}\subseteq\mathbb{Q}_{pm}(\chi)\). Hence, we may assume that \(\chi_{N}=e\theta\). In particular, \(\mathbb{Q}(\theta)\subseteq\mathbb{Q}(\chi)\), and \(c(\theta)\) divides \(c(\chi)\). If \(c(\theta)_{p}=c(\chi)_{p}\), and \(m_{2}\) is the \(p^{\prime}\)-part of the conductor of \(\theta\), then by induction \(\mathbb{Q}_{p^{a}}\subseteq\mathbb{Q}_{pm_{2}}(\theta)\subseteq\mathbb{Q}_{pm_{2}}(\chi)\). Again by Lemma 2.1, we deduce what we want.

Step 2. \(G\) does not have a normal subgroup of index \(p\).

Otherwise, let \(N\) be a normal subgroup of \(G\) with \(G/N\) of order \(p\). By Step 1, \(\chi_{N}\) is homogeneous. Since \(G/N\) is cyclic we can write \(\chi_{N}=\theta\in\operatorname{Irr}(N)\). Suppose that \(G=NH\), \(M=N\cap H\) and there exists some \(H\)-invariant \(\varphi\in\operatorname{Irr}(M)\) of \(p\)-height zero such that \([\theta_{M},\varphi]\) is not divisible by \(p\) and \(\mathbb{Q}_{pn}(\varphi)\subseteq\mathbb{Q}_{pn}(\theta)\). Let \(\xi\in\operatorname{Irr}(H)\) be an extension of \(\varphi\). By Lemma 3.7, we have that \(\xi\) has \(p\)-height zero. By Lemma 3.9(c), we have that \(\mathbb{Q}_{p}(\xi,\theta)=\mathbb{Q}_{p}(\chi,\varphi)\). Notice that \(\mathbb{Q}_{pn}(\chi)\subseteq\mathbb{Q}_{pn}(\chi,\varphi)\subseteq\mathbb{Q}_{pn}(\chi,\theta)=\mathbb{Q}_{pn}(\chi)\). Therefore \(\mathbb{Q}_{pn}(\chi)=\mathbb{Q}_{pn}(\chi,\varphi)=\mathbb{Q}_{pn}(\theta,\xi)\). Also, \(\mathbb{Q}_{pn}(\xi)\subseteq\mathbb{Q}_{pn}(\chi)\). Write \(c(\theta)_{p}=p^{b}\) and \(c(\xi)_{p}=p^{c}\). We have that \(\mathbb{Q}(\xi)\subseteq\mathbb{Q}_{pn}(\xi)\subseteq\mathbb{Q}_{pn}(\chi)\subseteq\mathbb{Q}_{p^{a}n}\). Therefore \(c\leq a\). Notice that if \(\theta\) and \(\xi\) are \(p\)-rational, then \(\mathbb{Q}(\chi)\subseteq\mathbb{Q}_{pn}(\chi)=\mathbb{Q}_{pn}(\chi,\varphi)=\mathbb{Q}_{pn}(\theta,\xi)=\mathbb{Q}_{pn}\), but we are assuming that \(a\geq 2\). Hence \(b,c\geq 1\). Now \(\mathbb{Q}_{pn}(\theta)\subseteq\langle\mathbb{Q}_{p^{b}n},\mathbb{Q}_{p}\rangle\), and \(\mathbb{Q}_{pn}(\xi)\subseteq\langle\mathbb{Q}_{p^{c}n},\mathbb{Q}_{p}\rangle\). Thus

\[\mathbb{Q}(\chi)\subseteq\mathbb{Q}_{pn}(\chi)=\langle\mathbb{Q}_{pn}(\theta),\mathbb{Q}_{pn}(\xi)\rangle\subseteq\langle\mathbb{Q}_{p^{b}n},\mathbb{Q}_{p^{c}n},\mathbb{Q}_{p}\rangle\subseteq\mathbb{Q}_{p^{d}n}\,,\]

where \(d=\max(b,c)\). Hence \(p^{a}\leq p^{d}\). Since \(b<a\) by Step 1 and \(c\leq a\), we conclude that \(d=c=a\). If \(|H:\mathbf{Z}(H)|<|G:\mathbf{Z}(G)|\) then, by induction, we have that \(\mathbb{Q}_{p^{a}}\subseteq\mathbb{Q}_{pn}(\xi)\subseteq\mathbb{Q}_{pn}(\chi)\), and we are done (using Lemma 2.1). By Lemma 3.8, we may assume that the defect group \(D_{0}\) of the block of \(\theta\) is normal in \(G\). By Lemma 3.10, we are done in this case.

Step 3. \(G\) does not have a proper normal subgroup of index not divisible by \(p\).

Otherwise, let \(N\triangleleft G\) such that \(G/N\) is simple of order not divisible by \(p\). By Step 1, \(\chi_{N}=e\theta\). Recall that \(c(\chi)_{p}\geq p^{2}\). Hence, by Theorem 6.1, we have that \(c(\theta)_{p}\geq p\) in this case. By Lemma 4.2(ii) of [NT], we conclude that \(c(\chi)_{p}=c(\theta)_{p}\), contradicting Step 1.

Final Step.
Let \(N\) be a maximal normal subgroup of \(G\). By Steps 2 and 3, \(G/N\) is simple non-abelian of order divisible by \(p\). By Step 1, write \(\chi_{N}=e\theta\), where \(c(\theta)_{p}=p^{b}\) and \(b<a\). By Theorem 4.3, there is a quasi-simple group \(G^{*}\) and a \(p\)-height zero character \(\chi^{*}\) of \(G^{*}\) such that \(\mathbb{Q}(\chi)=\mathbb{Q}(\chi^{*},\theta)\). Write \(c(\chi^{*})=p^{c}k\), where \(k\) is not divisible by \(p\). Then the conductor of the field \(\mathbb{Q}(\chi^{*},\theta)=\langle\mathbb{Q}(\chi^{*}),\mathbb{Q}(\theta)\rangle\) is the least common multiple of the conductors of \(\chi^{*}\) and \(\theta\). In particular, its \(p\)-part is \(p^{\max(c,b)}\). As \(c(\chi)=c(\mathbb{Q}(\chi^{*},\theta))\), we have that \(a=\max(c,b)\). Since \(b<a\), we have that \(a=c\). Thus \(c(\chi)_{p}=c(\chi^{*})_{p}\). If the \(p\)-block \(B^{*}\) of \(\chi^{*}\) is virtual Morita equivalent over an absolutely unramified complete discrete valuation ring to a \(p\)-block \(\tilde{B}\) of a group \(H\) with \(|H:\mathbf{Z}(H)|<|G:\mathbf{Z}(G)|\), then \(c(\chi^{*})_{p}=c(\tilde{\chi})_{p}\) for some \(\tilde{\chi}\in\tilde{B}\). As explained before the statement of this theorem, we would be done in this case, using induction. Therefore, by Conjecture 6.2, we have that \(\mathbb{Q}_{p^{a}}\subseteq\mathbb{Q}_{pk}(\chi^{*})\subseteq\mathbb{Q}_{pk}(\chi)\). By using Lemma 2.1, the proof of the theorem follows. ## 7. Theorem A for quasisimple groups The aim of this section is to prove Conjecture 6.2 in the case that \(p=2\), see Theorem 7.2 below. In the light of Theorem 6.3 (and Theorem 5.1), this will complete the proof of Theorem A (and Theorem B). The following is useful when working on extensions of quasi-simple groups. **Theorem 7.1**.: _Suppose that \(G/N\) is abelian. Write \(n=|G|_{2^{\prime}}\). Suppose that \(\chi\in\operatorname{Irr}(G)\) has 2-height zero and suppose that \(\mathbb{Q}_{2^{a}}\subseteq\mathbb{Q}_{n}(\chi)\), where \(c(\chi)_{2}=2^{a}\). Let \(\theta\in\operatorname{Irr}(N)\) be an irreducible constituent of \(\chi_{N}\), and write \(c(\theta)_{2}=2^{b}\). Then \(\mathbb{Q}_{2^{b}}\subseteq\mathbb{Q}_{n}(\theta)\)._ Proof.: We argue by induction on \(|G:N|\). We may assume that \(G/N\) has prime index. By Theorem 3.2, we may assume that \(\chi_{N}=\theta\) is irreducible. By Lemma 4.2.(ii) of [NT], we may assume that \(G/N\) has order 2. Let \(D\) be a defect group of the block of \(\chi\), such that \(DN=G\) and \(D_{0}=D\cap N\) is a defect group of the block of \(\theta\). Let \(H=\mathbf{N}_{G}(D_{0})\) and \(M=H\cap N\). By Lemma 3.8, there is \(\varphi\in\operatorname{Irr}(M)\) of height zero, \(\mathbb{Q}_{n}(\varphi)\subseteq\mathbb{Q}_{n}(\theta)\) with an extension \(\xi\in\operatorname{Irr}(H)\) such that \(\mathbb{Q}(\chi,\varphi)=\mathbb{Q}(\theta,\xi)\). Thus \(\mathbb{Q}_{n}(\chi,\varphi)=\mathbb{Q}_{n}(\theta,\xi)\). Hence \(\mathbb{Q}_{2^{a}}\subseteq\mathbb{Q}_{n}(\theta,\xi)\). We may assume that \(b\geq 2\). Suppose that \(c(\xi)_{2}=2^{c}\). We know that \(\mathbb{Q}_{2^{c}}\subseteq\mathbb{Q}_{n}(\xi)\) by Lemma 3.10. If \(\xi\) is 2-rational, then we are done. So we may assume that \(c\geq 2\), and thus \(i\in\mathbb{Q}_{n}(\xi)\). Then \(\mathbb{Q}_{n}(i)\subseteq\mathbb{Q}_{n}(\xi)\cap\mathbb{Q}_{n}(\theta)\). 
Since \(\mathbb{Q}_{n}(\xi),\mathbb{Q}_{n}(\theta)\subseteq\mathbb{Q}_{|G|}\), and \(\operatorname{Gal}(\mathbb{Q}_{|G|}/\mathbb{Q}_{n}(i))\) is a cyclic 2-group, we then have that \(\mathbb{Q}_{n}(\xi)\subseteq\mathbb{Q}_{n}(\theta)\) or \(\mathbb{Q}_{n}(\theta)\subseteq\mathbb{Q}_{n}(\xi)\). Since \(\mathbb{Q}_{2^{a}}\subseteq\mathbb{Q}_{n}(\theta,\xi)\), we may assume the second. Then \(\mathbb{Q}_{2^{a}}\subseteq\mathbb{Q}_{n}(\xi)\). Thus \(a\leq c\). If \(H<G\), then by induction \(\mathbb{Q}_{2^{c}}\subseteq\mathbb{Q}_{n}(\varphi)\subseteq\mathbb{Q}_{n}(\theta)\). Thus we may assume that \(D_{0}\triangleleft G\). But in this case, we are done by Lemma 3.5. **Theorem 7.2**.: _Let \(\chi\in\operatorname{Irr}(G)\) of 2-height zero, where \(G\) is a quasi-simple group. Assume in addition that the \(2\)-block \(B\) containing \(\chi\) is not (virtual) Morita equivalent over an absolutely unramified complete discrete valuation ring to a \(2\)-block of any group \(H\) with \(|H:\mathbf{Z}(H)|<|G:\mathbf{Z}(G)|\). If \(c(\chi)=2^{a}m\), where \(m\) is odd, then \(\mathbb{Q}_{2^{a}}\subseteq\mathbb{Q}_{m}(\chi)\)._ **Theorem 7.3**.: _Theorem 7.2 is true in the case \(G/\mathbf{Z}(G)\) is a simple group of Lie type in characteristic \(2\)._ Proof.: In the case \(S:=G/\mathbf{Z}(G)\) is isomorphic to \(\mathsf{A}_{5}\), \(\mathsf{A}_{6}\), \(\mathsf{A}_{8}\), \(\mathrm{SL}_{3}(2)\), \(\mathrm{SU}_{4}(2)\), \(\mathrm{Sp}_{6}(2)\), \(\mathrm{PSL}_{3}(4)\), \(\mathrm{PSU}_{6}(2)\), \(\Omega_{8}^{+}(2)\), \({}^{2}\!B_{2}(8)\), \(G_{2}(4)\), \(F_{4}(2)\), \({}^{2}\!F_{4}(2)^{\prime}\), or \({}^{2}\!E_{6}(2)\), the statement is checked using [GAP]. Hence we may assume that \(S\) is not isomorphic to any of these simple groups. This implies that \(G\) is a quotient (by a central subgroup) of \(\mathcal{G}^{F}\), where \(\mathcal{G}\) is a simple, simply connected, algebraic group in characteristic \(2\) and \(F:\mathcal{G}\to\mathcal{G}\) a Steinberg endomorphism. It follows from the main result of [Hum] that any 2-block \(B\) of \(G\) is either of defect \(0\), or of maximal defect. Moreover, in the former case \(\chi\in\operatorname{Irr}(B)\) is just the Steinberg character; in particular it is rational and so we are done. In the latter case, \(\chi(1)\) is odd, and the statement follows from [NT, Theorem A1]. **Theorem 7.4**.: _Theorem 7.2 is true in the case \(G/\mathbf{Z}(G)\) is an alternating or sporadic simple group._ Proof.: In the case \(S:=G/\mathbf{Z}(G)\) is isomorphic to \(\mathsf{A}_{n}\) with \(5\leq n\leq 8\) or one of \(26\) sporadic simple groups, the statement is checked using [GAP]. Hence we may assume that \(S=\mathsf{A}_{n}\) with \(n\geq 9\). (a) First we consider the case \(G=S\). If \(\chi\) extends to \(\mathsf{S}_{n}\) then \(\chi\) is rational. Otherwise, [JK, Theorem 2.5.13] shows that the \(\mathsf{S}_{n}\)-character lying above \(\chi\) is labeled by a self-associated partition of \(n\), with hook lengths along the main diagonal of the Young diagram being the \(k\geq 1\) odd integers \(2h_{1}+1>2h_{2}+1>\ldots>2h_{k}+1\), in which case the only possible irrational values of \(\chi\) are \[\frac{(-1)^{(n-k)/2}\pm\sqrt{(-1)^{(n-k)/2}\prod_{i=1}^{k}(2h_{i}+1)}}{2}.\] Since \(n=\sum_{i=1}^{k}(2h_{i}+1)\), we see that \((-1)^{(n-k)/2}=1\) if and only if \(\prod_{i=1}^{k}(2h_{i}+1)\equiv 1\pmod{4}\). It follows that \(\chi\) is \(2\)-rational, and we are done in this case. (b) It remains to handle the case \(G=2\mathsf{A}_{n}\). 
We change the notation, and let \(\tilde{B}\) be the \(2\)-block of \(G\) containing \(\chi\). We also embed \(G\) in a double cover \(\tilde{G}=2\mathsf{S}_{n}\) of \(\mathsf{S}_{n}\). By [Den, Lemma 2.2], \(\tilde{B}\) contains a unique block \(B\) of \(G/\mathbf{Z}(G)\cong\mathsf{A}_{n}\). Similarly, by [Den, Lemma 2.1], \(\tilde{B}\) is covered by a unique block \(\tilde{B}_{S}\) of \(\tilde{G}\), and \(B\) is covered by a unique block \(B_{S}\) of \(\tilde{G}/\mathbf{Z}(G)\cong\mathsf{S}_{n}\). All these blocks \(\tilde{B}\), \(\tilde{B}_{S}\), \(B\), and \(B_{S}\) have the same weight \(w\geq 0\). If \(\chi\) is trivial on \(\mathbf{Z}(G)\), then we are done by (a). Hence we may assume that \(\chi\) is a spin character of \(G\). Since \(\tilde{B}\) contains a spin character of height zero, by [Den, Proposition 3.1] we must have that \(w\in\{0,1\}\) (and \(\chi\) is the unique spin character of height zero in \(\tilde{B}\)). The defect groups \(D\) of \(B\) are the Sylow \(2\)-subgroups of \(\mathsf{A}_{2w}\), and hence \(D=1\). This implies by [Den, Lemma 2.2] that the defect groups \(\tilde{D}\) of \(\tilde{B}\) are precisely \(\mathbf{Z}(G)\). Thus we are in the case of central defect, and \(\chi\) has relative defect zero with respect to the faithful linear character \(\mu\) of \(\mathbf{Z}(G)\). Now [Nav0, Theorem 2.1] gives an explicit bijection between defect zero characters of \(G/\mathbf{Z}(G)\) and relative defect zero characters of \(G\). In particular, this bijection implies that \(\chi(g)=0\) whenever the \(2\)-part \(g_{2}\) of \(g\) is not in \(\mathbf{Z}(G)\). We now show that \(\chi\) is \(2\)-rational. Indeed, write \(g=g_{2}g_{2^{\prime}}\) as the product of the \(2\)-part and the \(2^{\prime}\)-part of \(g\). By the above, \(\chi(g)=0\) if \(g_{2}\notin\mathbf{Z}(G)\). If \(g_{2}\in\mathbf{Z}(G)\), then \(\chi(g)=\mu(g_{2})\chi(g_{2^{\prime}})=\pm\chi(g_{2^{\prime}})\) is \(2\)-rational. Hence the statement follows in this case as well. **Theorem 7.5**.: _Theorem 7.2 is true in the case \(G/\mathbf{Z}(G)\) is a simple classical group in odd characteristic._ Proof.: In the case \(S:=G/\mathbf{Z}(G)\) is isomorphic to \(\operatorname{PSU}_{4}(3)\), \(\operatorname{PSU}_{6}(2)\), \(\Omega_{7}(3)\), or \(G_{2}(3)\), the statement is checked using [GAP]. (Note that in Theorem 7.2 we are dealing with a \(2\)-block \(B\) of \(G\), and so we may assume \(\mathbf{O}_{2^{\prime}}(\mathbf{Z}(G))\) is cyclic. Hence in the case of covers of \(\operatorname{PSU}_{4}(3)\), it suffices to handle the two covers \(12_{1}\cdot\operatorname{PSU}_{4}(3)\) and \(12_{2}\cdot\operatorname{PSU}_{4}(3)\), which are given in [GAP].) Hence we may assume that \(S\) is not isomorphic to any of these simple groups, as well as any Lie type group in characteristic \(2\). This implies that \(G\) is a quotient (by a central subgroup) of \(\mathcal{G}^{F}\), where \(\mathcal{G}\) is a simple, simply connected, algebraic group in odd characteristic \(r\neq 2\) and \(F:\mathcal{G}\to\mathcal{G}\) a Steinberg endomorphism. Without any loss, we may replace \(G\) by \(\mathcal{G}^{F}\). Let \((\mathcal{G}^{*},F^{*})\) be dual to \((\mathcal{G},F)\); in particular, \(\mathcal{G}^{*}\) is of adjoint type, and let \(G^{*}:=(\mathcal{G}^{*})^{F^{*}}\). By the main result of [BrM], the set \(\operatorname{Irr}(B)\) of complex characters in the \(2\)-block \(B\) containing \(\chi\) is contained in \(\mathcal{E}_{2}(G,s)\) for some semisimple element \(s\in G^{*}\) of odd order.
Suppose that \(s\) is not quasi-isolated (in the sense of [Bon]). Then, by the main result of [BoR], \(B\) is Morita equivalent to a \(2\)-block of a group \(H\) with \(|H:\mathbf{Z}(H)|<|G:\mathbf{Z}(G)|\). Moreover, by [FK, Proposition 4.2] this Morita equivalence descends to an absolutely unramified discrete valuation ring, contrary to our assumption. Hence we may assume that \(s\) is quasi-isolated. Assume in addition that \(\mathcal{G}\) is not of type \(A\). By the classification result of Bonnafe [Bon, Table 2], the odd-order assumption on \(s\) implies that \(s=1\). In this case, by [CE, Theorem 21.14], \(\mathcal{E}_{2}(G,s)\) is just the set of irreducible characters in the principal \(2\)-block \(B_{0}\) of \(G\). In such a case, \(\chi(1)\) is odd, and the statement follows from [NT, Theorem A1]. It remains to consider the case \(\mathcal{G}\) is of type \(A\). The same arguments as in the preceding paragraph allow us to assume that \(s\neq 1\), and so \(s\) is **not** isolated, see [Bon, Table 2]. The main result of [BoDR] together with [FK, Proposition 4.2] now shows that \(B\) is again Morita equivalent over an absolutely unramified complete discrete valuation ring to a \(2\)-block of a group \(H\) with \(|H:\mathbf{Z}(H)|<|G:\mathbf{Z}(G)|\), contrary to our assumption. **Theorem 7.6**.: _Theorem 7.2 is true in the case \(G/\mathbf{Z}(G)\) is a simple exceptional group in odd characteristic._ Proof.: We keep the same notation from Theorem 7.5. That is, \(G\) is a quotient (by a central subgroup) of \(\mathcal{G}^{F}\), where \(\mathcal{G}\) is a simple, simply connected, algebraic group in odd characteristic \(r\neq 2\) and \(F:\mathcal{G}\to\mathcal{G}\) is a Steinberg endomorphism. Arguing as in Theorem 7.5, we can also assume that \(s\) is isolated. By Lemma 2.1, we observe that it suffices to prove that \(\mathbb{Q}_{c(\psi)_{2}}\subset\mathbb{Q}_{|G|_{2^{\prime}}}(\psi)\) for every height zero character \(\psi\in\operatorname{Irr}_{0}(B)\). We may also assume \(c(\psi)_{2}\geq 4\) since otherwise \(\mathbb{Q}_{c(\psi)_{2}}=\mathbb{Q}\) and the statement is trivially true. (a) Let us first assume that the defect group of \(B\) has order \(|G^{*}:\mathbf{C}_{G^{*}}(s)|_{2}\) and \(\mathbf{C}_{\mathcal{G}^{*}}^{\circ}(s)\) has only components of classical type. Then \(\psi\in\mathcal{E}(G,st)\) for some element \(t\in G^{*}\) which is \(2\)-central in the group \(H:=\mathbf{C}_{\mathcal{G}^{*}}^{\circ}(s)^{F^{*}}\), see [Mal, (2.1)]. If \(\mathbf{Z}(\mathcal{G})\neq 1\) then we let \(\mathcal{G}\lhd\tilde{\mathcal{G}}\) be a regular embedding with dual surjective morphism \(\iota^{*}:\tilde{\mathcal{G}}^{*}\to\mathcal{G}^{*}\). Otherwise set \(\tilde{\mathcal{G}}:=\mathcal{G}\). There exists a semisimple element \(\tilde{s}\in\tilde{G}^{*}:=(\tilde{\mathcal{G}}^{*})^{F^{*}}\) of \(2^{\prime}\)-order such that \(\iota^{*}(\tilde{s})=s\) and \(\tilde{t}\in\mathbf{C}_{\tilde{G}^{*}}(\tilde{s})_{2}\) with \(\iota^{*}(\tilde{t})=t\). We let \(\chi\in\mathcal{E}(\tilde{G},\tilde{s}\tilde{t})\) be a character covering \(\psi\). By [GM, Theorem 4.7.9], [GM, Proposition 4.5.5] and using that \({\bf C}_{\tilde{G}^{*}}(\tilde{s}\tilde{t})\) has only components of classical type, we have \({\mathbb{Q}}(\chi)\subset{\mathbb{Q}}_{{\rm o}(\tilde{s}\tilde{t})}\) and so \(c(\chi)_{2}\leq{\rm o}(\tilde{t})\). (a1) Assume first that \(|\chi(1):\psi(1)|_{2}>1\). Then \(G=E_{7}(q)\) and \(\chi=(\psi^{\prime})^{\tilde{G}}\) for some \(\psi^{\prime}\in{\rm Irr}(G{\bf Z}(\tilde{G})\mid\psi)\).
In this case, \(c(\chi)_{2}=c(\psi^{\prime})_{2}\) by Theorem 3.2 and \({\mathbb{Q}}(\chi)\subset{\mathbb{Q}}(\psi^{\prime})\). Since \(\psi\in{\rm Irr}(G)\) has height zero, it follows that \(\psi_{{\bf Z}(G)}\) is trivial by [Ruh, Lemma 8.7]. Hence, we can choose \(\chi\in{\rm Irr}(\tilde{G}\mid\psi)\) with the additional property that \(\chi\) is trivial on \({\bf Z}(\tilde{G})\). A consequence of this choice is that \(c(\psi^{\prime})=c(\psi)\) and \({\mathbb{Q}}(\psi^{\prime})={\mathbb{Q}}(\psi)\). Since \(\chi\) is the unique character in its \({\rm Irr}(\tilde{G}/G)\)-orbit which is trivial on \({\bf Z}(\tilde{G})\), it follows that any Galois automorphism that stabilizes the \({\rm Irr}(\tilde{G}/G)\)-orbit of \(\chi\) also stabilizes \(\chi\). Hence, [GM, Theorem 4.7.9] and [GM, Proposition 4.5.5] show that \({\mathbb{Q}}(\chi)\subset{\mathbb{Q}}_{{\rm o}(\tilde{s}){\rm o}(t)}\). Recall that \(\psi\in{\mathcal{E}}(G,st)\) has height zero and the defect group of \(B\) has order \(|G^{*}:{\bf C}_{G^{*}}(s)|_{2}\). Assume that \(e\) is an integer coprime to \({\rm o}(t)\) such that \(t\) is \(H\)-conjugate to \(t^{e}\). We claim that \(t=t^{e}\). Arguing as in [Mal, Theorem 5.9], we see that \(t\) lies in the centralizer of a Sylow \(d\)-torus \({\mathcal{S}}_{d}\) of \({\bf C}_{{\mathcal{G}}^{*}}^{\rm o}(s)\) for \(d\) the order of \(q\) modulo \(4\). By the proof of [BrR, Corollary 2.4], the Sylow \(2\)-subgroup \(W_{2}\) of the Weyl group \(W:={\bf N}_{H}({\mathcal{S}}_{d})/{\bf C}_{H}({\mathcal{S}}_{d})\) is self-normalizing. Moreover, since \(t\) is \(2\)-central, its centralizer \(W(t)\) in \(W\) contains a Sylow \(2\)-subgroup \(W_{2}\) of \(W\). Since \({\bf N}_{H}({\mathcal{S}}_{d})\) controls \(H\)-fusion in \({\bf C}_{H}({\mathcal{S}}_{d})\) by [Mal, Proposition 5.11], it follows that \(t\) and \(t^{e}\) are conjugate by an element \(w\in{\bf N}_{W}(W(t))\). In particular, by conjugacy of Sylow subgroups in \(W(t)\) we can assume that \(w\in{\bf N}_{W}(W_{2})=W_{2}\) and so \(t=t^{e}\) as claimed. This implies that \({\mathbb{Q}}_{{\rm o}(t)}\subset{\mathbb{Q}}_{o(s)}(\psi)\) by [GM, Proposition 3.3.15] and thus \(c(\psi)_{2}\geq{\rm o}(t)_{2}\) by Lemma 2.1. On the other hand, \[c(\chi)_{2}=c(\psi)_{2}\geq{\rm o}(t)\geq c(\chi)_{2},\] and so \({\rm o}(t)_{2}=c(\psi)_{2}\). Hence, \({\mathbb{Q}}_{c(\psi)_{2}}\subset{\mathbb{Q}}_{o(s)}(\psi)\). (a2) Assume now that \(|\chi(1):\psi(1)|_{2}=1\). In this case, \(\tilde{t}\) is \(2\)-central in \({\bf C}_{\tilde{G}^{*}}(\tilde{s})\). The argument from the first case now shows that \({\mathbb{Q}}_{c(\chi)_{2}}\subset{\mathbb{Q}}_{{\rm o}(\tilde{t})_{2}}\subset{\mathbb{Q}}_{o(\tilde{s})}(\chi)\). Hence, the claim follows in this case from Theorem 7.1. (b) Suppose now that \(s=1\), i.e. that \(B\) is a unipotent block. We can assume that \(B\) has non-maximal defect since otherwise the statement follows from [NT, Theorem A.1]. By Lemma 3.5 we can also assume that \(B\) has non-central defect. In this case, \(B\) is one of the blocks considered in [Ruh, Lemma 7.1]. In case (i) of [Ruh, Lemma 7.1], the defect group of \(B\) is dihedral, so every character in \({\rm Irr}_{0}(B)\) is \(2\)-rational by [Sam, Theorem 8.1]. Hence, the claim holds. In case (ii), \(G=E_{8}(q)\) and the height zero characters were explicitly described in [Ruh, Lemma 7.4].
It follows from this description that \({\rm Irr}_{0}(B)\subset\cup_{t}{\mathcal{E}}(G,t)\), where \(t\in G^{*}\) runs over elements with \(t^{2}=1\), and all height zero characters of \({\rm Irr}_{0}(B)\) have \({\bf Z}(G)\) in their kernel. Now [GM, Theorem 4.7.9] and [GM, Proposition 4.5.5] show that \(\mathbb{Q}(\chi)\) is a cyclotomic field or \(\mathbb{Q}(\chi)\subset\mathbb{Q}(\sqrt{r})\) which in both cases implies the statement. (c) An analysis of the tables in [KM1] shows that in the remaining cases \(G=E_{8}(q)\) and \(\mathbf{C}_{\mathcal{G}^{*}}(s)\) is of type \(E_{6}A_{2}\). Let \(G(s)\) be the \(F\)-fixed points of the connected reductive group in duality with \(\mathbf{C}_{\mathcal{G}^{*}}(s)\). For \(t\in\mathbf{C}_{G^{*}}(s)_{2}\) let \(\psi_{G,st}:\mathcal{E}(G,st)\to\mathcal{E}(\mathbf{C}_{G^{*}}(st),1)\) be Digne-Michel's unique Jordan decomposition for groups with connected center as in [GM, Theorem 4.7.1]. Moreover, note that the centralizer of \(st\) in \(\mathbf{C}_{\mathcal{G}^{*}}(s)\) is connected. Hence, there exists a bijection \(\psi_{G(s),st}:\mathcal{E}(G(s),st)\to\mathcal{E}(\mathbf{C}_{G^{*}}(st),1)\) which can be uniquely determined from Digne-Michel's unique Jordan decomposition in a regular embedding of \(G(s)\). We have a bijection \(\mathcal{J}:\mathcal{E}_{2}(G,s)\to\mathcal{E}_{2}(G(s),s)\) which is the union of the bijections \(\psi_{G(s),st}^{-1}\circ\psi_{G,st}\) with \(t\in\mathbf{C}_{G^{*}}(s)_{2}\), see [Ruh, Lemma 2.3]. By [GM, Theorem 4.7.9] and the construction of \(\mathcal{J}\) it follows that \(\mathcal{J}\) is \(\operatorname{Gal}(\mathbb{Q}_{|G|}/\mathbb{Q}_{o(s)})\)-equivariant. Moreover, by [Ruh, Proposition D] there exists a bijection \(c\mapsto b\) between blocks contained in \(\mathcal{E}_{2}(G(s),s)\) and the blocks contained in \(\mathcal{E}_{2}(G,s)\) such that \(\mathcal{J}(\operatorname{Irr}_{0}(b))=\operatorname{Irr}_{0}(c)\). From this it follows that \(\mathbb{Q}_{o(s)}(\mathcal{J}(\chi))=\mathbb{Q}_{o(s)}(\chi)\) and \(c(\chi)_{2}=c(\mathcal{J}(\chi))_{2}\). Hence, it suffices to consider the height zero characters of the unipotent blocks of \(G(s)\). However, by [CE, Theorem 17.7] the unipotent blocks of \(G(s)\) are isomorphic in a natural way to the blocks of \(G(s)_{\text{sc}}\), the \(F\)-fixed points of the simply connected covering of \(G(s)\). By what we have established about unipotent blocks, \(\mathbb{Q}_{c(\mathcal{J}(\chi))_{2}}\subset\mathbb{Q}_{|G|_{2^{\prime}}}( \mathcal{J}(\chi)_{2})\). Therefore, \(\mathbb{Q}_{c(\chi)_{2}}\subset\mathbb{Q}_{|G|_{2^{\prime}}}(\psi)\) for all height zero characters \(\psi\). We finish this paper proving another corollary of Theorem A. For an integer \(e\geq 1\), let \(\sigma_{e}\) be the Galois automorphism in \(\operatorname{Gal}(\mathbb{Q}^{\text{ab}}/\mathbb{Q})\) fixing \(2^{\prime}\)-roots of unity and sending \(\xi\) to \(\xi^{1+2^{e}}\), where \(e\geq 1\). If \(G\) is any finite group, then \[\mathcal{G}=\operatorname{Gal}(\mathbb{Q}_{|G|}/\mathbb{Q}_{|G|_{2^{\prime}}} )=\langle\tau_{1},\tau_{2}\rangle,\] where \(\tau_{i}\) is the restriction of \(\sigma_{i}\) to \(\mathbb{Q}_{|G|}\). Notice that a character \(\chi\in\operatorname{Irr}(G)\) is \(2\)-rational if, and only if, \(\chi\) is \(\mathcal{G}\)-fixed. The set of \(2\)-height zero characters fixed under the action of \(\langle\sigma_{1}\rangle\) has been recently studied in connection with the number of generators of \(2\)-defect groups (see [RSV, NRSV, Val1]). We have the following. 
**Theorem 7.7**.: _Let \(\chi\in\operatorname{Irr}(G)\) of 2-height zero. Then \(\chi\) is 2-rational if, and only if, \(\chi\) is \(\sigma_{1}\)-fixed._ Proof.: Let \(m=|G|_{2^{\prime}}\). If \(\chi\) is 2-rational, then \(\mathbb{Q}(\chi)\subseteq\mathbb{Q}_{m}\), and \(\sigma_{1}\) fixes \(\mathbb{Q}_{m}\) pointwise because it fixes all \(2^{\prime}\)-roots of unity; hence \(\chi\) is \(\sigma_{1}\)-fixed. Conversely, suppose that \(\chi\) is \(\sigma_{1}\)-fixed. Then \(\mathbb{Q}_{m}(\chi)\) is also fixed by \(\sigma_{1}\). If \(\chi\) is not 2-rational, then \(i\in\mathbb{Q}_{m}(\chi)\) by Theorem A. However, \(\sigma_{1}(i)=i^{3}\neq i\), a contradiction. Theorem 7.7 is not true for characters which do not have height zero. The smallest example is an irreducible character \(\chi\) of degree \(2\) of a semidihedral group of order \(16\) with field of values \(\mathbb{Q}(\chi)=\mathbb{Q}(\sqrt{-2})\); a brief verification is sketched below.
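For the reader's convenience, here is a short verification of this last example. The presentation of the semidihedral group used below is the standard one; the computation is not taken from the references and should be read only as a sketch. Let \(G=\langle x,y\mid x^{8}=y^{2}=1,\ yxy^{-1}=x^{3}\rangle\) be semidihedral of order \(16\), let \(\zeta\) be a primitive \(8\)th root of unity, and let \(\lambda\in\operatorname{Irr}(\langle x\rangle)\) with \(\lambda(x)=\zeta\). Since \(\lambda\neq\lambda^{y}\), the induced character \(\chi=\lambda^{G}\) is irreducible of degree \(2\), vanishes outside \(\langle x\rangle\), and satisfies \[\chi(x^{k})=\zeta^{k}+\zeta^{3k},\qquad\text{in particular}\qquad\chi(x)=\zeta+\zeta^{3}=\sqrt{-2}.\] Hence \(\mathbb{Q}(\chi)=\mathbb{Q}(\sqrt{-2})\), which has conductor \(8\), so \(\chi\) is not \(2\)-rational. On the other hand, \(\sigma_{1}\) sends \(\zeta\) to \(\zeta^{3}\), and \[\sigma_{1}(\chi(x^{k}))=\zeta^{3k}+\zeta^{9k}=\zeta^{3k}+\zeta^{k}=\chi(x^{k}),\] so \(\chi\) is \(\sigma_{1}\)-fixed. Finally, \(\chi(1)=2\) and \(G\) is a \(2\)-group, so \(\chi\) has height \(1\), not zero, and Theorem 7.7 indeed fails for \(\chi\).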
2310.06983
Violation of Expectation via Metacognitive Prompting Reduces Theory of Mind Prediction Error in Large Language Models
Recent research shows that Large Language Models (LLMs) exhibit a compelling level of proficiency in Theory of Mind (ToM) tasks. This ability to impute unobservable mental states to others is vital to human social cognition and may prove equally important in principal-agent relations between individual humans and Artificial Intelligences (AIs). In this paper, we explore how a mechanism studied in developmental psychology known as Violation of Expectation (VoE) can be implemented to reduce errors in LLM prediction about users by leveraging emergent ToM affordances. And we introduce a \textit{metacognitive prompting} framework to apply VoE in the context of an AI tutor. By storing and retrieving facts derived in cases where LLM expectation about the user was violated, we find that LLMs are able to learn about users in ways that echo theories of human learning. Finally, we discuss latent hazards and augmentative opportunities associated with modeling user psychology and propose ways to mitigate risk along with possible directions for future inquiry.
Courtland Leer, Vincent Trost, Vineeth Voruganti
2023-10-10T20:05:13Z
http://arxiv.org/abs/2310.06983v1
Violation of Expectation via Metacognitive Prompting Reduces Theory of Mind Prediction Error in Large Language Models ###### Abstract Recent research shows that Large Language Models (LLMs) exhibit a compelling level of proficiency in Theory of Mind (ToM) tasks. This ability to impute unobservable mental states to others is vital to human social cognition and may prove equally important in principal-agent relations between individual humans and Artificial Intelligences (AIs). In this paper, we explore how a mechanism studied in developmental psychology known as Violation of Expectation (VoE) can be implemented to reduce errors in LLM prediction about users by leveraging emergent ToM affordances. And we introduce a _metacognitive prompting_ framework to apply VoE in the context of an AI tutor. By storing and retrieving facts derived in cases where LLM expectation about the user was violated, we find that LLMs are able to learn about users in ways that echo theories of human learning. Finally, we discuss latent hazards and augmentative opportunities associated with modeling user psychology and propose ways to mitigate risk along with possible directions for future inquiry. ## 1 Motivation Plastic Labs is a research-driven product company whose mission is to eliminate the principal-agent problem [11] horizontally across human-AI interaction. In a near future of abundant intelligence, every human becomes a potent principal and every service an agentic AI. Alignment of incentives and information, then, must occur at the scale of the individual. Enabling models to deeply understand and cohere to user psychology will be critical and underscores the importance of research at the intersection of human and machine learning. ## 2 Introduction Large Language Models (LLMs) have been shown to have a number of emergent abilities [26]. Among those is Theory of Mind (ToM), defined as "the ability to impute unobservable mental states to others" [14]. The emergence of this specific capability is of significant interest, as it promises LLMs with the ability to empathize and develop strong psychological models of others, as humans do naturally. But how do you best position LLMs to demonstrate these qualities? Typical methods posit that connecting data sources deemed personal (e.g. email, documents, notes, activity, etc.) is sufficient for learning about a user. Yet these methods assume individual persons are merely the aggregate of their intentionally produced, often superficial, digital artifacts. Critical context is lacking -- the kind of psychological data humans automatically glean from social cognition and use in ToM (e.g. beliefs, emotions, desires, thoughts, intentions, knowledge, history, etc.). We propose an entirely passive approach to collect this data, informed by how developmental psychology suggests humans begin constructing models of the world from the earliest stages [18]. This cognitive mechanism, known as Violation of Expectation (VoE) [3], compares predictions about environments against sense data from experience to learn from the difference, i.e. errors in prediction. Inspired by prompting methodologies like Chain-of-Thought [25] and Metaprompt Programming [19], we design a _metacognitive prompting_ framework for LLMs to mimic the VoE learning process. And we show that VoE-data-informed social reasoning about users results in less ToM prediction error. This paper has the following two objectives: 1. 
Demonstrate the general utility of a metacognitive prompting framework for VoE in reducing ToM prediction error in a domain-specific application -- Bloom, a free AI tutor available on the web and via Discord.
2. Discuss at length opportunities for future work, including the practical and philosophical implications of this emergent capability to create psychological renderings of humans and ways to leverage confidential computing environments to secure them.

We use OpenAI's GPT-4 API in the entirety of this experiment and its evaluation. Footnote 1: GPT-4 32k version: 0613

## 3 Framing and Related Work

**Predictive Coding and Theory of Mind**. While not yet a complete theory, Predictive Coding (PC) continues to gain traction as a framework for understanding how modeling and learning occur in biological brains. At a high level, PC hypothesizes that mental models of reality are built and employed by comparing predictions about environments with sensory perception [21]. PC-inspired approaches to machine learning show great initial promise as biologically plausible AI training methodologies [20]. ToM is the ability of some organisms to, despite lacking direct access to any experience but their own, ascribe mental states to others. Notably, PC "may provide an important new window on the neural computations underlying theory of mind" as ToM "exhibit[s] a key signature of predictive coding: reduced activity to predictable stimuli" [15]. That is, when others behave in line with our predictions (i.e. our ToM projections are accurate) less is learned. And the inverse applies -- the prediction errors enhance our capacity for high-fidelity ToM over time. **Emergent Behaviors**. Researchers have long been interested in getting large language models to exhibit "thinking" and "reasoning" behaviors. A number of papers have been influential in pioneering ways to elicit these via prompting [4, 25, 13, 27]. As model architectures have scaled, these abilities appear to have emerged without explicit training [26]. While there's considerable debate concerning the distinction between "emergent abilities" and "in-context learning" [16], these phenomena display clear utility, regardless of taxonomy. Quantifying just how vast the space of latent "overhung" LLM capabilities really is constitutes a major area of formal and enthusiast-driven inquiry. ToM is one such highly compelling research domain. Kosinski [14] shows that the OpenAI GPT-series of models possess the ability to pass fundamental developmental behavior tests. Some papers demonstrate how to improve these abilities [17] and others analyze these methods critically, questioning the premise of ToM emerging in LLMs [24, 22]. Adjacently, there's a clear trend of researchers pushing the limit of what types of cognitive tasks can be offloaded to LLMs. In order to scale supervision, eliminate human feedback, avoid evasive responses, and have transparent governing principles, Anthropic has experimented with delegating the work of human feedback to the LLM itself in their "constitutional" approach [2]. Other papers looking to achieve similar types of outcomes, without needing to update model weights, rely on in-context methods entirely [23, 28]. **Violation of Expectation**. One prime task candidate, which leverages emergent ToM abilities, is VoE. Similar to explanations from PC theories of cognition, VoE is an explicit mechanism that reduces prediction errors to learn about reality.
While much of VoE happens in the unconscious mind and from an early age [18], research suggests that deliberate prediction making and error reduction also leads to enhanced learning outcomes [3]. Just as PC may play a role in ToM, VoE is a lightweight framework for identifying the data needed to minimize ToM error. Predictions are generated, compared against percepts, and learning is derived from the difference. **Prompting Paradigms**. Chain-of-Thought [25] prompting clearly shows that LLMs are capable "reasoning" generators and that this species of prompting can reduce the probability of generating incorrect answers. Yet, as this method is limited to one inference, the model often disregards that reasoning, especially during ToM-related tasks. Metaprompt Programming [19] seeks to solve the laborious process of manually generating task-specific prompts (which are more efficacious than general ones) by leveraging LLMs' ability to few-shot prompt themselves dynamically. Deliberate VoE as a learning method, ToM, and these prompting approaches all echo the human phenomenon of metacognition -- put simply, thinking about thought. In the next section we introduce a _metacognitive prompting_ framework in which the LLM generates ToM "thoughts" to be used in further generation as part of a VoE framework to passively acquire psychological data about the user.

## 4 Methods

The cognitive mechanism VoE can be broken down into two circular steps:

1. Making predictions about reality based on past learning.
2. Learning from the delta between predictions and reality.

In the typical chat setting of a conversational LLM application, this means making a prediction about the next user input and comparing that with the actual input in order to derive psychological facts about the user at each conversational turn. We employ metacognitive prompting across both core parts of our framework shown in Figure 1: our _user prediction task_ and our _violation of expectation task_.

### Metacognitive Prompting

Synthesized from the influences mentioned in Section 3, we introduce the concept of _metacognitive prompting_. The core idea is prompting the model to generate "thoughts" about an assigned task, then using those "thoughts" as useful context in the following inference steps. We find that in practice, this method of forced metacognition enhances LLM ability to take context into account for ToM tasks (more discussion in Section 7.2, "Measuring Coherence"). **Task 1: User Prediction and Revision**. Given the history of the current conversation, we prompt the LLM to generate a ToM thought including:

* Reasoning about the user's internal mental state
* Likely possibilities for the next user input
* A list of any additional data that would be useful to improve the prediction

The list serves as a query over a vector store to retrieve relevant VoE-derived user facts from prior interactions. We then prompt the model in a separate inference to revise the original ToM thought given new information, i.e. the retrieved facts that have been derived and stored by VoE. These facts are psychological in nature and taken into account to produce a revision with reduced prediction error. **Task 2: Violation of Expectation and Revision**. We employ the same prompting paradigm again in the VoE implementation. The first step is to generate a "thought" about the difference between prediction and reality in the previous user prediction task. This compares _expectation_ -- the revised user prediction -- with _violation_ -- the actual user input.
That is, how was expectation violated? If there were errors in the user predictions, what were they and why? This thought is sent to the next step, which generates a fact (or list of facts). In this step, we include the following:

* Most recent LLM message sent to the user
* Revised user prediction thought
* Actual user response
* Thought about how expectation was violated

Given this context, fact(s) relevant to the user's actual response are generated. This generation constitutes what was learned from VoE, i.e. prediction errors in ToM. Finally, we run a simple redundancy check on the derived facts, then write them to a vector store. We used the OpenAI Embeddings API for the experiment in this paper.

Figure 1: Framework. Contained in the grey dotted box is an application's core conversation loop (e.g. our AI tutor, Bloom) and drawn in blue is the metacognitive prompting framework described in section 4.

## 5 Experiments

Our experiment aims to show that using VoE derived data reduces error in LLM prediction about the next user input. This is especially useful and testable in conversations, so we use data from our AI tutor, Bloom, which is specifically prompted to keep a conversation moving forward to produce learning outcomes for users. Traditional conversation datasets often lean toward trivial dialogue, while instruction-following datasets are predominantly one-sided and transactional. Such datasets lack interpersonal dynamics, offering limited scope for substantive social cognition. Thus, our experiment employs an A/B test with two versions of our AI tutor, conversations with which more closely reflect psychologically-informative social interactions between humans. The first version -- the control -- relies solely on past conversation to predict what the user will say next. Yet the second version -- the experimental -- uses our metacognitive prompting framework in the background to make predictions. Crucially, and as described in Section 4, the framework leverages VoE to increase the amount of information at the model's disposal to predict user responses. These VoE facts are introduced to the AI tutor through the additional "thought revision" phase in the conversational loop, allowing it to reduce prediction error and psychologically cohere itself more closely to the user. We use the same LLM -- GPT-4 -- to classify how well each version predicts each user input. Its assessment is useful to discern whether VoE data can reduce LLM prediction error as LLMs are competent arbiters of token similarity. We do so by prompting GPT-4 to choose from 5 options that assess the degree to which a generated user prediction thought is accurate. The choices include "very," "somewhat," "neutral," "poorly," and "wrong." We include the most recent AI message, thought prediction, and actual user response in the context window. The evaluation scripts can be found on GitHub. Footnote 2: [https://github.com/plastic-labs/voe-paper-eval](https://github.com/plastic-labs/voe-paper-eval)

## 6 Results

**Dataset**. This experiment uses a dataset of conversations users had with Bloom. We built it by running an A/B test on the backend of Bloom's web interface. Only conversations of 3 or more turns are included. We recorded 59 conversations where the VoE version was active and 55 conversations where it was not. Within those, we collected 329 message examples from the VoE version and 637 from the non-VoE version. More on that difference in the "Considerations" paragraph in this section. **Chi Square Test**.
We chose to give the model freedom to choose more granular assessment values like "somewhat", "neutral", and "poorly" rather than forcing it into a binary classification, but we found it barely used the "neutral" option. On a five-point scale, the top two ratings ("very" and "somewhat" predictions) are grouped as "good", neutral ratings are omitted from the analysis, and the lowest two ratings ("poorly" and "wrong") are grouped as "bad". We want to test the independence of two categorical variables: _assessment_ (good or bad) and _group_ (VoE or non-VoE). The observed frequencies are given in the following table:

|      | VoE | Non-VoE |
| ---- | --- | ------- |
| Good | 113 | 173     |
| Bad  | 199 | 442     |

The Chi-square test statistic is calculated as: \[\chi^{2}=\sum\frac{(O_{ij}-E_{ij})^{2}}{E_{ij}}\] where \(O_{ij}\) are the observed frequencies and \(E_{ij}\) are the expected frequencies under the null hypothesis of independence. The expected frequencies are calculated as: \[E_{ij}=\frac{(\text{row total}_{i})(\text{column total}_{j})}{\text{grand total}}\] For each cell, we calculate the expected frequency and then the contribution to the Chi-square statistic. The degrees of freedom for the test are \((R-1)(C-1)\), where \(R\) is the number of rows and \(C\) is the number of columns. The Chi-Square Test indicated a significant relationship between assessment and group, \(\chi^{2}(1,\,N=927)=5.97\), \(p<.05\), such that VoE predictions were evaluated as good more often than expected and bad less often than expected. These results support our hypothesis that augmenting the Bloom chatbot with VoE reasoning reduces the model's error in predicting user inputs; a short sketch reproducing this test is given below.

Figure 2: Results from A/B test in the Bloom Web UI.

**Reducing Prediction Errors**. The VoE version showed a significant reduction in prediction errors, resulting in fewer "wrong" values being generated. Overall, the VoE version exhibited a smoothing effect, enhancing the consistency of predictions. Although there was a slight decrease in "very" predictions, a relative increase of 51% in "somewhat" values was observed. This shift suggests an improvement in prediction fidelity, balancing out extreme predictions with more moderate ones. Notably, the VoE version generated 22.4% fewer "wrong" predictions compared to the Non-VoE version. **Considerations**. The inherent nature of VoE is to improve and refine over time. As the vector store becomes populated with more data, the accuracy and relevance of VoE's outputs are expected to increase, enabling more valuable responses for users. It's important to note the presence of latency in VoE Bloom. This likely contributed to the reduction in conversation turns to nearly half that of the non-VoE Bloom. Nevertheless, the fact that we observe a statistical difference between the groups given this discrepancy in data size is noteworthy. There are a number of other practical factors in our data that might inhibit our ability to accurately measure the degree to which user prediction error was minimized. We used our conversational AI tutor's data for this study, which is subject to various issues that are being faced by all consumer-facing AI applications. This technology is new, and people are still learning how to interface with it. Many users ask Bloom to search the internet, do mathematical computations, or other things that aren't well served by the prompting framework around GPT-4.
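The chi-square statistic reported above can be checked directly from the published contingency table. The following minimal Python sketch is not part of the authors' evaluation scripts (those are linked on GitHub above); it only redoes the arithmetic, and it relies on the fact that `scipy` applies Yates' continuity correction by default to 2×2 tables, which matches the reported value of 5.97 (without the correction the statistic is about 6.35).

```python
# Minimal reproduction of the reported 2x2 chi-square test (illustrative only;
# not taken from the authors' evaluation scripts).
from scipy.stats import chi2_contingency

observed = [
    [113, 173],  # "good" assessments: VoE, non-VoE
    [199, 442],  # "bad"  assessments: VoE, non-VoE
]

# chi2_contingency applies Yates' continuity correction to 2x2 tables by
# default, which matches the chi2 ~= 5.97 (df = 1, p < .05) reported above.
chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
```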
Finally, it's of conceptual interest that LLMs can, from prompting alone, reduce prediction errors via mechanisms similar to those posited by PC and VoE theories of human cognition.

## 7 Future Work and Beyond

### Improvements

**Retrieval Augmented Generation**. Currently, our VoE fact retrieval schemes are quite naive. The "thought" generation steps are prompted to generate thoughts _and_ additional data points that would help improve the prediction. Those additional data points serve as a basic semantic similarity query over a vector store of OpenAI embeddings, and we select top \(k\) entries. Much could be done to improve this workflow, from training custom embedding models to improving the retrieval method. We also draw inspiration from the FLARE paper [12] and note the improved generation results that come from forecasting a conversation and incorporating that into the context window. **Training/Fine-Tuning**. Similar to how instruction tuning yielded much improved results in decoder-only LLMs, we believe that ToM tuning is a task that could yield better psychological models. The task of following instructions is a sufficiently abstract idea. Making ToM predictions falls into the same category.

Figure 3: Plot of results found in Figure 2. VoE smooths the distribution of predictions, reducing prediction error by learning from prior generations. This echoes accounts of human learning described in PC and VoE theories.

### Evaluation

**Assessing Theory of Mind**. The authors of "Clever Hans or Neural Theory of Mind? Stress Testing Social Reasoning in Large Language Models" [22] explicitly state that "the consequences of the success of these tests do not straightforwardly transfer from humans to models" and speak at length to the evolving landscape of datasets and evaluation methods aimed at machines instead of humans. The debate about whether or not LLMs "have" ToM is likely to continue and more semantic definitional work also needs to be done, but what's undeniable is the utility of this capability. Specifically interesting is boosting the performance of LLMs to minimize user prediction error, as much may become possible as a result of gains in that domain. **Measuring Coherence**. For this paper, we exclusively leverage OpenAI's closed-source models behind their API endpoints. Because of this, we are fundamentally limited in the ways in which we can measure user prediction error. In order to remain consistent, we have the same LLM that is generating the ToM predictions generate a naive assessment of its accuracy, which is described more in Section 5. Experiments with open source LLMs allow much more granular evaluation. E.g. computing the conditional loss over a sequence of tokens or creating new datasets by employing human labelers to train an evaluation model. Establishing a more rigorous standard around evaluating ToM predictions with multi-turn interpersonal conversation data is an imperative area of work as well. The space of open source models is relatively untested in regard to ToM abilities. Comprehensive study of how the open source model stable performs on already existing tasks is a crucial next step. Still further challenges exist in establishing reliable evaluation methods for measuring LLM coherence to users. Each user possesses not only unique psychological properties, but varying levels of awareness of that psychological profile. These subjective limitations demand novel approaches, research into which is only now becoming possible.

### Utility

**Infrastructure**.
In a world of abundant synthetic intelligence, if vertical-specific AI applications remain viable, they will seek to outperform foundational models within their narrow purview. Redundantly solving personalization and psychological modeling problems represents unnecessary development and data governance overhead _and_ risks contaminating datasets. Nor is it in the security or temporal interest of users to share such data. Horizontal frameworks and protocols are needed to safely and efficiently manage these data flows, improve user experience, and align incentives. **Products**. Ability to robustly model user psychology and make ToM predictions about internal mental states represents novel opportunity for the frontier of goods and services. Bespoke multi-modal content generation, high-fidelity human social simulation, on-demand disposable software, atomization of services, instant personalization, and more could all become possible. Much work will be needed to explore this design space. ### Security While ToM data holds powerful personalization potential, the management and use of that data entails profound responsibility and promises significant hazards. Such data, rich with insights into internal user identity and future behavior suggests immense utility. Yet, this utility makes it a likely target for misuse or object of mishandling -- more so given the remarkable inferential capabilities of LLMs. Security implications are far-reaching, from privacy invasion and identity theft to manipulation and discrimination. Moreover, any breach of trust impacts not just individual users, but the reputation and success of organizations employing it. Below is a non-exhaustive list of future work needed to secure such data throughout its lifecycle. **Encryption and Custody**. Due to the sensitive, individual nature of ToM data, encryption is a bare minimum security requirement, and there are strong arguments to be made for direct user key ownership. Formal investigations into appropriate solutions to both are needed. The process of transforming plaintext to ciphertext safeguards the data from keyless access. Several methods of encryption, including symmetric methods like the Advanced Encryption Standard, which uses the same key for encryption and decryption, and asymmetric encryption methods like RSA, which uses two keys, a public key for encryption and a private key for decryption [1], are plausible candidates. Models for key management will dictate the exact implementation of encryption against the data. A method such as Shamir's secret sharing can be used to split the decryption key between a user and a trusted platform hosting the data [8]. However, the intimate nature of the data may still warrant user ownership, preventing even the platform from accessing the data. **Confidential Computing**. This relatively new technology encrypts data in use (i.e. during processing). Confidential computing is a step beyond traditional methods that encrypt data at rest and in transit, thus providing a more comprehensive data protection framework. It leverages hardware-based Trusted Execution Environments (TEEs) to protect data during computation, enabling sensitive data to be processed in the cloud or third-party environments without exposing it to the rest of the system [7]. Further work can determine architectures for safely mounting user data into TEEs, decrypting, and then using it to improve interactions between users and LLMs. 
Work to explore how to create a scalable and performant design that does not sacrifice security is needed. Additional considerations need to be made for securely using data with third-party LLM APIs such as OpenAI's GPT-4 as opposed to self-hosted models. **Policy-Based Access Control**. Policy-Based Access Control (or Attribute Based Policy Control) is a method used to regulate who or what can view or use resources in a computing environment [9]. It's based on creating, managing, and enforcing rules for accessing resources to define the conditions under which access is granted or denied. Policies that can be applied on the data to ensure principles of least privilege to client applications and prevent data leakage are directions for further inquiry. LLM applications could be used to extend the policies to allow attributes based on the content of the data, such as grouping by topic. **Frontier Security**. LLMs' powerful inference abilities place them in a new category of digital actors. New paradigms of protection and security will be required. LLMs themselves might be leveraged to proactively monitor and obfuscate user activity or destroy unwanted statistical relationships. The advent of instant personalization may even make persistent application-side user accounts irrelevant or unsustainably hazardous. ### Philosophy **Extended Self**. Chalmers and Clark argued in 1998 that minds can be said to extend into the physical world and still legitimately be considered part of personal cognition [6]. High-fidelity human psychological renderings in AI agents suggest the potential for human agency and identity to extend in similar ways. Unanswered legal, metaphysical, and ethical questions arise from this prospect. **Phenomenology**. When humans impute mental states to others, presumably that assignment is grounded in lived personal experience. That is, we can imagine other people having experiences because we have had similar experiences ourselves. Additionally, we share with the objects of our ToM a genetic schema and physical substrate for intelligence and social cognition. While LLMs display ToM abilities and may well have access to orders of magnitude more accounts of internal mental states via the massive corpus of their pretraining data, none of that has been experienced first hand. Leaving aside that current LLMs likely have no mechanism for experience as we conceive of it [5], what are we to make of ToM in such alien minds? **Game Theory**. Our experiments and testing protocol assume users are unwise to model predictions about them. As users become aware that models are actively predicting their mental states and behavior, those predictions may become harder to make. Similarly, as LLMs take this into account, simulations will become still more complex. ## 8 Discussion Principal-agent problems are a set of well understood coordination failures that emerge from interest misalignment and information asymmetry between persons or groups and their proxies. In normal political and economic life, delegating an agent incurs costs and efforts to minimize that risk reduce the efficiency of the agent. We view our very early work in modeling user psychology as ultimately in service of eliminating the certitude of principal-agent problems from economic relations. As LLMs or other AI systems become increasingly capable and autonomous, they offer enormous economic potential. However, their alignment to human principals is not a foregone conclusion. 
On the contrary, we may instead see an _exaggeration_ of existing asymmetries between principals and agents, as well as the introduction of new concerns around latency, intelligence, and digital nativity. In order to achieve trustworthy and efficient agentic AI, _individual_ alignment is required. Human agents and deterministic software are already capable of operating _like_ their principals. LLMs promise massive reductions in marginal cost along that axis, but hardly class better than the status quo (and often much worse) with regard to user alignment. Yet the unique potential here is agents who _are_ the principals themselves, that is, there is no meaningful practical or philosophical difference between discrete humans and the psychologically-aligned AIs acting on their behalf. LLMs are excellent simulators capable of assuming myriad identities [10]. They also excel at ToM tasks and, as we've shown, can passively harvest and reason about user psychological data. These two interrelated qualities may very well make possible high-fidelity renderings of principals capable of flawlessly _originating_ and executing intent as their proxies with zero marginal agency cost. In this way LLMs may become more augmentation than tool, more appendage than agent.

## 9 Acknowledgements

The authors are grateful to Ayush Paul and Jacob Van Meter for their work on the Bloom development team, Thomas Howell of Forum Education for extensive conceptual review and ideation, and Zach Seward for invaluable advice and mentoring. We are additionally grateful to Ben Bowman for advising the machine learning aspects of this paper and Lee Ahern from the Bellisario College of Communications at Pennsylvania State University for feedback on the statistical tests and results section.
2310.17656
Particles in a pocket
Communicating science through mobile smartphone and tablet applications is one of the most efficient ways to reach general public of diverse background and age coverage. The Higgsy project was created in 2022 to celebrate the 10th anniversary of the discovery of the Higgs boson at CERN. This project introduces a mobile game to search for the Higgs boson production in a generic particle detector. The MatterBricks is an augmented-reality project that was created for a major national event in Belgium, held in 2023. The main features of the two mobile applications and further prospects for reaching general public through mobile application development process are discussed.
Kirill Skovpen
2023-10-10T11:35:36Z
http://arxiv.org/abs/2310.17656v1
# Particles in a pocket

###### Abstract: Communicating science through mobile smartphone and tablet applications is one of the most efficient ways to reach a general public of diverse backgrounds and ages. The Higgsy project was created in 2022 to celebrate the 10th anniversary of the discovery of the Higgs boson at CERN. This project introduces a mobile game to search for the Higgs boson production in a generic particle detector. MatterBricks is an augmented-reality project that was created for a major national event in Belgium, held in 2023. The main features of the two mobile applications and further prospects for reaching the general public through mobile application development are discussed.

## 1 Introduction

If you are not reading a paper book while taking public transport, you are probably staring at your mobile phone, or sleeping. Mobile devices are everywhere these days, providing us with means of communication with our friends and relatives, daily news, blogs, entertainment, and much more. These small ingenious inventions can be easily put into one's pocket to carry along the superpowers and wisdom of past generations. Topics related to fundamental scientific research are not among the most popular things our society regularly researches on the net. While this observation can simply be an intrinsic property of the society, we think it is worth the candle to show once again that fundamental research connects to truly fascinating things that cannot be overlooked. Creations that are driven by scientific advancements in the field of particle physics and related research areas include the organization of masterclasses [1, 2, 3], gaming experiences [4, 5, 6, 7], demonstrator projects [8, 9], professional applications [10, 11], etc. Many outreach studies are performed within the International Particle Physics Outreach Group (IPPOG) [12]. Development of science-popularizing applications that can be installed on a handheld device is a very efficient method to reach diverse populations from different cultural, racial, educational, and social backgrounds, anywhere, anytime. In this work, we present the Higgsy and MatterBricks mobile games (Fig. 1) that were developed for the iOS operating system [13], inspired by the rich world of particle physics.

Figure 1: The logo images of the Higgsy (left) [14] and MatterBricks (right) [15] projects.

## 2 Higgsy

The discovery of the Higgs boson at the Large Hadron Collider (LHC) at CERN marked a scientific breakthrough in our understanding of fundamental interactions included in the standard model (SM) of particle physics [16, 17]. This outstanding scientific achievement was celebrated at its 10th anniversary in 2022 at CERN [18, 19]. The Higgsy project was created to relive the unforgettable experience of discovering the Higgs boson at the LHC and make it accessible to everyone [14]. The gameplay includes several interactive gaming modes, explaining the main features of the proton-proton collisions at the LHC, and inviting a player to participate in an actual hunt for the Higgs boson. The player can generate elementary particles and their decays to study the detector-level information arising from the interactions of these particles with the material of the detector. This learning gaming phase allows the player to become familiar with elementary particles and associate them with different types of interactions, appreciating the experimental challenges in properly identifying certain types of events.
Once familiar with the contents of the game, the player can make an attempt to properly identify a required number of events with the Higgs boson production in order to reach a statistically significant observation. The learning and Higgs-hunting phases of the game are illustrated in Fig. 2.

Figure 2: Screen captures of Higgsy showing learning (left) and Higgs-hunting (right) modes of the gameplay.

## 3 MatterBricks

Novel technologies using virtual and augmented reality (AR) digital experiences have been extremely successful in significantly extending our real-world environment to unexplored territories. We decided to populate these unknown worlds with elementary particles created with MatterBricks [15] for the open symposium in Belgium [20]. The player gets introductory explanations about these particles (Fig. 3) to then dive into the world augmented with the products of their decays (Fig. 4). The goal of the game is to reassemble pairs of particles into their initial particle-origin. As some say, "gotta catch 'em all".

Figure 3: Screen captures of the main menu of MatterBricks.

Figure 4: Virtual particles projected onto real surroundings with the help of augmented reality.

## 4 Summary and outlook

It's better to see something once than to hear about it a thousand times. If you haven't installed Higgsy and MatterBricks on your phone or tablet, you should do it now. These small applications will fill you with joy and hunger for more science. If they really do, we have accomplished our mission. Both Higgsy and MatterBricks can be introduced in a classroom, played outside, or simply discovered by users on their own. Our future work includes the development of similar applications for other mobile platforms, such as Android [21], in order to reach a broader audience.
2310.03623
Robustness and complexity of directed and weighted metabolic hypergraphs
Metabolic networks are probably among the most challenging and important biological networks. Their study provides insight into how biological pathways work and how robust a specific organism is against an environment or therapy. Here we propose a directed hypergraph with edge-dependent vertex weight as a novel framework to represent metabolic networks. This hypergraph-based representation captures higher-order interactions among metabolites and reactions, as well as the directionalities of reactions and stoichiometric weights, preserving all essential information. Within this framework, we propose the communicability and the search information as metrics to quantify the robustness and complexity of directed hypergraphs. We explore the implications of network directionality on these measures and illustrate a practical example by applying them to the small-scale e\_coli\_core model. Additionally, we compare the robustness and the complexity of 30 different models of metabolism, connecting structural and biological properties. Our findings show that antibiotic resistance is associated with high structural robustness, while the complexity can distinguish between eukaryotic and prokaryotic organisms.
Pietro Traversa, Guilherme Ferraz de Arruda, Alexei Vazquez, Yamir Moreno
2023-10-05T16:00:54Z
http://arxiv.org/abs/2310.03623v1
# Robustness and complexity of directed and weighted metabolic hypergraphs ###### Abstract Metabolic networks are probably among the most challenging and important biological networks. Their study provides insight into how biological pathways work and how robust a specific organism is against an environment or therapy. Here we propose a directed hypergraph with edge-dependent vertex weight as a novel framework to represent metabolic networks. This hypergraph-based representation captures higher-order interactions among metabolites and reactions, as well as the directionalities of reactions and stoichiometric weights, preserving all essential information. Within this framework, we propose the communicability and the search information as metrics to quantify the robustness and complexity of directed hypergraphs. We explore the implications of network directionality on these measures and illustrate a practical example by applying them to the small-scale e_coli_core model. Additionally, we compare the robustness and the complexity of 30 different models of metabolism, connecting structural and biological properties. Our findings show that antibiotic resistance is associated with high structural robustness, while the complexity can distinguish between eukaryotic and prokaryotic organisms. ## I Introduction A metabolic network [1; 2; 3; 4; 5] is a highly organized system of chemical reactions that occur in living organisms to sustain life and regulate cellular processes. Metabolic networks are incredibly complex because of the large number of reactions and the intricate web of interactions between molecules. Chemical reactions take some metabolites, usually called reactants or substrates, and turn them into products, which can be used by other reactions. This complexity allows organisms to perform various functions and respond to various challenges, but it makes understanding them much more challenging. The key functions of metabolism are the production of energy, the conversion of food into building blocks of proteins, lipids, nucleic acids, and carbohydrates, and the elimination of metabolic wastes. Given the network structure of metabolism, many researchers have attempted to characterize and understand it through network theory. It has been shown that graphs whose nodes are metabolites and are connected by chemical reactions have a scale-free distribution [3] and have been described as "among the most challenging biological networks and, arguably, the ones with most potential for immediate applicability" [6]. Other attempts have tried to give more concrete answers by focusing on graphs with reactions as nodes or bipartite graphs but missing a fundamental aspect of chemical reactions. To take place, they require a collective interaction of reactants to create multiple products. Hence, these are high-order interactions that the graphs cannot fully capture. As network theory has advanced, new structures have been devised that can capture high-order interactions. These structures called hypergraphs have been very successful in fields such as social sciences [7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17], epidemiology [12; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28], biology [22; 23; 24; 25; 26; 27; 28], etc. Recently, Mulas et al. [29; 30] applied hypergraphs to chemical networks trying to capture the high-order nature of chemical reactions. In this paper, we take the concept of chemical hypergraphs and apply it to metabolic networks. 
In addition, we take it a step further by showing how including weights in the treatment allows no biological or structural information to be lost. Therefore, we argue that metabolic hypergraphs are the right framework to address and understand metabolism, allowing a bridge between biology and network theory. This article aims to lay the foundation for a theory of metabolic networks based on hypergraphs. We describe the method by which each metabolic network can be represented as a hypergraph and introduce two applicable measures, namely, communicability and search information. The work is organized as follows. In section II we give the mathematical definitions regarding metabolic hypergraphs. We also comment on previous studies in the field of metabolic networks and on how they can be viewed as a simplification of the metabolic hypergraph we propose here. In section III we propose a generalization of the communicability and search information to hypergraphs. We keep this section general enough so that these measures can be easily applied to any hypergraph, directed or undirected, weighted or not. We use metabolic hypergraphs as an example and we report the results in section IV. We conclude by commenting on the possibility that this framework offers to motivate further research in this area. ## II Metabolic networks as hypergraph In this section, we give a formal definition of metabolic hypergraphs and introduce the notation that is used to characterize them. ### Hypergraphs definition A hypergraph \(H=\{V,E\}\) is a set of vertices or nodes \(v\in V\) and hyperedges \(e\in E\). Each hyperedge is a subset of \(V\) such that different nodes interact with each other if and only if they belong to the same hyperedge. Thus, unlike traditional graphs, where edges connect pairs of nodes, hyperedges represent interactions involving multiple nodes. If the dimension \(|e|\) of the hyperedges is \(2\), then the hypergraph is equivalent to a conventional graph. The total number of vertices is denoted as \(N=|V|\) and the number of hyperedges as \(M=|E|\). To interpret metabolic networks as hypergraphs, we first need to define a special type of hypergraph introduced by Chitra et al. [31]. A hypergraph with edge-dependent vertex weights (EDVW) \(H=\{V,E,W,\Gamma\}\) is a set of vertices or nodes \(v\in V\), hyperedges \(e\in E\), edge weights \(w(e)\) and edge-dependent vertex weight \(\gamma_{e}(v)\). If \(\gamma_{e}(v)=\gamma(v)\;\forall\,e\in E\), then the hypergraph is said to have edge-independent vertex weight. All the weights are assumed to be positive. These types of weights are a unique property of some higher-order systems and are crucial to encode in the hypergraph all the information contained in metabolic networks. In this paper, we deal with directed hypergraphs, which are an extension of directed graphs. In a directed hypergraph, each hyperedge is associated with a direction similar to the direction of an arrow connecting two vertices in a directed graph. In this context, a hyperedge \(e_{j}\) is divided into a head set \(H(e_{j})\) and a tail set \(T(e_{j})\). Similarly to the arrow, the direction goes from the tail to the head set, with the difference that the directed hyperedge is connecting multiple vertices. A vertex can belong solely to either the head or the tail of a hyperedge, but not both. Unless explicitly stated otherwise, any hypergraph in this paper is considered to be directed. 
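As a concrete reference for these definitions, a directed hypergraph with EDVW can be held in a couple of plain dictionaries. The toy example below (names and weights are made up for illustration, not taken from any model) is reused in the sketches that follow.

```python
# Toy directed hypergraph with edge-dependent vertex weights (EDVW).
# tails[e] maps each vertex of T(e) to its weight gamma_e(v);
# heads[e] does the same for H(e).
tails = {"r0": {"A": 1.0, "B": 2.0}, "r1": {"C": 1.0}}
heads = {"r0": {"C": 1.0},           "r1": {"D": 1.0, "E": 1.0}}

nodes = sorted({v for e in tails for v in tails[e]} | {v for e in heads for v in heads[e]})
N, M = len(nodes), len(tails)

# A vertex may belong to either the head or the tail of a hyperedge, but not both.
for e in tails:
    assert not set(tails[e]) & set(heads[e]), f"head and tail of {e} overlap"

print(f"{N} vertices, {M} hyperedges")
```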
Additionally, we define \(k_{v}^{out}\), the out-degree of a vertex \(v\in V\), as the number of hyperedge-tails that include \(v\). Similarly, \(k_{v}^{in}\) denotes the in-degree of a vertex \(v\in V\), the number of hyperedge-heads in which \(v\) is contained. We also use \(|H(e)|\) and \(|T(e)|\) to represent the number of vertices belonging to \(H(e)\) and \(T(e)\) respectively. Given a directed hypergraph \(\mathrm{H}=\{V,E\}\) of \(N\) vertices and \(M\) hyperedges, the incidence matrix is the matrix \(\mathcal{I}\in\mathbb{R}^{N\times M}\) such that: \[\mathcal{I}_{ij}=\begin{cases}1&\text{ if }v_{i}\in H(e_{j})\\ -1&\text{ if }v_{i}\in T(e_{j})\\ 0&\text{ if }v_{i}\not\in e_{j}\end{cases}, \tag{1}\] where \(H(e_{j})\) and \(T(e_{j})\) are, respectively, the head and the tail of the hyperedges \(e_{j}\). We can rewrite the incidence matrix as \[\mathcal{I}=\mathcal{I}_{H}-\mathcal{I}_{T}, \tag{2}\] where we separated the contributions coming from the head and the tail of the hyperedges in order to work with positive signed matrices. ### Metabolic hypergraphs In this article, we focus on metabolic networks. A metabolic network [2] is a set of biological processes that determines the properties of the cell. Several reactions are involved in metabolism, grouped into various metabolic pathways. A metabolic pathway is an ordered chain of reactions in which metabolites are converted into other metabolites or energy. For example, the glycolysis pathway is the set of reactions involved in the transformation of one molecule of glucose into two molecules of pyruvate, producing energy. Metabolic networks are among the most challenging and highest potential biological networks [3; 6]. The way to represent a metabolic network on a graph is not unique, and several approaches have been tried. One possible way is to consider metabolites (or reactions) as nodes and connect them if and only if they share a reaction (or metabolite). The resulting graph is undirected, and this may change the structural properties of the network in an undesirable way. In [32], the authors analyze the same dataset that we analyze for E.Coli and propose a directed graph with reactions as nodes that take into account the directionality of the reactions, highlighting the difference with the undirected counterparts. However, reactions are intrinsically higher-order interactions since they can occur only when all reactants are present. In Fig. 1, we illustrate the way to map a chemical reaction network into a hypergraph. The resulting hypergraph is a directed hypergraph with edge-dependent vertex weight, which we will refer to as metabolic hypergraph for brevity. More formally, we define a metabolic hypergraph as a 3-tuple \(H=\{V,E,\mathcal{S}\}\), where \(V=\{v_{1},v_{2},\ldots v_{N}\}\) is a set of N metabolites (vertices), and E is a set of oriented reactions (hyperedges). Each \(e\in E\) is a pair \((T(e),H(e))\), the tail and the head of the hyperedge which corresponds respectively to the inputs and outputs of the reaction. Note that \(T(e)\) or \(H(e)\) can also be empty sets. This is the case for external reactions that introduce inside the cell the ingested metabolites (the tail is an empty set), and external reactions that secrete metabolites (the head is an empty set). We also call the former source reactions and the latter sink reactions, and their effect on the measurements is discussed in more detail in section III. 
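Before turning to the stoichiometric weights \(\mathcal{S}\), here is a minimal sketch of how the head/tail incidence matrices of Eqs. (1)-(2) can be assembled for the toy hypergraph above; this is illustrative code, not code released with the paper.

```python
import numpy as np

def incidence_matrices(tails, heads, nodes):
    """Unweighted head/tail incidence matrices I_H, I_T and I = I_H - I_T (Eqs. 1-2)."""
    idx = {v: i for i, v in enumerate(nodes)}
    edges = list(tails)
    I_H = np.zeros((len(nodes), len(edges)))
    I_T = np.zeros((len(nodes), len(edges)))
    for j, e in enumerate(edges):
        for v in heads[e]:
            I_H[idx[v], j] = 1.0     # +1 entries of Eq. (1): v in H(e)
        for v in tails[e]:
            I_T[idx[v], j] = 1.0     # -1 entries of Eq. (1), via I = I_H - I_T: v in T(e)
    return I_H, I_T, I_H - I_T

# Same toy network as above: r0: {A, B} -> {C},  r1: {C} -> {D, E}.
tails = {"r0": {"A": 1.0, "B": 2.0}, "r1": {"C": 1.0}}
heads = {"r0": {"C": 1.0}, "r1": {"D": 1.0, "E": 1.0}}
nodes = ["A", "B", "C", "D", "E"]
I_H, I_T, I = incidence_matrices(tails, heads, nodes)
print(I)
```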
\(\mathcal{S}\) is the stoichiometry matrix associated with the chemical network and it represents the EDVW of the hypergraph. Indeed, one can notice that \(S\) can be rewritten using the EDVW matrix \(\Gamma\) as \(\mathcal{S}=\Gamma\circ\mathcal{I}\), where \(\mathcal{I}\) is the directed incidence matrix and "\(\circ\)" is the element-wise matrix product. ### Literature background There are different techniques to study metabolic networks. One popular method is using stochastic chemical kinetics [33], but this requires the notion of the kinetic rates constant, the rates at which metabolites are consumed per reaction, which are usually not available [34]. What instead is generally known are the reactions, the stoichiometry coefficient, and the structure of the metabolic network. Thus, several graph representations of metabolic networks have been tried. The most common one is the reaction adjacency matrix (RAG) defined as \(A^{RAG}=\mathcal{S}^{T}\hat{\mathcal{S}}\)[32], where \(\hat{\mathcal{S}}\) is the boolean version of the stoichiometry matrix. The biggest limitation of this model is that is undirected, while we know that the direction of reactions is chemically really important. A big improvement was proposed in [32] where the authors proposed a flux-dependent graph model, that accounts suitably for the directness of the reactions. However, graph representations of these systems are still missing a crucial point, which is the fact that reactions are higher-order object, that involves the interactions of all input metabolites to produce output metabolites. Therefore, hyperedges are the natural mathematical object to encode reactions. Mulas et al. [30] already took a step in this direction by defining a Laplace operator for chemical hypergraphs. The last step we make is to incorporate into the hypergraph model the weights associated with metabolites and reactions, using a similar framework to the EDVW defined in [31]. This last modification to the model is crucial to include biological and chemical constraints into the model. The great advantage of the metabolic hypergraph framework we propose is that it captures all the physical properties that a metabolic network displays: the directness of reactions, the higher-order interactions, and the chemical properties like mass conservation, thanks to the inclusion of weights. This framework represents a link between network theory and biology. We remark that the previous graph representation of metabolic networks can be seen as a pairwise projection of a metabolic hypergraph. For example, the RAG is an undirected projection of the hypergraph as in [35] and the flux-dependent graph [32] is similar to the normalized adjacency matrix defined in [36] but extended to directed and weighted hypergraphs. Projections are a pairwise simplification and can perform well depending on the task, but they don't contain all the information. ### Dataset In our experiments, the metabolic hypergraphs are taken from the BiGG Database [37]. We analyze 30 different models, with an increasing number of nodes describing different organisms (see Table 1 in the Appendix for the exact number of nodes and reactions of each BiGG model). We chose the metabolic networks in order to have a reasonable variety of organisms, and we avoided very large networks because of the computational costs. The majority of the data is composed of bacteria that can be divided into classes like antibiotic-resistant, aerobic or anaerobic, Gram-positive or Gram-negative. 
The other organisms are Eukaryotes and one in the Archaea domain. All data are publicly available on the BiGG models web page [38] in different formats. In this analysis, the _.json_ format is used. The data contain information on metabolites, reactions, and genes. Metabolites Figure 1: An example of a metabolic network mapped into a hypergraph with edge-dependent vertex weight. In a), we present a small network composed of three reactions and five metabolites. The first reaction \(r_{1}\) is reversible and is represented with the double arrow. In b), we show the corresponding stoichiometry matrix. Reacants are negative and products are positive. Note that we need to split the reversible reaction into two irreversible reactions \(r_{1}^{+}\) and \(r_{1}^{-}\) to write it in matrix form. This stoichiometry matrix is the weighted incidence matrix of the hypergraph with edge-dependent vertex weights shown in c). For the sake of visualization, only the hyperedge \(r_{1}^{+}\) is shown. The hyperedge \(r_{1}^{-}\) is just the same but with the opposite sign. Note that weights are both positive and negative, meaning that the hypergraph is directed. Indeed, we separate the head and tail of each hyperedge with a dashed line. are identified by a Bigg id, consisting of an abbreviation defining their type, for example, "h" for hydrogen and "ATP" for the adenosine triphosphate, and a subscript indicating the compartment to which they belong. Regarding the reactions, in addition to their IDs, the metabolites belonging to them are given, with their respective stoichiometric coefficients. We work in the convention in which a metabolite with a positive stoichiometric coefficient is a product, otherwise, it is a reactant. In the BiGG dataset, the direction of the reactions is also determined by the parameters "lower_bound" and "upper_bound." These parameters are associated with each reaction and correspond to the maximal flux of metabolites that can flow through. A value of \(\text{lower\_bound}=0\) and \(\text{upper\_bound}>0\) means that the reaction is annotated correctly, following the convention. On the contrary, if \(\text{lower\_bound}<0\) and \(\text{upper\_bound}=0\), the reactions are written with inverted orientations. These two parameters combined also determine if a reaction is reversible or not. If a reaction is reversible, both the direct and inverse reactions are present and will be characterized by a \(\text{lower\_bound}<0\) and \(\text{upper\_bound}>0\). We recall that we treat reversible reactions as two distinct hyperedges, see Fig. 1 for a visual example. It is important to notice that few reactions have \(\text{lower\_bound}=0\) and \(\text{upper\_bound}=0\). In practice, this implies that no flux of metabolites can flow through, so those reactions are discarded. Lastly, we highlight that some hyperedges may have an empty tail or head. These hyperedges correspond to reactions involved in the transportation of metabolites from the outside of the cell to the inside or vice-versa. For this reason, sometimes they may represent sinks and sources in the hypergraph. By source, we mean a node or hyperedge from which you can start and leave but never go back, while a sink is a trapping node or hyperedge that if it is reached, it is impossible to leave. ## III Measurements In this section, we define two measures of the chemical hypergraph based on the notion of paths or walks on hypergraphs. 
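Before turning to the walk-based measures, the dataset conventions just described (orientation read off the flux bounds, splitting of reversible reactions, removal of zero-flux reactions, positive coefficient = product) can be condensed into a small loader. The field names assume the BiGG .json layout described above, and the helper is an illustrative sketch rather than the pipeline used in the paper.

```python
import json

def load_metabolic_hyperedges(path):
    """Turn a BiGG-style .json model into directed, weighted hyperedges."""
    with open(path) as f:
        model = json.load(f)

    hyperedges = []                                   # (reaction id, tail dict, head dict)
    for rxn in model["reactions"]:
        lb, ub = rxn["lower_bound"], rxn["upper_bound"]
        if lb == 0 and ub == 0:                       # no flux can pass: discard the reaction
            continue
        tail = {m: -c for m, c in rxn["metabolites"].items() if c < 0}   # reactants
        head = {m: c for m, c in rxn["metabolites"].items() if c > 0}    # products
        if ub > 0:                                    # forward direction allowed
            hyperedges.append((rxn["id"] + "+", tail, head))
        if lb < 0:                                    # reverse direction allowed (reversible or inverted)
            hyperedges.append((rxn["id"] + "-", head, tail))
    return hyperedges
```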
A _walk_ of length \(l\) from node \(v_{0}\) to node \(v_{l}\) is defined as a sequence of alternating nodes and hyperedges \((v_{0},e_{1},v_{1},e_{2},v_{2},...e_{l},v_{l})\). We also define the _dual walk_ from hyperedge \(e_{0}\) to hyperedge \(e_{l}\) of length \(l\) as the alternating sequence of hyperedges and nodes \((e_{0},v_{1},e_{1},v_{2},e_{2},...v_{l},e_{l})\). We are interested in both metabolites and reactions, which is why it is useful also to consider the dual walk. ### Hypergraph communicability We are usually interested in understanding how paths are distributed because that is how information and interactions spread. In social systems, for example, the more paths connect two nodes, the easier it is for information to spread from one to the other. Also, if one path of connection fails, the information can still be spread through other paths, even if they are longer than the path that failed. For this reason, the notion of paths and communication between nodes can also be related to the robustness of the network. However, having a robust network is not always positive. The same reasoning about the spreading of information applies to the spreading of viruses. If a network is robust, it is much more difficult to design containment strategies for the virus, since shutting down a connection might not be enough because of the presence of alternative paths. A way to measure how nodes communicate within a network is called communicability, and we extend this definition to hypergraphs. The communicability [39; 40] between a pair of nodes \(p\) and \(q\) is defined as the weighted sum of all walks starting from node \(p\) and ending at node \(q\), as in \[G_{pq}=\sum_{k=0}^{\infty}c_{k}n_{pq}^{k}, \tag{3}\] where \(n_{pq}^{k}\) is the number of walks of length \(k\) from \(p\) to \(q\) and \(c_{k}\) is the penalization for long walks. The most common choice is \(c_{k}=\frac{1}{k!}\), so that one recovers an exponential expansion. For a graph, \(n_{pq}^{k}\) can be easily found by taking the \(k\)-th power of the adjacency matrix, \((A^{k})_{pq}\). Hypergraphs do not have a unique definition of adjacency matrix, so we instead rely on the definition of walk given above. The vertex-to-vertex communicability for a hypergraph with incidence matrix \(\mathcal{I}\) is defined as \[G_{pq}^{V}=\sum_{k=0}^{\infty}\frac{\left((\mathcal{I}_{T}\mathcal{I}_{H}^{t})^{k}\right)_{pq}}{k!}, \tag{4}\] or in matrix form \[G^{V}=e^{\mathcal{I}_{T}\mathcal{I}_{H}^{t}}, \tag{5}\] where \(t\) indicates the transpose of the matrix. In metabolic hypergraphs, we are also interested in how reactions communicate with each other. For this reason, we define the hyperedge-to-hyperedge communicability based on the notion of dual walk, \[G_{pq}^{E}=\sum_{k=0}^{\infty}\frac{\left((\mathcal{I}_{H}^{t}\mathcal{I}_{T})^{k}\right)_{pq}}{k!}, \tag{6}\] or in matrix form \[G^{E}=e^{\mathcal{I}_{H}^{t}\mathcal{I}_{T}}. \tag{7}\] The Estrada index [39; 41] of a hypergraph \(H\) is generalized as \[EE^{V}(H) =\text{Trace}\left(G^{V}\right), \tag{8}\] \[EE^{E}(H) =\text{Trace}\left(G^{E}\right).\] One can notice that the matrices \(\mathcal{I}_{T}\mathcal{I}_{H}^{t}\) and \(\mathcal{I}_{H}^{t}\mathcal{I}_{T}\) have the same spectrum except for the number of zero eigenvalues, because of the difference in size. This means that if, for example, \(M>N\) (which is usually the case in metabolic hypergraphs), the Estrada index defined on the nodes and the one defined on the hyperedges are related by \(EE^{E}(H)=EE^{V}(H)+(M-N)\).
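As a small computational companion to Eqs. (4)-(8): with the incidence matrices at hand, both communicability matrices and the Estrada indices follow from a matrix exponential. This is an illustrative sketch, not code released with the paper.

```python
import numpy as np
from scipy.linalg import expm

def communicability(I_H, I_T):
    """G^V = exp(I_T I_H^t) (N x N) and G^E = exp(I_H^t I_T) (M x M), Eqs. (4)-(7),
    together with the Estrada indices of Eq. (8)."""
    G_V = expm(I_T @ I_H.T)
    G_E = expm(I_H.T @ I_T)
    return G_V, G_E, np.trace(G_V), np.trace(G_E)

# With the toy incidence matrices built earlier:
# G_V, G_E, EE_V, EE_E = communicability(I_H, I_T)
# EE_E - EE_V equals M - N, consistent with the relation quoted above (there stated for M > N).
```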
We use the Estrada index defined on the nodes to measure the hypergraph robustness, also known as natural connectivity, as \[\bar{\lambda}^{V}=\log\bigg{(}\frac{EE(H)^{V}}{N}\bigg{)}. \tag{9}\] The same definition holds for \(\bar{\lambda}^{E}\) with the proper normalization. Since computing the exponential of very large matrices might be a difficult numerical task, we use an approximation for the calculation of the robustness based on eigenvalue decomposition. For simplicity, let us call \(A^{V}=\mathcal{I}_{T}\mathcal{I}_{H}^{t}\) (the same reasoning holds for \(A^{E}=\mathcal{I}_{H}^{t}\mathcal{I}_{T}^{t}\)) and order the spectrum of \(A^{V}\) in such a way that \(\lambda_{1}>\lambda_{2}>\lambda_{3}>...\lambda_{N}\). Then the natural connectivity or robustness of the hypergraph becomes \[\begin{split}\bar{\lambda}^{V}&=\log\left(\sum_{i= 1}^{N}e^{\lambda_{i}}\right)-\log(N)=\\ &=\log\left[e^{\lambda_{1}}\left(1+\sum_{i=2}^{N}e^{\lambda_{i}- \lambda_{1}}\right)\right]-\log(N)=\\ &=\lambda_{1}+\log\left(1+\sum_{i=2}^{N}e^{\lambda_{i}-\lambda_{1 }}\right)-\log(N)=\\ &=\lambda_{1}-\log(N)+\mathcal{O}\left(e^{-(\lambda_{1}-\lambda_ {2})}\right).\end{split}\] Thus if the spectral gap is large enough, the natural connectivity is dominated by the largest eigenvalue. Since the correction is exponential, this approximation is usually quite good. As a consequence of the common spectrum of \(\mathcal{I}_{H}^{t}\mathcal{I}_{T}\) and \(\mathcal{I}_{T}\mathcal{I}_{H}^{t}\), the difference in robustness is approximately \(\bar{\lambda}^{V}-\bar{\lambda}^{E}\approx\log(\frac{M}{N})\), which is usually quite small. This generalization of communicability applies also to undirected hypergraphs by substituting \(I_{H}\) and \(I_{T}\) with the undirected incidence matrix \(I\). ### Hypergraph search information _Rosvall et al._[42; 43] introduced the concept of search information, as a measure of complexity in urban graphs. The idea is to measure the number of binary questions one has to make in order to locate the shortest path connecting a node \(s\) to a node \(t\). As a consequence, this measure is based on walks like the communicability, but with the crucial difference that it considers only the shortest paths. This allows us to link the search information with the notion of complexity. While alternative pathways tend to make the network more robust, they also make the probability of finding the shortest path decrease and the complexity increases. This trade-off is the reason that motivated us to consider communicability and search information together. In [42], the search information is defined as a matrix \(S\) with entries \[S(i,j)^{V}=-\log_{2}\left(\sum_{\{p(i,j)\}}P\left(p(i,j)\right)\right), \tag{10}\] where \(\{p(v_{i},v_{j})\}\) is the set of all shortest paths from node \(v_{i}\) to node \(v_{j}\). The original definition was made for undirected and unweighted ordinary graphs, so a very different structure from directed hypergraphs with edge-dependent vertex weight but the meaning remains the same. What changes, is the probability of following the shortest path. The probability of making a step is proportional to the stoichiometric coefficients of the starting and arriving node, similar to what has been done in the normalized flow graph in [32]. 
The probability of taking a step in a directed hypergraph with EDVW is \[\begin{split} P(v\xrightarrow{}e)&=\frac{\gamma_{e}(v)}{\sum_{h}\gamma_{h}(v)},\\ P(e\xrightarrow{}v)&=\frac{\gamma_{e}(v)}{\sum_{n}\gamma_{e}(n)}.\end{split} \tag{11}\] The probability of following a path is obtained by multiplying the single-step probabilities, \[P(v_{0},v_{l})=P(v_{0}\xrightarrow{}e_{1})P(e_{1}\xrightarrow{}v_{1})\dots P(e_{l}\xrightarrow{}v_{l}). \tag{12}\] It is important to note that the search information might be ill-defined if the hypergraph has sources or sinks. For example, by definition, there are no paths from a sink node \(v_{\text{sink}}\) to any other node \(v\), making the definition of \(S(v_{\text{sink}},v)\) unclear in this case. To solve this problem, we set \(S(v_{\text{sink}},v)=0\) and then do not count sink and source nodes when computing the average. With this convention, the access, hide, and average search information are defined as \[\begin{split} A^{V}(s)&=\frac{1}{N-N_{\text{sources}}}\sum_{t}S^{V}(s,t)\\ H^{V}(t)&=\frac{1}{N-N_{\text{sinks}}}\sum_{s}S^{V}(s,t)\\ \bar{S}^{V}&=\frac{1}{(N-N_{\text{sinks}})(N-N_{\text{sources}})}\sum_{s,t}S^{V}(s,t).\end{split} \tag{13}\] As a consequence, the access information of a sink and the hide information of a source will be set to zero. Following [42], we introduce an additional normalization factor \(\log_{2}N\) to take into account size effects. We denote the normalized average search information as \(\sigma^{V}=\frac{\bar{S}^{V}}{\log_{2}N}\). The interpretation of these measures is very intuitive. The access information measures how easy it is to reach the other nodes in the network, while the hide information estimates how hidden a node is. Consequently, very central and connected nodes in the hypergraph have low hide information, because there are a lot of paths leading to them, but have relatively high access information, because there are also many paths departing from such nodes. ## IV Results and Discussion In this section, we apply the previously defined metrics to a range of metabolic hypergraphs. As illustrated in Fig. 1, these hypergraphs were constructed starting from metabolic networks obtained from the BiGG Dataset [37]. The metabolic networks were selected to have a reasonable variety of organisms. The primary goal of this section is to demonstrate the practical application of our framework and the defined measurements. ### Exploring the E. coli Core Model: A Practical Example To provide a tangible illustration of our methodology, we focus on the BiGG model known as e_coli_core [44]. This model represents a small-scale version of Escherichia coli str. K-12 substr. MG1655, making it an ideal candidate for demonstrating the performance of our metrics and understanding their limitations. Additionally, an Escher map for this model is available online [45]. In Fig. 2, we show the access vs. hide information for reactions and metabolites. Regarding the reactions (Fig. 2 a), the measure correctly identifies the Biomass reaction as a central hub. Reactions are plotted with different colors based on the biological pathway they belong to. We can clearly see the behavior of sinks and sources in the reactions belonging to the extracellular exchange pathway. The pathways do not tend to separate into clusters, indicating that they all have a similar complexity. This could be an effect of the simplicity of this model, or it could be a property shared by all organisms.
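As a brief aside before continuing with the E. coli discussion, the step and path probabilities of Eqs. (11)-(12) and the resulting search information of Eq. (10) can be sketched as follows. This is an illustrative reading (the sums in Eq. (11) are restricted to tails and heads so that walks respect directionality), not the authors' code.

```python
import math

def step_probabilities(tails, heads):
    """Transition probabilities of Eq. (11) for a directed EDVW hypergraph.

    tails[e] / heads[e] map hyperedge e to {vertex: gamma_e(vertex)}.  A vertex
    leaves through the hyperedges whose tail contains it, and a hyperedge is
    left towards the vertices of its head."""
    out_w = {}
    for e, tl in tails.items():
        for v, w in tl.items():
            out_w[v] = out_w.get(v, 0.0) + w          # sum_h gamma_h(v)
    p_ve = {(v, e): w / out_w[v] for e, tl in tails.items() for v, w in tl.items()}
    p_ev = {(e, v): w / sum(hd.values()) for e, hd in heads.items() for v, w in hd.items()}
    return p_ve, p_ev

def path_probability(path, p_ve, p_ev):
    """Probability of a walk (v0, e1, v1, ..., el, vl), Eq. (12)."""
    prob = 1.0
    for i in range(0, len(path) - 2, 2):
        v, e, v_next = path[i], path[i + 1], path[i + 2]
        prob *= p_ve[(v, e)] * p_ev[(e, v_next)]
    return prob

def search_information(shortest_paths, p_ve, p_ev):
    """S(i, j) of Eq. (10), given the set of shortest paths from i to j."""
    return -math.log2(sum(path_probability(p, p_ve, p_ev) for p in shortest_paths))

# Toy use: S(A, C) for the single shortest path (A, r0, C) in the earlier toy network.
tails = {"r0": {"A": 1.0, "B": 2.0}, "r1": {"C": 1.0}}
heads = {"r0": {"C": 1.0}, "r1": {"D": 1.0, "E": 1.0}}
p_ve, p_ev = step_probabilities(tails, heads)
print(search_information([("A", "r0", "C")], p_ve, p_ev))   # = -log2(1.0) = 0 bits
```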
We didn't investigate further since the scope of this section was just to provide a practical example, but it could be worth it to explore it in future work. We also comment on the reactions that are ranked the highest by the average communicability. The average communicability is defined as \(\bar{G_{e}}=\frac{1}{M}\sum_{h\in E}G_{he}^{E}\) and is shown in Fig. 3. Notably, the Biomass reaction (1-st highest average communicability) and ATP synthase (2-st highest average communicability) are correctly identified as central reactions within the metabolism. The Biomass reaction is responsible for cell growth, while ATP synthase plays a crucial role in ATP synthesis, the primary energy source for the organism. The production of ATP is mainly due to the consumption of oxygen that occurs through the reaction CYTBD (cytochrome oxidase bd - 6-th highest average communicability). When oxygen is unavailable, Escherichia coli can still survive thanks to the activation of the anaerobic pathway, which derives energy from the reaction THD2 (NAD(P) transhydrogenase -3-rd highest average communicability). Regarding the metabolites (Fig. 2 b), we observe a clear distinction between those belonging to the cytosol compartment and those located in the extracellular compartment. As expected, extracellular metabolites tend to have, on average, higher hide information. It is important to clarify that metabolites with zero hide information are source nodes and remain initialized to zero because they are unreachable. However, an instructive observation could be made on o2_c. As commented in section III, a node with low but non-zero hide information is expected to be a central hub, but in reality, it has a very low degree. The explanation for this helps to understand the implications of network directionality. The node o2_c is only connected to the core metabolism via the irreversible CYTBD reaction as a substrate. Consequently, there cannot be any directed path from the core metabolism to o2_c, only the opposite. We conclude that the node o2_c does not belong to the largest strongly connected component. In practice, it behaves very similarly to a source node. Nonetheless, the hide information is not zero because a pathway originates from the transport of external oxygen to the cytosol. In contrast, in cyanobacteria, algae, and plants (not investigated here) O2 is produced via oxygenic photosynthesis. In those organisms, O2 should be part of the strongly connected component. ### Robustness and complexity across organisms Our study assesses the robustness and complexity of 30 distinct metabolic hypergraphs derived from various eukaryotic and prokaryotic organisms. In Fig. 4, we present the computed robustness values for several organisms arranged in ascending order. The BiGG model associated with the organisms _Staphylococcus aureus subsp aureus_[46; 47], _Mycobacterium tuberculosis_[48; 49], _Acinetobacter baumannii AYE_[50], and _Salmonella enterica_[51] are represented in different color because they are bacteria that have evolved resistance to antibiotics. Except for the first _Staphylococcus aureus subsp aureus_ model, antibiotic-resistant bacteria tend to exhibit relatively high robustness compared to other organisms. We measured the Spearman's rank correlation between robustness and antibiotic resistance obtaining a value of 0.424, revealing a moderate correlation. Here, the definition of robustness is based on the network's resilience to random or targeted node removal. 
The concept of natural connectivity quantifies this resilience by counting the number of closed loops in the network. If there are many alternative paths, it is less probable that a node removal will disconnect the network. In the context of biology, antibiotics operate by targeting and inhibiting some specific reactions, without which the cell dies [1]. Therefore, having a structurally robust metabolism is advantageous as it allows the organism to circumvent antibiotic inhibition by utilizing alternative reactions or pathways. However, this is not the whole picture since many other factors play a role. For example, bacteria are naturally subjected to random mutations that may strengthen their response to antibiotics, and this may not necessarily be reflected in a high structural hypergraph robustness. Conversely, a very robust metabolic hypergraph, with many alternative paths, may have a few but very important reactions that are easy to target with antibiotics. Hence, high structural hypergraph robustness does not guarantee antibiotic resistance. The complexity of metabolic networks is expected to be quite similar across organisms since they share many common reactions and metabolic pathways. Nevertheless, some differences are expected in the metabolism of aerobic and anaerobic organisms, as well as between eukaryotes and prokaryotes. Aerobic and anaerobic organisms should have a different metabolism because of the different ways they produce energy, while eukaryotes and prokaryotes have significantly different cell structures. With this in mind, we measure the average search information of the 30 different metabolic hypergraphs and report the results in Fig. 5. We notice a clear separation between eukaryotes and some aerobic organisms, showing a high complexity, and prokaryotes, which have a lower Figure 3: Reactions average communicability for the e_coli_core model. A simplified Escher map is used as a background to help with the visualization. For a more accurate version of the map, visit [45]. Figure 2: Access vs. hide information for reactions a) and metabolites b). Reactions are colored differently according to the pathway they belong to. Not that the \(y\) axis is cut for visualization purposes. Metabolites are divided into compartments, \(c\) stands for cytosol compartment, and \(e\) for extracellular space. complexity. A few outliers exist, including _Staphylococcus aureus subsp aureus N315_, which exhibits high complexity, potentially due to unusually large weights associated with certain reactions compared to other organisms. Setting all the weights to 1 would indeed lead to a much lower complexity, ranked slightly below the average, indicating a possible bias. In addition, one can also notice that the other model for _Staphylococcus_ has a low complexity. Another outlier is the first model we analyzed for _Homo sapiens - erythrocytes_[52] that may be expected to be complex. However, it is important to note that this model refers just to the erythrocyte metabolism (blood cells) rather than the entire human metabolisms. Erythrocytes lack mitochondria and produce ATP through anaerobic glycolysis, so their metabolism could be closer to that of anaerobic organisms. Conversely, the low complexity of the aerobic organisms _Acinetobacter baumannii AYE_, _Pseudomonas putida_, and _Helicobacter pylori_ is curious, and we don't have a clear motivation. 
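The Spearman rank correlation between structural robustness and antibiotic resistance quoted above can be computed along the following lines; the arrays are placeholders, not the actual values obtained for the 30 BiGG models.

```python
from scipy.stats import spearmanr

# Placeholder inputs: one robustness value per model and a 0/1 resistance flag.
robustness = [12.3, 15.1, 9.8, 18.4, 14.2, 11.0, 16.7, 10.5]
resistant  = [0,    1,    0,   1,    1,    0,    1,    0]

rho, pval = spearmanr(robustness, resistant)
print(f"Spearman rank correlation: {rho:.3f} (p-value {pval:.3f})")
```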
Figure 4: The robustness measured as the natural connectivity \(\bar{\lambda}^{V}\) of 30 different BiGG models. The organisms resistant to antibiotics are shown in different colors. The models are ordered in increasing robustness.

Figure 5: The complexity measured as the average search information \(\sigma^{V}=\frac{\bar{S}^{V}}{\log_{2}N}\) of 30 different BiGG models. The models are ordered in increasing complexity, and the y-axis is zoomed in for visualization purposes.

Note that a generic human (_Homo sapiens_) cell has a complexity similar to that of a yeast cell (_Saccharomyces cerevisiae_). That is expected: eukaryote cells have similar metabolic pathways. The additional complexity in human metabolism is due to multi-cellularity, which is not accounted for in this study. ## V Conclusion Metabolic networks are very large and complex systems. For this reason, it is important to build a framework able to unite biology and network theory. Many successful studies have represented metabolic networks as graphs with metabolites as nodes, reactions as nodes, or both. Taking a step further, with the employment of hypergraphs, we are able to capture what all of these previous graph representations were missing: the higher-order interactions of reactions. In this paper, we show how metabolic networks are naturally mapped into hypergraphs. In particular, the stoichiometry matrix can be viewed as a weighted incidence matrix of a directed hypergraph with edge-dependent vertex weights. No information is lost when representing metabolic networks as hypergraphs: the higher-order interactions between metabolites, the directionalities of reactions, and the stoichiometric weights are all included. Within this novel framework, we propose two measurements to characterize the hypergraph's robustness and complexity. We apply them to directed hypergraphs with EDVW, but the generalization to undirected and unweighted hypergraphs is straightforward. This approach allows analysis at the local scale, with the communicability and the access and hide information, and at the global scale, with the natural connectivity as a measure of robustness and the average search information as a measure of complexity. We comment on the complications introduced by directionality and how they are reflected in the measures. To illustrate the practical application of our framework and metrics, we present an example using the e_coli_core model. This small-scale metabolism demonstrates how our metrics operate locally and offers valuable insights into the behavior of metabolic hypergraphs. At the global scale, we compare 30 different BiGG models in robustness and complexity, leading to some interesting results. We show that the metabolism of organisms that have evolved resistance to antibiotics is associated with hypergraphs that display high robustness. Furthermore, we observe that eukaryotic and prokaryotic organisms have different complexity values. A possibility for future work could be modifying the definition of the average search information and the probability of taking a step in the hypergraph. Here, we consider a walk biased by the stoichiometric weights, but more options could be explored. One possibility is to define the probabilities based on the communicability measure or on the rates obtained by flux balance analysis [32; 53]. Also, we did not consider the information regarding genes that is contained in the BiGG models.
Genomics plays a crucial role, especially in resistance to antibiotics, and for this reason, it could be interesting to integrate it into this framework. Another possibility is to apply our measures to other contexts, like social or technological hypergraphs. We believe that this framework represents a promising approach to bridge network theory and biology. We hope that it may serve as a starting point, potentially reaching experts in the field who could further refine and utilize these findings to get more biological insights. **Data availability** All data are publicly available on the BiGG models [38] web page in different formats. In this analysis, the \(.json\) format is used. **Code availability** Custom code that supports the findings of this study is available from the corresponding author upon request. ###### Acknowledgements. P.T., G.F.A., and Y.M. acknowledge the financial support of Soremartec S.A. and Soremartec Italia, Ferrero Group. Y.M. acknowledges partial support from the Government of Aragon and FEDER funds, Spain, through grant E36-20R (FENOL), and from the EU program Horizon 2020/H2020-SCI-FA-DTS-2020-1 (KATY project, contract number 101017453). We acknowledge the use of the computational resources of COSNET Lab at Institute BIFI, funded by Banco Santander (grant Santander-UZ 2020/0274) and by the Government of Aragon (grant UZ-164255). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
2305.16926
Combining Global and Local Merges in Logic-based Entity Resolution
In the recently proposed Lace framework for collective entity resolution, logical rules and constraints are used to identify pairs of entity references (e.g. author or paper ids) that denote the same entity. This identification is global: all occurrences of those entity references (possibly across multiple database tuples) are deemed equal and can be merged. By contrast, a local form of merge is often more natural when identifying pairs of data values, e.g. some occurrences of 'J. Smith' may be equated with 'Joe Smith', while others should merge with 'Jane Smith'. This motivates us to extend Lace with local merges of values and explore the computational properties of the resulting formalism.
Meghyn Bienvenu, Gianluca Cima, Víctor Gutiérrez-Basulto, Yazmín Ibáñez-García
2023-05-26T13:38:36Z
http://arxiv.org/abs/2305.16926v2
# Combining Global and Local Merges in Logic-based Entity Resolution ###### Abstract In the recently proposed Lace framework for collective entity resolution, logical rules and constraints are used to identify pairs of entity references (e.g. author or paper ids) that denote the same entity. This identification is global: all occurrences of those entity references (possibly across multiple database tuples) are deemed equal and can be merged. By contrast, a local form of merge is often more natural when identifying pairs of data values, e.g. some occurrences of 'J. Smith' may be equated with 'Joe Smith', while others should merge with 'Jane Smith'. This motivates us to extend Lace with local merges of values and explore the computational properties of the resulting formalism. ## 1 Introduction Entity resolution (ER) is a data quality management task aiming at identifying database different constants (of the same type) that refer to the same real-world entity [21]. Given the fundamental nature of this problem, several variants of ER (also known as record linkage or deduplication) have been investigated. _Collective_ entity resolution [1, 1] considers the joint resolution (match, merge) of entity references or values of multiple types across multiple tables, e.g. using the merge of two authors to infer that two paper ids have to be merged as well. Various approaches to collective ER, with different formal foundations, have been developed: probabilistic approaches, deep learning approaches, and approaches based on rules and constraints, see [1] for a survey. We have recently proposed Lace [1], a declarative framework for collective ER based upon logical rules and constraints. Lace employs hard and soft rules to define mandatory and possible merges. The semantics of Lace is dynamic: ER solutions are generated by sequences of rule applications, where rules are evaluated over the current induced database, taking into account all previously derived merges. This makes it possible to support recursive scenarios (e.g. a merge of authors triggers a merge of papers which in turn enables another merge of authors), while ensuring that all merges have a (non-circular) derivation. The semantics is also global in the sense that _all_ occurrences of the matched constants are merged, rather than only those constant occurrences used in deriving the match. Such a global semantics is well suited for merging constants that are entity references (e.g. authors or paper ids) and has been used in other prominent logic-based approaches [1, 2, 3]. However, for merging attribute values (e.g. author names), a local semantics, which considers the context in which a value occurs, is more appropriate. Indeed, a local semantics allows some occurrences of 'J. Smith' to be matched to 'Joe Smith' and others to 'Jane Smith', without (wrongly) equating the latter two constants. _Matching dependencies_[1, 1, 2] are an example of a principled logical formalism for merging values. To the best of our knowledge, there is currently no ER framework that supports both global and local merges. This motivates us to introduce Lace\({}^{+}\), an extension of Lace with local merges of values, in which local merges may enable global merges, and vice versa. In particular, local merges can resolve constraint violations which would otherwise block desirable global merges. 
Lace\({}^{+}\) extends Lace's syntax by adding hard and soft rules for values, but it departs from Lace semantics by considering sets of constants, rather than single constants, as arguments in induced databases. Intuitively, such a set of constants provides alternative representations of the same information, e.g. different forms of a name. The semantic treatment of local merges within Lace\({}^{+}\) aligns with the design of the generic ER framework Swoosh [1]. Our main contributions are the introduction of the new Lace\({}^{+}\) framework and the exploration of its computational properties. Our complexity analysis shows that the addition of local merges does not increase the data complexity of the considered reasoning tasks. We also show how an existing answer set programming (ASP) encoding of ER solutions in Lace can be extended to handle local merges of values. For a discussion of related work, see [1], and for an extension of Lace with repairing, see [1, 1]. ## 2 Preliminaries **Databases** We assume that _constants_ are drawn from three infinite and pairwise disjoinsts: a set **O** of _object constants_ (or _objects_), serving as references to real-world entities (e.g. paper and author ids), a set **V** of _value constants_ (or _values_) from the considered datatypes (e.g. strings for names of authors and paper titles, dates for time of publication), and a set **TID** of _tuple identifiers (tids)_. A _(database) schema_\(\mathcal{S}\) consists of a finite set of _relation symbols_, each having an associated arity \(k\in\mathbb{N}\) and type vector \(\{\textbf{O},\textbf{V}\}^{k}\). We use \(R/k\in\mathcal{S}\) to indicate that the relation symbol \(R\) from \(\mathcal{S}\) has arity \(k\), and denote by \(\textbf{type}(R,i)\) the \(i\)th element of \(R\)'s type vector. If \(\textbf{type}(R,i)=\textbf{O}\) (resp. **V**), we call \(i\) an _object (resp. value) position_ of \(R\). A (**TID**-_annotated_) \(\mathcal{S}\)-_database_ is a finite set \(D\) of _facts_ of the form \(R(t,c_{1},\ldots,c_{k})\), where \(R/k\in\mathcal{S}\), \(t\in\textbf{TID}\), and \(c_{i}\in\textbf{type}(R,i)\) for every \(1\leq i\leq k\). We require that each \(t\in\textbf{TID}\) occurs in at most one fact of \(D\). We say that \(t\) (resp. \(c_{i}\)) occurs in position \(0\) (resp. \(i\in\{1,\ldots,k\}\)) of \(R(t,c_{1},\ldots,c_{k})\), and slightly abusing notation, use \(t\) and \(t[j]\) respectively to refer to the unique fact having tid \(t\), and to the constant in the \(j\)th position of that fact. The set of constants (resp. objects) occurring in \(D\) is denoted \(\text{Dom}(D)\) (resp. \(\text{Obj}(D)\)), and the set \(\text{Cells}(D)\) of _(value) cells_ of \(D\) is defined as \(\{\langle t,i\rangle\mid R(t,c_{1},\ldots,c_{k})\in D,\textbf{type}(R,i)= \textbf{V}\}\). **Queries** In the setting of **TID**-annotated \(\mathcal{S}\)-databases, a _conjunctive query_ (_CQ_) has the form \(q(\vec{x})=\exists\vec{y}.\varphi(\vec{x},\vec{y})\), where \(\vec{x}\) and \(\vec{y}\) are disjoint tuples of variables, and \(\varphi(\vec{x},\vec{y})\) is a conjunction of relational atoms of the form \(R(u_{0},u_{1},\ldots,u_{k})\), where \(R/k\in\mathcal{S}\) and \(u_{i}\in\textbf{O}\cup\textbf{V}\cup\textbf{TID}\cup\vec{x}\cup\vec{y}\) for \(0\leq i\leq k\). When formulating entity resolution rules and constraints, we shall also consider extended forms of CQs that may contain inequality atoms or atoms built from a set of binary _similarity predicates_. 
Note that such atoms will not contain the tid position and have a fixed meaning1. As usual, the _arity_ of \(q(\vec{x})\) is the length of \(\vec{x}\), and queries of arity 0 are called _Boolean_. Given an \(n\)-ary query \(q(x_{1},\ldots,x_{n})\) and \(n\)-tuple of constants \(\vec{c}=(c_{1},\ldots,c_{n})\), we denote by \(q[\vec{c}]\) the Boolean query obtained by replacing each \(x_{i}\) by \(c_{i}\). We use \(\text{vars}(q)\) (resp. \(\text{cons}(q)\)) for the set of variables (resp. constants) in \(q\). Footnote 1: The extension of similarity predicates is typically defined by applying some similarity metric, e.g. edit distance, and keeping those pairs of values whose score exceeds a given threshold. **Constraints** Our framework will also employ denial constraints (DCs) [1, 12]. A _denial constraint_ over a schema \(\mathcal{S}\) takes the form \(\exists\vec{y}.\varphi(\vec{y})\rightarrow\bot,\) where \(\varphi(\vec{y})\) is a Boolean CQ with inequalities, whose relational atoms use relation symbols from \(\mathcal{S}\). We impose the standard safety condition: each variable occurring in an inequality atom must also occur in some relational atom. Denial constraints notably generalize the well-known class of _functional dependencies (PDs)_. To simplify the presentation, we sometimes omit the initial quantifiers from DCs. **Equivalence Relations** We recall that an _equivalence relation_ on a set \(S\) is a binary relation on \(S\) that is reflexive, symmetric, and transitive. We use \(\textsf{EqRel}(P,S)\) for the smallest equivalence relation on \(S\) that extends \(P\). ## 3 Lace+ Framework This section presents and illustrates Lace+, an extension of the Lace framework to handle local merges of values. ### Syntax of Lace+ Specifications As in Lace, we consider _hard and soft rules for objects (over schema \(\mathcal{S}\))_, which take respectively the forms: \[q(x,y)\Rightarrow\textsf{EqO}(x,y)\quad q(x,y)\dashrightarrow\textsf{EqO}(x,y)\] where \(q(x,y)\) is a CQ whose atoms may use relation symbols from \(\mathcal{S}\) as well as similarity predicates and whose free variables \(x\) and \(y\) occur only in object positions. Intuitively, the above hard (resp. soft) rule states that \((o_{1},o_{2})\) being an answer to \(q\) provides sufficient (resp. reasonable) evidence for concluding that \(o_{1}\) and \(o_{2}\) refer to the same real-world entity. The special relation symbol EqO (not in \(\mathcal{S}\)) is used to store such merged pairs of object constants. To handle local identifications of values, we introduce _hard and soft rules for values (over \(\mathcal{S}\))_, which take the forms: \[q(x_{t},y_{t})\Rightarrow\textsf{EqV}(\langle x_{t},i\rangle, \langle y_{t},j\rangle)\] \[q(x_{t},y_{t})\dashrightarrow\textsf{EqV}(\langle x_{t},i\rangle, \langle y_{t},j\rangle)\] where \(q(x_{t},y_{t})\) is a CQ whose atoms may use relation symbols from \(\mathcal{S}\) as well as similarity predicates, variables \(x_{t}\) and \(y_{t}\) each occur once in \(q\) in position \(0\) of (not necessarily distinct) relational atoms with relations \(R_{x}\in\mathcal{S}\) and \(R_{y}\in\mathcal{S}\), respectively, and \(i\) and \(j\) are value positions of \(R_{x}\) and \(R_{y}\), respectively. Intuitively, such a hard (resp. soft) rule states that a pair of tids \((t_{1},t_{2})\) being an answer to \(q\) provides sufficient (resp. 
reasonable) evidence for concluding that the values in cells \(\langle x_{t},i\rangle\) and \(\langle y_{t},j\rangle\) are non-identical representations of the same information. The special relation symbol EqV (not in \(\mathcal{S}\) and distinct from EqO) is used to store pairs of value cells which have been merged. **Definition 1**.: _A Lace+ entity resolution (ER) specification \(\Sigma\) for schema \(\mathcal{S}\) takes the form \(\Sigma=\langle\Gamma_{O},\Gamma_{V},\Delta\rangle\), where \(\Gamma_{O}=\Gamma_{\theta}^{o}\cup\Gamma_{s}^{o}\) is a finite set of hard and soft rules for objects, \(\Gamma_{V}=\Gamma_{h}^{o}\cup\Gamma_{s}^{v}\) is a finite set of hard and soft rules for values, and \(\Delta\) is a finite set of denial constraints, all over \(\mathcal{S}\)._ **Example 1**.: _The schema \(\mathcal{S}_{\text{ex}}\), database \(D_{\text{ex}}\), and ER specification \(\Sigma_{\text{ex}}=\langle\Gamma_{\text{ex}}^{O},\Gamma_{\text{ex}}^{V},\Delta_{ \text{ex}}\rangle\) of our running example are given in Figure 1. Informally, the denial constraint \(\delta_{1}\) is an FD saying that an author id is associated with at most one author name, while the constraint \(\delta_{2}\) forbids the existence of a paper written by the chair of the conference in which the paper was published. The hard rule \(\rho_{1}^{o}\) states that if two author ids have the same name and the same institution, then they refer to the same author. The soft rule \(\sigma_{1}^{o}\) states that authors who wrote a paper in common and have similar names are likely to be the same. Finally, the hard rule \(\rho_{1}^{v}\) locally merges similar names associated with the same author id._ ### Semantics of Lace+ Specifications In a nutshell, the semantics is based upon considering sequences of rule applications that result in a database that satisfies the hard rule and denial constraints. Every such sequence gives rise to a solution, which takes the form of a pair of equivalence relations \(\langle E,V\rangle\), specifying which objects and cells have been merged. Importantly, rules and constraints are evaluated w.r.t. the induced database, taking into account previously derived merges of objects and cells. In the original Lace framework, solutions consist of a single equivalence relation over objects, and induced databases are simply defined as the result of replacing every object with a representative of its equivalence class. Such an approach cannot however accommodate local identifications of values. For this reason, we shall work with an extended form of database, where the arguments are _sets of constants_. **Definition 2**.: _Given an \(\mathcal{S}\)-database \(D\), equivalence relation \(E\) over \(\mathsf{Obj}(D)\), and equivalence relation \(V\) over \(\mathsf{Cells}(D)\), we denote by \(D_{E,V}\) the (extended) database induced by \(D\), \(E\), and \(V\), which is obtained from \(D\) by replacing:_ * _each_ \(\text{id}\ t\) _with the singleton set_ \(\{t\}\)_,_ * _each occurrence of_ \(o\in\mathsf{Obj}(D)\) _by_ \(\{o^{\prime}\mid(o,o^{\prime})\in E\}\)_,_ * _each value in a cell_ \(\langle t,i\rangle\in\mathsf{Cells}(D)\) _with the set of values_ \(\{t^{\prime}[i^{\prime}]\mid(\langle t,i\rangle,\langle t^{\prime},i^{\prime} \rangle)\in V\}\)_._ It remains to specify how queries in rule bodies and constraints are to be evaluated over such induced databases. First, we need to say how similarity predicates are extended to sets of constants. 
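As an aside, Definition 2 lends itself to a direct materialization: from a fact table and two sets of merge pairs one can compute the equivalence classes and build the induced database. The fact encoding, the `value_positions` schema helper, and all identifiers below are illustrative assumptions made for the sketch, not part of the formal framework.

```python
def eq_classes(pairs, domain):
    """EqRel(P, S): classes of the smallest equivalence relation on `domain` extending `pairs`."""
    parent = {x: x for x in domain}
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]      # path halving
            x = parent[x]
        return x
    for x, y in pairs:
        parent[find(x)] = find(y)
    classes = {}
    for x in domain:
        classes.setdefault(find(x), set()).add(x)
    return {x: classes[find(x)] for x in domain}   # element -> its equivalence class

def induced_database(facts, value_positions, E_pairs, V_pairs):
    """Extended database D_{E,V} of Definition 2.

    `facts[t] = (R, (c_1, ..., c_k))` encodes the fact R(t, c_1, ..., c_k);
    `value_positions[R]` gives the value positions of R (an assumed schema helper)."""
    objects = {c for t, (R, args) in facts.items()
               for i, c in enumerate(args, start=1) if i not in value_positions[R]}
    cells = {(t, i) for t, (R, args) in facts.items() for i in value_positions[R]}
    E = eq_classes(E_pairs, objects)               # equivalence classes of objects
    V = eq_classes(V_pairs, cells)                 # equivalence classes of value cells

    induced = {}
    for t, (R, args) in facts.items():
        new_args = [{t}]                           # the tid stays a singleton set
        for i, c in enumerate(args, start=1):
            if i in value_positions[R]:            # value position: values of the cell's V-class
                new_args.append({facts[t2][1][j - 1] for (t2, j) in V[(t, i)]})
            else:                                  # object position: the E-class of c
                new_args.append(E[c])
        induced[t] = (R, new_args)
    return induced
```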
We propose that \(C_{1}\approx C_{2}\) is satisfied whenever there are \(c_{1}\in C_{1}\) and \(c_{2}\in C_{2}\) such that \(c_{1}\approx c_{2}\), since the elements of a set provide different possible representations of a value. Second, we must take care when handling join variables in value positions. Requiring all occurrences of a variable to map to the same set is too strong, e.g. it forbids us from matching \(\{\text{J. Smith},\text{Joe Smith}\}\) with \(\{\text{J. Smith}\}\). We require instead that the intersection of all sets of constants assigned to a given variable is non-empty. **Definition 3**.: _A Boolean query \(q\) (possibly containing similarity and inequality atoms) is satisfied in \(D_{E,V}\), denoted \(D_{E,V}\models q\), if there exists a function \(h:\mathsf{vars}(q)\cup\mathsf{cons}(q)\to 2^{\mathsf{Dom}(D)}\setminus\{\emptyset\}\) and functions \(g_{\pi}:\{0,\ldots,k\}\to 2^{\mathsf{Dom}(D)}\) for each \(k\)-ary relational atom \(\pi\in q\), such that:_ 1. \(h\) _is determined by the_ \(g_{\pi}\)_: for every_ \(a\in\mathsf{cons}(q)\)_,_ \(h(a)=\{a\}\)_, and for every_ \(z\in\mathsf{vars}(q)\)_,_ \(h(z)\) _is the intersection of all sets_ \(g_{\pi}(i)\) _such that_ \(z\) _is the_ \(i\)_th argument of_ \(\pi\)_;_ 2. _for every relational atom_ \(\pi=R(u_{0},u_{1},\ldots,u_{k})\in q\)_,_ \(R(g_{\pi}(0),g_{\pi}(1),\ldots,g_{\pi}(k))\in D_{E,V}\)_, and for every_ \(1\leq i\leq k\)_, if_ \(u_{i}\in\mathsf{cons}(q)\)_, then_ \(u_{i}\in g_{\pi}(i)\)_;_ 3. _for every inequality atom_ \(z\neq z^{\prime}\in q\)_:_ \(h(z)\cap h(z^{\prime})=\emptyset\)_;_ 4. _for every similarity atom_ \(u\approx u^{\prime}\in q\)_: there exist_ \(c\in h(u)\) _and_ \(c^{\prime}\in h(u^{\prime})\) _such that_ \(c\approx c^{\prime}\)_._ _For non-Boolean queries, the set \(q(D_{E,V})\) of answers to \(q(\vec{x})\) contains those tuples \(\vec{c}\) such that \(D_{E,V}\models q[\vec{c}]\)._ Observe that the functions \(g_{\pi}\) make it possible to map the same variable \(z\) to different sets, with Point 1 ensuring these sets have a non-empty intersection, \(h(z)\). It is this intersection set, storing the common values for \(z\), that is used to evaluate inequality and similarity atoms. Note that when constants occur in relational atoms, the sets assigned to a constant's position must contain that constant. The preceding definition of satisfaction of queries is straightforwardly extended to constraints and rules: * \(D_{E,V}\models\exists\vec{y}\cdot\varphi(\vec{y})\to\bot\) iff \(D_{E,V}\not\models\exists\vec{y}\cdot\varphi(\vec{y})\) * \(D_{E,V}\models q(x,y)\rightarrow\mathsf{EqQ}(x,y)\) iff \(q(D_{E,V})\subseteq E\) * \(D_{E,V}\models q(x_{t},y_{t})\rightarrow\mathsf{EqV}(\langle x_{t},i\rangle, \langle y_{t},j\rangle)\) iff \((t_{1},t_{2})\in q(D_{E,V})\) implies \((\langle t_{1},i\rangle,\langle t_{2},j\rangle)\in V\); where symbol \(\rightarrow\) can be instantiated by either \(\Rightarrow\) or \(\dashrightarrow\). We write \(D_{E,V}\models\Lambda\) iff \(D_{E,V}\models\lambda\) for every \(\lambda\in\Lambda\). With these notions in hand, we can formally define solutions of \(\mathsf{Lace}^{+}\) specifications. 
**Definition 4**.: _Given an ER specification \(\Sigma=\langle\Gamma_{O},\Gamma_{V},\Delta\rangle\) over schema \(\mathcal{S}\) and an \(\mathcal{S}\)-database \(D\), we call \(\langle E,V\rangle\) a candidate solution for \((D,\Sigma)\) if it satisfies one of the following:_ * \(E=\mathsf{EqRel}(\emptyset,\mathsf{Obj}(D))\) _and_ \(V=\mathsf{EqRel}(\emptyset,\mathsf{Cells}(D))\)_;_ * \(E=\mathsf{EqRel}(E^{\prime}\cup\{(o,o^{\prime})\},\mathsf{Obj}(D))\)_, where_ \(\langle E^{\prime},V\rangle\) _is a candidate solution for_ \((D,\Sigma)\) _and_ \((o,o^{\prime})\in q(D_{E,V})\) _for some_ \(q(x,y)\rightarrow\mathsf{EqO}(x,y)\in\Gamma_{O}\)_;_ * \(V=\mathsf{EqRel}(V^{\prime}\cup\{(\langle t,i\rangle,\langle t^{\prime},i^{\prime}\rangle)\},\mathsf{Cells}(D))\)_, where_ \(\langle E,V^{\prime}\rangle\) _is a candidate solution for_ \((D,\Sigma)\) _and_ \((t,t^{\prime})\in q(D_{E,V})\) _for some_ \(q(x_{t},y_{t})\rightarrow\mathsf{EqV}(\langle x_{t},i\rangle,\langle y_{t},i^{\prime}\rangle)\in\Gamma_{V}\)_._ _If also \(D_{E,V}\models\Gamma_{h}^{o}\cup\Gamma_{h}^{v}\cup\Delta\), then \(\langle E,V\rangle\) is a solution for \((D,\Sigma)\). We use \(\mathsf{Sol}(D,\Sigma)\) for the set of solutions for \((D,\Sigma)\)._ We return to our running example to illustrate solutions and the utility of local merges: **Example 2**.: _Starting from database \(D_{\mathsf{ex}}\), we can apply the soft rule \(\sigma_{1}^{o}\) to merge author ids \(a_{1}\) and \(a_{2}\) (more formally, we minimally extend the initial trivial equivalence relation \(E\) to include \((a_{1},a_{2})\)). The resulting induced instance is obtained by replacing all occurrences of \(a_{1}\) and \(a_{2}\) by \(\{a_{1},a_{2}\}\). Note that the constraint \(\delta_{1}\) is now violated, since \(t_{1}\) and \(t_{2}\) match on aid, but have different names. In the original Lace framework, this would prevent \((a_{1},a_{2})\) from belonging to any solution. However, thanks to the hard rule for values \(\rho_{1}^{v}\), we can resolve this violation. Indeed, \(\rho_{1}^{v}\) is applicable and allows us to (locally) merge the names in facts \(t_{1}\) and \(t_{2}\). The new induced database contains \(\{\text{J. Smith},\text{Joe Smith}\}\) in the name position of \(t_{1}\) and \(t_{2}\), but the names for \(t_{3}\), \(t_{4}\), \(t_{5}\) remain as before. Note the importance of performing a local rather than a global merge: if we had grouped J. Smith with Joe Smith everywhere, this would force a merge of \(a_{3}\) with \(a_{4}\) due to the hard rule \(\rho_{1}^{o}\), which would in turn violate \(\delta_{2}\), again resulting in no solution containing \((a_{1},a_{2})\). Following the local merge of the names of \(t_{1}\) and \(t_{2}\), the hard rule \(\rho_{1}^{o}\) becomes applicable and allows us (actually, forces us) to merge (globally) author ids \(a_{1}\) and \(a_{5}\). We let \(\langle E_{\mathsf{ex}},V_{\mathsf{ex}}\rangle\) be the pair of equivalence relations obtained from the preceding rule applications. As the instance induced by \(\langle E_{\mathsf{ex}},V_{\mathsf{ex}}\rangle\) satisfies all hard rules and constraints, \(\langle E_{\mathsf{ex}},V_{\mathsf{ex}}\rangle\) is a solution. Another solution is the pair of trivial equivalence relations, since \(D_{\mathsf{ex}}\) satisfies the constraints and hard rules._ Similarly to [1], we will compare solutions w.r.t. set inclusion, to maximize the discovered merges. 
**Definition 5**.: _A solution \(\langle E,V\rangle\) for \((D,\Sigma)\) is a maximal solution for \((D,\Sigma)\) if there exists no solution \(\langle E^{\prime},V^{\prime}\rangle\) for \((D,\Sigma)\) such that \(E\cup V\subsetneq E^{\prime}\cup V^{\prime}\). We denote by \(\mathsf{MaxSol}(D,\Sigma)\) the set of maximal solutions for \((D,\Sigma)\)._ **Example 3**.: _The solution \(\langle E_{\mathsf{ex}},V_{\mathsf{ex}}\rangle\) described in Example 2 is not maximal as the soft rule \(\sigma_{1}^{o}\) can be applied to get \((a_{6},a_{7})\) or \((a_{7},a_{8})\). Notice, however, that it is not possible to include both merges, otherwise by transitivity, \(a_{6},a_{7},a_{8}\) would all be replaced by \(\{a_{6},a_{7},a_{8}\}\), which would violate denial \(\delta_{1}\) due to paper \(p_{5}\). We have two maximal solutions: a first that extends \(\langle E_{\mathsf{ex}},V_{\mathsf{ex}}\rangle\) with \((a_{6},a_{7})\) and the corresponding pair of name cells \((\langle t_{6},2\rangle,\langle t_{7},2\rangle)\) (due to \(\rho_{1}^{v}\)), and a second that extends \(\langle E_{\mathsf{ex}},V_{\mathsf{ex}}\rangle\) with \((a_{7},a_{8})\) and the corresponding name cells \((\langle t_{7},2\rangle,\langle t_{8},2\rangle)\) (again due to \(\rho_{1}^{v}\))._ The Lace\({}^{+}\) framework properly generalizes the one in [1]: if we take \(\Sigma=\langle\Gamma_{O},\emptyset,\Delta\rangle\) (i.e. no rules for values), then \(E\) is a solution for \((D,\Sigma)\) in the original Lace framework iff \(\langle E\cap(\mathbf{O}\times\mathbf{O}),\mathsf{EqRel}(\emptyset,\mathsf{Cells}(D))\rangle\in\mathsf{Sol}(D,\Sigma)\). More interestingly, we show that it is in fact possible to simulate global merges using local merges. **Theorem 1**.: _For every ER specification \(\Sigma=\langle\Gamma_{O},\Gamma_{V},\Delta\rangle\) over \(\mathcal{S}\), there exists a specification \(\Sigma^{\prime}=\langle\emptyset,\Gamma_{V}^{\prime},\Delta\rangle\) (over a modified \(\mathcal{S}\), with all object positions changed to value positions, and all object constants treated as value constants) such that for every \(\mathcal{S}\)-database \(D\): \(\mathsf{Sol}(D,\Sigma^{\prime})=\{\langle\emptyset,V\cup V_{E}\rangle\mid \langle E,V\rangle\in\mathsf{Sol}(D,\Sigma)\}\), where \(V_{E}\) contains all pairs \((\langle t,i\rangle,\langle t^{\prime},j\rangle)\) such that \((t[i],t^{\prime}[j])\in E\)._ ## 4 Computational Aspects We briefly explore the computational properties of Lace\({}^{+}\). As in [1], we are interested in the _data complexity_ of the following decision problems: Rec (resp. MaxRec), which checks if \(\langle E,V\rangle\in\mathsf{Sol}(D,\Sigma)\) (resp. \(\langle E,V\rangle\in\mathsf{MaxSol}(D,\Sigma)\)); Existence, which determines if \(\mathsf{Sol}(D,\Sigma)\neq\emptyset\); CertMerge (resp. PossMerge), which checks if a candidate merge belongs to \(E\cup V\) for all (resp. some) \(\langle E,V\rangle\in\mathsf{MaxSol}(D,\Sigma)\); and CertAns (resp. PossAns), which checks whether \(\bar{c}\in q(D_{E,V})\) for all (resp. some) \(\langle E,V\rangle\in\mathsf{MaxSol}(D,\Sigma)\). Interestingly, we show that incorporating local merges does not affect the complexity of any of the above decision problems. **Theorem 2**.: _Rec is P-complete; MaxRec is coNP-complete; Existence, PossMerge, and PossAns are NP-complete; CertMerge and CertAns are \(\Pi_{2}^{p}\)-complete._ 
_For specifications that do not use inequality atoms in denial constraints, Rec, MaxRec, and Existence are P-complete; PossMerge and PossAns are NP-complete; CertMerge and CertAns are coNP-complete._ Due to Theorem 1, all lower bounds hold even for specifications that do not contain any rules for objects. An ASP encoding of the original Lace framework was proposed in [1]. We can extend that encoding to obtain a normal logic program \(\Pi_{Sol}\) whose stable models capture Lace\({}^{+}\) solutions: **Theorem 3**.: _For every database \(D\) and specification \(\Sigma=\langle\Gamma_{O},\Gamma_{V},\Delta\rangle\): \(\langle E,V\rangle\in\mathsf{Sol}(D,\Sigma)\) iff \(E=\{(a,b)\mid EqO(a,b)\in M\}\) and \(V=\{(\langle t,i\rangle,\langle t^{\prime},i^{\prime}\rangle)\mid EqV(t,i,t^{\prime},i^{\prime})\in M\}\) for a stable model \(M\) of \((\Pi_{Sol},D)\)._ We refer to the appendix for the details and sketch here how rules for values are handled. Basically, every hard rule \(q(x_{t},y_{t})\Rightarrow\mathsf{EqV}(\langle x_{t},i\rangle,\langle y_{t},j\rangle)\) is translated into the ASP rule \(\textit{EqV}(x_{t},i,y_{t},j)\leftarrow\hat{q}(x_{t},y_{t})\). To define \(\hat{q}\), we use \(\mathsf{vpos}(v)\) (resp. \(\mathsf{opos}(v)\)) for the set of pairs \((u_{t},i)\) such that \(v\) occurs in a value (resp. object) position \(i\) in an atom \(R(u_{t},v_{1},\ldots,v_{k})\in q\). The query \(\hat{q}\) is obtained from \(q\) by replacing each occurrence \((u_{t},i)\) of a non-distinguished variable \(v\) in \(q\) with a fresh variable \(v_{(u_{t},i)}\), and then: * for every join variable \(v\) in \(q\), take fresh variables \(u_{t}^{\prime},k,v^{\prime}\) and add to \(\hat{q}\) the set of atoms \(\{EqV(u_{t},i,u_{t}^{\prime},k)\mid(u_{t},i)\in\mathsf{vpos}(v)\}\cup\{EqO(v_{(u_{t},i)},v^{\prime})\mid(u_{t},i)\in\mathsf{opos}(v)\}\); * for each atom \(\alpha=v\approx w\), take fresh variables \(v^{\prime},w^{\prime}\) and replace \(\alpha\) by the set of atoms \(\{\textit{Val}(u_{t},i,v^{\prime})\mid(u_{t},i)\in\mathsf{vpos}(v)\}\cup\{\textit{Val}(u_{t}^{\prime},j,w^{\prime})\mid(u_{t}^{\prime},j)\in\mathsf{vpos}(w)\}\cup\{v^{\prime}\approx w^{\prime}\}\), where _Val_ is a predicate defined by the rule: \[\textit{Val}(u_{t},i,v)\leftarrow\textit{EqV}(u_{t},i,u_{t}^{\prime},j),\textit{Proj}(u_{t}^{\prime},j,v),\quad\text{and}\] ground atoms \(\textit{Proj}(t,i,c)\) of \(\textit{Proj}/3\) encode \(t[i]=c\). Soft rules for values are handled similarly: we use the same modified body \(\hat{q}\), but then enable a choice between producing _EqV_(\(x_{t},i,y_{t},j\)) or not applying the rule (adding a blocking fact _NEqV_(\(x_{t},i,y_{t},j\))). Additionally, \(\Pi_{Sol}\) will contain rules that encode object rules (producing _EqO_ facts), rules that ensure _EqV_ and _EqO_ are equivalence relations, and rules that enforce the satisfaction of the denial constraints. ## Acknowledgements This work has been supported by the ANR AI Chair INTENDED (ANR-19-CHIA-0014), by MUR under the PNRR project FAIR (PE0000013), and by the Royal Society (IES\(\backslash\)R3\(\backslash\)193236).
2303.14267
A Self-supervised Framework for Improved Data-Driven Monitoring of Stress via Multi-modal Passive Sensing
Recent advances in remote health monitoring systems have significantly benefited patients and played a crucial role in improving their quality of life. However, while physiological health-focused solutions have demonstrated increasing success and maturity, mental health-focused applications have seen comparatively limited success in spite of the fact that stress and anxiety disorders are among the most common issues people deal with in their daily lives. In the hopes of furthering progress in this domain through the development of a more robust analytic framework for the measurement of indicators of mental health, we propose a multi-modal semi-supervised framework for tracking physiological precursors of the stress response. Our methodology enables utilizing multi-modal data of differing domains and resolutions from wearable devices and leveraging them to map short-term episodes to semantically efficient embeddings for a given task. Additionally, we leverage an inter-modality contrastive objective, with the advantages of rendering our framework both modular and scalable. The focus on optimizing both local and global aspects of our embeddings via a hierarchical structure renders transferring knowledge and compatibility with other devices easier to achieve. In our pipeline, a task-specific pooling based on an attention mechanism, which estimates the contribution of each modality on an instance level, computes the final embeddings for observations. This additionally provides a thorough diagnostic insight into the data characteristics and highlights the importance of signals in the broader view of predicting episodes annotated per mental health status. We perform training experiments using a corpus of real-world data on perceived stress, and our results demonstrate the efficacy of the proposed approach in performance improvements.
Shayan Fazeli, Lionel Levine, Mehrab Beikzadeh, Baharan Mirzasoleiman, Bita Zadeh, Tara Peris, Majid Sarrafzadeh
2023-03-24T20:34:46Z
http://arxiv.org/abs/2303.14267v1
A Self-supervised Framework for Improved Data-Driven Monitoring of Stress via Multi-modal Passive Sensing ###### Abstract Recent advances in remote health monitoring systems have significantly benefited patients and played a crucial role in improving their quality of life. However, while physiological health-focused solutions have demonstrated increasing success and maturity, mental health-focused applications have seen comparatively limited success in spite of the fact that stress and anxiety disorders are among the most common issues people deal with in their daily lives. In the hopes of furthering progress in this domain through the development of a more robust analytic framework for the measurement of indicators of mental health, we propose a multi-modal semi-supervised framework for tracking physiological precursors of the stress response. Our methodology enables utilizing multi-modal data of differing domains and resolutions from wearable devices and leveraging them to map short-term episodes to semantically efficient embeddings for a given task. Additionally, we leverage an inter-modality contrastive objective, with the advantages of rendering our framework both modular and scalable. The focus on optimizing both local and global aspects of our embeddings via a hierarchical structure renders transferring knowledge and compatibility with other devices easier to achieve. In our pipeline, a task-specific pooling based on an attention mechanism, which estimates the contribution of each modality on an instance level, computes the final embeddings for observations. This additionally provides a thorough diagnostic insight into the data characteristics and highlights the importance of signals in the broader view of predicting episodes annotated per mental health status. We perform training experiments using a corpus of real-world data on perceived stress, and our results demonstrate the efficacy of the proposed approach in performance improvements1. Footnote 1: Codes are available at [https://github.com/shayanfazel/tabluence](https://github.com/shayanfazel/tabluence) machine learning, eHealth, wireless health, mental health, self-supervised learning, remote health monitoring ## I Introduction The rising epidemic of mental health disorders, worsened by the recent COVID-19 pandemic, speaks to the growing need for effective and timely management of mental health disorders. The pandemic led to an increase in the need for mental health services, while concurrently, given the circumstances surrounding the outbreak, limited access to traditional modalities of care. This necessitated the explosion in the usage of alternative mechanisms to deliver mental health services, mainly through remote formats [1]. For that reason, in spite of increased barriers to access, unprecedented levels of funding have gone into programs to address mental health issues among the general public. For instance, in 2020 alone, the United States government spent around \(280\) billion dollars on mental health services [2]. Therefore, even as the pandemic, and associated restrictions on in-person activities, have subsided, the gaps it revealed in traditional in-person-based therapeutic services persist, and demand for remote solutions remains high. 
While much of the focus of remote mental health services has been around the use of video conferencing, instant messaging, and other modes of communication to facilitate interactions between therapists and patients, the use of mHealth technology and passive monitoring has the potential to be equally impactful at addressing barriers to care and gaps in monitoring. Furthermore, by leveraging personal digital devices equipped with numerous sensors that are capable of monitoring many aspects of an individual's physiology and lifestyle (e.g., heart rate, activity level), remote health monitoring provides a novel pathway not only to monitor existing indicators of mental health, but also to improve upon our understanding of mental health disorders and their impacts on one's life. Stress, commonly defined as "physical, mental, or emotional strain or tension," is a widespread problem with numerous potential causes. According to the American Institute of Stress, \(73\%\) of people suffer from acute bouts of stress to a degree of magnitude that impacts their mental well-being. Incidents of anxiety often manifest similarly to stress; however, anxiety is not always immediately tied to a specific triggering or inciting event and may take longer to resolve. All told, both stress and anxiety problems are very common, to the extent that most adults have been affected by at least one anxiety-related disorder [3, 4]. Anxiety-related disorders can have a significant negative impact on the quality of life, leading to other mental health disorders such as depression, as well as causing physical health problems [5]. In contemplating improved means to address mental health challenges generally, and anxiety-related disorders specifically, it is notable that a critical part of modern healthcare involves accurate and efficient tracking of individuals' well-being through time. Examples include tracking athletes and their training trajectories, and patients' rehabilitation exercises [6, 7, 8, 9, 10]. Compared to physiological health, the mental health domain is less investigated in the context of remote health monitoring. This is largely due to a confluence of reasons. For one, the statistical sufficiency of observations obtained via data-driven approaches is not often intuitively clear (e.g., can one draw a conclusion regarding depression from the number of phone calls?). Another reason is that the data required for enabling the use of artificial intelligence (AI) is often not readily available, or is kept exclusive, due to privacy and regulatory concerns. The works in this domain, therefore, have mostly focused on longer-term patient phenotyping (e.g., classifying patients into bipolar disorder vs healthy) [11]. While these high-level labels are useful, they can be limited in their utility, as stress often manifests as an emotional and physiological response of an individual to a triggering event, and can occur to anyone regardless of a formal diagnosis. For example, arguing with someone and being anxious about a deadline are instances of interpersonal and work-related stress that are liable to occur to anyone regardless of the existence of a pre-existing mental health disorder. Furthermore, given the highly localized and temporary nature of these shorter-term episodes and the scarcity of data, making the most of the available observations becomes critical. 
Inspired by the advancements in the domain of self-supervised learning, we propose a multi-modal self-supervised learning framework to learn the context of stress response from continuous physiological readings. This proposed setup addresses the following challenges and concerns regarding data-driven monitoring of stress and anxiety: * The proposed method is inherently modular with regards to the different modalities of data, and therefore proper data-layer transforms allows leveraging various devices (e.g., smartwatches and wearable sensors different from ours) to learn efficient representations for health monitoring. * The self-supervised component allows training the network with a higher level of granularity and makes training more efficient. This is especially needed as the amount of labeled data available is often limited and costly to acquire, in contrast to sensor data that is generally trivially available in large quantities. * The use of the attention mechanism enables a diagnostic view of the system, allowing the researchers to look into the empirical connection between various modes of data for specific monitoring tasks, counteracting the masking effect of many deep-learning frameworks on interpretability. * In developing this framework, we conducted experiments on real-world data collected on perceived stress and have shown that this approach improves the performance compared to prior work leveraging early-fused embeddings of the same benchmark dataset. ## II Related Works Stress and anxiety-related disorders are common mental health challenges. Such disorders can have significant negative impacts on people's lives, including higher chances of depression and suicide as well as associated comorbidities with physical health issues [5]. Unfortunately, in many cases, these issues remain inadequately treated due to challenges ranging from lack of viable access to therapeutic services to associated stigmas with utilization [3, 4]. However, even when an individual decides to seek psychotherapeutic help to alleviate these problems, challenges persist in the diagnosis and effective treatment of their disorder. At the inception of care, the steps to diagnose and monitor often include clinical evaluation and comparing personalized symptoms to standardized criteria, for example, the Diagnostic and Statistical Manual of Mental Disorders (DSM-5), which is commonly used for this matter. Researchers continue to study and improve the practicality and accuracy of guidelines such as DSM-5 [12, 13], but there are challenges in converting aggregated and generalized diagnostic criteria, down to episodic-level incidents of stress and anxiety. The most obvious approach to doing so leverages biometric data, extracted from wearable sensors embedded in smart devices, that measure a physiological stress response. However, while such data is incredibly valuable, and notably, sensing devices have become increasingly sophisticated at monitoring physiological stress, the resulting analyses are incomplete at best. This owes to the fact that from the standpoint of straightforward correlative analytics, it is known that there is not a direct monotonic correlation between the emotional perception of stress an individual may feel and the manifestation of the underlying physiological stress response. 
A meta-analysis in the social stress domain, for instance, has recently shown that merely \(25\%\) of studies in the domain demonstrated a significant correlation between physiological stress and perceived emotional stress [14]. Given that self-reports of perceived stress often do not contain information on the physiological stress response, understanding the complex relationship between the two becomes a crucial matter [14]. It is also plausible to assume that such complexity arises from various other confounding factors (e.g., demographics, occupation, and other mental health disorders such as attention-deficit hyperactivity disorder (ADHD) can influence how prone someone is to stress). This discrepancy has meaningful impacts on the utility of passive detection of stress based largely on physiological indicators. While sensors may be returning accurate readings on physiological stress, if they do not align with the user's own perceptions of stress, notably if they fail to properly account for moments when a user feels acute emotional distress, then it will demotivate further engagement with a mental health platform. This hindrance comes in spite of considerable progress that has been made in recent decades regarding the capabilities and efficacy of personal digital devices, including smartwatches, smartphones, and wearable devices. This fact has made such devices attract a lot of research and commercial attention, with numerous works employing them for various monitoring objectives [15, 16]. These monitoring approaches focus primarily on fitness and health-related aspects, resulting in a large body of research and countless commercialized applications. Examples include tracking athletes' training, detecting falls for the elderly, tracking post-surgery therapeutic and rehabilitation exercises, and posture correction [17, 18, 7, 19]. While the central focus of health monitoring applications has undoubtedly been on physical health, a wide range of research works has focused on understanding the relationship between observations obtained leveraging digital devices and some aspects of individuals' mental health status. It is noteworthy that a primary goal in designing smart and automated approaches for mental health monitoring has to do with proposing meaningful passive-sensing tools so that informative observations regarding health status can be made by eliminating or diminishing the need to interfere with users' daily activities or request repeated active interactions. As a remote mental health monitoring task, social anxiety was studied in the previous literature, and it was shown that analyzing trajectories obtained via smartphone location services can paint a comprehensive picture concerning individuals' proneness to it. To do so, the movements and the nature of locations visited (which were obtained by cross-referencing location data with a map API) were taken into consideration, and the hypothesis of whether or not such a corpus is informative for recognizing the presence of social anxiety was tested [20, 21, 22, 23]. Smartphones have also been helpful in developing an understanding of anxiety [24]. Another choice of hardware for gathering data pertinent to health is application-specific wearable sensors. For instance, wearable electrocardiogram (ECG) sensors were used to recognize perceived anxiety via pattern recognition [25]. Smartwatches have a unique position amongst the wide range of various commonly used digital devices. 
They are in close contact with the skin and, given their attachment to the user's wrist, which is a distal point of a major appendage, make it possible to obtain most measurements (e.g., activity) at higher accuracy, as well as enabling additional measurements such as heart rate or pulse oximetry. In case of the need for brief questions, interactions, or Ecological Momentary Assessments (EMAs), smartwatches can also be used to issue messages and acquire responses and entries by the user [15, 16, 26]. Additionally, smartwatches are prevalent, and relying on them as the hardware for health applications provides a better alternative in most cases to application-specific wearable devices in terms of cost, comfort, and user-friendliness. Data-driven analyses leveraging smartwatches' sensory readings have been successful at the problem of patient classification for bipolar disorder, schizoaffective disorder, and depression [11]. It has also been shown that physiological readings made by basic smartwatch sensors enable efficient modeling of the perceived stress response [27]. In the health analytics domain, data and human annotations are often limited. Therefore, dealing with overfitting and memorization is a crucial matter. Additionally, it is beneficial to go beyond the limited number of human annotations available in training efficient inference pipelines. Less reliance on annotations by focusing on unsupervised and self-supervised approaches has received a lot of research attention in recent years [28, 29, 30, 31]. The core idea in most works in this area is that comparing and contrasting the latent representations of examples that are expected to share certain similarities (e.g., augmented versions of the same image) can benefit the trained weights and help with regularizing the learned decision boundaries [32]. In short, this work is primarily focused on addressing the limitations in the previous literature on remote mental health monitoring. The previous works do not go beyond leveraging scarcely available annotations in training network parameters and mainly rely on data augmentation to improve their performance. They do not focus on encapsulation in embedding different modalities, which can be an obstacle in employing optimal encoders for each modality and can hinder transfer learning. Additionally, they do not focus on the interpretability of the inference pipeline, which is crucial in health-related applications. To address these challenges, this work proposes a framework for leveraging smartwatch-based sensor-driven data to recognize _perceived stress_, enabling a novel approach to remote mental health monitoring. Our proposed inference pipeline is modular and hierarchical and is composed of modality-specific embedding branches. The final embedding is computed via a task-specific attention-pooling mechanism, which also provides an interpretation of the estimated contribution of each modality's information to the final embedding. During training, we leverage an inter-modality contrastive objective so as to encourage consistency among the predictions and tune all encoder branches. Figure 1 depicts the overview of our proposed framework. The details of our approach are discussed in the next section. ## III Methodology Consider a cohort of \(P\) individuals undergoing a study wearing a smartwatch-based remote monitoring system. This wearable setup allows for the collection of sensory readings pertinent to users' exhibited physiological and activity patterns throughout the day. 
In our case, the study in question monitors the connection between readings made by the smartwatch, which are mostly related to physiological signals, health and activity status, and short-term _perceived_ stress reported by the user. The smartwatch's readings can thus be grouped into features corresponding to several modalities: \(m\in\{1,2,\cdots,M\}\). This grouping depends mainly on the nature of the features, as well as the setup and the interface provided by the smartwatch. From each modality, we have a sequence of observed feature vectors: \[\mathbf{x}_{m}^{(p)}=\{x_{m,t}^{(p)}\}_{t\in[T_{m,\text{max}}^{(p)}]} \tag{1}\] From a user's timeline, we extract short-term timespans, each of which corresponds to an _episode_\(e\), which is the result of filtering the timeline and restricting it to the episode's timespan: \(e=(t_{\text{start}},t_{\text{end}})\): \[\mathbf{x}_{m,e}^{(p)}=\{x_{m,t}^{(p)}\in\mathbf{x}_{m}^{(p)}|t\in e\} \tag{2}\] We have a parameterized domain-specific2 encoder \(f(\cdot;\theta_{m})\) for each modality \(m\in[M]\), which performs the task of mapping the observed data from this modality to a _shared_ semantic space \(\mathcal{S}\): Footnote 2: The term _domain_ in this manuscript refers to the observation type, for example, a Transformer-based Language Model could efficiently represent data from textual domain, and there could be multiple _modalities_ with their observations being text data, each represented by their own specific encoder. \[f(\cdot;\theta_{m}):\mathcal{X}_{m}\rightarrow\mathcal{S}\quad\forall m\in[M] \tag{3}\] Hence, the latent embedding denoted by \(z_{m,e}^{(p)}\) can be found as follows: \[z_{m,e}^{(p)}=f(\mathbf{x}_{m,e}^{(p)};\theta_{m})\quad\forall m\in[M] \tag{4}\] One could argue that it is plausible to assume that the contributions of observations from different modalities to the final prediction on a specific task follow a non-uniform distribution in most cases. For instance, there is no reason to assume the statistical significance of heart-rate time series is the same as pulse oximeter readings for the task of stress detection. Going one step further, such disparity can manifest itself in the level of _instance_ representations as well. To illustrate this further, consider a simple case of "missingness" in data or presence of noise. This could mean that even though mode \(m_{1}\), for example, is more informative (in expectation) to the task \(\tau\), in an instance where the data from this group appears missing or clearly corrupt, the importance of other modalities could change respectively. Therefore, we have designed a _modality importance_ head, implemented as a fully connected pipeline, which determines the contribution of each mode by weighing their respective embedding vectors, which were projected to the same semantic space. 
Fig. 1: Our proposed multi-modal self-supervised learning pipeline. Modality-specific data from different distributions are encoded through dedicated encoders and mapped to a shared latent representation space. The aggregated embedding of the segment is then computed by applying attention pooling to the modality-specific representations. A self-supervised contrastive objective aligns this aggregate embedding and the mode-specific representations.

The first step of this attention-based pooling mechanism involves using the modality importance head to obtain a (yet unnormalized) weight \(a_{i}^{(m)}\) for the latent embedding of modality \(m\)'s information in an instance \(i\): \[a_{i}^{(m)}\gets g(\mathbf{z}_{i}^{(m)};\mathbf{\psi})\quad\forall m\in[M] \tag{5}\] This is followed by a softmax operation to make sure that the summation of the predicted contributions maps to unity; in other words, the contribution matrix is right stochastic: \[\alpha_{i}^{(m)}=\frac{\exp(a_{i}^{(m)})}{\sum_{j\in[M]}\exp(a_{i}^{(j)})} \tag{6}\] The final aggregated latent is then computed using these attention weights: \(\mathbf{z}_{i}=\sum_{m=1}^{M}\alpha_{i}^{(m)}\cdot\mathbf{z}_{i}^{(m)}\). We leverage the cosine similarity \(\phi(\cdot,\cdot)\) to measure the compatibility between the latent representation of each mode and the aggregate representation \(\mathbf{z}_{i}\): \[\phi(\mathbf{u},\mathbf{v})=\frac{h(\mathbf{u})^{T}\cdot h(\mathbf{v})}{\|h(\mathbf{u})\|_{2}\cdot\|h(\mathbf{v})\|_{2}} \tag{7}\] In other words, we use the aggregated embedding \(\mathbf{z}_{i}\) as an anchor and define a contrastive objective that leverages the distances and inconsistencies between the latent embeddings: \[\mathcal{L}_{\text{cl}}=\frac{1}{|\mathcal{B}|}\sum_{i\in\mathcal{B}}\frac{1}{|\mathcal{M}|}\sum_{m\in\mathcal{M}}-\log\frac{\exp(\phi(\mathbf{z}_{i}^{(m)},\mathbf{z}_{i})/\tau)}{\sum_{j\in\mathcal{B},j\neq i}\exp(\phi(\mathbf{z}_{i}^{(m)},\mathbf{z}_{j})/\tau)} \tag{8}\] We have experimented with \(\mathcal{L}_{\text{cl}}\) in the following training schemes: * _Pre-training_: Pre-training the model parameters by optimizing \(\mathcal{L}_{\text{cl}}\) through a long training sequence. Afterward, start with the resulting weights as the initial point for the supervised fine-tuning of the model with the cross-entropy objective: \[\mathcal{L}_{\text{cross-entropy}}=-\sum_{c\in\mathcal{C}}y_{c}\ln p_{c}\] (9) In the equation above, \(\mathcal{C}\) is the set of all classes (e.g., in our experiments, the two categories of stressed and non-stressed for each episode), and \(p_{c}\) is the predicted probability of class \(c\) for an observation, computed by passing representations through a final projection and Softmax layer. * _Regularization_: Use \(\lambda_{\text{reg}}\cdot\mathcal{L}_{\text{cl}}\) as a regularization term in the overall loss, and train the model by optimizing this term simultaneously with the supervised learning objective. There are several points worth remarking upon with regard to the comparison of these two training schemes. To begin with, deciding whether pre-training is going to lead to better generalization performance than the regularization-based approach depends on model complexity, availability of data, and the challenges of the specific task that one is targeting. That being said, the regularization approach is expected to be considerably faster than the two-stage pre-training and fine-tuning method, and in our experiments on the task of predicting stress labels, it led to better test performance as well. 
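To make the pooling and contrastive pieces concrete, the following PyTorch sketch mirrors Eqs. (5)-(8). It is our own minimal illustration rather than the authors' released implementation; the layer sizes, the Tanh scoring head, and the linear projection standing in for \(h(\cdot)\) are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionPooling(nn.Module):
    def __init__(self, dim=32):
        super().__init__()
        # g(.; psi): scores each modality embedding with one unnormalized weight (Eq. 5)
        self.score = nn.Sequential(nn.Linear(dim, dim), nn.Tanh(), nn.Linear(dim, 1))

    def forward(self, z_m):                       # z_m: (batch, M, dim) modality embeddings
        a = self.score(z_m).squeeze(-1)           # unnormalized weights, (batch, M)
        alpha = F.softmax(a, dim=-1)              # Eq. (6): contributions sum to one
        z = (alpha.unsqueeze(-1) * z_m).sum(1)    # aggregated episode embedding, (batch, dim)
        return z, alpha

def contrastive_loss(z_m, z, h, tau=0.1):
    """Eq. (8): the anchor z_i should be closer to its own z_i^(m) than to the
    aggregates z_j of the other instances in the batch."""
    B, M, _ = z_m.shape
    hz = F.normalize(h(z), dim=-1)                        # cosine similarity via normalized h(.)
    eye = torch.eye(B, dtype=torch.bool)
    loss = 0.0
    for m in range(M):
        hm = F.normalize(h(z_m[:, m]), dim=-1)
        sim = hm @ hz.t() / tau                           # phi(z_i^(m), z_j) / tau, (B, B)
        pos = sim.diagonal()                              # numerator: same-instance pair
        neg = sim.masked_fill(eye, float("-inf"))         # denominator excludes j = i
        loss = loss + (torch.logsumexp(neg, dim=1) - pos).mean()
    return loss / M

# toy usage: a batch of 8 episodes, 4 modalities, 32-dimensional shared space
pool, h = AttentionPooling(32), nn.Linear(32, 32)
z_m = torch.randn(8, 4, 32)
z, alpha = pool(z_m)
loss = contrastive_loss(z_m, z, h)
```

In the regularization scheme, this term would simply be scaled by \(\lambda_{\text{reg}}\) and added to the cross-entropy loss of Eq. (9) before back-propagation.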
## IV Experiments ### _Data_ The cohort in this study consists of \(14\) college students who are former or active-duty members of the United States military. The status of these individuals, both as military members and as students, renders them an interesting cohort for our stress study, given that individuals from both groups are known to be relatively more prone to experiencing stress. The smartwatch in this study was the Garmin vivoactive 4S. Nevertheless, it is noteworthy that there is no component in the proposed methodology that limits the solution to this smartwatch. The feature groups and various modalities in our configuration are shown in Table II. ### _Labeling_ The focus of this study has been on making predictions on _perceived_ stress, for which the participants agreed to indicate the episodes in which they felt stressed and provide us with the intensity and timespans of these episodes. For each record input to our system by an individual, we created a softened (via a Gaussian function) time-series per the following steps: * The peak (corresponding to the _mean_ of this Gaussian function) is set to the given timestamp, or the midpoint of the timespan (\((t_{\text{start}}+t_{\text{end}})/2\)). * The standard deviation is set to \(30\) minutes (scaled proportionally to the length of the timespan if a timespan of over one hour is provided). * The magnitude of the peak point corresponds to the intensity indicated for the episode: \(\{0,1,2,3\}\) for \(\{\texttt{None},\texttt{Low},\texttt{Medium},\texttt{High}\}\), respectively. The summation of these Gaussian signals comprises the signal used as the primary supervision objective. The labels are computed by looking at the end-point of each episode; its _stress_ label is marked True if the value of this signal at that point is larger than a threshold of \(0.5\), and False otherwise. Fig. 2: A sample portion of a continuous supervision signal generated based on user inputs, from which episode stress labels can be sampled. Fig. 3: The average contribution of the four modalities to the final episode embeddings. ### _Modeling_ Our inference model is composed of a specific encoder for each modality. In our case, each encoder is defined based on an initial mapping and normalization (via fully connected layers) followed by a bi-directional recurrent neural network (RNN) in long short-term memory (LSTM) configuration. Specifically, the data from each modality was first projected to a \(32\)-dimensional vector via a multi-layer perceptron (MLP) with one hidden layer. The output was then forwarded to the modality-specific bi-LSTM with a hidden-layer neuron count of \(64\). The last stage for representing each modality was another fully connected projection layer, generating a \(32\)-dimensional vector per modality; these vectors were used as the modality representations in our framework. Note that the overall pipeline does not place any constraint on the local modality encoders as long as they share the final semantic space to which they project that modality's observations. Given that we were mainly dealing with time-series data, we used RNNs to model each branch. Nonetheless, modalities from substantially different domains and their encoders (e.g., a Transformer-based language model for textual data) can also fit into the same system. 
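As a concrete picture of one encoder branch described above, the sketch below is an illustrative reconstruction under our own assumptions rather than the authors' code: it wires a one-hidden-layer MLP, a bi-directional LSTM with \(64\) hidden units, and a final projection into the \(32\)-dimensional shared space. The MLP hidden width and the use of the last time step as the episode summary are assumptions not stated in the text.

```python
import torch
import torch.nn as nn

class ModalityEncoder(nn.Module):
    def __init__(self, in_dim, hidden=64, shared_dim=32):
        super().__init__()
        # one-hidden-layer MLP projecting raw readings into the 32-dim shared space
        self.mlp = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, shared_dim))
        self.rnn = nn.LSTM(shared_dim, hidden, batch_first=True, bidirectional=True)
        self.proj = nn.Linear(2 * hidden, shared_dim)      # 2x for the two LSTM directions

    def forward(self, x):                                   # x: (batch, time, in_dim)
        h = self.mlp(x)                                     # per-step projection/normalization
        out, _ = self.rnn(h)                                # (batch, time, 2 * hidden)
        return self.proj(out[:, -1])                        # last step as the episode summary

# e.g., a heart-rate branch over 3 raw features and a 120-step episode
enc = ModalityEncoder(in_dim=3)
z = enc(torch.randn(16, 120, 3))                            # -> (16, 32) shared-space embedding
```

One such branch would be instantiated per modality, and their outputs stacked into the \((\text{batch},M,32)\) tensor consumed by the attention pooling sketched earlier.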
### _Results_ Focusing on our real-world perceived stress corpus, we conducted experiments under the main settings of 1) supervised training baseline, 2) pre-training the contrastive objective and fine-tuning via supervised objective, and 3) training the supervised objective and simultaneously optimizing a scaled version of the contrastive term as a regularizing loss. We observed that leveraging more features and following a late-fusion protocol for combining modality representations did lead to an improved generalization performance over the supervised setup proposed in [27], which combined the features at the beginning of the pipeline. In the case of our cohort, training with contrastive regularization led to the best generalization on the unseen test data, and the results are shown in Table III. Note that, in general, it is hard to say which self-supervised setup (pre-training versus regularization) is best, as it could depend on other factors, including model complexity, optimization, data availability, and task difficulty. That being said, our approach allows learning high-quality representations by optimizing the modality-contrastive objective via both of these setups. Additionally, we focused on interpretability as well and leveraged the task-specific attention mechanism in our pipeline, which pools the representations from different modalities, to study the utility and contribution of observations from each feature group. This enables the network to dynamically assign weights to each modality's latent representation (in the shared space) as it processes each instance, allowing us to study their contribution both per instance and in expectation for performing the desired task. In Figure 3, we have shown the results on this matter for the contrastive regularization setup3. The results indicate that even though the contributions of the different modalities follow a non-uniform distribution as expected, none of them were ignored by the model and they all play a part in the final predictions. Footnote 3: The label heart in Figure 3 corresponds to the daily modality’s information, given that its main focus is heart-rate. ## V Discussion ### _Broader Impact_ In the context of remote health monitoring, there are several factors addressing which is of paramount importance. In what follows, we elaborate upon these factors and how the solution proposed in this work attempts to address them: * _Affordability and Compatibility_: For the scalability of a proposed remote health monitoring framework, focusing on widely available devices that are sold at affordable price renders it easier to deploy the system. In this work, we focused on basic physiological signals for which reading sensors are available in most commercially available smartwatches. Nonetheless, the proposed methodology has no intrinsic limitation regarding the modalities used; thus, additional data available in often more expensive devices (e.g., galvanic skin response) can also be utilized in the same methodology, and the main requirement is providing a modality-specific encoder fit for the data domain. Furthermore, this framework offers a more encapsulated view in representing different modalities as the observation from each can be embedded by a dedicated encoder first, and the contrastive objective encourages each local branch to optimize its parameters towards the given task as well. 
This has clear advantages in terms of transferring knowledge as well, an example of which could be initializing each branch separately via pre-trained weights so as to prepare a better starting point for the model and optimization. * _Ease of use_: Optimizing a remote health monitoring with regards to minimizing the amount of required user interaction makes it easier for individuals to use the system. This is why passive monitoring techniques are receiving more attention in the eHealth domain. * _Interpretation_: In all automated healthcare applications of machine learning, any insight and interpretation into what parts of the observation a model mostly focused on in determining the final decision, is crucial and can help experts better validate the system as well. In this work, we incorporated a task-specific attention mechanism for pooling the representations from different modalities, which helps determine the weights assigned to each modality (per instance and in expectation) to perform the task efficiently. * _Limited Data_: The data availability for eHealth applications is often limited due to the difficulty and costs of conducting large-scale studies, the exclusivity of data, and privacy reasons. It is, therefore, important to try to maximize the use of data in training inference pipelines. This work combines label smoothing with inter-mode self-supervision objectives to go beyond self-reported supervision objectives. ### _Limitations_ It is crucial to discuss the limitations of this work given the sensitive nature of dealing with health as its objective. In this work, we relied on self-reported entries to decide the supervision signals for individual timelines. This has the issue of being prone to human error, as one might not accurately recall the time and extent to which one has felt stress. Additionally, reports on the intensity of the felt stress are also subject to noise. Another challenge is the small size of our dataset. A primary reason behind our self-supervision component in this work was alleviating the negative impacts associated with the aforementioned limitations. ### _Conclusion_ We proposed a remote health monitoring solution that is modular and multi-modal, thus, allowing the use of various encoders best suited for each modality. We proposed an instance-level attention mechanism to tune the contribution of each modality to the final representation and provide insight into the expected importance of each modality for the task at hand. We conducted experiments with the proposed method to recognize perceived stress in short-term episodes and empirically demonstrated its superior performance over supervised training.
2309.01157
Large Language Models for Generative Recommendation: A Survey and Visionary Discussions
Large language models (LLM) not only have revolutionized the field of natural language processing (NLP) but also have the potential to reshape many other fields, e.g., recommender systems (RS). However, most of the related work treats an LLM as a component of the conventional recommendation pipeline (e.g., as a feature extractor), which may not be able to fully leverage the generative power of LLM. Instead of separating the recommendation process into multiple stages, such as score computation and re-ranking, this process can be simplified to one stage with LLM: directly generating recommendations from the complete pool of items. This survey reviews the progress, methods, and future directions of LLM-based generative recommendation by examining three questions: 1) What generative recommendation is, 2) Why RS should advance to generative recommendation, and 3) How to implement LLM-based generative recommendation for various RS tasks. We hope that this survey can provide the context and guidance needed to explore this interesting and emerging topic.
Lei Li, Yongfeng Zhang, Dugang Liu, Li Chen
2023-09-03T12:33:47Z
http://arxiv.org/abs/2309.01157v2
# Large Language Models for Generative Recommendation: A Survey and Visionary Discussions ###### Abstract Recent years have witnessed the wide adoption of large language models (LLM) in different fields, especially natural language processing and computer vision. Such a trend can also be observed in recommender systems (RS). However, most of related work treat LLM as a component of the conventional recommendation pipeline (e.g., as a feature extractor) which may not be able to fully leverage the generative power of LLM. Instead of separating the recommendation process into multiple stages such as score computation and re-ranking, this process can be simplified to one stage with LLM: directly generating recommendations from the complete pool of items. This survey reviews the progress, methods and future directions of LLM-based generative recommendation by examining three questions: 1) _What_ generative recommendation is, 2) _Why_ RS should advance to generative recommendation, and 3) _How_ to implement LLM-based generative recommendation for various RS tasks. We hope that the survey can provide the context and guidance needed to explore this interesting and emerging topic. ## 1 Introduction Over the years, recommender systems (RS) have undoubtedly made our daily life easier when it comes to finding things that we are interested in, such as movies, songs, and restaurants. In the meantime, the strong capability of large language models (LLM) in handling various tasks has impressed both practitioners in the field of artificial intelligence (AI) and the general public. As a result, it is natural to consider the combination of the two, i.e., RS and LLM [11]. Although natural language is an expressive medium, it can also be vague. For example, when an LLM is deployed for vehicle identification and scheduling, it would be dangerous to use vague descriptions (e.g., "a black SUV") to identify a vehicle rather than a precise identifier such as vehicle identification number (VIN) or plate number. Similarly, vagueness may also be a problem in recommendation scenarios that require precise and unique identifiers of items, because RS need to guarantee that recommendations made for users are things that factually exist so as to avoid the hallucination problem [1]. This also explains why an ID is usually assigned for each user/item in RS. Despite that, the current understanding of IDs is usually limited to one form. That is, most RS research considers each ID as a discrete token associated with an embedding vector. In this survey, we generalize the definition of ID as follows: **Definition 1** (ID in Recommender Systems).: _An ID in recommender systems is a sequence of tokens that can uniquely identify an entity, such as a user or an item. An ID can take various forms, such as a vector embedding, a sequence of numerical tokens, and a sequence of word tokens (including an item title, a description of the item, or even a complete news article), as long as it can uniquely identify the entity._ For example, a product in e-commerce platform may be assigned the ID "item_7391" and be further represented as a sequence of tokens such as [12, 13]. Note that the ID may not necessarily be comprised of numerical tokens. As long as it is a unique identifier for an item, it may be considered as the item's ID. For example, the title of the movie "The Lord of the Rings" can be considered as the ID of the movie. 
The ID may even be a sequence of words that do not convey any explicit meaning, e.g., "ring epic journey fellowship adventure" [10]. Actually, IDs in conventional RS can be seen as a special case of the above definition, i.e., a sequence of just one token. Under this definition, IDs resemble token sequences as in text, and thus naturally fit the natural language environment as well as LLM. Due to the huge number of items in real-world systems, traditional RS usually take the multi-stage filtering paradigm [12]: some simple and efficient methods such as rule-based filtering are used to reduce the number of candidate items from millions to a few hundred, and then advanced recommendation algorithms are applied on the remaining items to further select a small number of items for recommendation. As a result, advanced recommendation algorithms are not applied to all items, but only to a few hundred items. The generative power of LLM has the potential to reshape the RS paradigm from multi-stage filtering to single-stage filtering. In the context of generative recommendation, an LLM itself can be the single and entire recommendation pipeline, which directly generates the items to recommend, eliminating the need for multi-stage filtering. In other words, advanced LLM-based recommendation algorithms are implicitly applied over all items in the system to decide which items to recommend. We term such a process _generative recommendation_ and formally define it as follows: **Definition 2** (Generative Recommendation).: _A generative recommender system directly generates recommendations or recommendation-related content without the need to calculate each candidate's ranking score one by one for sorting and ranking._ In a broader sense, this is in line with the trend of general AI research, which recently has been shifting from discriminative AI (such as classification and regression) to generative AI (e.g., ChatGPT1). Footnote 1: [https://openai.com/blog/chatgpt](https://openai.com/blog/chatgpt) With the above definitions, we first answer why RS are developing towards generative recommendation in Section 2. In Section 3, we review ID creation approaches that could retain collaborative information in the LLM environment. Then, we show how typical recommendation tasks can be performed on LLM by providing general formulations in Section 4, and highlight opportunities in the LLM era in Section 5. At last, we conclude our survey in Section 6. It should be noted that our survey is different from some recent surveys on LLM-based recommendation [11, 12, 13, 14, 15] from two perspectives: 1) our survey is organized with generative recommendation as the key focus, eliminating discriminative recommendation models for clarity; 2) we develop a taxonomy for LLM-based recommendation research with strong inspiration from the recommendation community, instead of following the LLM taxonomy from the community of natural language processing (NLP). To sum up, this survey makes the following contributions: * To the best of our knowledge, this is the first survey that systematically summarizes research on LLM-based generative recommendation. To differentiate this topic from traditional RS, we have generalized the definition of ID for generative recommendation. * This survey is pragmatic as we provide the formulation for different LLM-based recommendation tasks when categorizing relevant research, which provides a useful guideline for future research. 
* We discuss important and promising directions to explore for LLM-based generative recommendation research, which may help broaden the scope of this under-explored research area. ## 2 Why Generative Recommendation To answer why RS are developing towards generative recommendation, we first discuss problems with discriminative recommendation. When the number of items on a recommendation platform is prohibitively large, the ranking score calculation with regard to each item would be computationally expensive. Therefore, industrial RS usually consist of multiple stages to narrow down the candidate items. At the early stage, simple models (e.g., logistic regression) or straightforward filtering strategies (e.g., feature matching) are usually adopted to filter out less relevant items. Only in the final stage can the relatively complex and advanced models be utilized. This naturally causes a gap between academic research and industrial application. In consequence, although recent recommendation models are growing more fancy and sophisticated, few have been practically employed in industry. In the era of LLM, we see a great opportunity that could potentially bridge this gap. As both academic research and industry application may share the same backbone LLM, most research advancements on LLM may benefit its downstream applications. Regarding the recommendation pipeline, the typical multiple stages could be advanced to one stage for generative recommendation, i.e., directly generating items to recommend. A graphical comparison between the two types of pipeline is shown in Fig. 1. Figure 1: Pipeline comparison between traditional recommender systems and LLM-based generative recommendation. At each step of recommendation generation, the LLM can produce a vector that represents the probability distribution on all possible ID tokens. After a few steps, the generated tokens can constitute a complete ID that stands for the target item. This process implicitly enumerates all candidate items to generate the target item for recommendation, which is different from traditional RS that draw items from a subset resulting from the previous filtering step. The key secret of LLM for generative recommendation is that we can use finite tokens to represent (almost) infinite items. Suppose that we have \(1000\) tokens for representing user or item IDs, which can be numerical tokens, word tokens or even out-of-vocabulary (OOV) tokens, and each ID consists of \(10\) tokens; then we can use these \(1000\) tokens to represent as many as \(1000^{10}=10^{30}\) items (i.e., unique IDs), which is almost an astronomical number and large enough for most real-world RS. When applying the beam search algorithm for generating item IDs, the probability vector at each step is bounded by \(1000\) tokens, making it computationally possible to directly generate recommendations out of the item pool. ## 3 ID Creation Methods When implementing generative recommendation with LLM, the input (particularly user and item IDs) should be made into the right format that is compatible with LLM. Intuitively, one would consider the metadata of users and items as an alternative, such as user name and item title. In fact, this type of ID representation is quite common in related work, as summarized in Table 1. Despite the popularity, it has two problems [11]. First, when the IDs are extremely long, e.g., in the case of item description, it would be computationally expensive to conduct generation. 
Besides, it would be difficult to find an exact match in the database for a long ID, i.e., the hallucination problem [1]; and double-checking the existence of each ID would take us back to discriminative recommendation since we need to compare it with each item in the database. Second, although natural language is a powerful and expressive medium, it can also be vague in many cases. For example, two different users' names could be identical. Besides, two irrelevant items could have very similar titles, such as Apple the fruit and Apple the company, while two closely related items may have very different titles, such as the classic "beer and diaper" example in data mining. Therefore, we need concise and unique representations of IDs in recommendation scenarios to precisely distinguish one user or item from the others. Associating each ID with an embedding vector is a common practice in traditional RS, but it would cost a lot of memory to store them, since industry-scale RS usually involve tons of users and items. In addition, these IDs are OOV tokens to LLM, and thus are not very compatible with the models. This is why a new way of representing IDs, i.e., a sequence of tokens rather than a single embedding, is needed. The key idea is to use a small amount of tokens to represent an astronomical number of users or items as explained in the previous section. To make IDs reasonably short, similar users or items could share more tokens in their ID sequences, while the remaining tokens can be used to guarantee their uniqueness. In the following, we review three typical ID creation approaches that follow this principle. Most of these ID creation methods aim to encode the user-user, item-item, or user-item collaborative information into the ID representations, which combines the success of collaborative filtering from traditional RS with the emerging LLM for effective recommendation. ### Singular Value Decomposition [23] acquire an item's ID tokens from its latent factors. Specifically, they first perform truncated singular value decomposition on user-item interaction data to obtain the item embedding matrix. After a set of operations, including normalization, noise-adding, quantization, and offset adjustment, each item's embedding becomes an array of integers, which serves as this item's ID sequence. In particular, the noise-adding operation can ensure that there are no identical item embeddings, and thus make each item ID unique. ### Product Quantization [15] quantize item embeddings with product quantization (PQ) [17] to obtain their IDs. For PQ, there are in total \(D\) vector sets, and each set is comprised of \(M\) centroid embeddings. They first encode an item's textual description with BERT [15] to get the item's embedding vector, which is further divided into \(D\) segments for quantization. For the \(i\)-th embedding segment, its nearest centroid embedding from the \(i\)-th vector set can be easily found. The index of this centroid embedding then becomes the item's \(i\)-th ID token. All these ID tokens together form the item's complete ID. 
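As a concrete illustration of the segment-wise quantization just described, the sketch below is our own simplification: the codebooks are random placeholders, whereas in practice they would be learned (e.g., with k-means over BERT-derived item embeddings), and the dimensions are arbitrary choices for the example.

```python
import numpy as np

def pq_item_id(embedding, codebooks):
    """embedding: (D * seg_dim,); codebooks: (D, M, seg_dim).
    Returns D token indices: one nearest-centroid index per embedding segment."""
    D, M, seg_dim = codebooks.shape
    segments = embedding.reshape(D, seg_dim)
    tokens = []
    for d in range(D):
        dists = np.linalg.norm(codebooks[d] - segments[d], axis=1)  # distance to the M centroids
        tokens.append(int(np.argmin(dists)))                        # index of the nearest centroid
    return tokens

rng = np.random.default_rng(0)
codebooks = rng.normal(size=(4, 256, 16))       # D = 4 segments, M = 256 centroids per segment
item_emb = rng.normal(size=(64,))               # stand-in for a text-derived item embedding
print(pq_item_id(item_emb, codebooks))          # the item's 4-token ID (integers in [0, 255])
```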
\begin{table} \begin{tabular}{l l l} \hline \hline **Item ID** & **User ID** & **Related Work** \\ \hline Token Sequence (e.g., “\(56\)”) & Token Sequence & P5, VIP5, OpenP5, POD, GPTRec, etc. \\ \hline Item Title & Interaction History & TALLRec, NIR, PALR, BookGPT, etc. \\ \hline Item Title & Metadata (e.g., age) & Chat-REC \\ \hline Metadata & Metadata & Mo-Rec \\ \hline Embedding ID & Embedding ID & PEPLER \\ \hline \hline \end{tabular} \end{table} Table 1: Methods of representing IDs for LLM-based generative recommendation. ### Collaborative Indexing [11] compose an item ID with nodes on a hierarchical tree. Technically, they first construct an item graph whose edge weights denote the co-occurrence frequency of any two items in all users' interaction history. Then, the graph's adjacency matrix and Laplacian matrix, as well as the latter's eigenvectors, can be computed. With the eigenvectors, spectral clustering [23] can be applied to group similar items into the same cluster. By recursively doing so, large clusters can be further divided into smaller ones. When the number of nodes in each cluster is smaller than a threshold, these clusters and their sub-clusters naturally constitute a hierarchical tree whose leaf nodes are the items. After assigning tokens to each node, each item has a unique ID sequence obtained by following a path from the root node to the leaf node. In addition to the above three ID creation approaches, [11] discuss other strategies such as sequential indexing based on user interaction history and semantic indexing based on item metadata information, which are effective approaches to creating item IDs. We omit the details because they are quite simple and straightforward. ## 4 How to Do Generative Recommendation With the above-defined user and item IDs, we now describe how to perform different generative recommendation tasks with LLM. A summary of relevant research on each task is given in Table 2. We can see that there are a few models that have the ability to perform multiple recommendation tasks, e.g., P5 [1]. To allow LLM to understand each task, especially those that have the same input data, we can construct a prompt template [12] that describes the task and then fill the user and item information, such as their IDs, into the prompt. During the inference stage, all kinds of output (e.g., predicted item IDs) are auto-regressively generated as natural language generation. In the following, we introduce the general formulation of each task, followed by the recent progress. Finally, we discuss how to evaluate these tasks. ### Rating Prediction In conventional RS, the rating prediction task is formulated as follows: given a user \(u\) and an item \(i\), a recommendation model \(f(u,i)\) needs to estimate the score \(\hat{r}_{u,i}\) that the user would give the item. In the context of LLM, \(u\) and \(i\) are no longer embedding IDs, but two sequences of tokens as defined in Definition 1. The two IDs can be filled into an instruction prompt \(p(u,i)\), e.g., "how would _user_1234_ rate _item_5678_", such that LLM can understand this task.
After feeding \(p(u,i)\) into the LLM, it can generate a numerical string on a scale of 1 to 5, such as "4.12", as the predicted rating that the user would give the item. There are some studies [1] that tested LLM on this task, among which many [1, 12, 13, 14, 15] are based on ChatGPT. As users may not leave an explicit rating for each item with which they interact, the rating prediction task can be less practical for real-world systems. Instead, implicit feedback, e.g., clicking, is easier to collect. Thus, how to infer users' preferences from such implicit feedback motivates the development of the top-\(N\) recommendation task. ### Top-\(N\) Recommendation The top-\(N\) recommendation task, a.k.a. ranking, aims to select \(N\) items as recommendations for a given user \(u\). To this end, traditional RS usually compute a score \(\hat{r}_{u,i}\) w.r.t. each item \(i\) in the item set \(\mathcal{I}\). After filtering out those items that the user has already interacted with, i.e., \(\mathcal{I}_{u}\), the top candidates can be selected as recommendations from the remaining items as \(\text{Top}(u):=\arg\max_{i\in\mathcal{I}\setminus\mathcal{I}_{u}}^{N}\hat{r}_{u,i}\). However, due to the context length limit of LLM, it is impossible to feed the model all the items. As a result, the community has explored two approaches to solve the problem. One is _straightforward recommendation_ [23, 1, 13], which uses a prompt that only contains user information (ID or user metadata) and asks the LLM to directly generate recommendations for this user. The second is _selective recommendation_ [13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29], which provides both user information and a list of candidate items \(\mathcal{I}_{c}\) in the prompt and asks the LLM to select items for recommendation out of the candidates. For example, the candidate list may consist of a testing item and a number of sampled negative items. After filling the user and candidates into a prompt \(p(u,\mathcal{I}_{c})\), e.g., "select one item to recommend for _user_1234_ from the following candidates: _item_6783_, ..., _item_9312_, _item_2834_", the LLM can generate an item ID, e.g., "9312", as the recommendation. When combined with beam search, the model can produce a number of item IDs, i.e., a list of \(N\) recommendations.
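As a concrete illustration of the selective recommendation setting just described, the sketch below fills a user and a candidate list into a prompt and asks a sequence-to-sequence LLM to generate item IDs with beam search; the checkpoint name is a hypothetical placeholder, and the prompt wording simply mirrors the example in the text rather than any fixed template.

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# "my-org/rec-llm" is a hypothetical checkpoint name used for illustration only.
tokenizer = AutoTokenizer.from_pretrained("my-org/rec-llm")
model = AutoModelForSeq2SeqLM.from_pretrained("my-org/rec-llm")

def top_n_recommend(user_id, candidate_ids, n=5):
    # p(u, I_c): fill the user ID and the candidate item IDs into an instruction prompt
    prompt = (
        f"select one item to recommend for user_{user_id} from the following candidates: "
        + ", ".join(f"item_{c}" for c in candidate_ids)
    )
    inputs = tokenizer(prompt, return_tensors="pt")
    # beam search lets the LLM return several distinct item IDs, i.e., a top-N list
    outputs = model.generate(
        **inputs, max_new_tokens=16, num_beams=n, num_return_sequences=n
    )
    return [tokenizer.decode(o, skip_special_tokens=True) for o in outputs]
```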
Besides generating item IDs, some recent studies [Bao _et al._, 2023b; Lin _et al._, 2023b] instruct LLM to answer whether a user is going to interact with a given item by generating "yes" or "no". Although the "yes" or "no" answer is generated by LLM, these methods can be considered as discriminative recommendation since they need to generate an answer or a score (e.g., the probability of "yes") for each item. Table 2: Summary of related work on each LLM-based generative recommendation task (rating prediction, top-\(N\) recommendation, sequential recommendation, explainable recommendation, review summarization, and conversational recommendation). ### Sequential Recommendation The sequential recommendation task goes one step further than top-\(N\) recommendation by taking the time or order of interactions into account. Specifically, its objective is to predict the next item with which a user \(u\) is likely to interact based on his/her past interactions. The items the user has interacted with are chronologically ordered according to their timestamps, and the resulting sequence is denoted as \(I_{u}\). Considering the sequential nature of such data, researchers usually employ sequential models to deal with the problem, such as Markov chains, recurrent neural networks (RNN), and Transformer [22].
Again, we can first fill the user and the item sequence into a prompt \(p(u,I_{u})\), e.g., "given _user_1234_'s interaction history _item_3456_, ..., _item_4567_, _item_5678_, predict the next item with which the user will interact", and then prompt LLM to generate an item ID as the prediction, e.g., "6789". To reduce the inference time, we can cut off the relatively old items before filling the item sequence into the prompt. This task is a trending topic, as evidenced by a significant number of LLM-based sequential recommendation models [15, 16, 17, 18, 19, 20]. Similar to top-\(N\) recommendation, some of these models instruct LLM to answer whether a user is going to like a specific item. ### Explainable Recommendation Besides generating recommendations, explanations that allow users to know the reason behind them are equally important. There are various ways to explain a recommendation to a user, such as explicit item features [19] and visual highlights [3]. We refer interested readers to the survey [19] for a comprehensive examination of explainable recommendation. A typical LLM-based recommendation explanation task is natural language explanation generation. That is, given a pair of user \(u\) and item \(i\), we direct the model to generate a sentence or paragraph in natural language to explain why \(i\) is recommended for \(u\). Ground-truth explanations can be mined from user reviews [15]. As the inputs (i.e., \(u\) and \(i\)) are identical to those for rating prediction, we can put them in a prompt \(p(u,i)\) that informs the LLM that this is an explanation task, e.g., "explain to _user_1234_ why _item_5678_ is recommended." As a response, the model may generate an explanation such as "The movie is top-notch." However, with IDs alone in the prompt, it could be unclear which aspects the model should discuss in the explanation. To address this problem, we can provide some item features \(f\) as hint words in the prompt, e.g., "acting". An example prompt \(p(u,i,f)\) for such a scenario could be "write an explanation for _user_1234_ about _item_5678_ based on the feature _acting_." Then, the LLM may generate an explanation such as "The acting in this movie is attractive."
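The two prompt variants above can be built with a small helper such as the one below; the ID formats and the optional feature hint word are illustrative assumptions, not a fixed template.

```python
def explanation_prompt(user_id, item_id, feature=None):
    """Build the explanation prompt p(u, i) or, with a hint word, p(u, i, f)."""
    if feature is None:
        return f"explain to user_{user_id} why item_{item_id} is recommended"
    return (
        f"write an explanation for user_{user_id} about item_{item_id} "
        f"based on the feature {feature}"
    )

# Example usage: the feature hint steers which aspect the generated explanation covers.
print(explanation_prompt(1234, 5678))
print(explanation_prompt(1234, 5678, feature="acting"))
```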
### Conversational Recommendation Unlike the tasks above, which mainly infer user preferences from users' historical interactions, in a conversational environment users can freely state their preferences in natural language and even provide negative feedback, e.g., rejecting a recommendation. However, the research community is still in the process of reaching a consensus on how to formulate this task. [14, 15, 16] adopt two labels (i.e., "USER" and "SYSTEM") to mark the speaker of an utterance before feeding a dialogue session into the LLM for generating a response. [17] further employ a prompt constructor to summarize the input information, such as the user's query and historical conversation, but the technical details are not fully disclosed. [15] directly chat with ChatGPT, because they aim to establish principles for conversational recommendation, e.g., memory mechanism and repair mechanism, rather than developing new models.
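As a small sketch of the speaker-labeled input format adopted by the works above, the helper below linearizes a dialogue session before it is fed to the LLM; the label names and the trailing generation cue are assumptions made for illustration.

```python
def dialogue_prompt(turns):
    """Format a conversational recommendation session for the LLM.

    turns: list of (speaker, utterance) pairs, with speaker in {"USER", "SYSTEM"}.
    """
    lines = [f"{speaker}: {utterance}" for speaker, utterance in turns]
    lines.append("SYSTEM:")  # cue the LLM to generate the next system response
    return "\n".join(lines)

# Example usage
session = [
    ("USER", "I am looking for a light comedy movie for the weekend."),
    ("SYSTEM", "How about item_5678? It is a popular comedy."),
    ("USER", "I have already watched that one. Something newer, please."),
]
print(dialogue_prompt(session))
```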
For evaluation, [21] point out the problem of current protocols. Specifically, although ChatGPT's chatting ability is undeniably impressive, its performance on existing metrics is not very good because they overly stress the matching between generated responses and annotated recommendations or utterances. ### Evaluation Protocols To evaluate the performance of LLM on these tasks, we can apply existing metrics. For rating prediction, root mean square error (RMSE) and mean absolute error (MAE) are commonly used. For the other two recommendation tasks, i.e., top-\(N\) recommendation and sequential recommendation, we can employ ranking-oriented metrics, such as normalized discounted cumulative gain (NDCG), precision, and recall. Besides offline evaluation, online A/B test can also be adopted since it is able to reflect users' actual interactions with recommended items. As to natural language generation tasks, including explanation generation, review generation, review summarization, and conversational recommendation, the quality of LLM's generation can be estimated with BLEU [13] in machine translation and ROUGE [15] in text summarization. Both metrics measure the degree of matching between text segments of the generated content and those of the ground-truth. Some other learning-based metrics can also be used, e.g., BERTScore [11]. However, as pointed out in [21], it can be problematic to over-emphasize the matching with annotated data. Also, there are other aspects beyond text similarity that cannot be reflected by BLEU or ROUGE. As an early attempt, [14] proposed several metrics such as feature coverage ratio and feature diversity that take into account the characteristics of explicit elements for the evaluation of explanations, but they are still rudimentary. Thus, more advanced and standard metrics need to be developed. In addition to automatic evaluation, we can also conduct human evaluation to measure LLM on these generation tasks. However, it requires researchers to properly design the questionnaire and the number of participants could be limited. ## 5 Challenges and Opportunities In this section, we discuss research challenges and opportunities for generative recommendation in the LLM era, especially those significant matters that need urgent care in the near future. ### Hallucination Hallucination [1] means that the content generated by an LLM may deviate from the facts. Hallucination is an important problem in LLM as well as their applications. In particular, for LLM-based RS, we need to guarantee that the items recommended to users really exist, otherwise it may cause user dissatisfaction and frustration, and even low user adoption of the system in real life. For example, a user may spend time traveling to a recommended restaurant, only to find out that such a restaurant does not exist at all. In high-stake recommendation domains such as drug recommendation, medical treatment recommendation, and financial investment recommendation, hallucinated recommendations may cause severe losses for users. There are two possible approaches to addressing the hallucination problem in LLM-based RS. One is to use meticulously designed item IDs for generation. For example, [15] create item IDs and organize the IDs of all items into a prefix tree structure, which is also called a trie structure. As a result, as long as the beam search generation process follows the root-to-leaf paths in the tree, the generated items will be guaranteed to be really existing items. 
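A minimal sketch of the trie idea just described is given below: all valid item ID token sequences are organized into a prefix tree, and during beam search the set of allowed next tokens is restricted to the children of the current prefix, so only existing items can be generated. The dictionary-based trie and the helper names are illustrative; in practice the lookup function could be plugged into a constrained-decoding hook such as the `prefix_allowed_tokens_fn` argument of Hugging Face's `generate`.

```python
def build_trie(item_id_sequences):
    """Build a prefix tree over the token sequences of all existing item IDs."""
    trie = {}
    for seq in item_id_sequences:
        node = trie
        for tok in seq:
            node = node.setdefault(tok, {})
    return trie

def allowed_next_tokens(trie, prefix):
    """Tokens that keep the generated prefix on a root-to-leaf path of the trie."""
    node = trie
    for tok in prefix:
        if tok not in node:
            return []  # the prefix has left the trie; no valid continuation
        node = node[tok]
    return list(node.keys())

# Example: three items whose IDs are sequences of three tokens each.
trie = build_trie([(12, 7, 3), (12, 7, 9), (12, 5, 1)])
print(allowed_next_tokens(trie, (12, 7)))  # -> [3, 9]
```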
The other method is to apply retrieval-augmentation over the LLM [1], i.e., conditioning LLM on retrieved items, so that the recommended items match those in the item database. Furthermore, the two methods, i.e., indexing and retrieval, can be integrated to address the hallucination problem effectively and efficiently. ### Bias and Fairness There can be two types of bias for LLM-based RS, which are _content bias_ and _recommendation bias_. The former refers to the bias that can be directly observed in the generated content. A typical example is gender bias. [21] find that machine-generated recommendation explanations for male users are usually longer than those for female users in the game domain. This problem may lie in the training data that are adapted from user reviews of games. In addition to recommendation data, LLM trained with a huge amount of human-generated data may reiterate or even reinforce the bias hidden in the training data. Taking linguistic bias as an example, [11] observe that LLM tend to use generic tokens when generating item titles to make them look fluent and linguistically sound, but lead to recommendations that are greatly different from users' preferred items. When adapted to downstream recommendation tasks, the bias should be mitigated or even completely removed so as to prevent the propagation of negative effects and to improve user experience. Regarding recommendation bias, [14] report that ChatGPT is prone to recommend news articles from news providers that it labeled as popular. [11] observe that the music recommendations made by ChatGPT for people with different demographic attributes (e.g., white v.s. African American) are different. Although the results look biased, they could also be a type of personalization since the music tastes of people under different cultural backgrounds could differ. Therefore, a question needs to be answered: _What is the boundary between bias and personalization?_[12] attempt to make LLM-based recommendation models fair with respect to sensitive attributes, such as age, marital status, and occupation, by distilling the bias into continuous prompts. As the bias and fairness issues are still open problems, more work should be done, e.g., from the perspective of fairness definition and bias mitigation for LLM-based RS. ### Transparency and Explainability Making recommendations transparent and explainable to users has always been an important problem for RS and AI in general [13]. Due to the huge size and complexity of LLM, explaining LLM-based recommendations has posed new challenges to the community. There are two types of explainability for LLM-based RS. One is to generate reasonable natural language explanations for recommended items, while the other is to really dive into the model and try to explain the internal working mechanism of LLM. While researchers have explored the first type of explainability for a while [11, 1, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24], the second type of explainability has been largely unexplored. One possible method is to align the LLM such as its prompts with an explicit knowledge base such as a knowledge graph [1, 13], so that the model's decision making process is aligned with explicit paths in the knowledge graph for explanation. However, the direction is generally very preliminary and requires innovative methods and brave new ideas from the community. ### Controllability Controllability is an important problem for LLM, since we usually cannot precisely control the output of LLM. 
The lack of controllability may cause serious problems. For example, LLM may generate harassing content, fake content, or content that deviates from basic moral standards. For RS, the controllability issue is more complicated due to the various recommendation tasks or scenarios that require controllability [13, 14, 15]. For example, users may want to control the feature that an explanation talks about [11, 12, 13, 14], i.e., if a user cares about the "price" of a restaurant, then the explanation should talk about its price, while if the user is concerned about "distance", then the explanation should discuss the distance. Users may also want to control the features of recommended items, such as price level, color, and brand [13]. For example, the user may hope that LLM only recommend items that fall within a certain price range. Although these features can be included in the prompt to trigger LLM's generation, the recommendations provided by LLM may still fail to meet the user's requirements. Current research on the controllability of LLM-based recommendation mainly focuses on controlling the explanations [11, 12, 13, 14], while more research is urgently needed on controlling recommendations generated by LLM. ### Inference Efficiency As LLM contain a huge amount of parameters and RS are a latency-sensitive application, the efficiency of LLM-based recommendation models is vital. The training efficiency can be improved by either option tuning [15] or adapter tuning [14]. To reduce LLM's training time, [11] propose a task-alternative training strategy to deal with multiple recommendation tasks. Since the training efficiency of LLM can be improved in an offline environment and usually an LLM does not have to be retrained too frequently, it is not as important as the inference efficiency problem. [15] pre-compute the first few layers of an LLM and cache the results to improve its inference efficiency. However, this strategy may only be applicable to a specific LLM architecture that represents users and items with metadata. [11] observe that LLM's inference time can be slightly reduced when the discrete prompt is removed. In summary, there is still much room to further improve the inference efficiency of LLM-based recommendation models. ### Multimodal Recommendation In addition to text, data of other modalities can also be leveraged by LLM, as long as they can be represented as a sequence of tokens that can be integrated into textual sentences. [14] incorporate item images into an LLM to improve its performance on recommendation tasks. Regarding image generation, [14] generate visual explanations for recommendations based on a vision-language model, and [15] synthesize images for product design. In addition to images, videos and audios can also be generated in an auto-regressive way [13, 14], which makes LLM-based multimodal recommendation a promising direction. Furthermore, when there is no available item that caters to a user's interest in the item repository, the system can directly create new items. Meanwhile, model designers should guarantee the authenticity of machine-generated content to prevent users from having negative experiences, e.g., a picture of Hawaiian attraction captioned South Korea. ### Cold-start Recommendation As LLM have learned world knowledge during the pre-training stage, they are able to perform recommendation tasks even if they are not fine-tuned on recommendation-specific datasets. 
A typical example is ChatGPT, which can be instructed to perform various recommendation tasks as discussed in the previous section [13]. The underlying reason is that users' preferences and items' attributes can be expressed in natural language. As a result, LLM-based recommendation models have the potential to mitigate the well-known cold-start problem in recommendation when there is limited or even no interaction regarding new users or new items. Although the interaction data is insufficient, we may still make use of their metadata for recommendation, such as user demographic information and item description information. ## 6 Conclusions In this survey, we have reviewed the recent progress of LLM-based generative recommendation and provided a general formulation for each generative recommendation task according to relevant research. To encourage researchers to explore this promising direction, we have elaborated on its advantages compared to traditional RS, generalized the definition of IDs, and summarized various ID creation methods. We have also pointed out several future prospects that might be worth in-depth exploration. We anticipate a future where LLM and RS would be nicely integrated to create personalized services of high quality for users in a variety of situations.
2302.03432
SimCon Loss with Multiple Views for Text Supervised Semantic Segmentation
Learning to segment images purely by relying on the image-text alignment from web data can lead to sub-optimal performance due to noise in the data. The noise comes from the samples where the associated text does not correlate with the image's visual content. Instead of purely relying on the alignment from the noisy data, this paper proposes a novel loss function termed SimCon, which accounts for intra-modal similarities to determine the appropriate set of positive samples to align. Further, using multiple views of the image (created synthetically) for training and combining the SimCon loss with it makes the training more robust. This version of the loss is termed MV-SimCon. The empirical results demonstrate that using the proposed loss function leads to consistent improvements on zero-shot, text supervised semantic segmentation and outperforms state-of-the-art by $+3.0\%$, $+3.3\%$ and $+6.9\%$ on PASCAL VOC, PASCAL Context and MSCOCO, respectively. With test time augmentations, we set a new record by improving these results further to $58.7\%$, $26.6\%$, and $33.3\%$ on PASCAL VOC, PASCAL Context, and MSCOCO, respectively. In addition, using the proposed loss function leads to robust training and faster convergence.
Yash Patel, Yusheng Xie, Yi Zhu, Srikar Appalaraju, R. Manmatha
2023-02-07T12:36:35Z
http://arxiv.org/abs/2302.03432v1
# SimCon Loss with Multiple Views for Text Supervised Semantic Segmentation ###### Abstract Learning to segment images purely by relying on the image-text alignment from web data can lead to sub-optimal performance due to noise in the data. The noise comes from the samples where the associated text does not correlate with the image's visual content. Instead of purely relying on the alignment from the noisy data, this paper proposes a novel loss function termed SimCon, which accounts for intra-modal similarities to determine the appropriate set of positive samples to align. Further, using multiple views of the image (created synthetically) for training and combining the SimCon loss with it makes the training more robust. This version of the loss is termed MV-SimCon. The empirical results demonstrate that using the proposed loss function leads to consistent improvements on zero-shot, text supervised semantic segmentation and outperforms state-of-the-art by \(+3.0\%\), \(+3.3\%\) and \(+6.9\%\) on PASCAL VOC, PASCAL Context and MSCOCO, respectively. With test time augmentations, we set a new record by improving these results further to \(58.7\%\), \(26.6\%\), and \(33.3\%\) on PASCAL VOC, PASCAL Context, and MSCOCO, respectively. In addition, using the proposed loss function leads to robust training and faster convergence. Footnote 1: The research was conducted during Y. Patel’s internship at AWS. Footnote 2: Corresponding author. ## 1 Introduction The use of data from the web for training visual and language models has been effective for learning visual representations that are useful for downstream tasks. Learning the visual and textual encoders jointly via projecting images and texts to the same learned embedding space allows their direct comparison. It helps in developing open-set and zero-shot classification models [1, 9, 28, 40, 52, 53, 61, 78, 79, 84]. This success has prompted researchers to investigate the use of web data for learning object-level representations for tasks involving dense predictions such as semantic segmentation [75], without fine-tuning on any dense supervision. Whether the focus is on learning global or object-level representations, these approaches rely on the alignment between the images and the co-occurring text for supervision. The cross-modal alignment for zero-shot image classification or segmentation models is performed via contrastive learning with the InfoNCE [68, 23] loss function, which maximizes the mutual information between the image and its matched text. Figure 1: **Top**: In the above image-text samples, the text corresponding to the first image does not contain any information about its visual content. The text for the second image correctly describes the visual content of both the first and the second images. **Bottom**: If we use the InfoNCE loss, only the image and its paired text will be pulled together (green edges), while all other images and captions will be pushed apart (red edges). In SimCon loss, both the first and the second images will be paired with the text from the second image and pushed away from the first piece of text. The loss function learns this automatically. Best viewed in color. The loss is computed by using each batch sample as an anchor. In the embedding space, for an anchor image (text), the embedding vector of the corresponding text (image) is pulled closer. In contrast, the embedding vectors of texts (images) from other samples are pushed apart.
While this objective has been useful [28, 53, 75], it is prone to noise in the training data, which is typical for samples from the web. Often the text associated with an image does not describe its visual content, may miss the information for objects in the background, or could be ambiguous. As shown in Fig. 1, the caption associated with the first image does not describe the visual content, whereas the caption for the second image describes the visual content of both the first and the second image. Using the InfoNCE loss function for these samples will push apart the embedding vectors of the first image and the second text, leading to sub-optimal representations. This work focuses on learning a model for text supervised zero-shot semantic segmentation without using any weakly supervised or dense annotations. This paper builds upon the only existing baseline for the task, _i.e_., GroupViT [75], which suffers from the aforementioned noisy training. To mitigate the issue, this paper proposes a new intra-modal similarity aware contrastive loss function termed _SimCon_. As shown in Fig. 1, the computation of SimCon starts by computing the intra-modal image-to-image and text-to-text similarities. For an image as an anchor, a set of positive image samples are assigned if the intra-modal image-to-image similarity is higher than a threshold. For the anchor image, the SimCon loss then pulls closer the embedding vectors of the corresponding text, positive image samples, and their corresponding texts and pushes apart the embedding vectors of the remaining images and texts. Similarly, the positives for a text as an anchor are determined based on the intra-modal text-to-text similarities, and the SimCon loss pulls the embedding vectors of the corresponding image, positive text samples, and their corresponding images and pushes apart the rest of the text and image embedding vectors. For visual representation learning, several approaches also use InfoNCE to pull together the embedding vectors from different synthetically generated views of the same image and push apart the embedding vectors from different images. They have shown to be useful for both self-supervised [4, 5, 10, 11, 25] and supervised learning [29]. While SimCon loss, as shown in Fig. 1 already accounts for intra-modal relations using the intra-modal similarities, it is methodically extended to account for multiple views of the image, and the setup is termed _MV-SimCon_ for brevity. With these systematic improvements, this paper sets a new state-of-the-art for zero-shot semantic segmentation that trains without manually annotated data. The contributions of this paper are as follows: * A novel _SimCon_ loss with multiple views is introduced to mitigate the issue of noisy image-text pair training. * The proposed loss is robust across different data distributions during training and scales with the amount of training data and batch size. * Extensive empirical results demonstrate the superiority of _MV-SimCon_ in terms of faster convergence and better zero-shot segmentation performance. ## 2 Related Work **Semantic segmentation without dense supervision.** Semantic segmentation is a dense prediction task that assigns a semantic label to each pixel. Most methods rely on annotated data to achieve decent segmentation results [7, 8, 12, 31, 36, 64, 88, 86, 12, 85]. Given costly pixel-wise annotations, there have been several attempts to do unsupervised semantic segmentation. 
One way is to learn representations for each pixel and then perform clustering to obtain dense semantic labels. Earlier work [27, 47] demonstrates the possibility of clustering on small-scale datasets. To facilitate object discovery, recent work has started to incorporate more prior information [13, 17, 69] or better image representations [82, 89, 24]. Yet the performance still lags behind its supervised counterparts. Another way is to use language supervision as a weak signal. Several recent papers [30, 41, 54, 76, 87] leverage the CLIP model [53] to enable open-set semantic segmentation, but they also require dense supervision. OpenSeg [20] goes one step further by using image-level supervision to learn with class-agnostic mask annotation. Finally, GroupViT [75] shows that semantic segmentation can be done by training a model on image-text pairs from the web without mask annotations. In this work, we instantiate our framework using GroupViT but change the training objectives from InfoNCE to the proposed MV-SimCon. The goal is to mitigate the noisy image-text supervision in GroupViT training. **Learning visual representations from web data.** Web data as a source of supervision has been a promising direction to learn visual representations [21, 22, 34, 49, 50]. With the help of metadata like tags and alt-text, the labeling cost of such datasets can be reduced significantly, which leads to cheaper large-scale datasets. In order to study the effect of data in the deep learning era, YFCC100M [66], JFT300M [65], JFT3B [83], IG3.5B [43] and IG65M [19] were collected and studied. As expected, larger datasets help to learn better visual representations and lead to state-of-the-art results for various vision tasks. With the rise of multi-modality learning, there is a recent trend of using image-text pairs from the web as supervision [28, 40, 53, 70, 79]. Thanks to the larger datasets [58] and larger transformer models [9, 83], these trained models exhibit capabilities such as zero-shot prediction. **Contrastive loss objectives.** Contrastive losses have been applied to a wide variety of data from several domains, e.g., computer vision [4, 10, 11, 25], natural language processing [18, 48], speech and audio [57, 68], and multi-modal data [28, 53]. These losses can be used as long as the anchor, positives, and negatives are well-defined. Such losses have widely been studied in the open-set image retrieval literature [45], where the deep embedding is trained on a set of classes, and the retrieval evaluation is performed on unseen classes from the same distribution. The simplest pairwise loss function is the contrastive loss [23], also known as InfoNCE, where the embeddings of the relevant pair of samples are pulled as close as possible, and the non-relevant ones are pushed apart. The triplet loss [63, 71] mimics a ranking objective more closely by training on a triplet of an anchor, a positive, and a negative sample. Since optimization over all possible combinations of samples is not tractable, much attention has been paid to finding informative pairs via sampling [2, 39, 46, 51, 55, 62, 72, 90, 3, 56]. All the above-mentioned loss functions involving approximation of the evaluation metric or sampling have been studied in a uni-modal, supervised setup with class-balanced sampling, which is not feasible with the multi-modal data from the web without any semantic labels. Therefore, the standard InfoNCE [23] has been widely adopted to date due to its simplicity.
Our work takes motivation from the image retrieval approaches and attempts to find a better sampling strategy for multi-modal contrastive learning, in the absence of semantic labels. We incorporate intra-modal similarities and multiple views to determine adequate positive samples in the noisy data from the web. **Vision-language training with noisy data.** Recent approaches, designed mainly for cross-modal retrieval, such as ALBEF [33], TCL [77], CodeBook [15], and BLIP [32], also attempt to mitigate noise during training in various ways. While [28, 53, 75] use separate text and image encoders, [15, 33, 32, 77] also use a multi-modal encoder. For noise mitigation, [15, 33, 77] use distillation through an exponentially moving average momentum encoder, whereas [32] curates the training data using additional captioning and filtering models. These approaches are computationally expensive as they require additional models, and they have only been investigated for tasks requiring global predictions, such as cross-modal retrieval, where their efficacy is marginal. ## 3 Method This section revisits GroupViT as a baseline to describe its model architecture and training objectives (Sec. 3.1). Then we introduce our improved contrastive losses by bringing intra-modal similarity (SimCon) (Sec. 3.2) and multi-view consistency (MV-SimCon) (Sec. 3.3) into the picture. ### Preliminary **GroupViT.** GroupViT [75] is the first work to explore zero-shot transfer from text supervision alone to semantic segmentation without using any pixel-wise labels. The basic idea is to bring back the grouping mechanism [80, 81, 60] into deep transformer networks in a bottom-up manner. Through a hierarchical grouping process, the model learns to grow image regions into progressively larger arbitrary-shaped segments. To be specific, GroupViT consists of a vision encoder \(f_{\theta}\) and a text encoder \(g_{\phi}\). The vision encoder is a vision transformer (ViT) with group tokens and grouping blocks. Given a batch of images and the corresponding texts \(\{(I_{i},T_{i})\}\), where \(i\) is the index within the batch, the batch is sampled from a collection of multi-modal data. During the feed-forward pass, the image is first split into non-overlapping patches that are linearly projected into a latent space; the resulting embeddings are termed segment tokens. The segment tokens and the learnable group tokens are then fed to the transformer layers. After a set of transformer layers, the segment tokens and the group tokens are passed to a grouping block. Within the grouping block, segment tokens are assigned to groups and merged together for further processing. The assignment is done by computing the similarities between the segment tokens and the group tokens and using a differentiable Gumbel-softmax assignment [42, 26]. The merging combines all the segment tokens belonging to the same group and is performed via a weighted sum. In this way, the group tokens learn to aggregate information globally from all segment tokens. The set of transformer layers followed by a grouping block constitutes a stage. Stacking two such stages gives the final vision encoder. For training, a global representation of the image \(\{\mathbf{z}_{i}^{I}\}\in\mathbb{R}^{d}\) is obtained by average pooling the final segment tokens, followed by \(L_{2}\)-normalization. The text encoder is a transformer model, the same as in [53], and the final normalized text embeddings are denoted as \(\{\mathbf{z}_{i}^{T}\}\in\mathbb{R}^{d}\).
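To make the assign-and-merge step of the grouping block more concrete, below is a heavily simplified PyTorch sketch; the actual GroupViT block additionally uses learned projections, attention-style normalization, and residual connections, so the code only illustrates the hard Gumbel-softmax assignment of segment tokens to group tokens followed by a weighted merge.

```python
import torch
import torch.nn.functional as F

def grouping_block(segment_tokens, group_tokens, tau=1.0):
    """Simplified assign-and-merge step (a sketch, not the exact GroupViT block).

    segment_tokens: (B, N, d), group_tokens: (B, G, d)
    returns merged group features of shape (B, G, d).
    """
    # similarity between every group token and every segment token
    logits = torch.einsum("bgd,bnd->bgn", group_tokens, segment_tokens)
    # straight-through Gumbel-softmax: each segment token is hard-assigned to one group,
    # while gradients flow through the soft assignment
    assign = F.gumbel_softmax(logits, tau=tau, hard=True, dim=1)       # (B, G, N)
    # merge: weighted sum of the segment tokens belonging to each group
    merged = torch.einsum("bgn,bnd->bgd", assign, segment_tokens)
    merged = merged / (assign.sum(dim=-1, keepdim=True) + 1e-6)        # average per group
    return merged
```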
**Notations.** Usually the similarity between any two embedding vectors is computed by the dot product between them and is denoted by \(s(\mathbf{z}_{i}^{I},\mathbf{z}_{i}^{T})=\mathbf{z}_{i}^{I}\cdot\mathbf{z}_{i}^{T}\). Within the batch \(B=\{(\mathbf{z}_{i}^{I},\mathbf{z}_{i}^{T})\}\), the similarity between all images and texts is computed and is stored in a \(|B|\times|B|\) dimensional matrix \(\mathbf{S}^{IT}\). For brevity, \(\mathbf{S}_{ij}^{IT}\) is the similarity between an image with index \(i\) and text with index \(j\). Similarly, a matrix \(\mathbf{S}^{II}\) contains the image-to-image similarities and \(\mathbf{S}^{TT}\) the text-to-text similarities. The computation of the InfoNCE loss involves the temperature-controlled exponential of the similarities, and it is represented as \(\mathbf{E}_{ij}^{IT}=\exp(\mathbf{z}_{i}^{I}\cdot\mathbf{z}_{j}^{T}/\tau)=\exp(\mathbf{S}_{ij}^{IT}/\tau)\). Here, \(\tau\) is a learnable temperature parameter initialized with a value of \(0.07\) [53, 75]. Similarly, the exponential term between two images is represented as \(\mathbf{E}_{ij}^{II}\) and between two pieces of text as \(\mathbf{E}_{ij}^{TT}\). **InfoNCE loss.** With a global representation for both image and text modalities, GroupViT uses the InfoNCE loss function that matches an image to the corresponding text and vice versa. The InfoNCE loss pulls the representations of the corresponding image and text pairs closer and pushes the representations of non-matching (according to the data) text samples apart. This image-text alignment loss jointly trains the visual and the textual encoders, and may be expressed for image-to-text matching as: \[\mathcal{L}_{\text{NCE}}(I,T)=-\frac{1}{|B|}\sum_{i=1}^{|B|}\log\frac{\mathbf{E}_{ii}^{IT}}{\sum_{j=1}^{|B|}\mathbf{E}_{ij}^{IT}} \tag{1}\] and similarly for text-to-image matching as: \[\mathcal{L}_{\text{NCE}}(T,I)=-\frac{1}{|B|}\sum_{i=1}^{|B|}\log\frac{\mathbf{E}_{ii}^{TI}}{\sum_{j=1}^{|B|}\mathbf{E}_{ij}^{TI}} \tag{2}\] where \(\mathbf{E}^{TI}\) is the transpose of \(\mathbf{E}^{IT}\). The overall training loss of GroupViT is a linear combination of Eq. (1) and (2). **Shortcomings of InfoNCE.** Given an image and its corresponding aligned text (according to the ground truth), the InfoNCE loss pulls the image embedding closer to its corresponding text embedding and pushes the other text embeddings away. As shown in Fig. 1, this objective falls short in accounting for noise in the training data, where the corresponding text contains no or only partial information about the visual content of the image. Additionally, InfoNCE does not account for any intra-modal relations. The proposed SimCon loss mitigates this issue as an image is not only positively matched to the corresponding text, but also to additional images and texts determined via the intra-modal similarities. Furthermore, the InfoNCE loss function does not account for any relations across different views of the image, which have been shown to be useful for visual representation learning [4, 5, 10, 11, 25] and are available without any supervision, as these views are synthetically generated by applying augmentations. To add a further training signal, the SimCon loss is systematically extended to use multiple views of the images, termed MV-SimCon, where the images in each view are positively matched to the text following the SimCon objective and an additional cosine distance loss [10] is used to match the images from different views.
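For reference, a minimal PyTorch sketch of the image-text InfoNCE objective in Eqs. (1)-(2) is shown below; it assumes \(L_{2}\)-normalized batch embeddings and treats only the diagonal (paired) entries as positives, which is exactly the behaviour whose shortcomings are discussed above.

```python
import torch
import torch.nn.functional as F

def info_nce(z_img, z_txt, tau):
    """Symmetric image-text InfoNCE loss (Eqs. (1) and (2)), a minimal sketch.

    z_img, z_txt: (B, d) L2-normalized image and text embeddings.
    """
    logits = z_img @ z_txt.t() / tau                  # S^{IT} / tau, shape (B, B)
    targets = torch.arange(z_img.size(0), device=z_img.device)
    loss_i2t = F.cross_entropy(logits, targets)       # Eq. (1): image-to-text
    loss_t2i = F.cross_entropy(logits.t(), targets)   # Eq. (2): text-to-image
    return loss_i2t + loss_t2i
```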
### SimCon Loss The proposed SimCon loss jointly accounts for intra-modal and cross-modal relations. For an image anchor, the representations of other similar images, texts of these similar images and the corresponding text to the anchor are pulled closer while the rest of the image and text representations are pushed apart. An overview of the process is shown in Fig. 2. The SimCon loss for image-to-text alignment may be expressed as: \[\mathcal{L}_{\text{SimCon}}(I,T,\mathbf{P}^{I})\] \[\quad=-\frac{1}{|B|}\sum_{i=1}^{|B|}\frac{1}{|\mathbf{P}_{i}^{I} |}\sum_{p\in\mathbf{P}_{i}^{I}}\log\frac{\mathbf{E}_{ip}^{IT}+\mathbf{E}_{ip} ^{II}}{\sum_{j=1}^{|B|}\mathbf{E}_{ij}^{IT}+\sum_{j=1}^{|B|}\mathbf{E}_{ij}^{ II}} \tag{3}\] Figure 2: **SimCon Overview.** During training, the sampled images \(I\) are passed through the GroupViT model \(f_{\theta}\), and the segment tokens are averaged and normalized to obtain the embedding \(\mathbf{z}^{I}\). The texts \(T\) are passed through the text encoder \(g_{\phi}\) to obtain the text embedding \(\mathbf{z}^{T}\). The intra-modal image-to-image \(\mathbf{S}^{II}\) and text-to-text \(\mathbf{S}^{TT}\) similarities are computed via the cosine distance. The set of positives \(\mathbf{P}^{I}\) and \(\mathbf{P}^{T}\) are determined using the intra-modal similarities by finding the samples with a similarity higher than the threshold \(\lambda\). This is achieved by passing \(\mathbf{S}^{II}-\lambda\) and \(\mathbf{S}^{TT}-\lambda\) through a Heaviside step function \(H\), the output 1 determines positives and \(0\) for negatives. The SimCon loss defined in Eq. (3) and (4) is computed on a joint similarity matrix containing both the intra-modal and cross-modal similarities with the positive and negative relations between the pairs governed by \(\mathbf{P}^{I}\) and \(\mathbf{P}^{T}\). Blue shows image modality, yellow shows text modality and green shows cross-modal. MV-SimCon follows a similar pipeline with multiple views as elaborated in Sec. 3.3. and for text-to-image alignment as: \[\begin{split}\mathcal{L}_{\text{SimCon}}(T,I,\mathbf{P}^{T})\\ =-\frac{1}{|B|}\sum_{i=1}^{|B|}\frac{1}{|\mathbf{P}_{i}^{T}|}\sum_ {p\in\mathbf{P}_{i}^{T}}\log\frac{\mathbf{E}_{ip}^{TI}+\mathbf{E}_{ip}^{TT}}{ \sum_{j=1}^{|B|}\mathbf{E}_{ij}^{TI}+\sum_{j=1}^{|B|}\mathbf{E}_{ij}^{TT}} \end{split} \tag{4}\] Here \(\mathbf{P}_{i}^{I}\) is the set of images that are similar to the image anchor at \(i\) and \(\mathbf{P}_{i}^{T}\) is the set of texts that are similar to the text anchor at \(i\). As shown in Fig. 2, \(\mathbf{P}^{I}\) and \(\mathbf{P}^{T}\) are obtained from the intra-modal similarities as: \[\begin{split}\mathbf{P}^{I}&=H(\mathbf{S}^{II}- \lambda)\\ \mathbf{P}^{T}&=H(\mathbf{S}^{TT}-\lambda)\end{split} \tag{5}\] where \(H\) is a Heaviside step function with \(H(x)=1\) if \(x>=0\), otherwise \(H(x)=0\), thus the values in \(\mathbf{P}^{I}\) and \(\mathbf{P}^{T}\) are binary. \(\lambda\) is a threshold hyper-parameter on the intra-modal similarities. For an image anchor \(\mathbf{z}_{i}^{I}\), the positive samples are the ones which have the intra-modal similarity higher than the threshold, _i.e_., where \(\mathbf{P}_{i}^{I}=1\). ### MV-SimCon: SimCon with Multiple Views The proposed MV-SimCon loss, or SimCon loss with multiple views, enforces consistency across multiple image views obtained via data augmentation. 
Let \(I_{1}\) and \(I_{2}\) be the two views of an image, then the MV-SimCon loss for image to text alignment may be expressed as: \[\begin{split}\mathcal{L}_{\text{MV-SimCon}}(I,T,\mathbf{P}_{J}^ {I})&=\mathcal{L}_{\text{SimCon}}(I_{1},T,\mathbf{P}_{J}^{I})\\ &\quad+\mathcal{L}_{\text{SimCon}}(I_{2},T,\mathbf{P}_{J}^{I}) \end{split} \tag{6}\] and for text to image alignment as: \[\begin{split}\mathcal{L}_{\text{MV-SimCon}}(T,I,\mathbf{P}^{T}) =\mathcal{L}_{\text{SimCon}}(T,I_{1},\mathbf{P}^{T})\\ +\mathcal{L}_{\text{SimCon}}(T,I_{2},\mathbf{P}^{T})\end{split} \tag{7}\] where \(\mathbf{P}_{J}^{I}\) is the set of images that are similar to the anchor image in either view and \(\mathbf{P}^{T}\) remains the same as in SimCon loss as no text augmentations are used. \(\mathbf{P}_{J}^{I}\) for the MV-SimCon loss may be expressed as: \[\mathbf{P}_{J}^{I}=H(\max(\mathbf{S}^{I_{1}I_{1}},\mathbf{S}^{I_{2}I_{2}})-\lambda) \tag{8}\] where \(\mathbf{S}^{I_{1}I_{1}}\) are the intra-modal similarities within the first view of the images and \(\mathbf{S}^{I_{2}I_{2}}\) within the second view. Note that there are two possibilities, the first is to independently compute the image positives in each view and the second is to compute them jointly across the views. The joint computation of the image positives leads to more number of positives for each image and empirically leads to better performance as shown in the ablations Sec. 4.4. So far, the MV-SimCon loss aligns the images in both views to the appropriate texts following the SimCon loss. However, the images in the two views are still not connected. To connect them to each other and for an additional training signal, a negative cosine similarity loss [10] is used between the two views of the image: \[\mathcal{L}_{\text{NCS}}(I_{1},I_{2})\!=\!-\frac{1}{|B|}\sum_{i=1}^{|B|}\frac{ 1}{2}p(\mathbf{z}_{i}^{I_{1}})\!\cdot\!\text{sg}(\mathbf{z}_{i}^{I_{2}})\!+ \!\frac{1}{2}p(\mathbf{z}_{i}^{I_{2}})\!\cdot\!\text{sg}(\mathbf{z}_{i}^{I_{ 1}})\! \tag{9}\] where _sg_ is the stop-gradient operation and \(p\) is a projection head [10]. The overall objective with the MV-SimCon loss is governed by a linear combination of Eq. (6), (7) and (9), \[\begin{split}\mathcal{L}_{\text{final}}&=\mathcal{L }_{\text{MV-SimCon}}(I,T,\mathbf{P}_{J}^{I})\\ &\quad+\mathcal{L}_{\text{MV-SimCon}}(T,I,\mathbf{P}^{T})+ \mathcal{L}_{\text{NCS}}(I_{1},I_{2})\end{split} \tag{10}\] The design choices in MV-SimCon are methodically made based on the empirical evidence as studied in Sec. 4.4. ## 4 Experiments ### Training and Evaluation Datasets **Training datasets.** The experiments use Google's Conceptual Captions GCC3M [59], GCC12M [6], RedCaps12M [14] and filtered YFCC14M [66]. The exact number of samples for these datasets in our version, along with the number of samples in GroupViT [75] implementation, are reported in Tab. 1. Note that the images in these datasets are hosted on a range of sources on the web, where the links change or expire with time. Therefore, the number of samples in our version of the dataset is lower than those in [75]. To investigate the efficacy of the proposed method, we experiment with different numbers of training samples, starting with \(3\) million from GCC3M to \(41\) million with all the mentioned datasets combined. Further, we also experiment with different distributions at a similar scale by comparing models trained on GCC12M with ones trained on RedCaps12M. 
**Evaluation datasets.** The proposed approach is evaluated for the task of zero-shot transfer to semantic segmentation on the validation sets of PASCAL VOC [16], PASCAL Context [44], and Microsoft COCO [35]. They contain \(20\), \(59\), and \(80\) foreground classes, respectively, with an additional background class. For COCO, following GroupViT [75], the instance segmentation masks from the same class are combined to obtain semantic segmentation masks. \begin{table} \begin{tabular}{l|c|c|c|c} \hline \hline **Dataset** & **Avg.** & **\#Samples** & **\#Samples** & **\% diff.** \\ & **Length** & **Ours** & **[**75**]** & \\ \hline \hline GCC3M [59] & 10.5 & \(2.857\)M & \(2.891\)M & \(-1.2\%\) \\ GCC12M [6] & 22.4 & \(10.696\)M & \(11.156\)M & \(-4.1\%\) \\ RedCaps12M [14] & 11.8 & \(11.835\)M & \(11.866\)M & \(-0.3\%\) \\ YFCC14M [66] & 38.4 & \(14.611\)M & \(14.615\)M & \(-0.3\%\) \\ \hline \hline \end{tabular} \end{table} Table 1: Datasets used for the training along with the number of samples in our and in GroupViT’s [75] version of the datasets. ### Implementation Details **Training.** Each model is trained on the specified dataset with a batch size of \(2048\) for \(30\) epochs with the AdamW optimizer [38]. An initial learning rate of \(4e^{-6}\) is linearly warmed up to a maximum learning rate of \(1.6e^{-3}\) in the first \(2\) epochs. Following warmup, the learning rate is decayed via the cosine schedule [37]. All the experiments use \(8\) NVIDIA A100 GPUs with \(40\)GB of memory each. The threshold on intra-modal similarity values for determining positive samples in SimCon and MV-SimCon (\(\lambda\) in Eq. (5) and Eq. (8)) is initialized with \(0.95\) and is decayed using a step schedule by \(0.05\) after the 2nd and the 15th epochs. **Differences with GroupViT.** As noted in Tab. 1, our versions of these web-based datasets have fewer samples. Furthermore, due to hardware constraints, all the experiments in this paper are conducted with a batch size of \(2048\), whereas GroupViT [75] uses a larger batch size of \(4096\). In vision-language pre-training tasks with contrastive learning, the use of larger batch sizes has been shown to give better results [84]. The effect of batch size on the proposed MV-SimCon is studied in Sec. 4.4 and the supplementary. With these unavoidable differences in the setup, we compare the proposed approach with our reproduction of GroupViT [75] results in the exact same setup [74]. **Discussion on Multi-label loss.** In addition to the InfoNCE loss, GroupViT [75] uses a multi-label contrastive loss function for training, where nouns are extracted from the text and fed to a randomly sampled prompt template to construct additional text samples. In the multi-label contrastive loss objective, an image should align not only with the text from the data but also with the auxiliary texts. The use of the multi-label loss function increases the computational requirements as the auxiliary texts are passed through the text encoder. In its default configuration, it generates three auxiliary texts for each sample, which restricts the training to datasets that contain long enough texts to extract nouns from, eliminating datasets such as RedCaps12M [14] due to its short average token length, as shown in Tab. 1. In our experiments for the baseline, using the multi-label loss increased the computational and GPU memory costs, and either gave similar or worse results.
As an example, when trained on GCC12M using InfoNCE loss alone, the model attains mIoU of \(41.4\%\) on PASCAL VOC, whereas a model trained with InfoNCE and multi-label loss attains \(41.1\%\) (reported in [75]). Additionally, when trained on \(27\) million samples from GCC3M, GCC12M and YFCC14M, the model achieves a mIoU of \(50.3\%\) on PASCAL VOC, whereas the performance degrades to \(47.5\%\) when the multi-label loss is used. For these reasons, the multi-label loss is not used in our reproduction of the baseline. ### Evaluation **Independent training datasets.** The results by training the model independently on either GCC3M [59], RedCaps12M [14] or GCC12M [6] are shown in Tab. 2, where improvements are observed with the use of every pre-training \begin{table} \begin{tabular}{l l|l l l|l} \hline \hline **Loss Function** & **Training Data** & **PASCAL VOC** & **PASCAL Context** & **COCO** & **Average** \\ \hline \hline InfoNCE [75] & CC3M & \(16.0\) & \(7.20\) & \(6.50\) & \(9.90\) \\ SimCon (Ours) & CC3M & \(30.4+14.4\%\) & \(15.1+7.90\%\) & \(12.2+5.70\%\) & \(19.2+9.30\%\) \\ MV-SimCon (Ours) & CC3M & \(35.0+19.0\%\) & \(17.1+9.90\%\) & \(13.4+6.90\%\) & \(21.8+11.9\%\) \\ \hline InfoNCE [75] & R12M & \(19.1\) & \(11.0\) & \(8.9\) & \(13.0\) \\ SimCon (Ours) & R12M & \(37.9+18.8\%\) & \(18.1+7.10\%\) & \(19.5+10.6\%\) & \(25.2+12.2\%\) \\ MV-SimCon (Ours) & R12M & \(40.7+21.6\%\) & \(19.1+8.10\%\) & \(21.6+12.7\%\) & \(27.1+14.1\%\) \\ \hline InfoNCE [75] & CC12M & \(41.4\) & \(19.6\) & \(20.5\) & \(27.1\) \\ SimCon (Ours) & CC12M & \(47.1+5.70\%\) & \(21.3+1.70\%\) & \(22.6+2.10\%\) & \(30.3+3.20\%\) \\ MV-SimCon (Ours) & CC12M & \(48.9+7.50\%\) & \(23.0+3.40\%\) & \(23.8+3.30\%\) & \(31.9+4.80\%\) \\ \hline \hline \end{tabular} \end{table} Table 2: Zero-shot semantic segmentation results on PASCAL-VOC [16], PASCAL Context [44], and COCO [35] measured with mask mIoU (%) with different training loss functions. Each model is trained independently either on GCC3M [59], RedCaps12M [14], or GCC12M [6] dataset with the same setup. Absolute improvements (%) over the baseline [75] are shown in blue. Figure 3: Zero-shot semantic segmentation results on PASCAL VOC [16] measured with mask mIoU (%) after each training epoch. dataset and across all evaluation datasets. The proposed SimCon loss function shows a substantial improvement ranging from an average gain of \(3.2\%\) to \(9.3\%\). The proposed MV-SimCon setup demonstrates additional improvements on top of SimCon. Note that most of the improvements come from using SimCon with MV-SimCon adding an additional \(0.4\%\) to \(2.6\%\) to the average performance. While InfoNCE achieves an average performance of \(27.1\%\) mIoU when training on GCC12M, it only attains an average performance of \(13.0\%\) mIoU when training on RedCaps12M, which is of a similar scale in terms of the number of samples. This result shows that training with the InfoNCE loss is highly sensitive to data distribution. On the other hand, the proposed MV-SimCon attains \(31.9\%\) mIoU average performance when trained on GCC12M and \(27.1\%\) mIoU average performance when trained on RedCaps12M, which demonstrates the robustness of MV-SimCon across different data distributions at the same scale. The performance on PASCAL VOC [16] measured after each training epoch is shown in Fig. 3. MV-SimCon converges faster than SimCon, while both SimCon and MV-SimCon improve and converge faster than InfoNCE. 
When training on GCC3M, SimCon and MV-SimCon outperform the final performance of InfoNCE (after \(30\) epochs) after only \(7\) and \(3\) epochs of training, respectively. Similar observations were made while training on RedCaps12M. On GCC12M, MV-SimCon improves the fastest. **Combining training datasets.** Results by training the model on different combinations of training datasets, along with comparisons to fully supervised transfer, are shown in Tab. 3. For these experiments, GCC3M and GCC12M are always used and are combined with either RedCaps12M or YFCC14M or both. Under the same training setup, _i.e_., the same batch size and training dataset version, the proposed MV-SimCon consistently outperforms InfoNCE loss on all evaluation datasets. Note that increasing the number of training samples from \(29\) million to \(41\) million for InfoNCE marginally improves the results on PASCAL VOC and degrades the results on PASCAL Context and COCO. Furthermore, the performance on all evaluation datasets is worse, with \(27\) million in training samples. On the other hand, MV-SimCon is more robust to the domain distribution at a comparable number of training samples, and the improvements hold with an increased number of training samples. In comparison to the models trained by the authors of GroupViT [75] with a higher batch size and more training samples in each dataset, the proposed MV-SimCon demonstrates higher scores with smaller batch size and less data. The last two rows on Tab. 3 show results with test time augmentations and higher resolution for inference. Additional results and details are in the supplementary. **Qualitative Results.** Visualization of semantic segmentation predictions from different models is shown in Fig. 4. The models trained with \(41\) million samples perform better than those trained with \(12\) million samples for both loss functions. With the same number of training samples, the grouping with MV-SimCon is better and does not have an overly enlarged mask or holes in the mask. Further, the model trained with InfoNCE misses certain semantic classes completely, an issue which is mitigated to some extent with MV-SimCon. More visualizations, including failure cases, are presented in the supplementary. ### Effect of design choices in MV-SimCon **Ablation.** The effect of design choices in MV-SimCon are summarized in Tab. 4. The experiments were conducted by training on GCC3M and evaluating the models on PASCAL VOC. 
It can be seen that most of the improvements come \begin{table} \begin{tabular}{c|c c|c c c|c c c} \hline \hline **Model** & **Arch.** & **Pre-training** & **Supervision** & **Zero** & **TTA** & **PASCAL** & **PASCAL** & **COCO** \\ & & **Dataset** & & **shot** & & **VOC** & **Context** & \\ \hline \hline DeiT [67] & ViT & ImageNet (1.2M) & class & ✗ & ✗ & \(53.0\) & \(35.9\) & - \\ \hline DINO [5] & ViT & ImageNet (1.2M) & self & ✗ & ✗ & \(39.1\) & \(20.4\) & - \\ DINO [5] & ViT & CC3 + CC12 + Y14 (29M) & self & ✗ & ✗ & \(37.6\) & \(22.8\) & - \\ MoCo [25] & ViT & ImageNet (1.2M) & self & ✗ & ✗ & \(34.3\) & \(21.3\) & - \\ MoCo [25] & ViT & CC3 + CC12 + Y14 (29M) & self & ✗ & ✗ & \(36.1\) & \(23.0\) & - \\ \hline InfoNCE [75] & GroupViT & CC3 + CC12 + R12 (27M) & text & ✓ & ✗ & \(50.8\) & \(23.7\) & \(27.5\) \\ InfoNCE + Multi-label [75] & GroupViT & CC3 + CC12 + Y14 (29M) & text & ✓ & ✗ & \(52.3\) & \(22.4\) & \(24.3\) \\ \hline \multicolumn{1}{c|}{\multirow{-3}{*}{ \begin{tabular}{} \end{tabular} } } & GroupViT & CC3 + CC12 + R12 (27M) & text & ✓ & ✗ & \(44.7\) & \(20.0\) & \(23.4\) \\ InfoNCE [75] & GroupViT & CC3 + CC12 + Y14 (29M) & text & ✓ & ✗ & \(50.3\) & \(21.7\) & \(24.6\) \\ InfoNCE [75] & GroupViT & CC3 + CC12 + R12 + Y14 (41M) & text & ✓ & ✗ & \(50.5\) & \(20.9\) & \(23.0\) \\ \hline MV-SimCon\({}^{\dagger}\) & GroupViT & CC3 + CC12 + R12 (27M) & text & ✓ & ✗ & \(52.3+7.60\) & \(\mathbf{24.5}+4.50\%\) & \(27.7+4.30\%\) \\ MV-SimCon\({}^{\dagger}\) & GroupViT & CC3 + CC12 + Y14 (29M) & text & ✓ & ✗ & \(52.4+2.10\%\) & \(22.2+0.50\%\) & \(26.6+2.00\%\) \\ MV-SimCon\({}^{\dagger}\) & GroupViT & CC3 + CC12 + R12 + Y14 (41M) & text & ✓ & ✗ & \(\mathbf{53.5}+3.00\%\) & \(24.2+3.30\%\) & \(\mathbf{29.9}+6.90\%\) \\ \hline InfoNCE\({}^{\dagger}\) [75] & GroupViT & CC3 + CC12 + R12 + Y14 (41M) & text & ✓ & ✓ & \(\mathbf{53.2}\) & \(22.7\) & \(\mathbf{24.8}\) \\ MV-SimCon\({}^{\dagger}\) & GroupViT & CC3 + CC12 + R12 + Y14 (41M) & text & ✓ & ✓ & \(\mathbf{58.7}+5.50\%\) & \(\mathbf{26.6}+3.90\%\) & \(\mathbf{33.3}+8.50\%\) \\ \hline \hline \end{tabular} \end{table} Table 3: Mask mIoU (%) on PASCAL-VOC [16], PASCAL Context [44] and COCO [35] datasets. Comparisons between zero-shot and fully supervised transfer. Zero-shot “✓” indicates transfer to semantic segmentation without any fine-tuning. Absolute improvements (%) over the baseline [75] are shown in blue. The authors of this paper train all models that are marked with a \(\dagger\) with the same data and batch size. In gray are the models trained by the authors of [75] with different versions of the data and a higher batch size as noted in Sec. 4.2. TTA “✓” indicates the use of test time augmentations for inference such as flip, multiple scales, and evaluation at higher resolution. from using SimCon loss (row-1 vs. row-2). Naively adding multiple views to SimCon, by aligning the images in each view to the text improves the results marginally (row-2 vs. row-3). Adding the negative cosine similarity (NCS) loss between the two views of the image (Eq. (9)) improves the results further by \(1.2\%\) (row-3 vs. row-4). Adding joint computation of the image positives (Eq. (8)) improves the results by \(2.7\%\) (row-4 vs. row-5). Note that joint image positives refer to the setup when the positive samples for images are assigned if the intra-modal similarity is higher than the threshold in either view. **Effect of batch size.** Results with varying the batch size are shown in Fig. 5. Increasing the batch size improves the results. 
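Before elaborating on the batch-size comparison, the joint image positives rule used in the last row of Tab. 4 can be made concrete. The sketch below forms the boolean positive mask from two augmented views, with \(\lambda\) as in Eq. (5) and Eq. (8); the projection heads and the loss weighting are omitted, and the tensor shapes are assumptions.

```python
import torch
import torch.nn.functional as F

def joint_image_positive_mask(z_v1, z_v2, lam=0.95):
    """Pair (i, j) is a positive if the image-image cosine similarity exceeds lam in either view.
    z_v1, z_v2: (B, D) image embeddings of the two augmented views of the same batch."""
    z_v1 = F.normalize(z_v1, dim=-1)
    z_v2 = F.normalize(z_v2, dim=-1)
    sim_v1 = z_v1 @ z_v1.t()              # intra-modal similarities, view 1
    sim_v2 = z_v2 @ z_v2.t()              # intra-modal similarities, view 2
    pos = (sim_v1 > lam) | (sim_v2 > lam)
    pos.fill_diagonal_(True)              # each sample is always its own positive
    return pos                            # (B, B) mask selecting which texts an image should align to
```

The threshold itself follows the step schedule described in Sec. 4.2 (initialized at 0.95 and decayed by 0.05 after the 2nd and 15th epochs).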
As noted in Sec. 4.2, our main experiments are with a batch size of \(2048\), whereas [75] uses \(4096\). This study explains why the baseline [75] obtains lower results in our experiments than those reported by its authors. Based on the trend in Fig. 5, using a higher batch size is expected to improve performance for both our approach and the baseline. However, all the comparisons made in our experiments are fair, as they were conducted with the same batch size and data. ## 5 Conclusions A novel loss function termed SimCon is proposed, where an image (text) sample should not only align to the corresponding text (image) but also with the text from samples that are similar in the visual (textual) space. Training is further made robust by combining the SimCon loss with multiple views in the visual domain. The empirical results demonstrate that the use of the proposed MV-SimCon loss function leads to SOTA results on zero-shot semantic segmentation, along with faster convergence. \begin{table} \begin{tabular}{c|c|c|c|c} \hline **SimCon** & **Multiple** & **NCS** & **Joint image** & **PASCAL** \\ & **views** & & **positives** & **VOC** \\ \hline \hline \(\mathcal{X}\) & \(\mathcal{X}\) & \(\mathcal{X}\) & \(\mathcal{X}\) & \(16.0\) \\ \(\checkmark\) & \(\mathcal{X}\) & \(\mathcal{X}\) & \(\mathcal{X}\) & \(30.4\) \\ \(\checkmark\) & \(\checkmark\) & \(\mathcal{X}\) & \(\mathcal{X}\) & \(31.1\) \\ \(\checkmark\) & \(\checkmark\) & \(\checkmark\) & \(\mathcal{X}\) & \(32.3\) \\ \(\checkmark\) & \(\checkmark\) & \(\checkmark\) & \(\checkmark\) & \(35.0\) \\ \hline \end{tabular} \end{table} Table 4: Effect of the design choices in MV-SimCon. All the experiments are conducted by training on the GCC3M dataset. Figure 4: Qualitative results with models trained on \(12\) and \(41\) million samples. The \(12\)M setup is trained on the GCC12M [6] dataset, and the \(41\)M setup is trained on a combination of all datasets in Tab. 1. Best viewed in color and by zooming in to see the predicted class. Figure 5: Effect of batch size on GCC3M. x-axis is the batch size and y-axis is the mIoU (in %) on the PASCAL VOC dataset.
2310.13430
HRTF Interpolation using a Spherical Neural Process Meta-Learner
Several individualization methods have recently been proposed to estimate a subject's Head-Related Transfer Function (HRTF) using convenient input modalities such as anthropometric measurements or pinnae photographs. There exists a need for adaptively correcting the estimation error committed by such methods using a few data point samples from the subject's HRTF, acquired using acoustic measurements or perceptual feedback. To this end, we introduce a Convolutional Conditional Neural Process meta-learner specialized in HRTF error interpolation. In particular, the model includes a Spherical Convolutional Neural Network component to accommodate the spherical geometry of HRTF data. It also exploits potential symmetries between the HRTF's left and right channels about the median axis. In this work, we evaluate the proposed model's performance purely on time-aligned spectrum interpolation grounds under a simplified setup where a generic population-mean HRTF forms the initial estimates prior to corrections instead of individualized ones. The trained model achieves up to 3 dB relative error reduction compared to state-of-the-art interpolation methods despite being trained using only 85 subjects. This improvement translates up to nearly a halving of the data point count required to achieve comparable accuracy, in particular from 50 to 28 points to reach an average of -20 dB relative error per interpolated feature. Moreover, we show that the trained model provides well-calibrated uncertainty estimates. Accordingly, such estimates can inform the sequential decision problem of acquiring as few correcting HRTF data points as needed to meet a desired level of HRTF individualization accuracy.
Etienne Thuillier, Craig Jin, Vesa Välimäki
2023-10-20T11:41:54Z
http://arxiv.org/abs/2310.13430v1
# HRTF Interpolation using a Spherical Neural Process Meta-Learner ###### Abstract Several individualization methods have recently been proposed to estimate a subject's Head-Related Transfer Function (HRTF) using convenient input modalities such as anthropometric measurements or pinnae photographs. There exists a need for adaptively correcting the estimation error committed by such methods using a few data point samples from the subject's HRTF, acquired using acoustic measurements or perceptual feedback. To this end, we introduce a Convolutional Neural Process meta-learner specialized in HRTF error interpolation. In particular, the model includes a Spherical Convolutional Neural Network component to accommodate the spherical geometry of HRTF data. It also exploits potential symmetries between the HRTF's left and right channels about the median axis. In this work, we evaluate the proposed model's performance purely on time-aligned spectrum interpolation grounds under a simplified setup where a generic population-mean HRTF forms the initial estimates prior to corrections instead of individualized ones. The trained model achieves up to 3 dB relative error reduction compared to state-of-the-art interpolation methods despite being trained using only 85 subjects. This improvement translates up to nearly a halving of the data point count required to achieve comparable accuracy, in particular from 50 to 28 points to reach an average of -20 dB relative error per interpolated feature. Moreover, we show that the trained model provides well-calibrated uncertainty estimates. Accordingly, such estimates can inform the sequential decision problem of acquiring as few correcting HRTF data points as needed to meet a desired level of HRTF individualization accuracy. Audio systems, representation learning, spatial audio, uncertainty. ## I Introduction Recent adoption of augmented and virtual reality interfaces has pushed the need for immersive spatial audio rendering solutions that scale to mass market [1, 2, 3, 4]. The Head Related Transfer Function (HRTF) is a key component of current systems: it simulates the effect of the subject's body on the acoustic transmission channels between the subject's ears and sound sources as a function of their locations around the subject [5]. Crucially, the HRTF is a function of the subject's morphology and is specific to each individual. Studies have shown that spatial audio percepts deteriorate when a generic HRTF is used for all subjects in a population compared to using individualized HRTF estimates [5, 6]. In this work, we propose the first HRTF interpolation method that provides well-calibrated uncertainty estimates. The method also demonstrates significantly improved interpolation accuracy with regards to the state of the art. ### _Prior Art_ A recent review paper classifies HRTF individualization techniques into four categories defined by the source of HRTF information: acoustic measurements, numerical simulation, anthropometric data, and perceptual feedback [6]. As an alternative approach useful to our discussion, we classify below individualization techniques into two broad classes according to the way in which the subject's individualized HRTF is represented. A first class of methods represents the subject's individualized HRTF in a non-parametric fashion with a sparse set of observed HRTF data points. 
In such approaches, interpolation methods are applied downstream to provide HRTF filter estimates at specified directions of arrival between the observed locations. Typically, the observations are collected using acoustic measurements [7], but recommender systems have also been proposed for composing the sparse set with HRTF filters derived from a pre-existing database and according to perceptual feedback obtained from the user [8]. Improvements in interpolation methods result in sparser set of observations becoming sufficient for meeting a required accuracy threshold, thereby accelerating the individualized HRTF acquisition process. Early methods include barycentric interpolation [9], natural neighbour interpolation [10], spherical harmonic [11], thin-plate spherical spline interpolation [12] and Gaussian process regression [13]. More recently, pre-processing has been shown to significantly reduce the required density of HRTF measurements needed to meet a given interpolation accuracy requirement [14, 15, 16, 17]. Neural-network regressor models have also been proposed [18, 19], including a spherical convolutional neural network performing interpolation from a relatively dense equiangular grid counting 120 data points [20]. Related works includes HRTF upsampling approaches using generative models [21]. However, such models currently provide improvements in the sparsest regimes only. A second class of methods parametrizes individualized HRTFs using low-dimensional latent-space representations embodied by fixed-length vectors of adjustable coefficients. Various approaches have been proposed to predict the coefficients of the representation including use of anthropometric measurement [22, 23], pinnae photographs [24], HRTF observations [18], perceptual feedback [25] or combinations thereof [26]. This provide a convenient means for promptly estimating a subject's HRTF from one or several input modalities. However, a common design compromise facing techniques in this class lies in providing a representation that is compact enough that prediction is facilitated, while retaining sufficient expressiveness that HRTF variability across the population is faithfully represented. Due to the fixed dimensionality of latent representations in particular, and unlike the non-parametric interpolation methods described above, the expressiveness of the model does not scale with additional data points provided to it. More importantly, any resulting change to the HRTF representation is in this case global, such that any resulting local improvement is susceptible to adversely affect the representation elsewhere. This contrasts with non-parametric cases which provide representations that are local by construction. ### _Problem_ There is a need for adaptively refining the individualized HRTF estimate provided by a parametric method until a pre-defined criterion of suitability is achieved, for example a user performance metric threshold under a listening test experiment. To this end, we advocate for a hybrid approach to HRTF individualization in which the parametric estimate is corrected by integrating a few observations of the subject's HRTF using an interpolation method. Under this approach, the HRTF refinement problem can be framed as a sequential decision problem: that of acquiring as few correcting HRTF data points as needed to meet the performance requirement, using measurements or perceptual feedback. 
Such problem would benefit from using an accurate interpolation method that also provides well-calibrated uncertainty estimates. When suitably calibrated, uncertainty estimates can indeed be used to inform the choice of the next location to observe. Under a perceptual feedback acquisition scheme, they can additionally inform the selection of proposal HRTF filters to be submitted as queries to the subject. Finally, there also exists a need within augmented reality settings, for matching the rendered sound field with the user's surrounding acoustic environment. Such a problem could also be addressed using the suggested approach by adaptively refining the Binaural Room Transfer Function instead of the HRTF. ### _Solution_ To facilitate the approach mentioned above, we introduce a novel model that we name Spherical Convolutional Conditional Neural Process (SConvCNP). The proposed model is a Convolutional Conditional Neural Process (ConvCNP) meta-learner [27] specialized in HRTF error interpolation. The model accommodates the spherical geometry of HRTF data. To this end, it includes a Spherical Convolutional Neural Network component [28, 29] which executes rotation-equivariant feature transforms. It also exploits the approximate symmetry between the HRTF's left and right channels about the median axis. To the authors' best knowledge, this work is the first application of a Neural Process model to spherical data. Such a model learns a functional representation of the set of observed HRTF data points that preserves spatial structure and can be addressed at any location on the unit sphere. Furthermore, the representation is learned using rotation equivariant mappings which ensures the same transformation is applied with shared parameters everywhere on the sphere, irrespective of feature location. These aspects allow for learning local interpolations of the HRTF features in an sample-effective fashion. Moreover, the possibility, afforded by the model, to address any location on the unit sphere provides native compatibility for training on any HRTF databases irrespective of its data point grid layout. This work implements and tests the SConvCNP model in a simplified experimental setup. Firstly, the interpolation is applied on the HRTF spectrum after time-alignment [14, 16, 17], leaving pure delay interpolation as future work for brevity. Secondly, a generic population-mean time-aligned spectrum is used as generic estimate for all subjects before correction, instead of individualized time-aligned spectra. This allows to evaluate the merits of the model purely from an interpolation performance standpoint, leaving the application to individualized HRTF correction as future work. The model is shown to achieve up to 3 dB of relative error reduction compared to state-of-the-art interpolation methods. This translates to nearly a halving of the required data to achieve a comparable level of accuracy. Moreover, our model is shown to provide well-calibrated uncertainty estimates. This paper is organized as follows. Sec. II provides background on the ConvCNP model and its meta-training procedure. Sec. III introduces the SConvCNP model, defines the interpolation tasks on which the model is trained, and proposes baseline and metrics for evaluating the model's performance both in terms of interpolation accuracy and uncertainty calibration. Sec. IV presents and discusses the experimental results. Sec. V concludes this paper. 
## II Background In this section, we provide a technical review of the ConvCNP model and its meta-training procedure as background for the introduction of the SConvCNP model in Sec. III. ### _ConvCNP Architecture_ Neural Processes form a class of deep neural networks operating on sets to model stochastic processes [30, 31]. In neural process models, a set of observed location-feature data point pairs \(\left\{(x_{c},y_{c})\right\}_{c=1}^{C}\) at the input informs a predictive distribution provided at the output for unseen values \(y_{t}\) at target locations \(x_{t}\), much in the same fashion as in Gaussian Processes [32]. In particular, the elements of the input set are subsumed into a representation embedding, which allows for handling sets of different sizes and ensures invariance in the ordering of set elements [33]. Recently, functional representation embeddings have been proposed that preserve the spatial structure in the input set and are addressable at any location coordinates \(x_{t}\). These functional embeddings enable constructing translation equivariant neural process models, as appropriate when modeling stationary data [27, 34, 35]. The ConvCNP is an example of such model [27]. A description of its architecture is given in the current section. A typical example of ConvCNP model architecture is given in the block diagram of Fig. 1. The model includes a first set convolution (block _SetConv_) which maps a set of observed data points \(\left\{(x_{c},y_{c})\right\}_{c=1}^{C}\) into a functional representation [27, 35] \[r=\text{SetConv}\left(\left\{(x_{c},y_{c})\right\}_{c=1}^{C}\right), \tag{1}\] which, assuming a multiplicity of one for data set elements [27], returns a vector-valued point-wise representation \[r(x)=\left(\sum_{c=1}^{C}K(x_{c},x),\ \frac{\sum_{c=1}^{C}y_{c}K(x_{c},x)}{ \sum_{c=1}^{C}K(x_{c},x)}\right), \tag{2}\] at any specified location \(x\). In the above expression, \(K:\mathcal{X}\times\mathcal{X}\rightarrow\mathds{R}\) denotes a positive definite kernel with learnable parameter(s), for example, a Gaussian kernel in the case of planar data such as images [36]. Accordingly, the first channel of functional embedding \(r\) before discretization, forms a kernel density of the observed locations: the result of a convolution between filter \(K(x_{c},\cdot)\) and a sum of unit-weighted Dirac distributions centered at locations \(\left\{x_{c}\right\}_{c=1}^{C}\). The second channel forms an interpolant of the observed data points \(\left\{(x_{c},y_{c})\right\}_{c=1}^{C}\) following the Nadaraya-Watson kernel regression method [37]. In ConvCNP models, translation equivariant representation learning is performed downstream of the set convolution using a Convolutional Neural Network (CNN). As pictured in Fig. 1, representation \(r\) is first discretized following a grid \((x_{g})_{g\in\mathcal{G}}\) of regularly-spaced coordinates. Assuming two-dimensional planar data for example: \(\mathcal{G}=\left\{1,\ldots,G\right\}\times\left\{1,\ldots,G\right\}\), in which \(G\) denotes the number of samples of the grid in each dimension. A second set convolution converts--at least implicitly--the learned representation at the output of the CNN back to a functional one [35], denoted \(q\) in the diagram. Crucially, \(q\)'s second channel1 forms an interpolant of the learned representation \(\left\{(x_{g},z_{g})\right\}_{g\in\mathcal{G}}\) following (2). 
Footnote 1: The first channel, representative of density, is less informative at this stage than in the case of the first set convolution and can optionally be discarded. Given the above, \(q\) forms a learned functional representation of the input set which is spatially-structured and addressable at any user-specified target location \(x_{t}\). In the example of Fig. 1, the resulting point-wise representation \(q_{t}\) is decoded using a feed-forward neural network (MLP) decoder, which maps the target location \(x_{t}\) to mean \(\mu_{t}\) and standard deviation \(\sigma_{t}\) values specifying a predictive distribution for the target features \(y_{t}\) at that location: \[p\left(y_{t}\ \left|\ x_{t},\ \left\{(x_{c},y_{c})\right\}_{c=1}^{C}\right) \approx\mathcal{N}\left(y_{t};\ \mu_{t},\sigma_{t}^{2}\right),\] where we assume uni-variate features for simplicity and \(\mathcal{N}\) denotes the normal distribution. ### _Meta-training_ Features and advantages of the model are best understood under the lens of the meta-learning framework [38]. Under this perspective, the observed data-points \(\left\{(x_{c},y_{c})\right\}_{c=1}^{C}\) form a task-specific train set and the ConvCNP model forms a meta-learning algorithm mapping the set to a trained discriminator \(\text{MLP}\circ q\) that, given query location \(x_{t}\), returns a predictive distribution \(\mathcal{N}(y_{t};\mu_{t},\sigma_{t}^{2})\). In particular, the learned functional embedding \(q\) of train set \(\left\{(x_{c},y_{c})\right\}_{c=1}^{C}\) parametrizes said discriminator \(\text{MLP}\circ q\) such that the set's data point locations \(\left\{x_{c}\right\}_{c=1}^{C}\) can be leveraged to provide well-calibrated uncertainty estimates \(\sigma_{t}\). This is unlike common learning methods which generally discard such information and, as a consequence, provide trained models that cannot recognize when queried far from the points of the train set. While it remains possible for these models to quantify the uncertainty resulting from noise present in the labels ("data uncertainty", "aleatoric uncertainty"), the uncertainty in the choice of model and the value of its parameters ("model uncertainty", "epistemic uncertainty"), in particular as it relates to the train set used to optimize parameter values, is typically unaccounted for. In contrast, ConvCNP models are shown to provide well-behaved uncertainty estimates for stationary data [27]. This results in part from translation equivariance. Indeed, this ensures the model's outputs can be computed as function of distance to the context set's data points but not as a function of the data points' coordinate values themselves. Model optimization within the meta-learning framework is carried out on a set of learning tasks (the meta-training set), each defined by a context (train) set and target (test) set pair. In particular, ConvCNP models can be trained following the maximum-likelihood objective [30] \[\max_{\theta}\sum_{(\mathcal{C},\mathcal{T})\in\mathcal{M}}\ \sum_{(x,y)\in \mathcal{T}}\log p(y\left|x,\mathcal{C};\ \theta\right), \tag{3}\] where \(\theta\) denotes the coefficient vector of the model's learnable parameters, \(\mathcal{M}=\left\{(\mathcal{C}_{m},\mathcal{T}_{m})\right\}_{m=1}^{M}\) denotes the meta-training set, \(\mathcal{C}_{m}=\left\{(x_{c},y_{c})\right\}_{c=1}^{C}\) forms a context (train) set, and \(\mathcal{T}_{m}=\left\{(x_{t},y_{t})\right\}_{t=1}^{C}\) forms a target (test) set. 
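To make (1)–(3) concrete, a short sketch of the set convolution and of the per-task training loss is given below. Array shapes are illustrative; for unit-norm inputs the squared-distance Gaussian kernel used here coincides, up to the parametrization of its precision, with the spherical Gaussian kernel introduced later in Eq. (6).

```python
import torch

def set_conv(xc, yc, xg, log_prec):
    """Functional embedding of Eq. (2), evaluated on a grid.
    xc: (C, d) context locations, yc: (C, k) context features, xg: (G, d) grid locations,
    log_prec: scalar tensor holding the learnable log-precision of the kernel."""
    prec = log_prec.exp()                            # keep the kernel precision positive
    k = torch.exp(-prec * torch.cdist(xg, xc) ** 2)  # (G, C) kernel values K(x_c, x_g)
    density = k.sum(dim=-1, keepdim=True)            # first channel: kernel density of observed locations
    interp = (k @ yc) / density.clamp_min(1e-8)      # second channel: Nadaraya-Watson interpolant
    return torch.cat([density, interp], dim=-1)      # (G, 1 + k)

def task_nll(mu, sigma, y_target):
    """Negative log-likelihood of one target set, i.e. the (negated) inner sum of Eq. (3)."""
    return -torch.distributions.Normal(mu, sigma).log_prob(y_target).sum()
```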
Additional validation and test meta-sets composed of held-out data are used to perform model selection and evaluate generalization performance. Fig. 1: Schematic block diagram of a typical ConvCNP model’s architecture. ## III Novel Model and Method In this section, we introduce the SConvCNP model and define the interpolation tasks on which the model is trained. Furthermore, we select the baselines and metrics for the purpose of evaluating the model's performance both in terms of interpolation accuracy and uncertainty calibration. ### _Interpolation Task_ When applied to the problem of HRTF interpolation, each task \((\mathcal{C},\mathcal{T})\) composing the meta-training set \(\mathcal{M}\) consists in interpolating a given subject's HRTF to specified unseen (target, test) locations given a set of observed (context, train) HRTF data points acquired from the subject. A detailed description of the specific interpolation task studied in this work follows. Consider the following time-alignment factorization of the HRTF spectrum [14, 16, 17]: \[h(x)=\left(\mathrm{e}^{\mathrm{i}\frac{2\pi\mathrm{i}}{N}\tau(x)}\right)_{n=0} ^{N/2}\odot m(x), \tag{4}\] where * \(\odot\) denotes the Hadamard (element-wise) product, * \(x\in\mathcal{S}^{2}=\left\{x\in\mathds{R}^{3}\mid\|x\|_{2}=1\right\}\) denotes the sound source direction represented in cartesian coordinates on the unit sphere, * N denotes the filter tap count of the Head-Related Impulse Response (HRIR), * \(\tau:\mathcal{S}^{2}\rightarrow[0,\infty)^{2}\) returns the pure delay values for both ears at specified location \(x\), * \(m:\mathcal{S}^{2}\rightarrow\mathds{C}^{(N/2+1)\times 2}\) returns the positive frequency side of the time-aligned HRTF spectrum for both ears at specified location \(x\), * \(h:\mathcal{S}^{2}\rightarrow\mathds{C}^{(N/2+1)\times 2}\) returns the positive frequency side of the HRTF spectrum for both ears at specified location \(x\). Under this factorization, the time-aligned spectrum is composed of the minimum-phase and nonlinear phase all-pass components of the HRTF [39]. Inspection of (4) reveals that interpolating the pure delay \(\tau\) and the time-aligned spectrum \(m\) is in principle less challenging than interpolating spectrum \(h\) directly. Indeed, the exponential factor in Equation (4) maps pure delay values on the unit sphere to complex values, which real and imaginary parts ripple on the surface of \(\mathcal{S}^{2}\) following the spatial variations of pure delay \(\tau\), at a rate proportional to normalized frequency \(n/N\). Consequently, this exponential factor significantly contributes to the irregularity of the HRTF spectrum, especially in the higher portion of the frequency range. In effect, pure delay and aligned spectrum components have been shown to require spherical harmonic representations of greatly reduced order compared to the non-processed spectra for comparable reconstruction accuracy [14, 16, 17, 40]. In this work, we employ simulated HRTFs from the HUTUBS database without changes to its coordinate system, which places the origin at the center of the subject's head [41]. We extract the time-aligned spectrum \(m\) by factoring out the pure-delay exponential term out of (4) for each data point of the HRTF set individually. In particular, the pure delay is estimated in a preliminary step as the power-weighted average of excess group delay [39]. More specifically, the weighted-average is computed using frequency bins lying within the 0 to 1.1 kHz frequency range. 
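A rough numpy sketch of this delay estimate for a single measured transfer function is shown below. The minimum-phase reconstruction via the real cepstrum and the finite-difference group delay are standard choices assumed here; the exact estimator of [39] may differ in its details.

```python
import numpy as np

def minimum_phase_spectrum(h):
    """Spectrum with the same magnitude as h but minimum phase, via the real cepstrum."""
    n = len(h)
    cep = np.fft.ifft(np.log(np.maximum(np.abs(h), 1e-12))).real
    w = np.zeros(n)
    w[0] = 1.0
    w[1:(n + 1) // 2] = 2.0
    if n % 2 == 0:
        w[n // 2] = 1.0
    return np.exp(np.fft.fft(w * cep))

def pure_delay_samples(h, fs, f_max=1100.0):
    """Power-weighted average of the excess group delay over bins below f_max, in samples.
    h: full-length complex HRTF spectrum for one ear and one direction; fs: sample rate in Hz."""
    n = len(h)
    group_delay = lambda H: -np.diff(np.unwrap(np.angle(H))) * n / (2.0 * np.pi)
    excess = group_delay(h) - group_delay(minimum_phase_spectrum(h))
    freqs = np.fft.fftfreq(n, d=1.0 / fs)[:-1]   # frequency of each group-delay bin (approximate)
    band = (freqs >= 0.0) & (freqs <= f_max)     # restrict to the 0 to 1.1 kHz range
    weights = np.abs(h[:-1][band]) ** 2          # power weighting
    return float(np.sum(weights * excess[band]) / np.sum(weights))
```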
This avoids sharp group delay jumps occurring around zeros of the HRTF spectrum in the upper frequency range [39]. When applied to the simulated HRTFs of the HUTUBS database, this approach provides pure delay values that are spatially smooth. We apply this time-alignment method to down-sampled versions of the binaural filters from 44.1 to 33.075 kHz. This reduces the HRIR tap count from \(N=256\) to \(N=192\), thereby lowering memory requirements for running the model. For brevity, we limit the experiments of this work to the interpolation of the time-aligned spectrum \(m\) and leave the comparatively less challenging problem of interpolating the pure delay \(\tau\) as future work. More specifically, we aim to interpolate the time-aligned spectrum centered around the population-mean. Accordingly, the \(i^{\text{th}}\) data point entering the composition of context or target set \(\mathcal{C},\mathcal{T}\) is given for a particular subject \(s\) by \[\left(x_{i},\ y_{i}^{(s)}\right)=\left(x_{i},\ m_{i}^{(s)}-\bar{m}_{i}\right),\] where \[\bar{m}_{i}=\frac{1}{S}\sum_{s=1}^{S}m_{i}^{(s)}, \tag{5}\] denotes the time-alined spectrum mean taken across the \(S\) subjects of the train set and \(m_{i}^{(s)}=m^{(s)}(x_{i})\) denotes the value of time-alined spectrum specific to subject \(s\) at location \(x_{i}\). Each task \((\mathcal{C},\mathcal{T})\) in the train/validate/test meta-set splits is composed using the HRIR filters from a single individual's set in the HUTUBS database [41]. In particular, the context sets \(\mathcal{C}=\left\{(x_{c},y_{c})\right\}_{c=1}^{C}\) are of varying size and comprise from zero to a hundred data points sampled on the unit sphere according to an approximately-uniform-grid layout. In practice, one such approximately-uniform grid is prepared beforehand for each possible sample count. For each generated task \((\mathcal{C},\mathcal{T})\), one of these grids is randomly drawn, thereby selecting both the number of context data point samples and their relative locations on the unit sphere. Following this, a randomly-determined three-dimensional rotation of the grid is conducted to produce the final set of sampled coordinates on the unit sphere. Finaly, the HRTF set data points closest to the coordinates of the rotated grid are elected to form the context set \(\mathcal{C}\). The remaining data points of the HRTF set are used to form the target set \(\mathcal{T}\). In order to augment the meta-train set, the uniform grid is replaced by an irregular grid with identical data point count half of the time during training. In particular, the coordinates of the irregular grid are in this case drawn independently following a uniform density across the surface of the sphere. Furthermore, the data points of the task \((\mathcal{C},\mathcal{T})\) are mirrored about the median plane half of the time. This augments the meta-train set with variants of the original subjects presenting permuted ears. Given that the simulated HRTF sets from the HUTUBS database comprise 1730 data points per subject, the approach described above provides a great number of interpolation tasks. In practice, each task \((\mathcal{C},\mathcal{T})\) is generated in real time within the train loop. This results in a meta-training set \(\mathcal{M}\) of considerable size from relatively few subjects. A summary of the HUTUBS subjects split among the meta-train, meta-validation and meta-test set is given in Table I. 
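Put into code form, the task-generation procedure above might look roughly as follows. The grid handling, the axis assumed perpendicular to the median plane, and the omission of the accompanying left/right channel swap during mirroring are simplifications of the actual pipeline.

```python
import numpy as np

def sample_task(points, m_subject, m_mean, grids, rng):
    """Draw one (context, target) interpolation task from one subject's HRTF set.
    points: (P, 3) unit-sphere directions of the HRTF grid; m_subject, m_mean: (P, F) spectra;
    grids: list of pre-built approximately-uniform grids of varying size; rng: np.random.Generator."""
    y = m_subject - m_mean                               # residual time-aligned spectrum features
    grid = grids[rng.integers(len(grids))]               # picking a grid also fixes the context size
    if rng.random() < 0.5:                               # half the time: irregular, uniform-on-sphere grid
        g = rng.normal(size=grid.shape)
        grid = g / np.linalg.norm(g, axis=1, keepdims=True)
    q, _ = np.linalg.qr(rng.normal(size=(3, 3)))         # random rotation/reflection applied to the grid
    grid = grid @ q.T
    if rng.random() < 0.5:                               # mirror the subject about the median plane
        points = points * np.array([1.0, -1.0, 1.0])     # assumes the y-axis crosses the median plane
    ctx = np.unique(np.argmax(grid @ points.T, axis=1))  # HRTF data points closest to the grid
    tgt = np.setdiff1d(np.arange(len(points)), ctx)      # remaining data points form the target set
    return (points[ctx], y[ctx]), (points[tgt], y[tgt])
```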
In this split, subjects 88 and 96 are discarded since they form duplicates of subjects 22 and 1 respectively [42]. ### _SConvCNP model_ The ConvCNP model was originally introduced with applications on planar data, such as images [27]. Accordingly, we adapt it to the spherical geometry of HRTF data and to the approximate symmetry between the left and right channels of the HRTF about the median plane. A detailed description of the resulting SConvCNP model is provided in this section. Assuming a subject's morphology is perfectly symmetric about the median plane, the right HRTF channel would be perfectly recoverable from the left, thereby reducing the effective dimensionality of the HRTF feature space by a factor of two. In practice however, subjects are only approximately symmetric. Nevertheless, allowing observed feature values from one channel to inform the values in the opposite channel at the mirrored location should facilitate HRTF interpolation. The SConvCNP ensures this by mirroring the right channel of the data points about the median plane. As shown in Fig. 2, the context set is decomposed (in the "split" block) into two channel-specific context sets at the input of the first discretized set convolution block. In particular, the coordinates perpendicular to the median plane are flipped ("flip" block) in the right channel's context set. The set convolution processes each context set in sequence with shared parameters and the two resulting tensors are concatenated along the channel dimension downstream ("concatenate" block). Fig. 2 also shows that the mirroring operation is executed a second time for the right channel, upstream of the second discretized set convolution. This recovers proper left-right filter channel pairings at the output. A significant aspect of the interpolation task described in Sec. III-A lies in the spherical geometry of the time-aligned spectrum features to be interpolated. Specifically, each data point location takes value on the unit sphere. Accordingly, specialized set convolutions adapted to this spherical geometry are implemented in the SConvCNP model, as pictured in Fig. 2. In this work, we use a spherical Gaussian kernel [36]: \[K(x_{1},x_{2})=\mathrm{e}^{-2\beta(1-x_{1}\cdot x_{2})}, \tag{6}\] where \(x_{1},\ x_{2}\in\left\{x\in\mathbb{R}^{3}\ |\ \|x\|_{2}=1\right\}\), \(\cdot\) represents the dot product and the precision parameter \(\beta\in(0,\infty)\) is learned. As pictured in Fig. 2, the first set convolution block carries out a dedicated spherical set convolution for each frequency bin, ensuring a specific precision parameter \(\beta\) is learned at each frequency. In contrast, the second set convolution block performs a single discretized set convolution operation repeatedly with a single learned precision parameter \(\beta\) shared across all channel-frequency pairs. Furthermore, the density channel at the output of the second set convolution is discarded ("discard density channel" block). To further accommodate the spherical geometry of HRTF data, we substitute planar convolutional layers in the CNN component with recently proposed spherical ones [29, 28]. Correspondingly, rotation equivariance is achieved in place of translation equivariance. We based our implementation on publicly-available code provided for spin-weighted spherical convolution [43]2. In particular, we recover Esteves' simple zonal filter convolution [28] as a special case discarding all spin directions but the null-valued one. 
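For a zonal filter, the SH-space operation reduces to scaling each degree-\(\ell\) coefficient of the input by the filter's degree-\(\ell\) coefficient. A minimal torch sketch is given below; the \(\sqrt{4\pi/(2\ell+1)}\) factor follows the usual spherical convolution theorem and should be treated as one possible normalization convention rather than the exact one used in [28, 29].

```python
import torch

def zonal_sh_convolution(f_lm, g_l):
    """Zonal spherical convolution in SH space.
    f_lm: (..., L*L) SH coefficients of the input feature maps, ordered by (l, m);
    g_l: (L,) learnable zonal filter coefficients, one per degree l."""
    L = g_l.shape[0]
    degrees = torch.cat([torch.full((2 * l + 1,), l) for l in range(L)])  # degree index of each (l, m) slot
    scale = torch.sqrt(4.0 * torch.pi / (2.0 * degrees + 1.0))
    return f_lm * (g_l[degrees] * scale)                                  # a diagonal "matrix multiplication"
```

Since the scaling is diagonal, it can equivalently be written as a matrix multiplication of the input features with a learned filter matrix, which is the form referred to next.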
In the resulting layer, the convolution operation is carried out in Spherical Harmonic (SH) space by matrix-multiplication of the input features with the layer's filter coefficients. In practice, the SH representation of the filter is interpolated directly from a few number of Fig. 2: Schematic block diagram of the SConvCNP model. Refer to Table II for tensor dimensions. learnable SH coefficients. This provides localized zonal filters while simultaneously avoiding the cost of the forward SH transform for that part of the operation [28]. In principle, the frequency dimension could be treated as an additional channel dimension. In this work, we propose to implement (single dimension) planar convolution in this dimension in order to promote the meta-learner's sample efficiency. This results in a three-dimensional hybrid planar-spherical convolution, with one axis for the frequency bins and two for the sound source direction. As reported in table II, the receptive fields has dimensions \(G\times G\times(N/2+1)\) throughout all planar-spherical convolutional layers, where \(G\) denotes the number of equiangular samples in each azimuth and elevation directions. In classical fashion, the spherical CNN component of the model is composed of residual blocks arranged in a sequence. As pictured in Fig. 3, each block follows a single-layer pre-activation architecture [44]. As common in residual architectures, a "resize" layer is positioned at the input of the spherical CNN. Firstly, this block converts the complex-valued input tensor into an equivalent float-valued tensor, by concatenating real and imaginary parts along the channel dimension. Moreover, it scales the number of channels to the specified count M used in the residual blocks of the spherical CNN. The point-wise MLP component of the SConvCNP model is also composed of single-layer pre-activation residual blocks as represented in Fig. 3. Each block is implemented using a point-wise convolution layer for sharing parameters across frequency bins. The model comprises a final layer that resizes and splits the channel dimension to provide a complex-valued predictive mean tensor \(\mu_{t}\) and an unconstrained complex-valued standard deviation tensor \(\sigma_{t}^{\prime}\). Furthoremore, this layer provides the predictive standard deviation tensor \(\sigma_{t}\) using a \(\mathrm{risen\,softplus}\) non-linearity forcing the real and imaginary parts of the unconstrained standard deviation coefficients to positive values [45, 27]: \[\sigma_{t}=\mathrm{risen\_softplus}\left(\mathrm{Re}\left(\sigma_{t}^{\prime} \right)\right)+\mathrm{i\,risen\_softplus}\left(\mathrm{Im}\left(\sigma_{t}^{ \prime}\right)\right),\] where \[\mathrm{risen\_softplus}\left(\nu\right)=\sigma_{\text{floor}}+\left(1-\sigma_ {\text{floor}}\right)\log\left(1+\mathrm{e}^{\nu}\right),\] and \(\sigma_{\text{floor}}\in\left(0,\infty\right)\) is small. 
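In code, this output head is only a few lines; the sketch below uses the \(\sigma_{\text{floor}}\) value of 1e-4 reported later for the selected model.

```python
import torch

def risen_softplus(nu, sigma_floor=1e-4):
    """Softplus raised onto a small positive floor, so predicted deviations never collapse to zero.
    (torch.nn.functional.softplus is the numerically safer equivalent of log1p(exp(.)).)"""
    return sigma_floor + (1.0 - sigma_floor) * torch.log1p(torch.exp(nu))

def predictive_std(sigma_unconstrained, sigma_floor=1e-4):
    """Apply the risen softplus to the real and imaginary parts of the unconstrained output tensor."""
    return torch.complex(
        risen_softplus(sigma_unconstrained.real, sigma_floor),
        risen_softplus(sigma_unconstrained.imag, sigma_floor),
    )
```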
This yields the following conditional probability density estimate for the target features \(y_{t}\): \[\begin{split}& p\left(y_{t}\;\left|\;x_{t},\;\left\{\left(x_{c}, y_{c}\right)\right\}_{c=1}^{C}\right)\approx\ldots\\ &\mathcal{N}\left(y_{t}^{\mathrm{Re}};\;\mu_{t}^{\mathrm{Re}},\; \mathrm{diag}\left(\sigma_{t}^{\mathrm{Re}}\right)^{2}\right)\mathcal{N}\left(y _{t}^{\mathrm{Re}};\;\mu_{t}^{\mathrm{Im}},\;\mathrm{diag}\left(\sigma_{t}^{ \mathrm{Im}}\right)^{2}\right),\end{split}\] where \(\mathcal{N}\) denotes the multivariate normal distribution, \(y_{t}^{\mathrm{Re}}=\mathrm{flatten}\left(\mathrm{Re}\left(y_{t}\right)\right)\), \(\mu_{t}^{\mathrm{Re}}=\mathrm{flatten}\left(\mathrm{Re}\left(\mu_{t}\right)\right)\), \(\sigma_{t}^{\mathrm{Re}}=\mathrm{flatten}\left(\mathrm{Re}\left(\sigma_{t} \right)\right)\), \(y_{t}^{\mathrm{Im}}=\mathrm{flatten}\left(\mathrm{Im}\left(y_{t}\right)\right)\), \(\mu_{t}^{\mathrm{Im}}=\mathrm{flatten}\left(\mathrm{Im}\left(\mu_{t}\right)\right)\), \(\sigma_{t}^{\mathrm{Im}}=\mathrm{flatten}\left(\mathrm{Im}\left(\sigma_{t} \right)\right)\), and flatten re-shapes the tensor provided as argument into a vector. ### _Interpolation Accuracy_ In this work, we compare the performance of the SConvCNP model to Gaussian process regressor, thin-plate spherical spline and barycentric interpolation baselines. Interpolation is carried out for all three methods on the SConvCNP model's input features. Similarly to [12], the thin-plate spherical spline method is implemented following Whaba [46], using second-order splines and without smoothing. Gaussian process regression is conducted on a per-frequency basis similarly to Luo et al. [13]. At each frequency, we define a covariance function for the real part and one for the imaginary part of the bin using the spherical Gaussian kernel from Equation (6). The observational noise of the model is fixed with a value of 1e-4. The remaining meta-parameters, in particular the precision parameters from the spherical Gaussian kernels, are fitted on 340 tasks from the meta-train set under the log marginal likelihood objective [32]. Meta-parameter values are maintained fixed upon evaluation on the meta-test set. In classical fashion, the barycentric interpolation baseline provides each interpolated feature \(\hat{y}\) as a convex combination of the values \(\left\{y_{i}\right\}_{i=1}^{3}\) found at the observed data points defining the smallest spherical triangle enclosing the target point location \(x\), i.e.: \[\hat{y}=\sum_{i=1}^{3}b_{i}y_{i},\] where \(b_{i}\) denotes the barycentric coordinate of the target location \(x\) associated with the \(i^{\text{th}}\) vertex of said spherical triangle. In classical fashion, the barycentric coordinates \(b_{i}\) are computed as ratios of spherical triangle areas, each computed as the sum of the spherical angles. Fig. 3: Single-layer pre-activation residual blocks used to compose the spherical CNN (left) and MLP (right) components of the SConvCNP model of Fig. 2. Refer to Table II for dimension of tensors. We also compare the SConvCNP model's HRTF magnitude interpolation performance specifically, to that of a publicly-available implementation3 of the natural-neighbors interpolation method [47, 10]. In particular, we apply this implementation directly on the HRTF spectrum after downsampling to 33.075 kHz but without any time-alignment pre-processing. 
More specifically, we run the implementation provided for the NAT-PH variant, which carries out interpolation on the magnitude and phase of the HRTF as described in [10]. Footnote 3: [https://github.com/AudioGroupColonge/SUpDEq](https://github.com/AudioGroupColonge/SUpDEq) Candidate methods are compared using common metrics computed on a per-feature basis, including the relative error (LRE) \[\text{LRE}\left(m_{f,e},\hat{m}_{f,e}\right)=20\log_{10}\left|\frac{\hat{m}_{f, e}-m_{f,e}}{m_{f,e}}\right|, \tag{7}\] and the log-magnitude distance (LMD) \[\text{LMD}\left(m_{f,e},\hat{m}_{f,e}\right)=\left|20\log_{10}\left|\frac{\hat {m}_{f,e}}{m_{f,e}}\right|\right|, \tag{8}\] where in a slight departure of notation, \(\hat{m}\) and \(m\) denote here the predicted point-wise time-aligned HRTF spectrum value and the ground truth value respectively, \(f\) indexes over the frequency bin, and \(e\) indexes over the left and right ears. For completeness, we also report log-spectral distortion (LSD) which is given in prior work as follows for a whole binaural filter [18]: \[\text{LSD}\left(m,\hat{m}\right)=\frac{1}{2}\sum_{e=1}^{2}\sqrt{\frac{1}{(N/2 +1)}\sum_{f=1}^{N/2+1}\left(20\log_{10}\left|\frac{\hat{m}_{f,e}}{m_{f,e}} \right|\right)^{2}}. \tag{9}\] ### _Uncertainty Calibration_ Several methods have been proposed for assessing a regressor's ability to gauge the uncertainty it provides alongside its point-wise predictions [48, 49, 50]. In particular, Levi et al. introduce a particular definition of uncertainty calibration according to which the model is calibrated if, in expectation over the data-generating distribution, the predicted variance it provides matches the squared error it commits upon carrying out the point-wise prediction [50]. In principle, this condition must hold across all possible values for the predicted variance. In practice, an approximate but tractable verification of this condition can be conducted for a limited number of variance values using a data set of finite size [50]. In such an approach, the resulting set of predicted variance and squared error pairs are divided into equally-sized groups forming non-overlapping contiguous interval divisions of the predicted variance axis. The expectation over the data-generating distribution is approximated within each group as the sample mean of squared error values in the group. The resulting mean squared error (MSE) values obtained for all groups are plotted as a function of the groups' respective mean predicted variance values (MPV). This allows for assessing the degree of miss-calibration. In particular, overconfident models produce a MPV versus MSE curve exceeding the identity line. Underconfident ones produce a curve lying under it. Miss-calibration can be summarized by a single-scalar mean-aggregate of the calibration error [50]. In this work we propose to use the following mean calibration distance (MCD) metric: \[\frac{1}{D}\sum_{i=1}^{D}\left|10\log_{10}\frac{\text{MSE}_{i}}{\text{MPV}_{ i}}\right|, \tag{10}\] where \(D\) denotes the number of divisions of the predicted variance axis. ## IV Results This section summarizes the meta-test set performance of a selected SConvCNP model configuration, which, among other candidates, achieved, after early stopping, near-best meta-validation set performance in both mean relative error level and mean calibration distance metrics according to (7) and (10) respectively. All candidate configurations were trained with a batch size of 8. 
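For reference, the evaluation metrics (7)–(10) used throughout this section reduce to a few lines of numpy. The grouping strategy used for the calibration distance below (sorting by predicted variance and splitting into equal-sized groups) is one straightforward reading of the procedure described above.

```python
import numpy as np

def lre_db(m, m_hat):
    """Relative error (7), per feature, in dB."""
    return 20.0 * np.log10(np.abs((m_hat - m) / m))

def lmd_db(m, m_hat):
    """Log-magnitude distance (8), per feature, in dB."""
    return np.abs(20.0 * np.log10(np.abs(m_hat) / np.abs(m)))

def lsd_db(m, m_hat):
    """Log-spectral distortion (9) for one binaural filter; m, m_hat: (n_bins, 2) complex arrays."""
    d = 20.0 * np.log10(np.abs(m_hat) / np.abs(m))
    return 0.5 * np.sqrt((d ** 2).mean(axis=0)).sum()

def mcd_db(pred_var, sq_err, num_divisions=16):
    """Mean calibration distance (10): form equal-sized groups along the sorted predicted-variance
    axis, then average the absolute log-ratio between mean squared error and mean predicted variance."""
    order = np.argsort(pred_var)
    groups = np.array_split(order, num_divisions)
    ratios = [10.0 * np.log10(sq_err[g].mean() / pred_var[g].mean()) for g in groups]
    return float(np.mean(np.abs(ratios)))
```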
Both meta-test and meta-validation sets comprised 340 tasks. The selected configuration's spherical CNN and point-wise MLP components both comprise five residual blocks with \(M=128\) channels each. The spherical convolution is implemented using a 64\(\times\)64 equiangular grid (\(G=64\)). Each planar-spherical filter is composed of 7 taps of SH representations interpolated from 16 learnable SH coefficients each [28]. The standard deviation floor \(\sigma_{\text{floor}}\) value is 1e-4 in the selected model. Fig. 4 provides an example of HRTF interpolation task using the SConvCNP candidate. In particular, this example is given for the FABIAN head and torso simulator (subject 1 of the HUTUBS dataset) and a specific draw of 20 context point locations represented in the top diagram of the figure (black markers). The diagram further marks the location of three target locations each with a distinct color code. The prediction provided by the SConvCNP model at each target location is reported in a corresponding row of the figure. The left plot of each row compares the predictive distribution to the ground truth. The right plot represents the associated log-magnitude spectra, namely point-wise estimate \(\hat{m}=\mu_{t}+\bar{m}_{t}\) and ground truth \(m=y_{t}+\bar{m}_{t}\) where \(\bar{m}_{t}\) denotes the population mean as defined in (5). The 50th percentile (median) and the 95% confidence intervals appearing in the log-magnitude plots are simulated estimates, computed from a population of samples randomly drawn according to the predictive distribution provided by the SConvCNP model. As pictured, the model's predictive distribution lies in good agreement with ground truth features \(y_{t}\). In particular, the ground truth generally falls within the 95% confidence interval (\(\pm 2\sigma_{t}\) range, grey region) around the predictive mean in all three target location cases. Moreover, the predictive mean \(\mu_{t}\) of the model (full black line) shows significant correlation with the ground truth \(y_{t}\) in both the ipsilateral direction case (green marker) and frontal direction case (blue marker). Furthermore, the model's prediction is more uncertain when the target point (ipsilateral direction, green marker) lies further away from context data points than in close vicinity (frontal direction, blue marker). This suggests the model's predictive distribution effectively captures model uncertainty. In contrast, the predictive mean is much less correlated with the ground truth in the contralateral direction case (red marker) despite the target direction being close to a context data point as indicated by the diagram at the top of Fig. 4. In particular, the predictive mean is practically agnostic above the 5-kHz mark, with a near-zero value throughout, and the standard deviation extends significantly outwards from the abscissa to capture variations in ground truth value. This is not unexpected as interpolation is a harder problem in the contralateral region, where the HRTF is spatially more intricate such that correlations between data points would occur within small distances only. Given this, the magnitude spectrum estimate \(|\hat{m}_{t}|=|\mu_{t}+\bar{m}_{t}|\) significantly undershoots the ground truth (right plot), while the transformed distribution's median (50% percentile) better predicts the power spectrum of the filter in this case (red curve). Fig. 5 depicts the solution provided by the gaussian process regressor baseline for the frontal and contralateral target directions of Fig. 
4's task. Contrary to SConvCNP, the Gaussian process' predictive distribution does not capture the variability of the ground truth features along the frequency axis in either case (blue or red curves). Crucially, the uncertainty estimates in each case are of similar value at any given frequency. This is expected since the Gaussian process' predictive uncertainty solution is a function of distance to context data points [32], which is similar for both target locations given in the example. In this baseline specifically, the modeled degree of correlation between feature values at distinct locations on the unit sphere is tuned by the precision meta-parameter of the covariance function, and is hence equal in the contralateral and ipsilateral regions. This contrasts starkly with the SConvCNP model's ability to take account of observed feature values to provide well-behaved uncertainty estimates in regions exhibiting different degrees of spatial variability. The magnitude and phase responses of time-aligned HRTF spectrum is represented on HUTUB's data point grid for the 58th frequency bin in Fig. 6. As pictured, the SConvCNP model's mean estimate \(\hat{m}_{t}=\mu_{t}+\bar{m}_{t}\) closely matches the ground truth \(m_{t}=y_{t}+\bar{m}_{t}\) both in terms of magnitude and phase (top and middle plots). More precisely, the predictive mean solution's error generally lies under the -15 dB threshold relative to ground truth outside low-magnitude areas on the unit sphere (lower left plot). Furthermore, the predictive uncertainty provided by the model seems generally consistent with the observed error (lower right plot). A sample efficiency comparison of candidate methods is provided in Fig. 7 and 9. In particular, the graphs of these figures report error scores as a function of the number of context data points provided to the interpolation method candidates. Fig. 7 includes plots for the LRE, LMD and LSD metrics as defined in (7), (8) and (9) respectively. Fig. 9 provides further detail for the LRE metric specifically. In this figure, three additional LRE plots are provided for the ipsilateral, median, and contralateral HRTF regions defined in Fig. 8. Error levels are provided in each plot of Fig. 9 Fig. 4: Time-aligned HRTF spectrum interpolation task example presenting twenty context data points sampled from subject 1 of the HUTUBS database. First row: diagram marking the locations of the context data points (black) as well as three target locations (colored). Left plots: ground truth residual time-aligned HRTF spectrum (colored) and corresponding predictive distribution (black, grey) provided by the SConvCNP for the left channel at the target locations. Right plots: corresponding log-magnitude HRTF spectrum. Fig. 5: Left car channel Gaussian Process solution at the frontal (top) and contralateral (bottom) directions for the interpolation task of Fig. 4. for three distinct frequency bands: 0-5 kHz, 5-10 kHz, and 10-15 kHz. The natural neighbor method is intentionally omitted from LRE plots of Figs 7 and 9 as this candidate can only be meaningfully compared on magnitude-error-metric grounds since it interpolates the HRTF spectrum without the time-alignement pre-processing. In both Figs. 7 and 9, each curve represents an average error score value taken across tasks, directions, left/right ear channels and, the case being, frequency bins. In particular, the average was taken for each reported count over a set of 340 randomly drawn meta-test tasks. 
As expected, all candidates exhibit monotonically decreasing error scores with increased sample count. This observation holds for all error metrics considered in Fig. 7 and for all regions and frequency intervals considered in Fig. 9. Moreover, the SConvCNP model presents significantly lower relative error level compared to the thin-plate spherical spline method, which forms the best baseline: up to 3 dB globally (top left plot of Fig. 7) and up to 4.5 dB in the 0-5 kHz range (contalateral region, bottom right plot of Fig. 9). This improvement translates to nearly a halving of required measurement count to meet an error specification level. For example, meeting an -20 dB average relative error requires approximately 50 measurements using the thin-plate spherical spline method while approximately 28 is sufficient on average using the SConvCNP model. Similar observations can be made in the case of both the LMD and LSD metrics pictured in Fig. 7. Fig. 10 provides a summary of log-magnitude distance level as a function of frequency in the specific case of context sets numbering 40 context data points. The three plots of the figure detail the error levels specific to each region defined in Fig. 8. The SConvCNP candidate significantly outperforms all baselines in the 0-14 kHz across all regions, except in the contralateral region (top-right plot) where the natural neighbour matches and then outperforms the proposed model from the 7.5 kHz mark onwards. In agreement with the results of Fig. 9, the improvements brought by the SConvCNP model are most significant beyond the 6 kHz mark in the frontal and ipsilateral region, while it is most significant under 6 kHz in the contralateral region. In particular, the SConvCNP model provides an improvement of up to 0.8 dB compared to the best baseline at any frequency, as found in the ipsilateral region around the 9.2 kHz mark. Miscalibration of the trained SConvCNP model is summarized in Fig. 11. In this figure, the calibration of the trained SConvCNP model is evaluated over a meta-test set of 340 randomly generated tasks using \(D=16\) divisions. In particular, the predicted variance and squared error pairs observed at the output the model are pooled across interpolation tasks, data point locations, frequency bins, left/right channels, and Fig. 8: Definition of HRTF regions used in generating the plots of Fig. 9 and 10. The boundaries separating the regions lies at \(\pm 18.1^{\circ}\) lateral angle from the median plane, which distributes the HRTF directions of the HUTUBS grid in approximately equal proportions amongst the three specified regions. Fig. 6: Left ear channel time-aligned HRTF spectrum as a function of sound source direction \(x_{t}\) for the interpolation task of Fig. 4. Plots are provided for the 58th frequency bin of the spectrum. Top and middle: log-magnitude and phase for ground truth \(m_{t}=y_{t}+\tilde{m_{t}}\) (left) and SConvCNP model’s predictive mean \(\tilde{m_{t}}=\mu_{t}+\tilde{m_{t}}\) (right) of the time-aligned HRTF spectrum. Bottom left: relative error committed by the SConvCNP model. Bottom right: SConvCNP predictive uncertainty relative to ground truth magnitude in dechels. Fig. 7: Average time-aligned HRTF spectrum interpolation error across output features in the 0-15.5 kHz range and meta-test set’s interpolation tasks as a function of context data point count. The proposed method (SConvCNP) improves upon all baselines on all three evaluation metrics. Upper-left: relative error level according to (7). 
As pictured in the left plot of Fig. 11, the mean predicted variance closely matches the mean square error. In particular, the resulting curve lies neither significantly above nor below the identity line. Hence, the trained SConvCNP model is neither markedly over-confident nor under-confident in its predictions. More precisely, the rightmost plot reveals that the effective squared error lies, in expectation and for all but the most uncertain predictions, within 1.0 dB of the predicted variance. This level of miscalibration is moderate when contrasted with the \(\sim\)9 dB relative error reduction achieved when the number of acquired data points grows from 5 to 40, as pictured in the top-left plot of Fig. 7. Accordingly, we conclude that the model's uncertainty estimates \(\sigma_{t}\) can usefully inform the problem of acquiring additional HRTF data points to improve HRTF individualization upon a pre-existing HRTF estimate. ## V Conclusion In this work we introduced the first HRTF interpolation method providing well-calibrated uncertainty estimates. We showed that the method is sample efficient on the time-aligned HRTF spectrum interpolation task. In particular, meta-training was carried out successfully using a modest data set of 85 subjects. Furthermore, the interpolators returned by the proposed meta-learning model were shown to require nearly half as many context data points as state-of-the-art interpolation methods at a comparable accuracy level. Contrary to the Gaussian process regression baseline, they also provided well-calibrated uncertainty estimates. The proposed model's time and space complexity severely limits its applicability to real-time interpolation and audio rendering setups. However, it can readily be used for offline up-sampling of sparse HRTF sets. Furthermore, a promising application lies in facilitating the sequential decision problem of acquiring as few correcting HRTF data points as needed to achieve a required degree of HRTF individualization accuracy. In particular, the provided uncertainty estimates could be used to identify the location at which obtaining a new measurement would, in expectation, maximally reduce the model's uncertainty. Furthermore, the predictive distribution could be used to compare the probability of HRTF query candidates conditioned on the data points already acquired for the subject, so as to select the most relevant ones to be submitted for perceptual feedback evaluation by the subject. Treatment of such a sequential decision problem is left as future research work. Other future development avenues include evaluation of the model's ability to correct HRTF estimates provided by state-of-the-art parametric individualization methods instead of the train-set population mean used under the limited scope of this work. Finally, we intend to prepare and provide publicly available versions of the model and code.
2308.09622
Is context all you need? Scaling Neural Sign Language Translation to Large Domains of Discourse
Sign Language Translation (SLT) is a challenging task that aims to generate spoken language sentences from sign language videos, both of which have different grammar and word/gloss order. From a Neural Machine Translation (NMT) perspective, the straightforward way of training translation models is to use sign language phrase-spoken language sentence pairs. However, human interpreters heavily rely on the context to understand the conveyed information, especially for sign language interpretation, where the vocabulary size may be significantly smaller than their spoken language equivalent. Taking direct inspiration from how humans translate, we propose a novel multi-modal transformer architecture that tackles the translation task in a context-aware manner, as a human would. We use the context from previous sequences and confident predictions to disambiguate weaker visual cues. To achieve this we use complementary transformer encoders, namely: (1) A Video Encoder, that captures the low-level video features at the frame-level, (2) A Spotting Encoder, that models the recognized sign glosses in the video, and (3) A Context Encoder, which captures the context of the preceding sign sequences. We combine the information coming from these encoders in a final transformer decoder to generate spoken language translations. We evaluate our approach on the recently published large-scale BOBSL dataset, which contains ~1.2M sequences, and on the SRF dataset, which was part of the WMT-SLT 2022 challenge. We report significant improvements on state-of-the-art translation performance using contextual information, nearly doubling the reported BLEU-4 scores of baseline approaches.
Ozge Mercanoglu Sincan, Necati Cihan Camgoz, Richard Bowden
2023-08-18T15:27:22Z
http://arxiv.org/abs/2308.09622v1
# Is context all you need? Scaling Neural Sign Language Translation to Large Domains of Discourse ###### Abstract Sign Language Translation (SLT) is a challenging task that aims to generate spoken language sentences from sign language videos, both of which have different grammar and word/gloss order. From a Neural Machine Translation (NMT) perspective, the straightforward way of training translation models is to use sign language phrase-spoken language sentence pairs. However, human interpreters heavily rely on the context to understand the conveyed information, especially for sign language interpretation, where the vocabulary size may be significantly smaller than their spoken language equivalent. Taking direct inspiration from how humans translate, we propose a novel multi-modal transformer architecture that tackles the translation task in a context-aware manner, as a human would. We use the context from previous sequences and confident predictions to disambiguate weaker visual cues. To achieve this we use complementary transformer encoders, namely: (1) A Video Encoder, that captures the low-level video features at the frame-level, (2) A Spotting Encoder, that models the recognized sign glosses in the video, and (3) A Context Encoder, which captures the context of the preceding sign sequences. We combine the information coming from these encoders in a final transformer decoder to generate spoken language translations. We evaluate our approach on the recently published large-scale BOBSL dataset, which contains \(\sim\)1.2M sequences, and on the SRF dataset, which was part of the WMT-SLT 2022 challenge. We report significant improvements on state-of-the-art translation performance using contextual information, nearly doubling the reported BLEU-4 scores of baseline approaches. ## 1 Introduction Sign languages are visual languages and the primary languages of Deaf communities. They are languages in their own right, as rich as any spoken language, and can vary considerably between countries, with strong dialect differences within a country [16]. They have their own lexicons and grammatical constructs, thus converting between sign and spoken language is a translation problem. Sign Language Recognition (SLR) [44] and Sign Language Translation (SLT) are active research areas within computer vision [5, 10, 54, 57]. While SLR focuses on the recognition of signs within a video, SLT aims to generate a meaningful spoken language interpretation of a given signed phrase, or vice versa. In our work, we focus on the former direction, namely translating continuous sign videos into spoken language sentences. Automatic SLT is a challenging problem for a number of reasons. Firstly, as stated, sign languages have their own grammar; they are not translated simply word-by-word by replacing words with signs [49]. Secondly, sign languages contain many channels, i.e. hand articulation, facial expression, and body posture are all used in combination, and their use can vary depending on context. For example, the hand shape of a sign may change depending on the context. A good example of this would be the verb '_to give_'. The verb '_give_' is directional and the direction of the motion is subject to the placement of objects and the use of space in front of the signer. But the hand shape can also change depending on the type of object being given. Figure 1: An overview of the proposed multi-modal sign language translation architecture.
Thirdly, motions can be subtle or fast, and this leads to motion blur. Finally, many signs can also look very similar. All of these factors make it difficult to recognize the sign that is being performed without the context in which it is used. Human interpretation or translation of sign languages heavily relies on context, as it is fundamental to all language understanding. Consider the use of homophones in spoken language. An active listener has no issues in the disambiguation of homophones despite the fact there are no auditory cues to help. This is because we are able to use the context to disambiguate the meaning of the homophone. However, much of the SLT work to date has neglected such context, focusing largely on sentence pairs. In fact, most machine translation datasets shuffle the order of sentences, making it impossible to utilize the context from the previous sentences. In this work, we propose a novel sign language translation architecture that incorporates important contextual information. It combines weak visual cues from a 3D convolutional backbone with strong cues from the context and sparse sign spottings. An overview of the approach can be seen in Figure 1. We evaluate our approach on the largest available sign language dataset, BOBSL [3], which covers a wide domain of various topics. We obtain significant performance improvements by incorporating context and automatic spottings (1.27 vs. 2.88 in BLEU-4). We also evaluate our approach on the WMT-SLT 2022 challenge data, specifically the SRF partition, and surpass the reported performance of all challenge participants. The contributions of this paper can be summarized as: * We propose a novel multi-modal transformer network that incorporates the context of the prior information and automatic spottings. * We conduct extensive experiments to examine the effects of different approaches to capturing context. * Our approach achieves state-of-the-art translation performance on two datasets, namely BOBSL, the largest publicly available sign language translation dataset, and the WMT-SLT 2022 challenge data. The remainder of the paper is organized as follows: In Section 2, we summarize the related work. In Section 3, we describe our proposed sign language translation network. In Section 4, we provide information about the datasets we use and provide model training details. Section 5 presents the experimental results of the proposed method and we conclude the paper in Section 6. ## 2 Related Work **Sign Language Recognition (SLR)** has seen consistent research effort from the computer vision community for decades [12, 48, 56]. The advances in models and techniques, as well as the release of recent isolated [3, 22, 30, 46] and continuous [23, 28] SLR datasets, have led to significant improvements in the accuracy and robustness of sign language recognition systems. SLR can be grouped into two sub-problems: isolated and continuous SLR. While isolated SLR videos contain only a single sign, continuous SLR videos contain multiple sign sequences. After the emergence of 2D convolutional neural networks (CNNs), 2D CNNs were quickly applied to model the visual appearance in SLR [35, 39, 40, 46]. Sequence models such as the recurrent neural network (RNN) [40], long short-term memory (LSTM) [46], and hidden Markov model (HMM) [51] have all been used to encode temporal information.
Following 2D CNNs, 3D CNNs were developed and have achieved state-of-the-art performance on a wide range of computer vision tasks, including sign language recognition [2, 22, 26, 30]. In addition to images, researchers have also used other input modalities for SLR, such as depth, skeleton, optical flow, and motion history image (MHI), to improve recognition accuracy [21, 25, 35, 46, 47]. Some studies also introduced the use of different cues such as cropped hands and faces [11, 18], or an attention mechanism [13, 47], to obtain better discriminative features. These advances in the field of isolated SLR have also been applied to continuous SLR. Since continuous SLR videos contain multiple co-articulated signs, it is a more challenging problem. The explicit alignment between the video sequence and the gloss sequence generally does not exist. In order to tackle this problem, Connectionist Temporal Classification (CTC) [17] is widely used [11, 20, 43, 59]. **Sign Language Translation (SLT)** is still in its infancy due to the lack of large-scale sign language translation datasets. While machine translation datasets for spoken languages contain many millions of sentence pairs, such as 22.5M for English-French and 4.5M for English-German pairs (WMT shared tasks [4]), the first public SLT dataset PHOENIX14-T [5], which was released in 2018, had only 8K sentences and its domain of discourse was limited to weather forecasts. The authors handled SLT as a Neural Machine Translation (NMT) problem and proposed the first end-to-end SLT model by combining CNNs with an attention-based encoder-decoder network built on RNNs. One of the most significant advances in NMT was the introduction of the Transformer network by Vaswani et al. [52] for the sequence transduction problem, which is based solely on attention mechanisms and dispenses with recurrence. Camgoz et al. [7] applied the transformer architecture to the sign language translation problem. In recent years, transformers have become popular in SLT [15, 54, 55, 57]. Some studies tackle SLT with a two-stage approach, i.e., in the first part glosses are recognized from sign videos (Sign2Gloss), and then glosses are mapped into a spoken language sentence (Gloss2Text) [5, 55]. On the other hand, some studies deal with an end-to-end solution that predicts the spoken language sentence from sign video inputs [7, 57]. Zhou et al. [57] proposed a two-stage approach, but unlike others, their approach is based on back-translation. They convert spoken language text to sign sequences with both text-to-gloss and gloss-to-sign steps to generate synthetic data. They used the synthetic samples as additional data and trained an end-to-end SLT method based on the transformer. Zhou et al. [58] and Camgoz et al. [6] also utilized multiple cues for the SLT task, such as hands and face. To the best of our knowledge, and perhaps surprisingly, using context has not been exploited in the literature. However, Papastratis et al. [36] did use the previous sentence to initialize the hidden state of a BLSTM for predictions of the next video sequence to improve recognition accuracy in continuous SLR. They obtain slightly better results when the context-aware gloss predictions are fed into the transformer for SLT. **Datasets:** PHOENIX14-T [5] became the most commonly used dataset in the literature. The performance on this dataset is generally satisfactory enough to provide a usable translation, e.g., Chen et al. [10] obtained 28.39 in terms of BLEU-4 score.
However, due to its limited domain of discourse, models trained on PHOENIX14-T have little real-world applicability. To address this, researchers released several datasets in recent years [3, 8, 33]. The largest to date is BOBSL [3], a broadcast interpretation-based large-scale British Sign Language (BSL) dataset. Their SLT baseline is based on the transformer network and obtains only 1.0 in terms of BLEU-4. Recently, Swiss German Sign Language (DSGS) broadcast datasets were introduced in the first SLT-WMT shared task [33], where all the submissions scored under 0.56 in terms of BLEU-4. Yin et. al [54] collect the first multi-lingual dataset for multiple sign language translations and proposed the first multi-lingual SLT model. Although significant progress has been made in the area of SLT, there is still room for further improvement. ## 3 Method Most sign translation datasets and especially those based on broadcast interpretation [3, 5, 33], contain a set of consecutive sign phrase videos \((V_{1},...,V_{M})\) and spoken language sentences \((S_{1},...,S_{N})\). In some datasets, such as Phoenix2014T [5], sign phrase videos and their spoken language translations are paired and the order of the pairs are shuffled and distributed between training and evaluation sets. Unfortunately, this destroys the context of the sentence. Datasets like BOBSL [3] release the video and sentence sets with only weak alignment. Although this is generally regarded as a weakness, making subsequent learning from the data more challenging, it has a fundamental advantage that we make use of in this work: it allows the use of context to improve the translation. Given an input video \(V=(x_{1},x_{2},...,x_{T})\) with \(T\) frames, the aim of a sign language translation is to learn the conditional probability \(p(S|V)\) in order to generate a spoken language sequence \(S=(w_{1},w_{2},...,w_{U})\) with \(U\) words. We propose to take advantage of the contextual information that comes from the preceding context, \(S_{C}=(S_{n-1},S_{n-2},S_{n-3},...)\). We also make use of sparse sign spottings, \(Sp=(g_{1},...,g_{K})\), automatically recognized from the current video \(V\) using a state-of-the-art model. Thus, we extend the classical translation formalization to one of learning the conditional probability \(p(S|V,S_{C},Sp)\). This conditioning allows weak and ambiguous visual cues in \(V\) to be disambiguated based on context. Our translation network is based on a transformer architecture and contains three separate encoders, \(E_{v},E_{c},E_{s}\) for each of the different input cues, i.e., video, context, and spottings, and a multimodal decoder, \(D\), which learns the mapping between all input source representations and the target spoken language sentence. A detailed overview of our model is shown in Figure 2. ### Embedding Layers Following the classic neural machine translation methods, we first project source and target sequences to a dense continuous space via embedding layers. In order to represent video sequences, we utilize pretrained CNNs. For linguistic concepts that originate from written text in the form of the preceding and target spoken language sentences and spotted sign glosses, we use word embedding layers. **Sign Embedding:** To convert a given video, \(V\), to its feature representation, we use the I3D model [9] as a backbone due to its recent success in sign recognition tasks. We first divide the videos into smaller video clips, \(c_{t}=(x_{t},...,x_{t+L-1})\) of size \(L\). 
In our experiments we use a window size of \(L=16\) to obtain the sign video embedding: \[f_{t}=\mathrm{SignEmbedding}(c_{t}) \tag{1}\] We stride \(\mathrm{SignEmbedding}\) over the full video \(V\) with a step size of \(4\), thus yielding a final feature set \(f_{1:\frac{T-L}{4}+1}\). We considered two types of features as the output of our sign embedding layer, namely a) a 1024-dimensional representation extracted from the last layer before classification, and b) \(C\)-dimensional class probabilities after the softmax activation function. We conduct experiments using both of these feature representations in our translation pipeline and compare their performance. **Feature Embedding:** To avoid biases caused by dimensionality, we project the extracted feature representations into a denser space of the same size using a linear layer. We also employ layer normalization to transform them to be of the same scale. This feature embedding operation can be formalized as: \[\hat{f}_{t}=\mathrm{FeatureEmbedding}(f_{t}) \tag{2}\] **Word Embedding:** We first tokenize our spoken language sentences using a pretrained BERT model [14]. More specifically, we employ _BERT-base-cased_ and _BERT-base-german-cased_ from the Hugging Face Transformers library [53], which uses WordPiece tokenization. The word embedding layer is shared between all the text input cues, such as spottings, context sentences, and shifted target sentences. **Positional Encoding:** In order to provide sequential order information to our networks, we use the standard positional encoding method as proposed in [52], in the form of shifted sine and cosine waves. This is added after the feature and word embedding layers. This positional encoding can be formalized as: \[\bar{f}_{t}=\mathrm{PositionalEncoding}(\hat{f}_{t}) \tag{3}\] ### Translation Network After the embedding layers, positionally encoded features and word vectors are sent to the transformer encoders. Our encoders have a stack of two identical layers, each of which has a multi-headed self-attention and a fully connected feed-forward layer. Each of these two sub-layers is followed by a residual connection and layer normalization. **Video-Encoder:** The video encoder network, \(E_{v}\), takes the positionally encoded feature vectors \(\bar{f}_{1:\frac{T-L}{4}+1}\) that come from the feature embedding layer and produces a spatio-temporal representation \(h^{v}_{1:\frac{T-L}{4}+1}\) that captures the motion and content of the video. **Context-Encoder:** The context encoder, \(E_{c}\), takes the positionally encoded word embeddings of the preceding context, \(S_{C}\), and produces representations, \(h^{c}\), which capture the context in which the currently signed phrase is performed. **Spotting-Encoder:** The spotting encoder network, \(E_{s}\), takes positionally encoded spotting embeddings \(Sp\) and produces representations \(h^{s}\) that correspond to confident but sparse sign detections that have been spotted in the current video, \(V\), that we are attempting to translate. See Section 4.2 for details of the sign spotting technique [32] we used in our experiments. Figure 2: A detailed overview of the proposed multi-modal sign language translation architecture. **Decoder:** After encoding each input modality, the output of the encoder layers \(h^{c},h^{v},h^{s}\), and the positionally encoded and shifted spoken sentence are then sent to the transformer decoder \(D\).
We extend the classical transformer decoder architecture [52] by introducing several encoder-decoder attention layers, which combine and enrich the representations coming from complementary cues of information. The network flow can be formalized as: \[\begin{array}{l}h^{c}=\mathrm{ContextEncoder}(S_{C})\\ h^{v}=\mathrm{VideoEncoder}(V)\\ h^{s}=\mathrm{SpottingEncoder}(Sp)\\ S^{*}=\mathrm{Decoder}(h^{c},h^{v},h^{s},S^{\prime})\end{array} \tag{4}\] where \(S^{\prime}\) and \(S^{*}\) correspond to the shifted and predicted target sentences, respectively. In words, firstly, the word embeddings extracted from the shifted spoken language embedding \(S^{\prime}\) are passed to the masked self-attention layer. Then, our first encoder-decoder attention layer takes outputs of the masked self-attention and context encoder, \(h^{c}\). The output of the context encoder-decoder attention is sent to the video encoder-decoder attention to be used as a query, while the key and the value come from the video encoder, \(h^{v}\). In a similar way, the spotting encoder-decoder attention performs attention operations over \(h^{s}\) and the previous layer. Finally, the last representation of the transformer decoder is projected to the space of the target vocabulary using a linear layer to predict the target spoken language sentence \(S^{*}\), one word at a time. We train our network using cross-entropy loss as proposed in [52], by comparing the predicted target sentence \(S^{*}\) against the ground truth sentence \(S\) at the word level. ## 4 Dataset and Implementation Details ### Datasets **SRF** is a Swiss German Sign Language (DSGS) dataset that was recently released for the WMT-SLT 2022 challenge [33] as one of the training corpora. It contains daily news and weather forecast broadcast. It includes 16 hours of sign footage, divided into 29 episodes, performed by three signers. In total 7,071 subtitles were manually aligned by Deaf annotators. Separate development and test sets were provided in the WMT-SLT. We use the SRF dataset for training and used the official development and test sets for the evaluation of the model to be able to compare our approach against the methods presented in the challenge. **BOBSL**[3] is a large-scale British Sign Language (BSL) dataset that consists of BSL-interpreted BBC broadcast footage covering a wide range of topics. The dataset has an approximate duration of 1,400 hours and contains around 1.2M sentences. While the training and validation set's subtitles are audio-aligned, the test data is manually aligned and contains 20,870 sentences with a vocabulary size of 13,641. ### Sign Spotter Momeni et al. [32] released automatically extracted dense annotations for the BOBSL dataset. We use these annotations as the spotting input on the BOBSL experiments. The key idea is that a set of video clips with a particular sign must have a correlation at the time when the sign is performed. Taking inspiration from [32], we create similar automatic dense annotations for the SRF dataset by correlating the I3D features and examplar subtitles. To do this, firstly we lemmatize and lowercase each word in the subtitle sentences and extract a vocabulary list. German language has compound words by concatenating two words. In order to reduce the number of singletons in the vocabulary list, we use the compound-split library [1]. Then, for each word \(w\), we take a reference video clip \(V_{0}\) that contains \(w\) in its subtitle sentence. 
We choose \(N=9\) random positive video examples \(V_{1},V_{2},..,V_{N}\) that contain the word \(w\) in their subtitles, and \(3N\) negative video examples that do not contain \(w\), to avoid annotating non-lexical signs. We compute the cosine similarities between the reference and exemplar video features. We apply a voting scheme among the videos with cosine similarity above 0.5 to find the location of the given word in the reference video. ### Implementation Details **Sign Embeddings:** For full-body video inputs, we pre-train two different I3D models, which we call BSL-I3D and DGS-I3D, on two different sign language datasets, namely BOBSL [3] and MeineDGS [29]. While training the BSL-I3D model we use the annotations released with the dataset [3], which has a vocabulary size of 2,281. For MeineDGS we use the linguistic annotation available with the dataset. In order to obtain a similar-size vocabulary of 2,301 classes, we choose classes that have more than 12 occurrences. We resize the input images to \(224\times 224\) and follow the training instructions of [3] with some small modifications; we use the Swish activation function instead of ReLU and change the learning rate scheduler to reduce on a plateau. We also use label smoothing of 0.1 in order to help reduce overfitting. **Training and Network Details:** Our model is implemented using PyTorch [38]. We use the Adam [27] optimizer with an initial learning rate of \(3\times 10^{-4}\) (\(\beta_{1}=0.9\), \(\beta_{2}=0.999\), \(\epsilon=10^{-8}\)) with a batch size of 16 on SRF, and a learning rate of \(6\times 10^{-4}\) with a batch size of 64 on the BOBSL dataset. We reduce the learning rate by a factor of 0.7 if the BLEU-4 score does not increase for five epochs. This step continues until the learning rate drops below \(10^{-5}\). For the transformer encoders and the decoder, we use two layers with 8 heads. We conduct an ablation study to choose the size of the hidden layers and the feed-forward layers (in Section 5.1). We choose 512 and 1024, respectively. We use 0.1 for the dropout rate. During training, we use a greedy search to evaluate translation performance on the development set. At inference, we evaluated both a greedy search and a beam search (decoding size of 2 and 3) for our video-to-text approach. However, we did not observe a significant improvement in scores. Therefore, we provide greedy search performances on both the validation and test sets. **Metrics:** We use BLEU-1, BLEU-4 [37], ROUGE [31], and CHRF [41] scores, which are commonly used metrics for machine translation, to evaluate the performance of our model. For ROUGE, we use the ROUGE-L F1 score; for BLEU, we use the sacreBLEU [42] implementation. ## 5 Experiment Results We run our experiments in an end-to-end manner on two recently released sign language datasets, namely the SRF partition of WMT-SLT [33] and BOBSL [3], which is the largest sign language dataset available. For each dataset, we train baseline models that have one encoder and one decoder, and take only one input source, i.e., the preceding context (using the preceding spoken sentence or preceding spottings), the current spottings, or the video. We name our single-modality models _Context-to-Text_, _Spot-to-Text_, and _Video-to-Text_, respectively. Then, we investigate the impact of integrating context information into the _Spot-to-Text_ or _Video-to-Text_ approaches by adding a context encoder and using a dual-mode transformer decoder with the related encoder-decoder attention layers.
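To make the way the decoder consumes multiple encoder memories concrete, the following PyTorch-style sketch follows the network flow of Equation (4): masked self-attention over the shifted target sentence, then successive encoder-decoder attention over the context, video and spotting representations. The layer sizes (512 hidden units, 1024 feed-forward units, 8 heads, 0.1 dropout) are taken from the training details above, but the exact placement of residual connections and normalization is an assumption and not the authors' released implementation.

```python
import torch.nn as nn

class MultiSourceDecoderLayer(nn.Module):
    """One decoder layer attending to context, video and spotting memories in turn."""
    def __init__(self, d_model=512, n_heads=8, d_ff=1024, dropout=0.1):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(d_model, n_heads, dropout, batch_first=True)
        self.cross = nn.ModuleList(
            nn.MultiheadAttention(d_model, n_heads, dropout, batch_first=True) for _ in range(3)
        )
        self.ff = nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(), nn.Linear(d_ff, d_model))
        self.norms = nn.ModuleList(nn.LayerNorm(d_model) for _ in range(5))

    def forward(self, tgt, h_context, h_video, h_spot, tgt_mask=None):
        # Masked self-attention over the shifted target sentence.
        x = self.norms[0](tgt + self.self_attn(tgt, tgt, tgt, attn_mask=tgt_mask)[0])
        # Encoder-decoder attention applied successively: context, then video, then spottings.
        for i, memory in enumerate((h_context, h_video, h_spot)):
            x = self.norms[i + 1](x + self.cross[i](x, memory, memory)[0])
        return self.norms[4](x + self.ff(x))
```

Stacking two such layers and projecting the final representation onto the target vocabulary with a linear layer recovers the overall decoder \(D\) described in Section 3.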
Finally, we investigate using all sources simultaneously to gain more information. We combine all three sources using three separate encoders and a decoder. We name our final model as _Context+Video+Spot-to-Text_. ### Experiments on SRF partition of WMT-SLT **Video-to-Text:** We evaluate our _Video-to-Text_ model which takes only the video source and tries to generate spoken language in an end-to-end manner. First, we conduct ablations studies on the SRF partition of the WMT-SLT dataset using different types of input channels for the _Video-to-Text_ model. We run our experiments with different numbers of hidden size (HS) and feed-forward (FF) units, with \(64\times 128\), \(128\times 256\), \(256\times 512\), \(512\times 1024\), \(512\times 2048\). We obtain similar results with \(512\times 1024\) and \(512\times 2048\), where \(512\times 1024\) is slightly better. Therefore, for the rest of our experiments, we use \(512\times 1024\) parameters for HS\(\times\)FF. Table 1 shows our ablation experiments against the baseline [34] on the WMT-SLT development set. We repeat each experiment 3 times and report the mean and standard deviation of scores. All our experiments outperform the baseline. We do not observe any significant difference between the BSL or DGS-pretrained I3D model on the WMT-SLT. Our best score, obtained using BSL-I3D features, was 1.51 in terms of BLEU-4. On the other hand, class probabilities obtain lower BLEU scores than feature embeddings. Therefore we use BSL-I3D features going forward for our video encoder. Table 2 shows the comparison of our approach against the participants of the WMT-SLT shared task [33]. All approaches are based on Transformer architectures [52]. Similar to our _Video-to-Text_, MSMUNICH [15] also uses an I3D model for feature extraction and obtained the highest score of 0.56 in BLEU-4. While they use an I3D model pretrained on BSL-1K [2], we pretrained our I3D on the BOBSL [3] which provides better feature representation and a slight improvement. **Context-to-Text:** Here, we are testing how well a network can guess the content of a sentence given the context of the preceding sentence. To do this, we need ordered data. Although the development and test data of the SRF partition of WMT-SLT consists of segments extracted from several episodes, the segments contain consecutive numbers for each episode. Therefore, we used sorted segments to evaluate our _Context-to-Text_ model. As can be seen in Table 2, _Context-to-Text_, which takes only the previous sentence as a source, performs worse than our _Video-to-Text_. However, its BLEU-4 performance is still superior, and CHRF performance is competitive to the literature, which verifies that contextual information provides important cues for translation tasks. **Context+Video-to-Text:** Next, we combine context and video sources by including a context encoder, a video en \begin{table} \begin{tabular}{l c c c c} \hline \hline & **Size** & **BLEU-1** & **BLEU-4** & **CHRF** \\ \hline Baseline [34] & & - & 0.58 & - \\ \hline BSL-P & 2281 & \(14.26\pm 0.47\) & \(1.01\pm 0.2\) & \(17.0\pm 0.17\) \\ DGS-P & 2301 & \(14.6\pm 0.55\) & \(1.03\pm 0.08\) & \(17.03\pm 0.47\) \\ BSL-F & 1024 & \(15.86\pm 0.2\) & \(1.23\pm 0.25\) & \(17.27\pm 0.15\) \\ DGS-F & 1024 & \(15.14\pm 0.44\) & \(1.17\pm 0.08\) & \(17.13\pm 0.12\) \\ \hline \hline \end{tabular} \end{table} Table 1: Evaluation of different features for SLT on WMT-SLT development set. 
BSL-F: BSL-I3D features, DGS-F: DGS-I3D features, BSL-P: BSL-I3D class probabilities, DGS-P: DGS-I3D class probabilities. \begin{table} \begin{tabular}{l|c c c c} \hline \hline & **BLEU-1** & **BLEU-4** & **CHRF** & **ROUGE** \\ \hline MSMUNICH [15] & - & 0.56 & 17.4 & - \\ SLT-UPC [50] & - & 0.5 & 12.3 & - \\ SLTATIC [45] & - & 0.25 & 19.2 & - \\ Baseline [33] & - & 0.12 & 5.5 & - \\ DFKI-MLT [19] & - & 0.11 & 6.8 & - \\ NJUPT-MTT & - & 0.10 & 14.6 & - \\ DFKI-SLT [24] & - & 0.08 & 18.2 & - \\ \hline Ours & & & & \\ - Video-to-Text & 14.43 & 0.81 & 18.18 & 5.60 \\ - Context-to-Text & 12.80 & 0.69 & 14.48 & 3.73 \\ - Context+Video-to-Text & 14.33 & 1.00 & 18.12 & 6.00 \\ \hline \hline \multicolumn{2}{l}{ Spot-to-Text} & 22.11 & 1.87 & 22.23 & 11.17 \\ - Context+Video+Spot-to-Text & 31.36 & 3.93 & 24.69 & 17.65 \\ \hline \hline \end{tabular} \end{table} Table 2: Comparison with the literature on the full WMT-SLT test set. coder, and a decoder, which we call _Context+Video-to-Text_. Incorporating context information besides video features improved our translation results as we expected. **Spot-to-Text:** In the literature, ground truth sign glosses are used to train a text-to-text translation model to create an upper bound for end-to-end translation [5]. Motivated by this, we created spottings as described in 4.2 using our BSL-I3D model. The trained _Spot-to-Text_ model achieves significantly better translation performance compared to other single-modality architectures. **Context+Video+Spot-to-Text:** Finally, we integrate automatically created spottings as input to the spotting encoder. The performance gain is significant when compared to _Context+Video-to-Text_ and _Spot-to-Text_, showing the benefits of the incorporation of complementary information cues. However, this result should be taken as an upper bound on performance as the spotting approach requires a prior over the spoken word occurrence. This artificially inflates the performance but as can be seen, the potential benefits of accurate spotting on translation are significant. ### Experiments on BOBSL **Context-to-Text:** We evaluate training _Context-to-Text_ with two different types of data on the BOBSL; a) preceding sentences and b) preceding spottings. As can be seen in Table 3, using only the preceding text data leads to poor translation. Firstly, we experiment with increasing the context by providing more preceding text. While using more sentences provides a slight improvement in terms of the BLEU-4 and CHRF scores on the validation set, it did not help in the other scores or on the test set. In the experiments with preceding spottings, we experiment with different numbers of spottings. We use the spottings from up to 3 previous sentences since we do not see a significant improvement when we include more prior sentences in the previous experiments. We obtain better results when we use the spottings of 3 prior sentences, but limit the maximum number of spottings to just 10. **Spot-to-Text:** We utilized the sign spottings [32] of the BOBSL to evaluate our _Spot-to-Text_ model. We train our model using all automatic annotations without any thresholding, which obtains 21.63 and 2.21 for BLEU-1 and BLEU-4 on the test set, respectively. **Video-to-Text:** In [3], the authors provide an SLT baseline that is trained on a subset of the BOBSL training set. They created their new training set for sign language translation by filtering the sentences that contain high-confidence automatic spottings. 
They selected words that occur at least 50 times in the training set and constructed sentences by filtering according to this vocabulary. They also discard sentences with over 30 words, yielding 274K sentences. To provide a comparison, we first train our _Video-to-Text_ network on this subset. However, transformers tend to get better results with more data. Therefore, we \begin{table} \begin{tabular}{l|c c c c|c c c c} \hline & \multicolumn{4}{c}{**Val**} & \multicolumn{4}{c}{**Test**} \\ & **BLEU-1** & **BLEU-4** & **ROUGE** & **CHRF** & **BLEU-1** & **BLEU-4** & **ROUGE** & **CHRF** \\ \hline Albenie et. al [3] & - & - & - & - & 12.78 & 1.00 & 10.16 & - \\ \hline **Video-to-Text** & & & & & & & & \\ - trained with 274K & 15.15 & 1.02 & 12.71 & 19.7 & 12.68 & 0.83 & 8.32 & 17.9 \\ - trained with 1M & 18.8 & 1.28 & 7.91 & 17.7 & 17.71 & 1.27 & 8.9 & 18.8 \\ \hline **Context+Video-to-Text** & & & & & & & & \\ - 1 preceding sentence & 20.18 & 1.53 & 9.13 & 18.2 & 19.11 & 1.51 & 9.94 & 19.3 \\ - 2 preceding sentence & 19.14 & 1.52 & 8.97 & 18.0 & 18.15 & 1.41 & 9.56 & 18.9 \\ - 3 preceding sentence & 20.05 & 1.56 & 9.08 & 18.1 & 18.82 & 1.48 & 9.64 & 19.1 \\ - Max 10 spottings & 20.84 & 1.71 & 10.03 & 18.21 & 19.05 & 1.50 & 9.95 & 18.94 \\ \hline **Context+Video+Spot-to-Text** & & & & & & & & \\ -with 1 preceding sentence & 25.06 & 2.73 & 11.12 & 22.6 & 24.07 & 2.81 & 12.07 & 23.7 \\ -with max 10 spottings & 25.94 & 3.07 & 12.27 & 23.69 & 24.29 & 2.88 & 12.41 & 24.53 \\ \hline \end{tabular} \end{table} Table 4: Impact of the integrating context and spottings information to video-to-text approaches on the BOBSL dataset. \begin{table} \begin{tabular}{l|c c c c|c c c c} \hline & \multicolumn{4}{c}{**Val**} & \multicolumn{4}{c}{**Test**} \\ & **BLEU-1** & **BLEU-4** & **ROUGE** & **CHRF** & **BLEU-1** & **BLEU-4** & **ROUGE** & **CHRF** \\ \hline **Context-to-Text** & & & & & & & & \\ - 1 preceding sentence & 13.21 & 0.52 & 5.11 & 10.2 & 13.34 & 0.42 & 5.54 & 10.4 \\ - 3 preceding sentence & 13.32 & 0.51 & 5.36 & 10.2 & 13.0 & 0.43 & 5.59 & 10.5 \\ - Max 10 spottings & 13.77 & 0.74 & 6.33 & 10.88 & 12.90 & 0.60 & 6.01 & 10.76 \\ - Max 20 spottings & 13.88 & 0.75 & 6.36 & 10.9 & 12.96 & 0.56 & 6.07 & 10.66 \\ \hline **Spot-to-Text** & 21.97 & 2.25 & 8.52 & 19.4 & 21.63 & 2.21 & 9.45 & 19.7 \\ \hline **Context+Spot-to-Text** & 22.77 & 2.56 & 9.98 & 19.9 & 21.68 & 2.43 & 10.0 & 19.72 \\ \hline \end{tabular} \end{table} Table 3: Performance of our text-to-text models on the BOBSL dataset. also train our model using all sentences as in the _"version v1.2"_ of the BOBSL dataset for which the training set contains about 1M sentences. In this experiment, our BLEU-4 score increased to 1.27 from 0.83 as seen in Table 4. **Context+Spot-to-Text:** Firstly, we combine context and spotting sources by having a context encoder, a spotting encoder, and a decoder, which we call _Context+Spot-to-Text_. We set the maximum number of spottings to 10. _Context+Spot-to-Text_ achieved better results than _Spot-to-Text_ (2.21 vs 2.43 BLEU-4 score in the test set). **Context+Video-to-Text:** Then, we evaluate the _Context+Video-to-Text_ model. We use all training videos and all validation videos in our multi-modal experiments. As can be seen in Table 4, when using prior spoken text as input for context-encoder, our _Context+Video-to-Text_ model achieves a significant improvement over our _Video-to-Text_ model on both the manually aligned test set (1.27 vs. 1.51 BLEU-4, and 12.68 vs. 
19.11 BLEU-1) and validation set (1.28 vs. 1.53 BLEU-4, and 18.8 vs. 20.18 BLEU-1). We also investigate using a different number of preceding sentences. Similar to _Context-to-Text_ experiments, increasing the number of preceding sentences does not improve the translation quality. On the other hand, we experiment with the preceding spottings for the context-encoder. Although we obtain a much better result in the validation set (1.53 vs. 1.71 BLEU-4), we get similar results in the test set. This shows that using either the preceding sentences or preceding spottings provides similar context and helps to provide better translation. **Context+Video+Spot-to-Text :** Finally, we train our transformer using all modalities. Our final approach is able to surpass all previous models and obtains state-of-the-art on the BOBSL dataset test set, with 2.81 for BLEU-4 and 24.07 for BLEU-1. **Qualitative results:** In this section, we share translations produced by the proposed model using different modalities and discuss our qualitative findings. As shown in Table 5, we compare our _Video-to-Text_, _Context+Video-to-Text_ and _Context+Video+Spot-to-Text_ to better analyze the contribution of using the preceding context and current spottings. The results show that although translation quality is not perfect, context information helps us to get closer to the true meaning when compared to _Video-to-Text_. As shown in the first example, the ground truth translation is _"He lost nearly 200 sheep during the prolonged heavy snow in April."_. While _Video-to-Text_ model is able to infer only _"sheep"_ correctly, _Context+Video-to-Text_ model produces _"Two sheep have been killed by the weather."_, which is a closer meaning. ## 6 Conclusion In this paper, we have proposed a novel multi-modal transformer architecture for context-aware sign language translation. Our approach utilizes complementary transformer encoders, including a spotting and video encoder for modeling the current sign phrase and a context encoder for capturing the context of preceding sign sequences. These encoders are then combined in a final transformer decoder to generate spoken language translations. We evaluate our approach on two sign language datasets with large domains of discourse and obtain state-of-the-art results by doubling the BLEU-4 score. We hope this work will encourage the exploration of new model ideas on large-scale sign language translation. A future direction may include exploring the leverage of context, such as to alleviate the local ambiguity for similar signs, or to improve spottings performance. ## Acknowledgement This work was supported by the EPSRC project ExtTOL (EP/R03298X/1), SNSF project 'SMILE II' (CRSII5 193686), European Union's Horizon2020 programme ('EASIER' grant agreement 101016982) and the Innousisse IICT Flagship (PFFS-21-47). This work reflects only the authors view and the Commission is not responsible for any use that may be made of the information it contains. \begin{table} \begin{tabular}{l|l} \hline **Ex\#1** GT: & He lost nearly 200 sheep during the prolonged heavy snow in April. \\ V2T: & The sheep are rounded up and the autumn begins to drift away. \\ (C+V)2T: & The two sheep have been killed, and the two have been killed by the weather. \\ (C+V+S)2T & And the sheep are in the middle of April, and they’re all farmed in the winter. \\ \hline **Ex\#2** GT: & You can see it’s quite a different shape... \\ V2T: & It’s a very different story. \\ (C+V)2T: & It’s a very different shape. 
\\ (C+V+S)2T & It’s a different shape. \\ \hline **Ex\#3** GT: & With the crops on the farm, summer is a busy time of year with harvest just around the corner. \\ V2T: & It’s a real dramatic change in the night and it’s a real labour of love. \\ C+V2T: & It’s a very busy time of year, but it’s a very busy time of year. \\ (C+V+S)2T: & During the summer, the farm is busy grazing and the farm is busy harvesting. \\ \hline \end{tabular} \end{table} Table 5: Qualitative results of the proposed method on the BOBSL. V2T: _Video-to-Text_, (C+V)2T: _Context+Video-to-Text_, (C+V+S)2T : _Context+Video+Spot-to-Text_.
2302.13621
Deformations of corank $1$ frontals
We develop a Thom-Mather theory of frontals analogous to Ishikawa's theory of deformations of Legendrian singularities but at the frontal level, avoiding the use of the contact setting. In particular, we define concepts like frontal stability, versality of frontal unfoldings or frontal codimension. We prove several characterizations of stability, including a frontal Mather-Gaffney criterion, and of versality. We then define the method of reduction with which we show how to construct frontal versal unfoldings of plane curves and show how to construct stable unfoldings of corank 1 frontals with isolated instability which are not necessarily versal. We prove a frontal version of Mond's conjecture in dimension 1. Finally, we classify stable frontal multigerms and give a complete classification of corank 1 stable frontals from $\mathbb C^3$ to $\mathbb C^4$.
C. Muñoz-Cabello, J. J. Nuño-Ballesteros, R. Oset Sinha
2023-02-27T09:45:46Z
http://arxiv.org/abs/2302.13621v1
# Deformations of corank \(1\) frontals ###### Abstract. We develop a Thom-Mather theory of frontals analogous to Ishikawa's theory of deformations of Legendrian singularities but at the frontal level, avoiding the use of the contact setting. In particular, we define concepts like frontal stability, versality of frontal unfoldings or frontal codimension. We prove several characterizations of stability, including a frontal Mather-Gaffney criterion, and of versality. We then define the method of reduction with which we show how to construct frontal versal unfoldings of plane curves and show how to construct stable unfoldings of corank \(1\) frontals with isolated instability which are not necessarily versal. We prove a frontal version of Mond's conjecture in dimension \(1\). Finally, we classify stable frontal multigerms and give a complete classification of corank \(1\) stable frontals from \(\mathbb{C}^{3}\) to \(\mathbb{C}^{4}\). Work of Juan J. Nuno-Ballesteros and R. Oset Sinha partially supported by Grant PID2021-124577NB-I00 funded by MCIN/AEI/ 10.13039/501100011033 and by "ERDF A way of making Europe". In Section 3 we define the concept of frontal stability and versality. We define a frontal codimension and prove that a frontal is stable if and only if it has frontal codimension \(0\). We also give a characterisation of versality analogous to Mather's versality theorem. Section 4 gives a geometric criterion for stability, a frontal Mather-Gaffney criterion which states that a frontal is stable if and only if it has isolated instability. Sections 5 and 6 are devoted to show how to construct stable frontals as frontal versal unfoldings of plane curves or as a well defined sum of frontal unfoldings. We define the frontal reduction of an \(\mathscr{A}_{\!e}\)-versal unfolding of a plane curve and prove that it is, in fact, a versal frontal unfolding. As a by product we relate the frontal codimension of a plane curve with its \(\mathscr{A}_{\!e}\)-codimension and prove the frontal Mond conjecture (stated in [22]) in dimension \(1\), which says that the frontal codimension is less than or equal to the frontal Milnor number (the number of spheres in a stable deformation) with equality if the germ is quasi-homogeneous. We also give a method to construct stable unfoldings which are not necessarily versal. We then turn our attention to characterizing stability of frontal multigerms defining a frontal Kodaira-Spencer map which also yields a tangent space to the iso-singular locus (the manifold along which the frontal is trivial). Finally we use our methods to obtain a complete list of stable \(3\)-dimensional frontals in \(\mathbb{C}^{4}\). Note that generic wave fronts were classified by Arnol'd in [1] and, on the other hand, Ishikawa classified stable Legendrian maps (which may have different projected frontals), but, until now, a complete classification of stable frontals was only known for \(n=1\) ([1]) and \(n=2\) ([23]). For technical reasons in order to use Ishikawa's results we restrict ourselves to the case of frontals whose Legendrian lift has corank \(1\). ## 2. Frontal map-germs Let \(W\) be a smooth manifold of dimension \(2n+1\). A field of hyperplanes \(\Delta\) over \(W\) is a contact structure for \(W\) if, for all \(w\in W\), there exist an open neighbourhood \(U\subseteq W\) of \(w\) and a \(\sigma\in\Omega^{1}(U)\) such that 1. \(\operatorname{rk}\sigma_{w}=1\); 2. the fibre \(\Delta_{w}\) of \(\Delta\) at \(w\) is \(\ker\sigma_{w}\); 3. 
\((\sigma\wedge d\sigma\wedge\stackrel{(n)}{\dots}\wedge d\sigma)_{w}\neq 0\). We call \(\sigma\) the local contact form of \(W\), and define a **contact manifold** as a pair \((W,\Delta)\), where \(\Delta\) is a contact structure on \(W\). Given a smooth manifold \(Z\) of dimension \(n+1\), a submersion \(\pi\colon W\to Z\) is a **Legendrian fibration** for \((W,\Delta)\) if, for all \(w\in W\), \[(d\pi_{w})^{-1}(T_{\pi(w)}Z)\subseteq\ker\sigma_{w}.\] **Example 2.1**.: _Let \(W=PT^{*}\mathbb{K}^{n+1}\) be the projectivised cotangent bundle of \(\mathbb{K}^{n+1}\), and \((z,[\omega])\in W\). The differential \(1\)-form_ \[\alpha=\omega_{1}\,dz^{1}+\dots+\omega_{n+1}\,dz^{n+1}\] _defines a contact structure on \(W\). The projection \(W\to\mathbb{K}^{n+1}\) given by \((z,[\omega])\mapsto z\) is a Legendrian fibration under this contact structure._ **Definition 2.2**.: _Let \(\pi\colon W\to Z\), \(\pi^{\prime}\colon W^{\prime}\to Z^{\prime}\) be Legendrian fibrations. A diffeomorphism \(\Psi\colon W\to W^{\prime}\) between contact manifolds is_ 1. _a **contactomorphism** if \(\Delta^{\prime}=d\Psi(\Delta)\);_ 2. _a **Legendrian diffeomorphism** if it is a contactomorphism and there exists a diffeomorphism \(\psi\colon Z\to Z^{\prime}\) such that \(\psi\circ\pi=\pi^{\prime}\circ\Psi\)._ _We say \(W\) is contactomorphic to \(W^{\prime}\) if there is a contactomorphism \(\Psi\colon W\to W^{\prime}\)._ A well-known result by Darboux states that any two contact manifolds \(W,W^{\prime}\) of the same dimension admit a local diffeomorphism \(\Psi\colon W\to W^{\prime}\) such that \(\Delta^{\prime}=d\Psi(\Delta)\) (see e.g. [27], §20.1). In particular, if \(\dim W=2n+1\), \(W\) is locally contactomorphic to the contact manifold described in Example 2.1; therefore, we can restrict ourselves to the setting given in Example 2.1. Let \(N\subseteq\mathbb{K}^{n}\) be an open subset. A mapping \(F\colon N\to PT^{*}\mathbb{K}^{n+1}\) is **integral** if \(F^{*}\alpha=0\). **Definition 2.3**.: _A smooth mapping \(f\colon N^{n}\to Z^{n+1}\) is **frontal** if there exist an integral mapping \(F\colon N\to W\) and a Legendrian fibration \(\pi\colon W\to Z\) such that \(f=\pi\circ F\). If \(F\) is an immersion, we say \(f\) is a **wave front**. Similarly, a hypersurface \(X\subset Z\) is **frontal** (resp. a **wave front**) if there exists a frontal map (resp. wave front) \(f\colon N\to Z\) such that \(X=f(N)\)._ **Definition 2.4**.: _Let \(S\subset N\) be a finite set. A smooth multigerm \(f\colon(N,S)\to(Z,0)\) is **frontal** if it has a frontal representative \(f\colon N\to Z\). Given a hypersurface \(X\subset Z\), \((X,z)\) is a **frontal** hypersurface germ if there exists a frontal map germ \(f\colon(N,S)\to(Z,z)\) such that \((X,z)=f(N,S)\)._ Let \(F\colon N\to PT^{*}\mathbb{K}^{n+1}\) be an integral map and \(f=\pi\circ F\): there exist \(\nu_{1},\dots,\nu_{n+1}\in\mathscr{O}_{n}\) such that \[0=F^{*}\alpha=\sum_{i=1}^{n+1}\nu_{i}\,d(Z_{i}\circ f)=\sum_{i=1}^{n+1}\sum_{j=1}^{n}\nu_{i}\frac{\partial f_{i}}{\partial x_{j}}\,dx^{j}, \tag{1}\] where \(Z_{1},\dots,Z_{n+1}\) are coordinates for \(\mathbb{K}^{n+1}\). Setting \(\nu=\nu_{1}\,dZ_{1}+\dots+\nu_{n+1}\,dZ_{n+1}\), this is equivalent to \(\nu(df\circ\xi)=0\) for all \(\xi\in\theta_{n}\).
Since \(PT^{*}\mathbb{K}^{n+1}\) is a fibre bundle, we can find for each pair \((z,[\omega])\in PT^{*}\mathbb{K}^{n+1}\) an open neighbourhood \(Z\subset\mathbb{K}^{n+1}\) of \(z\) and an open \(U\subseteq\mathbb{K}P^{n+1}\) such that \(\pi^{-1}(Z)\cong Z\times U\). Therefore, \(F\) is contact equivalent to the mapping \(\tilde{f}(x)=(f(x),[\nu_{x}])\), known as the **Nash lift** of \(f\). If we assume that \(\Sigma(f)\) is nowhere dense in \(N\), the differential form \(\nu\) is uniquely determined by \(f\), giving us a one-to-one correspondence between \(f\) and \(\tilde{f}\). Such a frontal map is known as a **proper frontal** map (according to Ishikawa [15]). We also define the **integral corank** of a proper frontal as the corank of its Nash lift. For the rest of this article, we shall assume all frontal map germs are proper. Note that the notion of topological properness (i.e. the preimage of a compact subset is compact) is not used throughout this article. **Example 2.5**.: _Let \(f\colon(\mathbb{K}^{n},0)\to(\mathbb{K}^{n+1},0)\) be the smooth map germ given by_ \[f(x_{1},\dots,x_{n})=(x_{1}^{2},\dots,x_{n}^{2},2x_{1}^{p_{1}}+\dots+2x_{n}^{ p_{n}});\qquad\qquad p_{1},\dots,p_{n}>1\] _It is easy to see that \(f\) has corank \(n\) and the singular set \(\Sigma(f)\) is nowhere dense in \(\mathbb{K}^{n}\). Furthermore, the assumption that \(p_{1},\dots,p_{n}>1\) implies that the Jacobian ideal of \(f\) is generated by \(x_{1}x_{2}\dots x_{n}\), and thus it is a proper frontal map germ by Proposition 2.6 below. In particular, the differential \(1\)-form_ \[\nu_{(x_{1},\dots,x_{n})}=p_{1}x_{1}^{p_{1}-2}\,dX^{1}+\dots+p_{n}x_{n}^{p_{n }-2}\,dX^{n}-dX^{n+1},\] _verifies that \(\nu(df\circ\xi)=0\) for all \(\xi\in\theta_{n}\), and has corank equal to the number of \(p_{i}\) that are greater than \(3\). Therefore, the integral corank of \(f\) is also equal to the number of \(p_{i}\) greater than \(3\). In particular, \(f\) is a wave front when all \(p_{i}\) are equal to \(3\)._ **Proposition 2.6** ([15], Lemma 2.3).: _Let \(f\colon(\mathbb{K}^{n},S)\to(\mathbb{K}^{n+1},0)\) be a map germ. If \(f\) is frontal, then the Jacobian ideal \(J_{f}\) of \(f\) is principal (i.e. it is generated by a single element). Conversely, if \(J_{f}\) is principal and \(\Sigma(f)\) is nowhere dense in \((\mathbb{K}^{n},S)\), then \(f\) is a proper frontal map germ._ If \(f\) has corank \(1\), we may choose local coordinates in the source and target such that \[f(x,y)=(x,p(x,y),q(x,y)); x\in\mathbb{K}^{n-1},\,y\in\mathbb{K} \tag{2}\] in which case \(J_{f}\) is the ideal generated by \(p_{y}\) and \(q_{y}\), and we recover the following criterion by Nuno-Ballesteros [23]: **Corollary 2.7**.: _Let \(f\colon(\mathbb{K}^{n},S)\to(\mathbb{K}^{n+1},0)\) be a frontal map germ of corank \(1\), and choose coordinates in the source and target such that \(f\) is given as in Equation (2). Then \(f\) is a frontal map germ if and only if either \(p_{y}|q_{y}\) or \(q_{y}|p_{y}\)._ We shall say that \(f\) is in **prenormal form** if it is given as in Equation (2) with \(q_{y}=\mu p_{y}\) for some \(\mu\in\mathscr{O}_{n}\), in which case the Nash lift becomes \[\tilde{f}=\left(f,\frac{\partial q}{\partial x_{1}}-\mu\frac{\partial p}{ \partial x_{1}},\ldots,\frac{\partial q}{\partial x_{n-1}}-\mu\frac{\partial p }{\partial x_{n-1}},\mu\right) \tag{3}\] In particular, note that if \(\operatorname{ord}_{y}(q)=\operatorname{ord}_{y}(p)+1\), then \(\operatorname{ord}_{y}(\mu)=1\), and \(f\) is a wave front. ## 3. 
Lowering Legendrian equivalence The first strides in the classification of frontal mappings were done by Arnol'd and his colleagues in a series of articles published in the 1970s and 1980s. In his work, he established a notion of equivalence native to Legendrian maps (known as _Legendrian equivalence_) and developed a classification of all simple, stable wave fronts (see [27], Chapter 21). Ishikawa extended Arnol'd's theory of Legendrian equivalence to the broader class of integral mappings in [14], defining a notion of infinitesimal stability and showing that an integral map of corank at most \(1\) is Legendrian stable if and only if it is infinitesimally stable. He also showed that all Legendrian stable integral mappings of corank at most \(1\) belong to a special family called open Whitney umbrellas, giving a characterisation of stable umbrellas in terms of a certain \(\mathbb{K}\)-algebra \(Q\). The goal of this section is to formulate a notion of frontal stability and versality that does not require the use of contact geometry. **Remark 3.1**.: _Let \(f\colon(\mathbb{K}^{n},0)\to\mathbb{K}^{n+1}\) be a proper frontal map germ with Nash lift \(\tilde{f}=f\times[\nu]\). Since \([\nu]\) is an equivalence class in a projective space, there exists a \(1\leq i\leq n+1\) such that \(\nu_{i}\) is non-vanishing, so we can rewrite Equation (1) as_ \[d(Z_{i}\circ f)=-\frac{\nu_{1}}{\nu_{i}}\,d(Z_{1}\circ f)-\cdots-d(\widehat{Z_{i}\circ f})-\cdots-\frac{\nu_{n+1}}{\nu_{i}}\,d(Z_{n+1}\circ f), \tag{4}\] _where the hat symbol denotes an omitted summand. We then define local coordinates \(X,Y,P\) on \(PT^{*}\mathbb{K}^{n+1}\) such that \(f_{i}=Y\circ f\) and_ \[f_{j}=X_{j}\circ f,\qquad P_{j}=\frac{\nu_{j}}{\nu_{i}}\qquad(j=1,\ldots,i-1);\] \[f_{j+1}=X_{j}\circ f,\qquad P_{j}=\frac{\nu_{j+1}}{\nu_{i}}\qquad(j=i,\ldots,n).\] _These are known as the **Darboux coordinates** of \(PT^{*}\mathbb{K}^{n+1}\). In particular, Equation (4) implies that the mapping \(X\circ f=(X_{1}\circ f,\ldots,X_{n}\circ f)\) shares the same singular set as \(f\). Therefore, there exists a representative \(X\circ f\colon U\to V\) of \(X\circ f\) which is immersive outside of a nowhere dense subset \(K\) of \(U\)._ **Definition 3.2**.: _Let \(S,S^{\prime}\subset\mathbb{K}^{n}\) be finite sets. Two integral map germs_ \[F\colon(\mathbb{K}^{n},S)\to(PT^{*}\mathbb{K}^{n+1},w),\qquad\qquad F^{\prime}\colon(\mathbb{K}^{n},S^{\prime})\to(PT^{*}\mathbb{K}^{n+1},w^{\prime})\] _are **Legendre equivalent** if there exists a diffeomorphism \(\phi\colon(\mathbb{K}^{n},S)\to(\mathbb{K}^{n},S^{\prime})\) and a Legendrian diffeomorphism \(\Psi\colon(PT^{*}\mathbb{K}^{n+1},w)\to(PT^{*}\mathbb{K}^{n+1},w^{\prime})\) such that \(F^{\prime}=\Psi\circ F\circ\phi^{-1}\)._ Arnol'd showed in [27], §20.4 that a Legendrian diffeomorphism \(\Psi\colon W\to W^{\prime}\) is locally determined by a choice of Legendrian fibrations in the source and target, and a diffeomorphism \(\psi\) between the base spaces. Nonetheless, his proof was based on the fact that a Legendrian diffeomorphism preserves the fibres, and no explicit expression is given for \(\Psi\).
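Before an explicit expression for such a lift \(\Psi\) is given, the notation of Remark 3.1 can be illustrated on the simplest corank \(1\) germ, the cuspidal edge \(f\colon(\mathbb{K}^{2},0)\to(\mathbb{K}^{3},0)\), \(f(x,y)=(x,y^{2},y^{3})\). In the prenormal form (2) it has \(p=y^{2}\) and \(q=y^{3}\), so \(q_{y}=3y^{2}=\tfrac{3}{2}y\,p_{y}\) and Corollary 2.7 gives that \(f\) is frontal with \(\mu=\tfrac{3}{2}y\); its Jacobian ideal \(J_{f}=(2y,3y^{2})=(y)\) is principal, as required by Proposition 2.6. Equation (3) yields the Nash lift \[\tilde{f}(x,y)=\left(x,\,y^{2},\,y^{3},\,0,\,\tfrac{3}{2}y\right),\] which is an immersion, so \(f\) is a wave front of integral corank \(0\), in accordance with the observation following (3) since \(\operatorname{ord}_{y}(q)=\operatorname{ord}_{y}(p)+1\). In the notation of Remark 3.1 one may take \(i=3\), and the mapping \(X\circ f=(x,y^{2})\) indeed has the same singular set \(\{y=0\}\) as \(f\).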
**Theorem 3.3**.: _Given a diffeomorphism \(\psi\colon Z\to Z^{\prime}\), the mapping_ \[\begin{CD}T^{*}Z@>{}>{}>T^{*}Z^{\prime}\\ (z,\omega)@>{}>{}>(\psi(z),\omega\circ d\psi^{-1}_{\psi(z)})\end{CD}\] _induces a Legendrian diffeomorphism \(\Psi\colon(PT^{*}Z,\Delta)\to(PT^{*}Z^{\prime},\Delta^{\prime})\)._ Proof.: Let \((z,\omega)\in T^{*}Z\): since \(\psi\) is a diffeomorphism, \(\omega\circ d\psi^{-1}_{\psi(z)}\neq 0\) and \(\Psi\) is a well-defined diffeomorphism. Furthermore, it is clear that \[\pi^{\prime}\circ\Psi=\psi\circ\pi \tag{5}\] by construction. Therefore, we only need to show that \(d\Psi_{q}(\Delta_{q})=\Delta^{\prime}_{\Psi(q)}\). Let \(q=(z,[\omega])\) and \(v\in\Delta_{q}\). Since \(\pi\) is a submersion, \((\omega\circ d\pi_{q})(v)=0\), and it follows from (5) that \[(\omega\circ d\psi^{-1}_{\psi(z)}\circ d\pi^{\prime}_{\Psi(q)})[d\Psi_{q}(v)] =0\implies d\Psi_{q}(v)\in\Delta^{\prime}_{\Psi(q)}\] Conversely, let \(w\in\Delta^{\prime}_{\Psi(q)}\). Since \(\Psi\) is a diffeomorphism, there exists a unique \(v\in T_{q}PT^{*}Z\) such that \(w=d\Psi_{q}(v)\). By definition of \(\Delta^{\prime}\), we have \[(\omega\circ d\psi^{-1}_{\psi(z)}\circ d\pi^{\prime}_{\Psi(q)})(w)=0\] By (5), this implies that \((\omega\circ d\pi_{q})(v)=0\), from which follows that \(w\in d\Psi_{q}(\Delta_{q})\). **Remark 3.4**.: _Let \(\psi_{t}\colon(\mathbb{K}^{n+1},0)\to(\mathbb{K}^{n+1},0)\) be a smooth \(1\)-parameter family of diffeomorphisms. Given \(t\) in an open neighbourhood \(U\subseteq\mathbb{K}\) of \(0\), we know by Theorem 3.3 that we can lift \(\psi_{t}\) onto a Legendrian diffeomorphism \(\Psi_{t}\colon(PT^{*}\mathbb{K}^{n+1},w)\to(PT^{*}\mathbb{K}^{n+1},0)\). Since \(\pi\colon PT^{*}\mathbb{K}^{n+1}\to\mathbb{K}^{n+1}\) is a fibre bundle and \(\mathbb{K}^{n+1}\) is a paracompact Hausdorff space, \(\pi\) is a fibration (see [26], Corollary 2.7.14), so it verifies the homotopy lifting property. Therefore, the \(1\)-parameter family \(\Psi_{t}\) defined in this way is, indeed, a lift of the family \(\psi_{t}\)._ **Corollary 3.5**.: _Let \(f,g\colon(\mathbb{K}^{n},S)\to(\mathbb{K}^{n+1},0)\):_ 1. _if_ \(f\) _is_ \(\mathscr{A}\)_-equivalent to_ \(g\) _and_ \(f\) _is frontal,_ \(g\) _is frontal;_ 2. _if_ \(f\) _and_ \(g\) _are frontal,_ \(\tilde{f}\) _is Legendrian equivalent to_ \(\tilde{g}\) _if and only if_ \(f\) _is_ \(\mathscr{A}\)_-equivalent to_ \(g\) Proof.: Assume that \(f\) is frontal: there exist an integral map germ \(F\colon(\mathbb{K}^{n},S)\to PT^{*}\mathbb{K}^{n+1}\) such that \(f=\pi\circ F\), where \(\pi\) is the canonical bundle projection. Now let \(\phi\colon(\mathbb{K}^{n},S)\to(\mathbb{K}^{n},S)\), \(\psi\colon(\mathbb{K}^{n+1},0)\to(\mathbb{K}^{n+1},0)\) be diffeomorphisms such that \(g=\psi\circ f\circ\phi^{-1}\): by Theorem 3.3, we can lift \(\psi\) onto a Legendrian diffeomorphism \(\Psi\colon PT^{*}\mathbb{K}^{n+1}\to PT^{*}\mathbb{K}^{n+1}\). Therefore, the map \(G=\Psi\circ F\circ\phi^{-1}\) is an integral map such that \(\pi\circ G=g\), and \(g\) is frontal. This proves the first item. For the second item, the "only if" is proved in a similar fashion. For the "if", let \(\phi\colon(\mathbb{K}^{n},S)\to(\mathbb{K}^{n},S)\) and \(\Psi\colon(PT^{*}\mathbb{K}^{n+1},w)\to(PT^{*}\mathbb{K}^{n+1},w)\) be diffeomorphisms such that \(\tilde{g}=\Psi\circ\tilde{f}\circ\phi^{-1}\), with \(\Psi\) Legendrian. 
By definition of Legendrian diffeomorphism, there exists a diffeomorphism \(\psi\colon(\mathbb{K}^{n+1},0)\to(\mathbb{K}^{n+1},0)\) such that \(\pi\circ\Psi=\psi\circ\pi\), from which follows that \[g=\pi\circ\tilde{g}=\pi\circ\Psi\circ\tilde{f}\circ\phi^{-1}=\psi\circ\pi \circ\tilde{f}\circ\phi^{-1}=\psi\circ f\circ\phi^{-1},\] proving the second item. ### Unfolding frontal map germs The theory of Legendrian equivalence describes homotopic deformations of a pair \((\pi,F)\) via integral deformations, deformations \((F_{u})\) of \(F\) which are themselves integral for any fixed \(u\). Nonetheless, frontal deformations often fail to preserve the frontal nature across the parameter space, as showcased in Example 3.6 below. **Example 3.6**.: _Let \(\gamma\colon(\mathbb{K},0)\to(\mathbb{K}^{2},0)\) be the plane curve \(t\mapsto(t^{3},t^{4})\). The \(1\)-parameter deformation \(\gamma_{s}(t)=(t^{3}+st,t^{4})\) verifies that \(\gamma_{s}\) is frontal for all \(s\in\mathbb{K}\). If \(\omega\) is a \(1\)-form such that \(\omega(d\gamma_{s}\circ\partial t)=0\) for all \((t,s)\) in an open neighbourhood \(U\subset\mathbb{K}^{2}\) of \((0,0)\), a simple computation shows that \(\omega\) must be given in the form_ \[\omega_{(s,t)}=\alpha(t,s)(4t^{3}\,dX-(3t^{2}+s)\,dY)\] _for some \(\alpha\in\mathscr{O}_{2}\). Therefore, \(\gamma_{s}\) does not yield an integral deformation at \(s=0\)._ **Definition 3.7**.: _Let \(f\colon(\mathbb{K}^{n},S)\to(\mathbb{K}^{n+1},0)\) be a frontal germ. An unfolding \(F\colon(\mathbb{K}^{n}\times\mathbb{K}^{d},S\times\{0\})\to(\mathbb{K}^{n+1} \times\mathbb{K}^{d},0)\) of \(f\) is **frontal** if it is frontal as a map germ._ **Theorem 3.8**.: _Let \(f\colon(\mathbb{K}^{n},S)\to(\mathbb{K}^{n+1},0)\) be a frontal map germ. A \(d\)-parameter unfolding \(F=(f_{\lambda},\lambda)\) of \(f\) is frontal if and only if \(\tilde{f}_{\lambda}\) is an integral deformation of \(\tilde{f}\)._ Proof.: Let \(F\) be a frontal \(d\)-parameter unfolding for \(f\): there is a \(\nu\in\Omega^{1}(F)\) such that \(\nu(dF\circ\eta)=0\) for all \(\eta\in\theta_{n+d}\). If we set \(\nu_{0}=\nu|_{\lambda=0}\), we can write \[\nu_{(x,y,\lambda)}=(\nu_{0})_{(x,y)}+\sum_{j=1}^{d}\lambda_{j}(\nu_{j})_{(x,y,\lambda)}\] for some \(\nu_{1},\ldots,\nu_{j}\in(\mathbb{K}^{n},S)\to T^{*}\mathbb{K}^{n+1}\). Therefore, \(\nu\) may be regarded as a \(d\)-parameter deformation of \(\nu_{0}\) and the Nash lift of \(f_{\lambda}\), \[(x,y,\lambda)\mapsto(f_{\lambda}(x,y),[\nu_{(x,y,\lambda)}]) \tag{6}\] is an integral \(d\)-parameter deformation of \(f\times[\nu_{0}]\). Since \(f\times[\nu_{0}]\) is an integral map, \(\nu_{0}(df\circ\xi)=0\) for all \(\xi\in\theta_{n}\). Properness of \(f\) then implies that \(f\times[\nu_{0}]=\tilde{f}\), and thus the map germ (6) is an integral deformation of \(\tilde{f}\). Conversely, let \(\tilde{f}_{\lambda}\) be an integral deformation of \(\tilde{f}\). Taking coordinates \((u,\lambda)\) in the source and Darboux coordinates in the target, the integrability condition becomes \[\frac{\partial}{\partial u_{j}}(Y\circ f_{\lambda})=(P_{1}\circ\tilde{f}_{ \lambda})\frac{\partial}{\partial u_{j}}(X_{1}\circ f_{\lambda})+\cdots+(P_{n} \circ\tilde{f}_{\lambda})\frac{\partial}{\partial u_{j}}(X_{n}\circ f_{\lambda})\] for \(j=1,\ldots,n\). 
Consider the differential form \(\nu\in\Omega^{1}(F)\) given by \[\sum_{j=1}^{n}(P_{j}\circ\tilde{f}_{\lambda})\left(dX^{j}-\sum_{k=1}^{d}\frac{\partial}{\partial\lambda_{k}}(X_{j}\circ f_{\lambda})\,d\lambda^{k}\right)-dY+\sum_{k=1}^{d}\frac{\partial}{\partial\lambda_{k}}(Y\circ f_{\lambda})\,d\lambda^{k}\] Using the integrability condition above, we have \[\nu\left(dF\circ\frac{\partial}{\partial u_{i}}\right)=\sum_{j=1}^{n}(P_{j}\circ\tilde{f}_{\lambda})\frac{\partial(X_{j}\circ f_{\lambda})}{\partial u_{i}}-\frac{\partial(Y\circ f_{\lambda})}{\partial u_{i}}=0;\] \[\nu\left(dF\circ\frac{\partial}{\partial\lambda_{i}}\right)=\sum_{j=1}^{n}(P_{j}\circ\tilde{f}_{\lambda})\left(\frac{\partial(X_{j}\circ f_{\lambda})}{\partial\lambda_{i}}-\frac{\partial(X_{j}\circ f_{\lambda})}{\partial\lambda_{i}}\right)-\frac{\partial(Y\circ f_{\lambda})}{\partial\lambda_{i}}+\frac{\partial(Y\circ f_{\lambda})}{\partial\lambda_{i}}=0.\] Therefore, \(\nu(dF\circ\xi)=0\) for all \(\xi\in\theta_{n+d}\) and \(F\) is frontal. **Remark 3.9**.: _Properness of \(f\) is required to show that a frontal unfolding induces an integral deformation of \(\tilde{f}\), since \(\widetilde{f_{u}}\) is not guaranteed to be a deformation of \(\tilde{f}\), even if it is integral. The converse implication does not require properness._ The space of **infinitesimal integral deformations** of an integral \(\tilde{f}\), defined by Ishikawa in [14], is given by \[\theta_{I}(\tilde{f})=\{v_{0}(\tilde{f}_{t}):\tilde{f}_{t}\text{ integral},\tilde{f}_{0}=\tilde{f}\};\qquad v_{0}(\tilde{f}_{t})=\left.\frac{d\tilde{f}_{t}}{dt}\right|_{t=0}.\] This space is linear when \(\tilde{f}\) has corank at most \(1\) ([14]), but it is known to have a conical structure in higher coranks. Counterexamples can be constructed using a procedure similar to that of [11]. We also set \(T\mathscr{L}_{e}\tilde{f}\) as the subspace of \(\theta_{I}(\tilde{f})\) given by those \(\tilde{f}_{t}\) which are trivial Legendrian deformations of \(\tilde{f}\). **Definition 3.10**.: _Let \(f\colon(\mathbb{K}^{n},S)\to(\mathbb{K}^{n+1},0)\) be a frontal map germ of integral corank at most \(1\). We define the space of **infinitesimal frontal deformations** of \(f\) as_ \[\mathscr{F}(f)=\{v_{0}(f_{t}):(t,f_{t})\text{ frontal},f_{0}=f\}.\] As shown in Theorem 3.12 below, \(\mathscr{F}(f)\) is the image of \(\theta_{I}(\tilde{f})\) under a \(\mathbb{K}\)-linear projection. Therefore, if the integral corank of \(f\) is at most \(1\), \(\mathscr{F}(f)\) is \(\mathbb{K}\)-linear; for this reason, any results involving \(\mathscr{F}(f)\) will implicitly assume that \(f\) has integral corank at most \(1\). An alternative, direct proof is also given for corank \(1\) frontal map germs in Remark 5.12 below. **Lemma 3.11**.: _Given a frontal map germ \(f\colon(\mathbb{K}^{n},S)\to(\mathbb{K}^{n+1},0)\), \(T\mathscr{A}_{e}f\subseteq\mathscr{F}(f)\)._ Proof.: Let \(\phi_{t}\colon(\mathbb{K}^{n},S)\to(\mathbb{K}^{n},S)\), \(\psi_{t}\colon(\mathbb{K}^{n+1},0)\to(\mathbb{K}^{n+1},0)\) be two smooth \(1\)-parameter families of diffeomorphisms and \(f_{t}=\psi_{t}\circ f\circ\phi_{t}^{-1}\). It is clear by construction that the vector field germ given by \(f_{t}\) is in \(T\mathscr{A}_{e}f\). By Theorem 3.3, we can lift \(\psi_{t}\) onto a smooth \(1\)-parameter family \(\Psi_{t}\) of Legendrian diffeomorphisms, in which case we can lift \(f_{t}\) onto an integral deformation \(\widetilde{f}_{t}=\Psi_{t}\circ\tilde{f}\circ\phi_{t}^{-1}\). Using Theorem 3.8, we then see that the unfolding \(F=(f_{t},t)\) is frontal. 
Therefore, the vector field germ given by \(f_{t}\) is in \(\mathscr{F}(f)\), and thus \(T\mathscr{A}_{e}f\subseteq\mathscr{F}(f)\) **Theorem 3.12**.: _Let \(f\colon(\mathbb{K}^{n},0)\to(\mathbb{K}^{n+1},0)\) be a frontal map germ and \(\pi\colon PT^{*}\mathbb{K}^{n+1}\to\mathbb{K}^{n+1}\) be the canonical bundle projection. The mapping \(t\pi\colon\theta_{I}(\tilde{f})\to\mathscr{F}(f)\) given by \(t\pi(\xi)=d\pi\circ\xi\) is a \(\mathbb{K}\)-linear isomorphism and induces an isomorphism_ \[\Pi\colon\frac{\mathscr{F}(f)}{T\mathscr{A}_{e}f}\longrightarrow\frac{\theta_ {I}(\tilde{f})}{T\mathscr{L}_{e}\tilde{f}}. \tag{7}\] Proof.: Let \(\xi\in\theta_{I}(\tilde{f})\) and \(\tilde{f}_{t}\) be an integral \(1\)-parameter deformation of \(\tilde{f}\) and \(\xi=v_{0}(\tilde{f}_{t})\): by Theorem 3.8, \(F(t,x)=(t,(\pi\circ\tilde{f}_{t})(x))\) is a frontal \(1\)-parameter unfolding of \(f\). Furthermore, using the chain rule, we see that \(v_{0}(\pi\circ\tilde{f}_{t})=t\pi[v_{0}(\tilde{f}_{t})]\), so \(t\pi[\theta_{I}(\tilde{f})]\subseteq\mathscr{F}(f)\) and \(t\pi\colon\theta_{I}(\tilde{f})\to\mathscr{F}(f)\) is well-defined. Conversely, let \(\xi\in\mathscr{F}(f)\) and \((t,f_{t})\) be a frontal \(1\)-parameter deformation of \(f\) with \(\xi=v_{0}(f_{t})\): by Theorem 3.8, we can lift \(f_{t}\) onto an integral \(1\)-parameter deformation \(\tilde{f}_{t}\) of \(\tilde{f}\). Using the chain rule, it then follows that \(\xi\in t\pi[\theta_{I}(\tilde{f})]\), so \(t\pi[\theta_{I}(\tilde{f})]=\mathscr{F}(f)\). We move onto injectivity. Let \(\tilde{f}_{t}(x)=\tilde{f}(x)+t\tilde{h}(x,t)\) be an integral \(1\)-parameter deformation of \(\tilde{f}\) with \((\pi\circ\tilde{f}_{t})(x)=f(x)+th(x,t)\). If we assume that \(\xi=v_{0}(\tilde{f}_{t})\in\ker t\pi\), then \[0=\left.\frac{df_{t}}{dt}\right|_{t=0}=[h(x,t)+th_{t}(x,t)]_{t=0}=h(x,0) \implies h(x,t)=tg(x,t).\] Our goal is to show that we can write \(\tilde{h}(x,t)=t\tilde{g}(x,t)\) for some \(\tilde{g}\), so that \(v_{0}(\tilde{f}_{t})=0\) and thus \(\ker t\pi=\{0\}\). Since \(\tilde{f}_{t}\) is an integral deformation of \(\tilde{f}\), it verifies the identity \[d(Y\circ f_{t})=\sum_{j=1}^{n}(P_{j}\circ\tilde{f}_{t})\,d(X_{j}\circ f_{t})\] Taking the coefficient of \(dx^{k}\) on both sides of the equation and simplifying yields \[t\frac{\partial(Y\circ g)}{\partial x_{k}}=\sum_{j=1}^{n}\left[(P_{j}\circ \tilde{h})\frac{\partial(X_{j}\circ f_{t})}{\partial x_{k}}+t(P_{j}\circ \tilde{f})\frac{\partial(X_{j}\circ g)}{\partial x_{k}}\right].\] Taking \(t=0\) gives us the homogeneous system of equations \[0=\sum_{j=1}^{n}\frac{\partial(X_{i}\circ f)}{\partial x_{k}}(x)(P_{j}\circ \tilde{h})(x,0)\] for \(k=1,\dots,n\). Using the observation from Remark 3.1 and the continuity of \(P_{1}\circ\tilde{h},\dots,P_{n}\circ\tilde{h}\), we conclude that \((P_{1}\circ\tilde{h})(x,0)=\dots=(P_{n}\circ\tilde{h})(x,0)=0\) and thus \(\tilde{h}(x,t)=t\tilde{g}(x,t)\). It only remains to show that \(t\pi(T\mathscr{L}_{e}\tilde{f})=T\mathscr{A}_{e}f\). Let \(\xi\in T\mathscr{L}_{e}\tilde{f}\): there exist \(1\)-parameter families \(\phi_{t}\colon(\mathbb{K}^{n},S)\to(\mathbb{K}^{n},S)\), \(\Psi_{t}\colon(PT^{*}\mathbb{K}^{n+1},w)\to(PT^{*}\mathbb{K}^{n+1},w)\) of diffeomorphisms such that \(\xi=v_{0}(\Psi_{t}\circ\tilde{f}\circ\phi_{t}^{-1})\), with \(\Psi_{t}\) Legendrian. 
Since \(\Psi_{t}\) is Legendrian for all \(t\) in a neighbourhood \(U\subseteq\mathbb{K}\) of \(0\), there exists a \(1\)-parameter family \(\psi_{t}\colon(\mathbb{K}^{n+1},0)\to(\mathbb{K}^{n+1},0)\) of diffeomorphisms such that \(\pi\circ\Psi_{t}=\psi_{t}\circ\pi\) for all \(t\in U\). We then have that \(v_{0}(\psi_{t}\circ f\circ\phi_{t}^{-1})=t\pi[v_{0}(\Psi_{t}\circ\tilde{f}\circ\phi_{t}^{-1})]=t\pi(\xi)\), hence \(t\pi(\xi)\in T\mathscr{A}_{e}f\). Conversely, if \(\xi\in T\mathscr{A}_{e}f\), there exist \(1\)-parameter families \(\phi_{t}\colon(\mathbb{K}^{n},S)\to(\mathbb{K}^{n},S)\), \(\psi_{t}\colon(\mathbb{K}^{n+1},0)\to(\mathbb{K}^{n+1},0)\) of diffeomorphisms such that \(\xi=v_{0}(\psi_{t}\circ f\circ\phi_{t}^{-1})\). Using Theorem 3.3, there exists a \(1\)-parameter family of Legendrian diffeomorphisms \(\Psi_{t}\colon(PT^{*}\mathbb{K}^{n+1},w)\to(PT^{*}\mathbb{K}^{n+1},w)\) such that \(\pi\circ\Psi_{t}=\psi_{t}\circ\pi\), and thus we can lift \(\xi\) onto \(v_{0}(\Psi_{t}\circ\tilde{f}\circ\phi_{t}^{-1})\in T\mathscr{L}_{e}\tilde{f}\), whose image via \(t\pi\) is \(\xi\). **Remark 3.13**.: _Let \(f\colon(\mathbb{K}^{n},0)\to(\mathbb{K}^{n+1},0)\) be a frontal map germ: Theorem 3.12 states that \(\mathscr{F}(f)=t\pi[\theta_{I}(\tilde{f})]\). Since \(\tilde{f}\) has corank \(1\), a result by Ishikawa [14] states that_ \[\theta_{I}(\tilde{f})=\{\xi\in\theta(\tilde{f}):\xi^{*}\tilde{\alpha}=0\},\] _wherein \(\tilde{\alpha}\) denotes the natural lifting of the contact form in \(PT^{*}\mathbb{K}^{n+1}\). Taking Darboux coordinates in \(PT^{*}\mathbb{K}^{n+1}\),_ \[\xi\in\mathscr{F}(f)\iff d\xi_{n+1}-\sum_{i=1}^{n}(P_{i}\circ\tilde{f})\,d\xi_{i}\in\mathscr{O}_{n}\,d(f^{*}\mathscr{O}_{n+1}) \tag{8}\] _In particular, if \(f\) has corank \(1\) and it is given in prenormal form, Equation (8) is equivalent to_ \[\frac{\partial\xi_{n+1}}{\partial y}-\sum_{j=1}^{n-1}P_{j}\frac{\partial\xi_{j}}{\partial y}-\mu\frac{\partial\xi_{n}}{\partial y}\in\mathscr{O}_{n}\{p_{y}\},\] _where \(P_{1},\dots,P_{n-1}\) are given as in Equation (3)._ **Definition 3.14**.: _The **frontal codimension** of \(f\) is defined as the dimension of \(T^{1}_{\mathscr{F}_{e}}f=\mathscr{F}(f)/T\mathscr{A}_{e}f\). We say \(f\) is \(\mathscr{F}\)**-finite** or has **finite frontal codimension** if \(\dim T^{1}_{\mathscr{F}_{e}}f<\infty\)._ ### Frontal versality and stability In the previous subsection, we formulated the notions of integral deformation and Legendrian codimension purely in terms of frontal unfoldings. We now show that Ishikawa's results concerning the Legendrian stability and versality of pairs from [14] have a direct parallel in our theory of frontal deformations. **Definition 3.15**.: _A frontal map germ \(f\colon(\mathbb{K}^{n},S)\to(\mathbb{K}^{n+1},0)\) is **stable as a frontal** or \(\mathscr{F}\)**-stable** if every frontal unfolding of \(f\) is \(\mathscr{A}\)-trivial._ **Corollary 3.16**.: _A frontal map germ \(f\colon(\mathbb{K}^{n},S)\to(\mathbb{K}^{n+1},0)\) is stable as a frontal if and only if \(\tilde{f}\) is Legendrian stable._ Proof.: Assume \(f\) is stable as a frontal and let \(\tilde{f}_{u}\) be an integral deformation of \(\tilde{f}\): by Theorem 3.8, \(\tilde{f}_{u}\) defines a frontal unfolding \(F=(f_{u},u)\) of \(f\). Stability of \(f\) then implies that \(f_{u}\) is \(\mathscr{A}\)-equivalent to \(f\). By Corollary 3.5, this then implies that \(\tilde{f}_{u}\) is Legendrian equivalent to \(\tilde{f}\). 
Since the choice of \(\tilde{f}_{u}\) was arbitrary, we conclude \(\tilde{f}\) is Legendrian stable. The opposite direction is shown similarly. **Corollary 3.17**.: _A frontal map germ \(f\colon(\mathbb{K}^{n},S)\to(\mathbb{K}^{n+1},0)\) is \(\mathscr{F}\)-stable if and only if its \(\mathscr{F}_{e}\)-codimension is \(0\)._ Proof.: Corollary 3.16 states that \(f\) is \(\mathscr{F}\)-stable if and only if its Nash lift \(\tilde{f}\) is Legendrian stable. Since \(f\) has corank at most \(1\), so does \(\tilde{f}\), and a result by Ishikawa [14] states that \(\tilde{f}\) is Legendrian stable for the bundle projection \(\pi\) if and only if \(\theta_{I}(\tilde{f})=T\mathscr{L}_{e}\tilde{f}\). However, it follows from Theorem 3.12 that this is equivalent to \(\mathscr{F}(f)=T\mathscr{A}_{e}f\). **Example 3.18**.: _The following frontal hypersurfaces are stable as frontals:_ 1. _Cusp:_ \(X^{2}-Y^{3}=0\)__ 2. _Folded Whitney umbrella:_ \(Z^{2}-X^{2}Y^{3}=0\)__ Let \(f\colon(\mathbb{K}^{n},S)\to(\mathbb{K}^{n+1},0)\) be a frontal map germ with \(d\)-parameter unfolding \(F=(f_{u},u)\), not necessarily frontal. Recall that the **pullback** of \(F\) by \(h\colon(\mathbb{K}^{l},0)\to(\mathbb{K}^{d},0)\) is defined as the \(l\)-paramter unfolding \[(h^{*}F)(x,v)=(f_{h(v)}(x),v)\] **Definition 3.19**.: _Let \(f\colon(\mathbb{K}^{n},S)\to(\mathbb{K}^{n+1},0)\) be a frontal map germ. A frontal unfolding \(F\) of \(f\) is \(\mathscr{F}\)**-versal** or **versal as a frontal** if, given any other frontal unfolding \(G\) of \(f\), there exist unfoldings \(T\colon(\mathbb{K}^{n+1}\times\mathbb{K}^{d},0)\to(\mathbb{K}^{n+1}\times \mathbb{K}^{d},0)\) and \(\Sigma\colon(\mathbb{K}^{n}\times\mathbb{K}^{d},S\times\{0\})\to(\mathbb{K}^{n} \times\mathbb{K}^{d},S\times\{0\})\) of the identity such that_ \[G=T\circ h^{*}F\circ\Sigma\] _for some map germ \(h\)._ **Lemma 3.20**.: _Given a frontal map germ \(f\colon(\mathbb{K}^{n},S)\to(\mathbb{K}^{n+1},0)\), a frontal unfolding \(F=(f_{u},u)\) is \(\mathscr{F}\)-versal if and only if \(\tilde{f}_{u}\) is a Legendre versal deformation of \(\tilde{f}\)._ Proof.: Assume \(F\) is a versal frontal unfolding of \(f\) and let \((\widetilde{g_{u}})\) be an \(s\)-parameter integral deformation of \(\tilde{f}\). Theorem 3.8 implies that the \(s\)-parameter unfolding \(G=(u,g_{u})\) is frontal. By versality of \(F\), there exist unfoldings \(\mathcal{T}\colon(\mathbb{K}^{n+1}\times\mathbb{K}^{d},0)\to(\mathbb{K}^{n+1} \times\mathbb{K}^{d},0)\), \(\mathcal{S}\colon(\mathbb{K}^{n}\times\mathbb{K}^{d},S\times\{0\})\to( \mathbb{K}^{n}\times\mathbb{K}^{d},S\times\{0\})\) of the identity map germ and a smooth map germ \(h\colon(\mathbb{K}^{s},0)\to(\mathbb{K}^{d},0)\) such that \(G=\mathcal{T}\circ h^{*}F\circ\mathcal{S}^{-1}\). Let \(f\colon N\to Z\) be a representative of \(f\) which is a proper frontal map, and \(F\colon\mathcal{N}\to\mathcal{Z}\) be a representative of \(F\) such that \(\mathcal{N}\subseteq N\times\mathbb{K}^{d}\). A simple computation shows that \(\Sigma(F)=\Sigma(f)\times\{0\}\); therefore, since \(\Sigma(f)\) is nowhere dense in \(N\), \(\Sigma(F)\) is nowhere dense in \(\mathcal{N}\) and \(F\) is a proper frontal map. Theorem 3.8 then states that \(f_{u}\) lifts into integral deformation of \(\tilde{f}\). 
Now consider representatives \(h^{*}F=(u,f_{h(u)})\colon\mathcal{N}_{1}\to\mathcal{Z}_{1}\), \(\mathcal{S}=(u,\sigma_{u})\colon\mathcal{N}_{1}\to\mathcal{N}_{2}\), \(\mathcal{T}=(u,\tau_{u})\colon\mathcal{Z}_{1}\to\mathcal{Z}_{2}\) and \(G\colon\mathcal{N}_{2}\to\mathcal{Z}_{2}\) such that \(G=\mathcal{T}\circ h^{*}F\circ\mathcal{S}^{-1}\) as mappings. Since \((\tau_{u})\) is a smooth \(d\)-parameter family of diffeomorphisms, we can lift it onto a \(d\)-parameter family of smooth Legendrian diffeomorphisms \(T_{u}\colon PT^{*}\mathcal{Z}_{1}\to PT^{*}\mathcal{Z}_{2}\). Therefore, \[\widetilde{g_{u}}=T_{u}\circ\widetilde{f_{h(u)}}\circ\sigma_{u}^{-1}\] and \(\widetilde{f_{u}}\) is a versal Legendrian deformation of \(\tilde{f}\). Conversely, let \(\tilde{f}_{u}\) be a versal integral deformation of \(\tilde{f}\) and \(G=(g_{u},u)\) be a frontal \(s\)-parameter unfolding of \(f\). Theorem 3.8 implies that the \(s\)-parameter deformation \(\widetilde{g_{u}}\) is integral. By versality of \(\tilde{f}_{u}\), there exist smooth families of diffeomorphisms \(T_{u}\colon(PT^{*}\mathbb{K}^{n+1},w)\to(\mathbb{K}^{n+1},w)\) and \(\sigma_{u}\colon(\mathbb{K}^{n},S)\to(\mathbb{K}^{n},S)\) and a smooth map germ \(h\colon(\mathbb{K}^{s},0)\to(\mathbb{K}^{d},0)\) verifying the following: 1. \(T_{u}\) is a Legendrian diffeomorphism for all \(u\); 2. \(T_{0}\) and \(\sigma_{0}\) are the identity map germs; 3. \(\widetilde{g_{u}}=T_{u}\circ\tilde{f}_{h(u)}\circ\sigma_{u}\). By Item 1, we can find a smooth family of diffeomorphisms \(\tau_{u}\colon(\mathbb{K}^{n+1},0)\to(\mathbb{K}^{n+1},0)\) such that \(\pi\circ T_{u}=\tau_{u}\circ\pi\) and \(\tau_{0}\) is the identity map germ. It follows that \[\widetilde{g_{u}}=T_{u}\circ\tilde{f}_{h(u)}\circ\sigma_{u}\iff g_{u}=\tau_{u }\circ f_{h(u)}\circ\sigma_{u}.\] If we now consider the unfoldings \(\mathcal{T}=(\tau_{u},u)\) and \(\mathcal{S}=(\sigma_{u},u)\), we have \(G=\mathcal{T}\circ h^{*}F\circ\mathcal{S}\). We conclude that \(F\) is versal as a frontal. **Theorem 3.21** (Frontal versality theorem).: _Given a frontal map germ \(f\colon(\mathbb{K}^{n},S)\to(\mathbb{K}^{n+1},0)\),_ 1. \(f\) _admits a frontal versal unfolding if and only if it is_ \(\mathscr{F}\)_-finite;_ 2. _a frontal unfolding_ \(F(u,x)=(u,f_{u}(x))\) _of_ \(f\) _is versal as a frontal if and only if_ \[\mathscr{F}(f)=T\mathscr{A}_{e}f+\operatorname{Sp}_{\mathbb{K}}\{\dot{F}_{1}, \ldots,\dot{F}_{d}\}, \dot{F}_{j}=\left.\frac{\partial f_{u}}{\partial u_{j}}\right|_{u=0}.\] To show Theorem 3.21, we shall make use of **Theorem 3.22** (Ishikawa's Legendre versality theorem [14]).: _Given an integral \(\tilde{f}\colon(\mathbb{K}^{n},S)\to(PT^{*}\mathbb{K}^{n+1},w)\) of corank at most \(1\),_ 1. \(\tilde{f}\) _admits a versal Legendrian unfolding if and only if its Legendrian codimension is finite;_ 2. _a Legendrian unfolding_ \(\tilde{f}_{u}\) _of_ \(\tilde{f}\) _is versal if and only if_ (9) \[\theta_{I}(\tilde{f})=T\mathscr{L}_{e}\tilde{f}+\operatorname{Sp}_{\mathbb{K} }\left\{\left.\frac{\partial\tilde{f}_{u}}{\partial u_{1}}\right|_{u=0},\dots, \left.\frac{\partial\tilde{f}_{u}}{\partial u_{d}}\right|_{u=0}\right\}.\] Proof of Theorem 3.21.: By Lemma 3.20, a frontal unfolding \(F=(f_{u},u)\) of \(f\) is versal as a frontal if and only if the smooth family \(\widetilde{f_{u}}\) is a versal Legendre deformation of \(\tilde{f}\). In particular, it follows from Theorem 3.8 that \(\tilde{f}\) admits a versal Legendrian deformation if and only if \(f\) admits a versal frontal unfolding. 
This fact shall be used to prove both items. By Theorem 3.22, \(f\) admits a \(\mathscr{F}\)-versal unfolding if and only if \(\tilde{f}\) has finite Legendre codimension. However, it was proved in Theorem 3.12 that this is equivalent to \(f\) being \(\mathscr{F}\)-finite. This shows the first Item. We move onto the second Item. If \(F\) is \(\mathscr{F}\)-versal, \(\tilde{f}_{u}\) is a Legendre versal unfolding of \(\tilde{f}\) by Lemma 3.20 and Equation (9) holds. Computing the image via \(t\pi\) on both sides of Equation (9) and using Theorem 3.12, we get \[\mathscr{F}(f)=T\mathscr{A}_{e}\tilde{f}+t\pi\left[\operatorname{ Sp}_{\mathbb{K}}\left\{\left.\frac{\partial\tilde{f}_{u}}{\partial u_{1}} \right|_{u=0},\dots,\left.\frac{\partial\tilde{f}_{u}}{\partial u_{d}}\right|_ {u=0}\right\}\right]=\\ =T\mathscr{A}_{e}f+\operatorname{Sp}_{\mathbb{K}}\{\dot{F}_{1}, \dots,\dot{F}_{d}\}. \tag{10}\] Conversely, let us assume that (10) holds: using Theorem 3.12, we see that (9) holds as well. Therefore, \(F\) is versal as a frontal. This shows the second Item. ## 4. A geometric criterion for \(\mathscr{F}\)-finiteness The Mather-Gaffney criterion states that a smooth \(f\colon(\mathbb{C}^{n},S)\to(\mathbb{C}^{n+1},0)\) is \(\mathscr{A}\)-finite if and only if there is a finite representative \(f\colon N\to Z\) with isolated instability. For example, the generic singularities for \(n=2\) are transversal double points, with Whitney umbrellas and triple points in the accumulation (see e.g. [20] SS4.7). This implies that generic frontal singularities such as the folded Whitney umbrella (see Example 3.18) are not \(\mathscr{A}\)-finite, since it contains cuspidal edges near the origin. Nonetheless, cuspidal edges are generic within the subspace of frontal map germs \((\mathbb{C}^{2},S)\to(\mathbb{C}^{3},0)\) ([1]), which suggests the existence of a Mather-Gaffney-type criterion for frontal hypersurfaces. **Proposition 4.1**.: _A germ of analytic plane curve \(\gamma\colon(\mathbb{C},S)\to(\mathbb{C}^{2},0)\) is \(\mathscr{F}\)-finite if and only if it is \(\mathscr{A}\)-finite._ Proof.: If \(\gamma\) is \(\mathscr{A}\)-finite, it is clear that it is also \(\mathscr{F}\)-finite, since \[\mathscr{F}(\gamma)\subseteq\theta(\gamma)\implies\dim\frac{\mathscr{F}( \gamma)}{T\mathscr{A}_{e}\gamma}\leq\dim\frac{\theta(\gamma)}{T\mathscr{A}_{e }\gamma}<\infty\] Assume \(\gamma\) is \(\mathscr{F}\)-finite, and let \(\gamma\colon N\to Z\) be a representative of \(\gamma\). By the Curve Selection Lemma [2], \(\Sigma(\gamma)\) is an isolated subset in \(N\), so we can assume (by shrinking \(N\) if necessary) that \(\gamma(N\backslash S)\) is a smooth submanifold of \(Z\) and \(\gamma^{-1}(\{0\})=S\). By the Mather-Gaffney criterion, it then follows that \(\gamma\) is \(\mathscr{A}\)-finite, as stated. Given a frontal map \(f\colon N\to Z\) and \(z\in Z\), let \(f_{z}\colon(N,f^{-1}(z))\to(Z,z)\). We define \(\mathscr{F}(f)\) as the sheaf of \(\mathscr{O}_{Z}\)-modules given by the stalk \(\mathscr{F}(f)_{z}=\mathscr{F}(f_{z})\). We also set \(\theta_{N}\) (resp. \(\theta_{Z}\)) as the sheaf of vector fields on \(N\) (resp. \(Z\)) and the quotient sheaves \[\mathscr{T}^{1}_{\mathscr{R}_{e}}f =\frac{\mathscr{F}(f)}{tf(\theta_{N})}; \mathscr{T}^{1}_{\mathscr{F}_{e}}f =\frac{f_{*}\left(\mathscr{T}^{1}_{\mathscr{R}_{e}}f\right)}{\omega f(\theta_ {Z})};\] **Remark 4.2**.: _If \(f\) is finite, we can take coordinates in \(N\) and \(W\) such that \(\tilde{f}(x,y)=(x,f_{n}(x,y),\ldots,f_{2n+1}(x,y))\). 
By [13], we have the identity_ \[R_{\tilde{f}}:=\left\{\lambda\in\mathscr{O}_{N}\colon d\lambda\in\mathscr{O}_ {N}\,d\left(\tilde{f}^{*}\mathscr{O}_{W}\right)\right\}=\left(\frac{\partial} {\partial y}\right)^{-1}\mathscr{O}_{N}\left\{\frac{\partial\tilde{f}_{n}}{ \partial y},\ldots,\frac{\partial\tilde{f}_{2n+1}}{\partial y}\right\}\] _which is a \(\mathscr{O}_{N}\)-finite algebra by [12]. Since \(f\) is finite, \(R_{\tilde{f}}\) is \(\mathscr{O}_{Z}\)-finite._ **Proposition 4.3** ([14]).: _Let \(f\colon(\mathbb{C}^{n},S)\to(\mathbb{C}^{n+1},0)\) be a frontal map germ. If \(\tilde{f}\) is \(\mathscr{A}\)-equivalent to an analytic \(g\colon(\mathbb{C}^{n},S)\to(\mathbb{C}^{2n+1},0)\) (not necessarily integral) such that \(\operatorname{codim}_{\mathbb{C}}\Sigma(g)>1\),_ \[\frac{\theta_{I}(\tilde{f})}{T\mathscr{L}_{e}\tilde{f}}\cong_{\sigma_{Z}}\frac {R_{\tilde{f}}}{\mathscr{O}_{Z}\{1,\tilde{p}_{1},\ldots,\tilde{p}_{n}\}}\] _where \(\tilde{p}_{1},\ldots,\tilde{p}_{n}\) are the coordinates of \(\tilde{f}\) in the fibres of \(\pi\)._ **Remark 4.4**.: _Let \(f\) and \(\tilde{f}\) be given as in the statement above. If we assume that \(f\) has corank \(1\) and is given as in Equation (2), \(\Sigma(\tilde{f})=V(p_{y},\mu_{y})\)._ **Corollary 4.5**.: _Let \(f\colon(\mathbb{C}^{n},S)\to(\mathbb{C}^{n+1},0)\) be a frontal map germ. If \(f\) is finite and \(\operatorname{codim}V(p_{y},\lambda_{y})>1\), there is a representative \(f\colon N\to Z\) of \(f\) such that \(\mathscr{T}^{1}_{\mathscr{F}_{e}}f\) is a coherent sheaf._ Proof.: Using Proposition 4.3, we have \[\frac{R_{\tilde{f}_{w}}}{\mathscr{O}_{Z}\{1,\tilde{p}_{1},\ldots,\tilde{p}_{n }\}}\cong_{\sigma_{Z}}\frac{\theta_{I}(\tilde{f}_{w})}{T\mathscr{L}_{e}\tilde {f}_{w}}=(\mathscr{T}^{1}_{\mathscr{F}_{e}}f)_{\pi(w)}\] Since \(f\) is finite, \(R_{\tilde{f}_{w}}\) is \(\mathscr{O}_{Z,\pi(w)}\)-finite, as shown in Remark 4.2. Therefore, the stalk of \(\mathscr{T}^{1}_{\mathscr{F}_{e}}f\) at \(\pi(w)\) is finitely generated and \(\mathscr{T}^{1}_{\mathscr{F}_{e}}f\) is of finite type. Let \(V\subset Z\) be an open set and \(\beta\colon\mathscr{O}^{q}_{Z\mid V}\to\left(\mathscr{T}^{1}_{\mathscr{F}_{e} }f\right)_{\mid V}\) an epimorphism of \(\mathscr{O}_{Z}\)-modules. Since \(\mathscr{O}_{Z}\) is a Noetherian ring, every submodule of \(\mathscr{O}^{q}_{Z\mid V}\) is finitely generated. In particular, \(\ker\beta\) is finitely generated. We then conclude that \(\mathscr{T}^{1}_{\mathscr{F}_{e}}f\) is a coherent sheaf. **Theorem 4.6** (Mather-Gaffney criterion for frontal maps).: _Let \(f\colon(\mathbb{C}^{n},S)\to(\mathbb{C}^{n+1},0)\) be a frontal map germ. If \(f\) is finite and \(\operatorname{codim}_{\mathbb{C}}\Sigma(\tilde{f})>1\), \(f\) is \(\mathscr{F}\)-finite if and only if there exists a representative \(f\colon N^{\prime}\to Z^{\prime}\) of \(f\) such that the restriction \(f\colon N^{\prime}\backslash S\to Z^{\prime}\backslash\{0\}\) is locally \(\mathscr{F}\)-stable._ Proof.: The case for \(n=1\) follows easily from the Mather-Gaffney criterion for \(\mathscr{A}\)-equivalence and Proposition 4.1. Therefore, we assume \(n>1\). Suppose first that \(f\) has finite \(\mathscr{F}\)-codimension: by Corollary 4.5, \(\mathscr{T}^{1}_{\mathscr{F}_{e}}f\) is a coherent sheaf. 
In addition, \[\dim_{\mathbb{C}}(\mathscr{T}^{1}_{\mathscr{F}_{e}}f)_{0}=\dim_{\mathbb{C}}T^{1}_{\mathscr{F}_{e}}f=\operatorname{codim}_{\mathscr{F}_{e}}f<\infty\] By Rückert's Nullstellensatz, there exists an open neighbourhood \(Z^{\prime}\) of \(0\) in \(Z\) such that \(\operatorname{supp}\mathscr{T}^{1}_{\mathscr{F}_{e}}f\cap Z^{\prime}\subseteq\{0\}\). Therefore, every other stalk of \(\mathscr{T}^{1}_{\mathscr{F}_{e}}f\) is \(0\), and the restriction of \(f\) to \(N^{\prime}\backslash S\) is \(\mathscr{F}\)-stable, where \(N^{\prime}=f^{-1}(Z^{\prime})\). Conversely, suppose that there exists a representative \(f\colon N^{\prime}\to Z^{\prime}\) such that the restriction \(f\colon N^{\prime}\backslash S\to Z^{\prime}\backslash\{0\}\) is locally \(\mathscr{F}\)-stable. Given \(z\in Z^{\prime}\backslash\{0\}\), \((\mathscr{T}^{1}_{\mathscr{F}_{e}}f)_{z}=0\), so there exists an open neighbourhood \(U\) of \(0\) in \(Z\) such that \(\operatorname{supp}\mathscr{T}^{1}_{\mathscr{F}_{e}}f\cap U\subseteq\{0\}\). By Rückert's Nullstellensatz, it follows that the dimension of the stalk of \(\mathscr{T}^{1}_{\mathscr{F}_{e}}f\) at \(0\) is finite, but that dimension is equal to \(\operatorname{codim}_{\mathscr{F}_{e}}f\). We conclude that the germ of \(f\) at \(0\) is \(\mathscr{F}\)-finite. ## 5. Frontal reduction of a corank 1 map germ In [22], we presented the notion of frontalisation for a fold surface \(f\colon(\mathbb{C}^{2},S)\to(\mathbb{C}^{3},0)\), and proved that the frontalisation process preserves some of the topological invariants of \(f\). We also defined frontal versions of Mond's \(S_{k}\), \(B_{k}\), \(C_{k}\) and \(F_{4}\) singularities (see [18]), observing that none of them are wave fronts. We now seek to describe a more general procedure to generate frontals using arbitrary corank 1 map germs. **Example 5.1**.: _Let \(\gamma\colon(\mathbb{K},0)\to(\mathbb{K}^{2},0)\) be the parametrised curve \(\gamma(t)=(t^{3},t^{4})\): the unfolding \(\Gamma\colon(\mathbb{K}^{3}\times\mathbb{K},0)\to(\mathbb{K}^{3}\times\mathbb{K}^{2},0)\) given by_ \[\Gamma(u,t)=(u,t^{3}+u_{1}t,t^{4}+u_{2}t+u_{3}t^{2})=(u,p(u,t),q(u,t))\] _is an \(\mathscr{A}\)-miniversal deformation for \(\gamma\). By Corollary 2.7 and since \(\deg_{t}p_{t}<\deg_{t}q_{t}\), \(\Gamma\) is frontal if and only if \(p_{t}|q_{t}\). If \(\mu\in\mathscr{O}_{1}\) is such that \(q_{t}=\mu p_{t}\), a simple computation then shows that the identity_ \[4t^{3}+u_{2}+2u_{3}t=(3t^{2}+u_{1})(\mu_{1}t+\mu_{0})\] _holds if and only if \(u_{2}=\mu_{0}=0\), \(\mu_{1}=4/3\) and \(2u_{1}=3u_{3}\). Setting \(h(v)=(3v,0,2v)\), we obtain the unfolding_ \[h^{*}\Gamma(t,v)=(v,t^{3}+3vt,t^{4}+2vt^{2})\] _which is a swallowtail singularity._ In this section, we show that the frontal reduction of the versal unfolding of a plane curve is an \(\mathscr{F}\)-versal unfolding. The proof of this result gives a procedure to compute the frontal reduction of a given unfolding (versal or otherwise) via a system of polynomial equations, which may be solved using a computer algebra system such as Oscar or Singular. **Remark 5.2** (Puiseux parametrisation).: _Let \(\gamma\colon(\mathbb{C},0)\to(\mathbb{C}^{2},0)\) be an analytic plane curve with isolated singularities. There exists an \(f\in\mathbb{C}\{x,y\}\) such that \(f\circ\gamma=0\). By Puiseux's Theorem (see e.g. [28], Theorem 2.2.6, or [5], Theorem 5.1.1), if \(\alpha=\operatorname{ord}f\), \(f(t^{\alpha},t^{\alpha+1}g(t))=0\) for some \(g\in\mathbb{C}\{t\}\). 
Therefore, \(\gamma\) is \(\mathscr{A}\)-equivalent to the plane curve_ \[t\mapsto(t^{\alpha},t^{\alpha+1}g(t)).\] _In particular, \(\gamma\) is \(\mathscr{A}\)-finite (and thus finitely determined) by the Mather-Gaffney criterion, so we can further assume that \(g\in\mathbb{C}[t]\)._ _If \(\mathbb{K}=\mathbb{R}\), it suffices to replace \(\gamma\) with its complexification \(\gamma_{\mathbb{C}}\) in the argument above, as \(\gamma\) is analytic. Therefore, such a parametrisation also exists in the real case._ **Lemma 5.3**.: _Let \(\gamma\colon(\mathbb{K},0)\to(\mathbb{K}^{2},0)\) be the plane curve from Remark 5.2. There exists a smooth \(d\)-parameter deformation \((g_{w})\) such that_ \[\Gamma(u,v,w,t)=\left(u,v,w,t^{\alpha}+\sum_{j=1}^{\alpha-2}u_{j}t^{j},\sum_{j=1}^{\alpha-1}v_{j}t^{j}+t^{\alpha+1}g_{w}(t)\right)\] _is a miniversal unfolding of \(\gamma\)._ Proof.: Let \(G=\{g_{1},\ldots,g_{d}\}\subset\mathbb{K}[t]^{2}\) be a \(\mathbb{K}\)-basis for \(T^{1}_{\mathscr{A}_{e}}\gamma\): by Martinet's theorem, a miniversal unfolding for \(\gamma\) is given by the expression \[\Gamma(x,t)=(x,\gamma(t)+x_{1}g_{1}(t)+\cdots+x_{d}g_{d}(t)) \tag{11}\] A simple computation shows that \[T\mathscr{A}_{e}\gamma\subseteq\mathscr{O}_{1}\left\{\left(\begin{matrix}\alpha t^{\alpha-1}\\ (\alpha+1)t^{\alpha}g(t)+t^{\alpha+1}g^{\prime}(t)\end{matrix}\right)\right\}+\mathfrak{m}_{1}^{\alpha}\mathscr{O}_{1}^{2}. \tag{12}\] Using Equation (12), we may assume that \(g_{j}(t)=(t^{j},0)\) and \(g_{j+\alpha-2}(t)=(0,t^{j})\) for \(1\leq j\leq\alpha-2\). Setting \(g_{w}(t)=g(t)+w_{1}g_{2\alpha-1}(t)+\cdots+w_{d-2\alpha+1}g_{d}(t)\), Equation (11) becomes \[\Gamma(u,v,w,t)=\left(u,v,w,t^{\alpha}+\sum_{j=1}^{\alpha-2}u_{j}t^{j},t^{\alpha+1}g_{w}(t)+\sum_{j=1}^{\alpha-1}v_{j}t^{j}\right),\] as claimed. **Remark 5.4**.: _Let \(h\colon(\mathbb{K}^{r},0)\to(\mathbb{K}^{d},0)\) be a smooth map-germ and \(\Gamma\) be the unfolding from Lemma 5.3. The pullback \(h^{*}\Gamma\) is given by_ \[(h^{*}\Gamma)(x,t)=\left(x,t^{\alpha}+\sum_{j=1}^{\alpha-2}u_{j}(x)t^{j},\sum_{j=1}^{\alpha-1}v_{j}(x)t^{j}+t^{\alpha+1}g_{w(x)}(t)\right),\] _where \(u_{j}(x)\equiv(u_{j}\circ h)(x)\), \(v_{j}(x)\equiv(v_{j}\circ h)(x)\) and \(w(x)\equiv(w\circ h)(x)\). As we saw in the proof of Lemma 5.3,_ \[g_{w}(t)=g(t)+w_{1}g_{2\alpha-1}(t)+\cdots+w_{d-2\alpha+1}g_{d}(t),\] _where \(g\) can be assumed to be a polynomial function (due to Remark 5.2). Therefore, the component functions of \(h^{*}\Gamma\) are elements of \(\mathscr{O}_{r}[t]\), the algebra of polynomials in \(t\) with coefficients in \(\mathscr{O}_{r}\)._ **Theorem 5.5**.: _If \(\gamma\) has a miniversal \(d\)-parameter unfolding \(\Gamma\), there is an immersion \(h\colon(\mathbb{K}^{l},0)\to(\mathbb{K}^{d},0)\) with the following properties:_ 1. \(h^{*}\Gamma\) _is a frontal unfolding of_ \(\gamma\)_;_ 2. _if_ \((h^{\prime})^{*}\Gamma\) _is frontal for any other_ \(h^{\prime}\colon(\mathbb{K}^{l^{\prime}},0)\to(\mathbb{K}^{d},0)\)_, then_ \((h^{\prime})^{*}\Gamma\) _is equivalent as an unfolding to a pullback of_ \(h^{*}\Gamma\)_._ _Therefore, \(h^{*}\Gamma\) is a frontal miniversal unfolding._ We shall denote \(h^{*}\Gamma\) as \(\Gamma_{\mathscr{F}}\) and call it a _frontal reduction_ of \(\Gamma\). Proof.: Let \(\Gamma\) be the unfolding from Lemma 5.3 and \(d=\operatorname{codim}_{\mathscr{A}_{e}}\gamma\). 
We first want to show that there is an immersion \(h\colon(\mathbb{K}^{\ell},0)\to(\mathbb{K}^{d},0)\) making \(h^{*}\Gamma\) a frontal map germ; to do so, we shall derive a system of equations that determines whether a given pullback yields a frontal unfolding. Let \((h^{*}\Gamma)(x,t)=(x,P(x,t),Q(x,t))\). By Remark 5.4, \(Q\in\mathscr{O}_{r}[t]\), so we can write \(Q(x,t)=q_{1}(x)t+\cdots+q_{\beta}(x)t^{\beta}\). Since \(h^{*}\Gamma\) is a corank 1 map germ, Corollary 2.7 states that it is frontal if and only if either \(P_{t}|Q_{t}\) or \(Q_{t}|P_{t}\); in particular, we can assume that \(\deg_{t}P_{t}\leq\deg_{t}Q_{t}\), allowing us to impose the condition \(P_{t}|Q_{t}\) on \(h^{*}\Gamma\). If \(Q_{t}=\mu P_{t}\) for some \(\mu\in\mathscr{O}_{r+1}\), there will exist \(\mu_{0},\ldots,\mu_{\beta-\alpha}\) such that \(\mu(x,t)=\mu_{0}(x)+\cdots+\mu_{\beta-\alpha}(x)t^{\beta-\alpha}\). Therefore, the identity \(Q_{t}=\mu P_{t}\) is equivalent to \[kq_{k}(x)=\sum_{i+j=k}iu_{i}(x)\mu_{j}(x) \tag{13}\] for \(k=1,2,\ldots,\beta\). For \(k\geq\alpha\), we may solve for \(\mu_{k-\alpha}\) to get the expression \[\mu_{k-\alpha}(x)=\frac{k}{\alpha}q_{k}(x)-\frac{1}{\alpha}\sum_{\substack{i+j=k\\ i\neq\alpha}}iu_{i}(x)\mu_{j}(x);\qquad u_{\alpha}(x)\equiv 1.\] The remaining terms define an immersion germ \(h\colon(\mathbb{K}^{d-\alpha+1},0)\to(\mathbb{K}^{d},0)\) given by \(h(u,w)=(u,v(u,w),w)\), which verifies Equation (13) by construction. This proves Item 1. Let \(\Lambda\) be a frontal unfolding of \(\gamma\): versality of \(\Gamma\) implies that \(\Lambda\) is equivalent to \((h^{\prime})^{*}\Gamma\) for some \(h^{\prime}\colon(\mathbb{K}^{r},0)\to(\mathbb{K}^{d},0)\). Let \(h\colon V\to U\) be a one-to-one representative of \(h\), \(\pi\colon U\to V\) be the projection \[\pi(x_{1},\ldots,x_{d})=(x_{1},\ldots,x_{\alpha-2},x_{2\alpha-2},\ldots,x_{d})\] and \(h^{\prime}\colon V^{\prime}\to U^{\prime}\) be a representative of \(h^{\prime}\). Since \((h^{\prime})^{*}\Gamma\) is frontal, \(h^{\prime}\) verifies Equation (13) and thus \(h^{\prime}(V^{\prime})\subseteq h(V)\) by construction. Given \(v^{\prime}\in V^{\prime}\), there exists a unique \(v\in V\) such that \[h^{\prime}(v^{\prime})=h(v)\implies(\pi\circ h^{\prime})(v^{\prime})=v\implies(h\circ\pi\circ h^{\prime})(v^{\prime})=h(v)=h^{\prime}(v^{\prime}),\] and thus \((h^{\prime})^{*}\Gamma=(h\circ\pi\circ h^{\prime})^{*}\Gamma=(\pi\circ h^{\prime})^{*}(h^{*}\Gamma)\). **Example 5.6**.: _Consider Arnol'd's \(E_{8}\) singularity, \(\gamma(t)=(t^{3},t^{5})\). A versal unfolding of this curve is given by_ \[(u,v,w,t^{3}+ut,t^{5}+wt^{4}+v_{2}t^{2}+v_{1}t)=(u,v,w,p(u,t),q(v,w,t))\] _The frontal reduction of this unfolding may now be computed using Equation (13), which can be written in matrix form as_ \[\begin{pmatrix}5\\ 4w\\ 0\\ 2v_{2}\\ v_{1}\end{pmatrix}=\begin{pmatrix}3&0&0\\ 0&3&0\\ u&0&3\\ 0&u&0\\ 0&0&u\end{pmatrix}\begin{pmatrix}\mu_{2}\\ \mu_{1}\\ \mu_{0}\end{pmatrix}\implies\begin{pmatrix}\mu_{2}\\ \mu_{1}\\ \mu_{0}\end{pmatrix}=\frac{1}{9}\begin{pmatrix}15\\ 12w\\ -5u\end{pmatrix}\] _Since this system has five equations and only three unknowns, we can now solve for \(v\), yielding \(v_{1}=-\frac{5}{9}u^{2}\) and \(v_{2}=\frac{2}{3}uw\)._ **Remark 5.7**.: _While the method of frontal reductions successfully turns \(\mathscr{A}\)-versal unfoldings into \(\mathscr{F}\)-versal unfoldings, the same does not hold for stable unfoldings. 
For example, given the plane curve \(\gamma(t)=(t^{2},t^{2k+1})\), \(k>1\), a stable unfolding of \(\gamma\) is given by \(f(u,t)=(u,t^{2},t^{2k+1}+ut)\). However, the only pullback that can turn \(f\) into a frontal map germ is \(u(s)=0\), giving us \(\gamma\), which is not stable by hypothesis._ _A more general method to compute stable unfoldings will be given in SS6._ **Corollary 5.8**.: _Given \(\gamma\colon(\mathbb{K},0)\to(\mathbb{K}^{2},0)\),_ \[\operatorname{codim}_{\mathscr{F}_{e}}\gamma=\operatorname{codim}_{\mathscr{ A}_{e}}\gamma-\operatorname{mult}(\gamma)+1\] _Consequently, if \(\gamma(\mathbb{K},0)\) is the zero locus of some analytic \(g\in\mathscr{O}_{2}\),_ \[\operatorname{codim}_{\mathscr{F}_{e}}\gamma=\tau(g)-\operatorname{ord}(g)- \frac{1}{2}\mu(g)+1.\] Proof.: In the proof of Theorem 5.5, we see that \(l=d-\alpha+1\), where \(d=\operatorname{codim}_{\mathscr{A}_{e}}\gamma\) and \(\alpha=\operatorname{mult}(\gamma)\). Since \(h^{*}\Gamma\) is a miniversal \(l\)-parameter unfolding, \(\operatorname{codim}_{\mathscr{F}_{e}}\gamma=l\), giving the first identity. Now assume \(\mathbb{K}=\mathbb{C}\): Milnor's formula [17] states that the delta invariant \(\delta(g)\) and the Milnor number \(\mu(g)\) of \(g\) are related via the identity \(2\delta(g)=\mu(g)\), since \(\gamma\) is a mono-germ. On the other hand, a result in [8] states that \(\operatorname{codim}_{\mathscr{A}_{e}}\gamma=\tau(g)-\delta(g)=\tau(g)-1/2\mu (g)\), \(\tau\) being the Tjurina number, hence yielding the expression \[\operatorname{codim}_{\mathscr{F}_{e}}\gamma=\tau(g)-\frac{1}{2}\mu(g)- \operatorname{mult}(\gamma)+1\] In particular, the order of \(g\) is equal to \(\operatorname{mult}(\gamma)\) (see [5] Corollary 5.1.6). For \(\mathbb{K}=\mathbb{R}\), simply note that \(\mu(g)=\mu(g_{\mathbb{C}})\), \(\operatorname{ord}(g)=\operatorname{ord}(g_{\mathbb{C}})\) and \(\tau(g)=\tau(g_{\mathbb{C}})\), where \(g_{\mathbb{C}}\) is the complexification of \(g\). **Example 5.9**.: _Let \(\gamma\colon(\mathbb{C},0)\to(\mathbb{C}^{2},0)\) be the \(A_{2k}\) singularity, with normalisation \(\gamma(t)=(t^{2},t^{2k+1})\). Direct computations show that_ \[\frac{\theta(\gamma)}{T\mathscr{A}_{e}\gamma}\cong\operatorname{Sp}\{(0,t^{2 \ell+1}):0\leq\ell<k\};\qquad\frac{\mathscr{F}(\gamma)}{T\mathscr{A}_{e}\gamma }\cong\operatorname{Sp}\{(0,t^{2\ell+1}):1\leq\ell<k\},\] _from which follows that its \(\mathscr{A}_{e}\)-codimension is \(k\) and its \(\mathscr{F}_{e}\)-codimension is \(k-1\). Therefore, we have \(\operatorname{codim}_{\mathscr{F}_{e}}\gamma=k-1=k-2+1=\operatorname{codim} _{\mathscr{A}_{e}}\gamma-\operatorname{mult}(\gamma)+1\), as expected._ _The image of \(\gamma\) is given as the zero locus of the function \(g(x,y)=y^{2}-x^{2k+1}\). Using the second expression for the frontal codimension, we have_ \[\tau(g)-\frac{1}{2}\mu(g)=\operatorname{codim}_{\mathscr{F}_{e}}\gamma+ \operatorname{ord}(g)-1=k-1+2-1=k\] _as expected, since both the Tjurina and Milnor numbers of \(g\) are \(2k\)._ In [22] SS5, we introduced the notion of frontal Milnor number \(\mu_{\mathscr{F}}\) for a frontal multi-germ \(f\colon(\mathbb{C}^{n},S)\to(\mathbb{C}^{n+1},0)\). This analytic invariant was defined in a similar fashion to Mond's image Milnor number [19], only changing smooth stabilisations for frontal ones. We then conjectured that \(\mu_{\mathscr{F}}\) verified an adapted version of Mond's conjecture, which we called _Mond's frontal conjecture_. 
Applying [22], Proposition 5.10 to Corollary 5.8, we can now prove Mond's frontal conjecture in dimension \(1\). **Corollary 5.10**.: _Given a plane curve \(\gamma\colon(\mathbb{K},S)\to(\mathbb{K}^{2},0)\), \(\mu_{\mathscr{F}}(\gamma)\geq\operatorname{codim}_{\mathscr{F}}(\gamma)\), with equality if \(\gamma\) is quasi-homogeneous._ Proof.: Let \(\gamma\) be a non-constant analytic plane curve. By the Curve Selection Lemma [2], \(\gamma\) has an isolated singularity at the origin, so it is \(\mathscr{A}\)-finite and \[\mu_{I}(\gamma)\geq\operatorname{codim}_{\mathscr{A}_{e}}(\gamma),\] with equality if \(\gamma\) is quasi-homogeneous (see [19]). By Corollary 5.8, \(\gamma\) is \(\mathscr{F}\)-finite and \(\operatorname{codim}_{\mathscr{A}_{e}}(\gamma)=\operatorname{codim}_{ \mathscr{F}_{e}}(\gamma)+\operatorname{mult}(\gamma)-1\). Using [22] Proposition 5.10 and Conservation of Multiplicity (see e.g. [20], Corollary E.4), \(\mu_{\mathscr{F}}(\gamma)=\mu_{I}(\gamma)-\operatorname{mult}(\gamma)+1\), as stated above. Therefore, \[\mu_{\mathscr{F}}(\gamma)+\operatorname{mult}(\gamma)-1=\mu_{I}(\gamma)\geq \operatorname{codim}_{\mathscr{A}}(\gamma)=\operatorname{codim}_{\mathscr{F}}( \gamma)+\operatorname{mult}(\gamma)-1,\] with equality if \(\gamma\) is quasi-homogeneous. Now let \(f\colon(\mathbb{K}^{n},S)\to(\mathbb{K}^{n+1},0)\) be a corank \(1\) frontal map germ with isolated frontal instability. We can choose coordinates in the source and target such that \[f(x,y)=(x,p(x,y),q(x,y)); q_{y}=\mu p_{y}; (x,y)\in\mathbb{K}^{n-1}\times\mathbb{K},\] for some \(p,q,\mu\in\mathscr{O}_{n}\). We then set \(S^{\prime}\) as the projection on the \(y\) coordinate of \(S\) and consider the _generic slice_\(\gamma\colon(\mathbb{K},S^{\prime})\to(\mathbb{K}^{2},0)\) of \(f\), given by \(\gamma(t)=(p(0,t),q(0,t))\). Since \(f\) has isolated frontal instabilities, \(\gamma\) is \(\mathscr{A}\)-finite (see Proposition 4.1 above) and we may consider a versal unfolding \(\Gamma\) of \(\gamma\) with frontal reduction \[\Gamma_{\mathscr{F}}\colon(\mathbb{K}^{d}\times\mathbb{K},S^{\prime}\times\{ 0\})\to(\mathbb{K}^{d}\times\mathbb{K}^{2},0).\] It is not true in general that the sum of two frontal mappings is frontal (e.g. \((x,y)\mapsto(x,y^{3},y^{4})\) and \((x,y)\mapsto(x,xy,0)\)), but we can still construct a _frontal sum_ operator that yields a frontal mapping given two frontal mappings with corank at most \(1\). Let \(p^{\prime},q^{\prime},\mu^{\prime}\in\mathscr{O}_{d+1}\) such that \[\Gamma_{\mathscr{F}}(u,y)=(u,p^{\prime}(u,y),q^{\prime}(u,y)); q^{\prime}_{y}=\mu p^{\prime}_{y}:\] we define the _frontal sum_\(F\colon(\mathbb{K}^{d}\times\mathbb{K}^{n},\{0\}\times S)\to(\mathbb{K}^{d} \times\mathbb{K}^{n+1},0)\) of \(f\) and \(\Gamma_{\mathscr{F}}\) as \(F(u,x,y)=(u,x,P(u,x,y),Q(u,x,y))\), where \[P(u,x,y) =p(x,y)+p^{\prime}(u,y)-p(0,y);\] \[Q(u,x,y) =\int_{0}^{y}(\mu(x,s)+\mu^{\prime}(u,s)-\mu(0,s))P_{s}(u,x,s)\,ds. \tag{14}\] This map germ constitutes an unfolding of both \(f\) and \(\Gamma_{\mathscr{F}}\) by construction. Versality of \(\Gamma_{\mathscr{F}}\) then implies that \(F\) is also versal, and thus stable. Therefore, frontal sums allow us to construct stable frontal unfoldings that are not necessarily versal. 
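By construction, the map defined by Equation (14) satisfies \(Q_{y}=(\mu(x,y)+\mu^{\prime}(u,y)-\mu(0,y))\,P_{y}\), so its frontality can be read off from Corollary 2.7. The following SymPy sketch only illustrates this check on a concrete instance; the germs \(p\), \(\mu\), \(p^{\prime}\), \(\mu^{\prime}\) chosen below are our own illustrative assumptions and are not taken from the text.

```python
# Minimal SymPy sketch (illustrative only): the germs p, mu, p1, mu1 are our own
# choices of corank-1 frontals in prenormal form, not examples from the text.
import sympy as sp

x, u, y, s = sp.symbols('x u y s')

# f(x,y) = (x, p, q) with q_y = mu * p_y  (q itself is not needed below)
p  = y**2
mu = x + y

# Gamma_F(u,y) = (u, p1, q1) with q1_y = mu1 * p1_y
p1  = y**2
mu1 = u**2 + y

# Frontal sum, Equation (14):
#   P(u,x,y) = p(x,y) + p1(u,y) - p(0,y)
#   Q(u,x,y) = int_0^y (mu(x,s) + mu1(u,s) - mu(0,s)) * P_s(u,x,s) ds
P  = p + p1 - p.subs(x, 0)
M  = mu + mu1 - mu.subs(x, 0)          # combined "mu" of the frontal sum
Ps = sp.diff(P, y).subs(y, s)
Q  = sp.integrate(M.subs(y, s) * Ps, (s, 0, y))

# Frontality (Corollary 2.7): P_y must divide Q_y; indeed Q_y = M * P_y.
print(sp.simplify(sp.diff(Q, y) - M * sp.diff(P, y)))   # expected output: 0
```

With these choices the printed difference simplifies to zero, that is, \(P_{y}\) divides \(Q_{y}\), as expected of a frontal sum.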
**Example 5.11** (Frontalised fold surfaces).: _Let \(f\colon(\mathbb{K}^{2},0)\to(\mathbb{K}^{3},0)\) be a frontal fold surface given in the form_ \[f(x,y)=(x,y^{2},a_{1}(x)y^{3}+a_{2}(x)y^{5}+\cdots+a_{n}(x)y^{2n+1}+y^{2n+3});\] _wherein we assume \(a_{1},\ldots,a_{n}\in\mathbb{K}[x]\). The restriction \(t\mapsto f(0,t)\) parametrises a plane curve, so \(f\) can be seen as a smooth \(1\)-parameter unfolding of the curve_ \[\gamma(t)=(t^{2},t^{2n+3}+a_{n}(0)t^{2n+1}+\cdots+a_{1}(0)t^{3}).\] _A frontal miniversal unfolding for \(\gamma\) is given by_ \[\Gamma(u,t)=(u,t^{2},t^{2n+3}+u_{n}t^{2n+1}+\cdots+u_{1}t^{3}),\] _and we can recover \(f\) by setting \(u_{j}(x)=a_{j}(x)\). Taking \((u,x)\mapsto(0,u_{1}+a_{1}(x),\ldots,u_{n}+a_{n}(x))\) gives the stable unfolding_ \[F(u,x,t)=(u,x,t^{2},t^{2n+3}+[u_{n}+a_{n}(x)]t^{2n+1}+\cdots+[u_{1}+a_{1}(x)]t^{3}).\] **Remark 5.12**.: _The frontal sum defined in (14) can be used to show that \(\mathscr{F}(f)\) is linear when \(f\) has corank at most \(1\): first, since \(f\) is a corank \(1\) frontal, we take coordinates in the source and target such that_ \[f(x,y)=(x,p(x,y),q(x,y));\qquad q_{y}=\mu p_{y},\] _and consider the generic slice \(\gamma(t)=(p(0,t),q(0,t))\)._ _Let \(\xi,\eta\in\mathscr{F}(f)\) with respective integral \(\mathscr{F}\)-curves \(F=(f_{u},u)\), \(G=(g_{u},u)\). Since \(F\) and \(G\) are unfoldings of \(f\), they may also be regarded as unfoldings of \(\gamma\). We then consider the frontal sum \(H=(u,v,h_{(u,v)})\) of \(F\) and \(G\), and set \(\hat{H}=(w,\hat{h}_{w})=(w,h_{(w,w)})\). Note that the image of \(\hat{H}\) is simply the intersection of the image of \(H\) with the hypersurface of equation \(u=v\), so \(\hat{H}\) is frontal._ _Using the chain rule and Leibniz's integral rule, we see that_ \[\left.\begin{array}{l}P_{w}=P_{u}+P_{v}\\ Q_{w}=Q_{u}+Q_{v}\end{array}\right\}\implies\left.\frac{\partial\hat{h}_{w}}{\partial w}\right|_{w=0}=\xi+\eta\] _and thus \(\xi+\eta\in\mathscr{F}(f)\)._ ## 6. Stability of frontal map germs In §5, we described a method to generate \(\mathscr{F}\)-versal unfoldings of analytic plane curves using pullbacks. Nonetheless, as pointed out in Remark 5.7, the pullback of a stable unfolding is generally not stable as a frontal. In this section, we describe a technique to generate stable frontal unfoldings, not too dissimilar to the method Mather used to generate all stable map germs. We also give a classification of all \(\mathscr{F}\)-stable proper frontal map germs \((\mathbb{C}^{3},S)\to(\mathbb{C}^{4},0)\) of corank \(1\) in §6.2, aided by Hefez and Hernandes' Normal Form Theorem for plane curves [9, 10]. Let \(f\colon(\mathbb{C}^{n},S)\to(\mathbb{C}^{n+1},0)\) be a frontal map germ and \(\xi\in\mathscr{F}(f)\). By definition of \(\mathscr{F}(f)\), \(\xi\) is given by a frontal \(1\)-parameter unfolding \(F=(f_{t},t)\) of \(f\); that is, \(F\) verifies that \[d(Y\circ f_{t})=\sum_{i=1}^{n}p_{i}\,d(X_{i}\circ f_{t})+p_{0}\,dt\] for some \(p_{0},\ldots,p_{n}\in\mathscr{O}_{n+1}\). If we now consider the vector field germ \(\lambda\xi\) with \(\lambda\in\mathscr{O}_{n}\), \(\lambda\xi\) is given by the \(1\)-parameter unfolding \((\lambda f_{t},t)\). This unfolding is frontal if and only if \[d(Y\circ\lambda f_{t})=\sum_{i=1}^{n}q_{i}\,d(X_{i}\circ\lambda f_{t})+q_{0}\,dt \tag{15}\] for some \(q_{0},\ldots,q_{n}\in\mathscr{O}_{n+1}\). 
Expanding on both sides of the equality and rearranging, we see that Equation (15) is equivalent to \[\lambda\sum_{i=1}^{n}(q_{i}-p_{i})d(X_{i}\circ f_{t})+(q_{0}-\lambda p_{0})\, dt=[(Y\circ f_{t})-\sum_{i=1}^{n}q_{i}(X_{i}\circ f_{t})]\,d\lambda.\] Therefore, the ring \(R_{f}=\{\lambda\in\mathscr{O}_{n}:d\lambda\in\mathscr{O}_{n}d(f^{*}\mathscr{O }_{n+1})\}\) acts on \(\mathscr{F}(f)\) via the usual action. In particular, \(f^{*}\mathscr{O}_{n+1}\subseteq R_{f}\), so \(\mathscr{F}(f)\) is an \(\mathscr{O}_{n+1}\)-module via the action \(h\xi=(h\circ f)\xi\). If we assume that \(f\) has integral corank \(1\) (so that \(\mathscr{F}(f)\) is a \(\mathbb{K}\)-vector space), we can define the \(\mathbb{K}\)-vector spaces \[T\mathscr{K}_{\mathscr{F}e}f=tf(\theta_{n})+\mathfrak{m}_{n+1}\mathscr{F}(f); T^{1}_{\mathscr{K}_{\mathscr{F}e}f}=\frac{\mathscr{F}(f)}{T\mathscr{K}_{ \mathscr{F}e}f}.\] We also define the **frontal \(\mathscr{K}_{e}\)-codimension**\(\operatorname{codim}_{\mathscr{K}_{\mathscr{F}e}}f\) of \(f\) as the dimension of \(T^{1}_{\mathscr{K}_{\mathscr{F}e}}f\) in \(\mathbb{K}\), and will say that \(f\) is \(\mathscr{K}_{\mathscr{F}e}\)**-finite** if \(\operatorname{codim}_{\mathscr{K}_{\mathscr{F}e}}f<\infty\). **Remark 6.1**.: _The space \(\mathscr{F}(f)\) is not generally a \(\mathscr{O}_{n}\)-module via the usual action: consider the plane curve \(\gamma\colon(\mathbb{K},0)\to(\mathbb{K}^{2},0)\) given by \(\gamma(t)=(t^{2},t^{3})\). Using Remark 3.13, we see that \((0,1)\in\mathscr{F}(\gamma)\), but \((0,t)=t(0,1)\not\in\mathscr{F}(\gamma)\)._ Recall that the Kodaira-Spencer map is defined as the mapping \(\overline{\omega}f\colon T_{0}\mathbb{K}^{n+1}\to T^{1}_{\mathscr{K}_{e}}f\) sending \(v\in T_{0}\mathbb{K}^{n+1}\) onto \(\omega f(\eta)\), where \(\eta\in\theta_{n+1}\) is such that \(\eta_{0}=v\). Since \(f\) is frontal, the image of \(\omega f\) is contained within \(\mathscr{F}(f)\), and the target space becomes \(T^{1}_{\mathscr{K}_{\mathscr{F}_{e}}}f\). Similarly, the kernel of this \(\overline{\omega}f\) becomes \[\tau(f):=(\overline{\omega}f)^{-1}[T\mathscr{K}_{\mathscr{F}_{e}}f]|_{0},\] since no element in \(T\mathscr{K}_{e}f\backslash\mathscr{F}(f)\) has a preimage. **Lemma 6.2**.: _The map germ \(f\) is \(\mathscr{F}\)-stable if and only if \(\overline{\omega}f\) is surjective._ Proof.: Assume \(f\) is \(\mathscr{F}\)-stable and let \(\zeta\in\mathscr{F}(f)\): there exist \(\xi\in\theta_{n}\) and \(\eta\in\theta_{n+1}\) such that \(\zeta=tf(\xi)+\omega f(\eta)\). Setting \(v=\eta_{0}\), it follows that \(\overline{\omega}f(v)\equiv\zeta\mod T\mathscr{K}_{\mathscr{F}e}f\), and surjectivity of \(\overline{\omega}f\) follows. Conversely, assume \(\overline{\omega}f\) is surjective: we have the identity \[T\mathscr{A}_{e}f+\mathfrak{m}_{n+1}\mathscr{F}(f)=\mathscr{F}(f) \tag{16}\] Set \(V^{\prime}=\mathscr{F}(f)/tf(\theta_{n,S})\) and denote by \(p\colon\mathscr{F}(f)\to V^{\prime}\) the quotient projection. We may then write Equation (16) as \[(\pi\circ\omega f)(\theta_{n+1})+\mathfrak{m}_{n+1}V^{\prime}=V^{\prime}\implies \frac{V^{\prime}}{\mathfrak{m}_{n+1}V^{\prime}}\lesssim(\pi\circ\omega f)( \theta_{n+1}).\] Since \((p\circ\omega f)(\theta_{n+1})\) is finitely generated over \(\mathscr{O}_{n+1}\), so is \(V^{\prime}/\mathfrak{m}_{n+1}V^{\prime}\). This implies that \(V^{\prime}/\mathfrak{m}_{n+1}V^{\prime}\) is finitely generated over \(\mathbb{K}\), so \(V^{\prime}\) is finitely generated over \(\mathscr{O}_{n+1}\) by Weierstrass' Preparation Theorem. 
Since \(\mathscr{O}_{n+1}\) is a local ring, Nakayama's lemma implies that \(V^{\prime}=(\pi\circ\omega f)(\theta_{n+1})\), which is equivalent to \(\mathscr{F}(f)=T\mathscr{A}_{e}f\), and frontal stability follows. **Theorem 6.3**.: _A frontal \(f\colon(\mathbb{K}^{n},S)\to(\mathbb{K}^{n+1},0)\) with branches \(f_{1},\ldots,f_{r}\) is \(\mathscr{F}\)-stable if and only if \(f_{1},\ldots,f_{r}\) are \(\mathscr{F}\)-stable and the vector subspaces \(\tau(f_{1}),\ldots,\tau(f_{r})\subseteq T_{0}\mathbb{K}^{n+1}\) meet in general position._ Proof.: Let \(g\) be either \(f\) or one of its branches. By Lemma 6.2, \(g\) is \(\mathscr{F}\)-stable if and only if \(\overline{\omega}g\) is surjective; that is, \[\frac{\mathscr{F}(g)}{T\mathscr{K}_{\mathscr{F}_{e}}g}\cong\frac{T_{0}\mathbb{K}^{n+1}}{\ker\overline{\omega}g}=\frac{T_{0}\mathbb{K}^{n+1}}{\tau(g)} \tag{17}\] Let \(S=\{s_{1},\ldots,s_{r}\}\): the ring isomorphism \(\mathscr{O}_{n,S}\to\mathscr{O}_{n,s_{1}}\oplus\cdots\oplus\mathscr{O}_{n,s_{r}}\) induces a module isomorphism \(\mathscr{F}(f)\to\mathscr{F}(f_{1})\oplus\cdots\oplus\mathscr{F}(f_{r})\), which in turn induces an isomorphism \[\frac{\mathscr{F}(f)}{T\mathscr{K}_{\mathscr{F}_{e}}f}\cong\frac{\mathscr{F}(f_{1})}{T\mathscr{K}_{\mathscr{F}_{e}}f_{1}}\oplus\cdots\oplus\frac{\mathscr{F}(f_{r})}{T\mathscr{K}_{\mathscr{F}_{e}}f_{r}}. \tag{18}\] On the other hand, the spaces \(\tau(f_{i})\) meet in general position if and only if the canonical map \[T_{0}\mathbb{K}^{n+1}\xrightarrow{}\frac{T_{0}\mathbb{K}^{n+1}}{\tau(f_{1})}\oplus\cdots\oplus\frac{T_{0}\mathbb{K}^{n+1}}{\tau(f_{r})} \tag{19}\] is surjective. The statement then follows from (17)-(19). We now use Ephraim's theorem to give a geometric interpretation to \(\tau(f_{i})\), \(i=1,\ldots,r\). Recall that the isosingular locus \(\operatorname{Iso}(D,x_{0})\) of a complex space \(D\subseteq W\) at \(x_{0}\) is defined as the germ at \(x_{0}\) of the set of points \(x\in D\) such that \((D,x)\) is diffeomorphic to \((D,x_{0})\). Ephraim [6] showed that \(\operatorname{Iso}(D,x_{0})\) is a germ of a smooth submanifold of \((W,x_{0})\) and its tangent space at \(x_{0}\) is given by the evaluation at \(x_{0}\) of the elements in the space \[\operatorname{Der}(-\log(D,x_{0}))=\{\xi\in\theta_{W}:\xi(I)\subseteq I\}\] where \(I\subset\mathscr{O}_{W}\) is the ideal of function germs vanishing on \((D,x_{0})\). We shall now use this result to give a geometric interpretation to the space \(\tau(f)\). **Proposition 6.4**.: _Let \(f\colon(\mathbb{C}^{n},S)\to(\mathbb{C}^{n+1},0)\) be a finite, frontal map germ with integral corank \(1\). If \(f\) is \(\mathscr{F}\)-stable and \(\operatorname{codim}\Sigma(\tilde{f})>1\), \(\tau(f)\) is the tangent space at \(0\) of \(\operatorname{Iso}(f(\mathbb{C}^{n},S))\)._ To prove this result, we shall make use of the following **Lemma 6.5** (cf. [20]).: _Let \(f\colon(\mathbb{C}^{n},S)\to(\mathbb{C}^{n+1},0)\) be a finite, frontal map germ with integral corank \(1\). If \(f\) is \(\mathscr{F}\)-finite and \(\operatorname{codim}V(p_{y},\mu_{y})>1\),_ \[\operatorname{Der}(-\log f)=\operatorname{Lift}(f):=\{\eta\in\theta_{n+1}:\omega f(\eta)=tf(\xi)\text{ for some }\xi\in\theta_{n}\}.\] Proof of Proposition 6.4.: By Ephraim's theorem [6], the tangent space to \(\operatorname{Iso}(f(\mathbb{C}^{n},S))\) at \(0\) is given by the evaluation at \(0\) of the elements in \(\operatorname{Der}(-\log f)\). Using Lemma 6.5, \(\operatorname{Der}(-\log f)\) is the space of elements in \(\theta_{n+1}\) that are liftable via \(f\). Therefore, we only need to show that the evaluation at \(0\) of this space coincides with \(\tau(f)\). 
Let \(\eta\in\operatorname{Lift}(f)\): there exists a \(\xi\in\theta_{n}\) such that \(\omega f(\eta)=tf(\xi)\in T\mathscr{K}_{\mathscr{F}_{e}}f\), so \(\eta|_{0}\in\tau(f)\). Conversely, if \(\eta\in\theta_{n+1}\) verifies that \(\eta|_{0}\in\tau(f)\), there exist \(\xi\in\theta_{n}\), \(\zeta\in\mathscr{F}(f)\) such that \[\omega f(\eta)=tf(\xi)+(f^{*}\beta)\zeta\] for some \(\beta\in\mathfrak{m}_{n+1}\). Since \(f\) is \(\mathscr{F}\)-stable, \(\mathscr{F}(f)=T\mathscr{A}_{e}f\), which implies that \[(f^{*}\mathfrak{m}_{n+1})\mathscr{F}(f)=(f^{*}\mathfrak{m}_{n+1})[tf(\theta_{ n})+\omega f(\theta_{n+1})]\subseteq tf(\mathfrak{m}_{n}\theta_{n})+\omega f (\mathfrak{m}_{n+1}\theta_{n+1})\] Therefore, there exist \(\xi^{\prime}\in\mathfrak{m}_{n}\theta_{n}\) and \(\eta^{\prime}\in\mathfrak{m}_{n+1}\theta_{n+1}\) such that \[(f^{*}\beta)\zeta=tf(\xi^{\prime})+\omega f(\eta^{\prime})\implies\omega f( \eta-\eta^{\prime})=tf(\xi+\xi^{\prime})\] and \(\eta-\eta^{\prime}\in\operatorname{Lift}(f)\). In particular, if \(s\in S\), \((\eta-\eta^{\prime})|_{0}=\omega f(\eta-\eta^{\prime})|_{s}=v-0=v\), thus finishing the proof. ### Generating stable frontal unfoldings The generation of stable unfoldings in Thom-Mather's theory of smooth deformations is done by computing the \(\mathscr{K}_{e}\)-tangent space of a smooth map germ \(f\colon(\mathbb{K}^{n},0)\to(\mathbb{K}^{p},0)\) of rank \(0\). If \(\mathfrak{m}_{n}\theta(f)/T\mathscr{K}_{e}f\) is generated over \(\mathbb{K}\) by the classes of \(g_{1},\dots,g_{s}\in\mathscr{O}_{n}\), Martinet's theorem ([20], Theorem 7.2) states that the map germ \[F(u,x)=(u,f(x)+u_{1}g_{1}(x)+\dots+u_{s}g_{s}(x))\] is a stable unfolding of \(f\). While such a result fails to yield frontal unfoldings of frontal map germs, if \(f\) has corank \(1\), we can still make use of the frontal sum operation defined on SS5 to formulate a frontal version of Martinet's theorem. **Lemma 6.6**.: _Let \(f\colon(\mathbb{K}^{n},0)\to(\mathbb{K}^{n+1},0)\) be a frontal map germ of integral corank \(1\) with frontal unfolding \(F=(u,f_{u})\), and \((u,y)\) be local coordinates on \((\mathbb{K}^{d}\times\mathbb{K}^{n+1},0)\). There is an \(\mathscr{O}_{n+d+1}\)-linear isomorphism_ \[\beta\colon\frac{\mathscr{F}(F)}{T\mathscr{K}_{\mathscr{F}_{e}}F}\longrightarrow \frac{\mathscr{F}(f)}{T\mathscr{K}_{\mathscr{F}_{e}}f}\] _induced by the \(\mathscr{O}_{n+d}\)-linear epimorphism \(\beta_{0}\colon\theta(F)\to\theta(f)\) sending \(\partial y_{i}\) onto \(\partial y_{i}\) for \(i=1,\dots,n+1\) and \(\partial u_{j}\) onto \(-\dot{F}_{j}\) for \(j=1,\dots,d\)._ Proof.: In [20], Lemma 5.5, it is shown that \(\beta_{0}\) induces a \(\mathscr{O}_{n+d}\)-linear isomorphism \(\beta_{1}\colon T^{1}_{\mathscr{K}_{\varepsilon}}F\to T^{1}_{\mathscr{K}_{ \varepsilon}}f\). In particular, we can consider \(\beta_{0}\) as a \(\mathscr{O}_{n+d+1}\)-epimorphism via \(F^{*}\). Note that \(T\mathscr{K}_{\mathscr{F}e}g=T\mathscr{K}_{e}g\cap\mathscr{F}(g)\) for any frontal map germ \(g\) with integral corank \(1\), so it suffices to show that \(\beta_{0}\) sends \(\mathscr{F}(F)\) onto \(\mathscr{F}(f)\). Let \(\xi\in\theta(F)\) with integral \(\mathscr{F}\)-curve \(F_{t}\): the integral \(\mathscr{F}\)-curve for \(\beta_{0}(\xi)\) is given by \[f_{t} =i^{*}(\pi\circ F_{t}); \pi(t,u,y) =(t,y); i(x) =(0,x).\] In particular, if \((t,F_{t})\) is a frontal, \((t,f_{t})\) is also frontal, since the image of \((t,f_{t})\) is embedded within the image of \((t,F_{t})\). 
Conversely, given a frontal unfolding \((t,f_{t})\) of \(f\), the map \((t,u,f_{t})\) is a frontal unfolding of \(F\) with \(f_{t}=i^{*}(\pi\circ F_{t})\), hence \(\beta_{0}(\mathscr{F}(F))=\mathscr{F}(f)\). As a consequence of Lemma 6.6, if \(f\colon(\mathbb{K}^{n},0)\to(\mathbb{K}^{n+1},0)\) is a stable frontal map germ, it is either the versal unfolding of some frontal map germ of rank \(0\) or a prism (i.e. a trivial unfolding) thereof. **Theorem 6.7**.: _Let \(\gamma\colon(\mathbb{K},0)\to(\mathbb{K}^{2},0)\) be the plane curve from Remark 5.2, and_ \[T_{j}(t) =(t^{j},B_{j}(t)), B_{j}(t) =j\int_{0}^{t}s^{j-1}\mu(s)\,ds.\] _If \(\mathscr{F}_{0}(\gamma)=\mathscr{F}(\gamma)\cap\mathfrak{m}_{1}\theta(\gamma)\), then_ \[\operatorname{Sp}_{\mathbb{K}}\{T_{1},\dots,T_{\alpha-2}\}\hookrightarrow \frac{\mathscr{F}_{0}(\gamma)}{T\mathscr{K}_{\mathscr{F}e}\gamma}\hookrightarrow \operatorname{Sp}_{\mathbb{K}}\{T_{1},\dots,T_{\alpha-2},(0,t^{\alpha}),\dots,(0,t^{2\alpha-1})\}.\] Proof.: Let \(\xi=(a,b)\in\theta(\gamma)\): by Remark 3.13, \(\xi\in\mathscr{F}(\gamma)\) if and only if \(b^{\prime}-\mu a^{\prime}\in\mathfrak{m}_{1}^{\alpha-1}\), which in turn is equivalent to assuming that \(b^{\prime}-\mu a^{\prime}\equiv\lambda_{1}T_{1}^{\prime}+\dots+\lambda_{\alpha -2}T_{\alpha-2}^{\prime}\mod\mathfrak{m}_{1}^{\alpha-1}\) for some \(\lambda_{1},\dots,\lambda_{\alpha-2}\in\mathbb{K}\). Therefore, \[\mathscr{F}(\gamma) =\mathbb{K}\oplus\operatorname{Sp}_{\mathbb{K}}\{T_{1},\dots,T_{ \alpha-1}\}\oplus\mathfrak{m}_{1}^{\alpha}\theta(\gamma). \tag{20}\] A simple computation shows that \(T\mathscr{K}_{\mathscr{F}e}\gamma\subseteq\mathfrak{m}_{1}^{\alpha-1}\theta(\gamma)\), hence \(T_{j}\not\in T\mathscr{K}_{\mathscr{F}e}\gamma\) for \(j<\alpha-1\). However, \(T_{\alpha-1}\in t\gamma(\theta_{1})\), giving the first monomorphism. For the second monomorphism, first note that \(\gamma\) is finitely determined, so there exists a \(k>0\) such that \(\mathfrak{m}_{1}^{k+1}\theta(\gamma)\subseteq T\mathscr{K}_{\mathscr{L}e} \gamma\subseteq T\mathscr{K}_{\mathscr{F}e}\gamma\). If \(j=\alpha,\dots,k\), there exist \(l>0\) and \(0\leq\beta<\alpha\) such that \(j=l\alpha+\beta\). Using Equation (20), we see that \[(t^{j},0) =(t^{\alpha})^{l}(t^{\beta},0)=(t^{\alpha})^{l}T_{\beta}(t)+(t^{ \alpha})^{l}(0,B_{\beta}(t))\in\mathfrak{m}_{2}\mathscr{F}(\gamma)\subseteq T \mathscr{K}_{\mathscr{F}e}\gamma.\] Similarly, \((0,t^{j})\in T\mathscr{K}_{\mathscr{F}e}\gamma\) for all \(j\geq 2\alpha\). If we now consider the \(1\)-parameter unfolding \(\Gamma_{j}(u,t)=(u,\gamma(t)+uT_{j}(t))\), \[\frac{\partial}{\partial t}(t^{\alpha+1}h(t)+uB_{j}(t)) =\mu(t)\frac{\partial}{\partial t}(t^{\alpha}+jut^{j})\] and \(\Gamma_{j}\) is frontal due to Corollary 2.7. Similarly, if we set \(\Gamma_{k}(u,t)=(u,\gamma(t)+ut^{\alpha}k(t))\) with \(k\in\mathscr{O}_{1}^{2}\), \[Q_{t}(u,t) =\frac{\partial}{\partial t}(t^{\alpha+1}h(t)+ut^{\alpha}k_{2}(t)) =t^{\alpha-1}(\alpha\mu(t)+uk_{2}(t)+utk_{2}^{\prime}(t));\] \[P_{t}(u,t) =\frac{\partial}{\partial t}(t^{\alpha}+ut^{\alpha}k_{1}(t)) =t^{\alpha-1}(\alpha+\alpha uk_{1}(t)+tk_{1}^{\prime}(t)).\] Since \(\alpha+\alpha uk_{1}(t)+tk_{1}^{\prime}(t)\) is a unit, \(P_{t}\,|\,Q_{t}\) and \(\Gamma_{k}\) is also frontal. 
If \(\mathscr{F}_{0}(\gamma)=T\mathscr{K}_{\mathscr{F}e}\gamma+\mathrm{Sp}_{\mathbb{K}} \{T_{j_{1}},\dots,T_{j_{d}},k_{1},\dots,k_{b}\}\) for some \(k_{1},\dots,k_{b}\in\mathfrak{m}_{1}^{\alpha}\mathscr{O}_{1}^{2}\), we consider the \((d+b)\)-parameter frontal unfolding \[F(u,t)=\Gamma_{j_{1}}(u_{1},t)\#\dots\#\Gamma_{j_{d}}(u_{d},t)\#\Gamma_{k_{1}} (u_{d+1},t)\#\dots\#\Gamma_{k_{b}}(u_{d+b},t), \tag{21}\] where \(\#\) denotes the frontal sum operation defined on Equation (14). **Example 6.8**.: _Let \(f\colon(\mathbb{K},0)\to(\mathbb{K}^{2},0)\) be the plane curve \(f(t)=(t^{3},t^{5})\), which verifies that \(\mathscr{F}_{0}(f)=T\mathscr{K}_{\mathscr{F}e}f\oplus\mathrm{Sp}_{\mathbb{K}} \{(9t,5t^{3}),(0,t^{4})\}\). We then consider the \(1\)-parameter unfoldings_ \[F_{1}(t,v)=(v,t^{3},t^{5}+vt^{4});\qquad\qquad F_{2}(t,u)=(u,t^{3}+9ut,t^{5}+5 ut^{3}),\] _whose frontal sum is_ \[F(t,u,v)=\left(u,v,t^{3}+9ut,t^{5}+5ut^{3}+\frac{1}{3}vt^{4}+6uvt^{2}\right).\] _This unfolding is \(\mathscr{A}\)-equivalent to the \(A_{3,1}\) singularity from [14], Example 4.2._ **Theorem 6.9**.: _The map germ \(F\colon(\mathbb{K}^{d}\times\mathbb{K}^{b}\times\mathbb{K},0)\to(\mathbb{K}^{d }\times\mathbb{K}^{b}\times\mathbb{K}^{2},0)\) defined on Equation (21) is stable as a frontal. Moreover, if the \(\mathbb{K}\)-codimension of \(T\mathscr{K}_{\mathscr{F}_{e}}f\) over \(\mathscr{F}_{0}(f)\) is \(d+b\), every other stable frontal unfolding of \(f\) must have at least \(d+b\) parameters._ Proof.: It is clear by definition of \(T\mathscr{K}_{\mathscr{F}e}F\) that \[\mathscr{F}_{0}(F)\supseteq T\mathscr{K}_{\mathscr{F}e}F\supseteq T\mathscr{ A}_{e}F\cap\mathfrak{m}_{d+b+1}\theta(F),\] so \(F\) is \(\mathscr{F}\)-stable if and only if \(\mathscr{F}_{0}(F)=T\mathscr{K}_{\mathscr{F}e}F\). By Lemma 6.6, this is equivalent to \[\mathscr{F}_{0}(f)=T\mathscr{K}_{\mathscr{F}e}f+\mathrm{Sp}_{\mathbb{K}} \left\{-\dot{F}_{1},\dots,-\dot{F}_{d+b}\right\}.\] It follows from the definition of frontal sum that \[\dot{F}_{i}(t)=(P_{u_{i}}(0,t),Q_{u_{i}}(0,t))=\begin{cases}\dot{\Gamma}_{j_{i }}(t)&\text{ if }i\leq d;\\ \dot{\Gamma}_{k_{i}}(t)&\text{ if }i>d,\end{cases}\] and thus \(F\) is stable. ### Corank \(1\) stable frontal map germs in dimension \(3\) By Theorem 6.3, a frontal multigerm \(f\colon(\mathbb{K}^{3},S)\to(\mathbb{K}^{4},0)\) is \(\mathscr{F}\)-stable if and only if its branches \(f_{1},\dots,f_{r}\) are \(\mathscr{F}\)-stable and \(\tau(f_{1}),\dots,\tau(f_{r})\) meet in general position. Therefore, we only need to classify the stable monogerms. By Lemma 6.6, every \(\mathscr{F}\)-stable monogerm with corank \(1\) is a versal unfolding of an irreducible analytic plane curve \(\gamma\) with \(\mathscr{F}_{e}\)-codimension at most \(2\). In particular, if \(\gamma(\mathbb{C},0)\) is the zero locus of some analytic \(g\in\mathscr{O}_{2}\), \(\tau(g)-\delta(g)\leq\mathrm{ord}(g)+1\) due to Corollary 5.8. A consequence of Theorem 6.7 is that \(\mathrm{codim}_{\mathscr{K}_{\mathscr{F}_{e}}}\,\gamma\geq\mathrm{ord}(g)\), meaning that \(\mathrm{ord}(g)\) must be at most \(4\). If \(\mathrm{ord}(g)=2\), it follows from a result by Zariski [30] that \(g(x,y)=x^{2}-y^{2n+1}\). For \(n=0,1\), this yields an \(\mathscr{F}\)-stable plane curve; for \(n>1\), we can unfold \(\gamma(t)\) into \[\Gamma_{n}(u,t)=(u,t^{2},t^{2n+1}+ut^{3}),\] which is stable. The cases \(\operatorname{ord}(g)=3\) and \(\operatorname{ord}(g)=4\) will be examined using Hefez and Hernandes' classification of analytic plane curves from [10]. 
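The divisibility criterion \(P_{t}\,|\,Q_{t}\) used in the proof of Theorem 6.7 is easy to check symbolically for the unfoldings appearing above. The sketch below is an illustrative computation only (the helper `frontal_mu` is ours, not taken from any published package): it recovers the slope function \(\mu\) with \(Q_{t}=\mu P_{t}\) for \(\Gamma_{2}(u,t)=(u,t^{2},t^{5}+ut^{3})\) and for the two one-parameter unfoldings of \((t^{3},t^{5})\) from Example 6.8.

```python
# Illustrative sympy check of the corank-1 frontality criterion Q_t = mu * P_t
# (the divisibility P_t | Q_t used above); not taken from any existing library.
import sympy as sp

u, v, t = sp.symbols('u v t')

def frontal_mu(p, q):
    """Return mu with dq/dt = mu * dp/dt, or None if dp/dt does not divide dq/dt."""
    ratio = sp.cancel(sp.diff(q, t) / sp.diff(p, t))
    _, den = sp.fraction(ratio)
    return ratio if den.is_number else None

# Gamma_2(u, t) = (u, t^2, t^5 + u t^3): mu = (5 t^3 + 3 u t)/2
print(frontal_mu(t**2, t**5 + u*t**3))
# F_1(t, v) = (v, t^3, t^5 + v t^4) from Example 6.8: mu = (5 t^2 + 4 v t)/3
print(frontal_mu(t**3, t**5 + v*t**4))
# F_2(t, u) = (u, t^3 + 9 u t, t^5 + 5 u t^3) from Example 6.8: mu = 5 t^2 / 3
print(frontal_mu(t**3 + 9*u*t, t**5 + 5*u*t**3))
```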
Every analytic plane curve has an associated invariant \(\Sigma=\langle v_{0},\dots,v_{g}\rangle\), known as the **semigroup of values**. If the curve is irreducible, its delta invariant \(\delta\) is equal to \[\frac{1}{2}\left[1-v_{0}-\sum_{i=1}^{g}v_{i}\left(1-\frac{\operatorname{GCD}(v _{0},\dots,v_{i-1})}{\operatorname{GCD}(v_{0},\dots,v_{i})}\right)\right],\] regardless of its analytic family. Therefore, the expression \(\tau-\delta\) only depends on \(\tau\). For \(\operatorname{ord}(g)=3\), \(\Sigma\) is given by \(\langle 3,v_{1}\rangle\) with \(v_{1}>3\), so \(\delta=v_{1}-1\). If \(\tau=2(v_{1}-1)\), \(\tau-\delta=v_{1}-1<4\), so \(g(x,y)\) is either \(x^{3}-y^{4}\) or \(x^{3}-y^{5}\). The case \(\tau=2v_{1}-j-1\) with \(j\geq 2\) implies that \(\tau<\delta\), which is impossible. For \(\operatorname{ord}(g)=4\), \(\Sigma\) can be either \(\langle 4,v_{1}\rangle\) or \(\langle 4,v_{1},v_{2}\rangle\). If \(\Sigma=\langle 4,v_{1}\rangle\), \(v_{1}\) is coprime with \(4\), so \(\delta=3/2(v_{1}-1)\) and we have two possible values for \(\tau\): 1. if \(\tau=3(v_{1}-1)\), \(\tau-\delta=3/2(v_{1}-1)\leq 5\), which implies that \(\tau<\delta\); 2. if \(\tau=3v_{1}-j-2\) with \(j>1\), \[\tau-\delta=\frac{1}{2}(3v_{1}-2j-1)\leq 5\implies j\geq\frac{1}{2}(3v_{1}-11).\] Since \(j\leq v_{1}/2\), it follows that \(v_{1}\geq 3v_{1}-11\), giving us \(\gamma(t)=(t^{4},t^{5}+t^{7})\). If \(\Sigma=\langle 4,v_{1},v_{2}\rangle\), \(\operatorname{GCD}(4,v_{1})=2\) and \(\operatorname{GCD}(4,v_{1},v_{2})=1\), which implies that \(v_{1}\geq 6\) and \(v_{2}\geq 2v_{1}\). Using \[\delta=\frac{1}{2}(v_{2}+v_{1}-3); \tau=v_{2}+\frac{1}{2}v_{1}-2,\] it follows that \(\tau-\delta=(v_{2}-1)/2>5\). Since we are only interested in the case \(\tau-\delta\leq 5\), we can ignore this case. For the remaining cases, the possible values for \(\tau-\delta\) fall into one the following categories: \[\frac{3(v_{1}-1)}{2}+k-\left[\frac{v_{1}}{4}\right]; \frac{3(v_{1}-1)}{2}-2j+1; \frac{3(v_{1}-1)}{2}-2j+2,\] for \(2\leq j\leq[v_{1}/4]\) and \(1\leq k\leq[v_{1}/4]-j\). If \(\tau-\delta\leq 5\), then \(v_{1}\geq 7\), which is not possible. **Theorem 6.10**.: _Table 1 shows all stable proper frontal map germs \((\mathbb{C}^{3},0)\to(\mathbb{C}^{4},0)\) of corank \(1\) together with the plane curves of which they are versal unfoldings. All stable frontal multigerms are obtained by transverse self-intersections of these mono-germs, as shown in Theorem 6.3._ \begin{table} \begin{tabular}{l l l} Plane curve & & \multicolumn{1}{c}{Versal frontal unfolding} \\ \hline \hline \((t^{2},t^{3})\) & \(A_{2,0}\) & \((u,v,t^{2},t^{3})\) \\ \((t^{2},t^{5})\) & \(A_{2,1}\) & \((u,v,t^{2},t^{5}+ut^{3})\) \\ \((t^{3},t^{4})\) & \(A_{3,0}\) & \((u,v,t^{3}+3ut,3t^{4}+2ut^{2})\) \\ \((t^{3},t^{5})\) & \(A_{3,1}\) & \((u,v,t^{3}+tu,t^{5}+vt^{4}+2uvt^{2}-5u^{2}t)\) \\ \((t^{4},t^{5}+t^{7})\) & \(A_{4,0}\) & \((u,v,t^{4}+8tu,t^{7}+t^{5}+t^{3}v(5-14v)+t^{2}u(5-42v)-28tu^{2})\) \\ \end{tabular} \end{table} Table 1. Stable proper frontal map germs \((\mathbb{C}^{3},0)\to(\mathbb{C}^{4},0)\). The notation \(A_{i,j}\) is due to Ishikawa [14]. Proof.: The discussion conducted throughout this subsection shows that the only plane curves of frontal codimension less than or equal to \(2\) are \((t^{2},t^{3})\), \((t^{2},t^{5})\), \((t^{2},t^{7})\), \((t^{3},t^{4})\), \((t^{3},t^{5})\) and \((t^{4},t^{5}+t^{7})\). The curve \((t^{2},t^{3})\) is easily checked to be stable as a frontal. 
The family of curves \((t^{2},t^{2k+1})\) for \(k>1\) unfolds into \((s,t)\mapsto(s,t^{2},t^{2k+1}+st^{3})\), which is \(\mathscr{A}\)-equivalent to the folded Whitney umbrella \((s,t)\mapsto(s,t^{2},st^{3})\), which is stable as a frontal ([22, 23]). The curves \((t^{3},t^{4})\) and \((t^{4},t^{5}+t^{7})\) unfold into the swallowtail and butterfly singularities (\(A_{3,0}\) and \(A_{4,0}\) in Table 1), both of which are stable wave fronts ([27]). The \(E_{8}\) singularity unfolds into Ishikawa's \(A_{3,1}\) singularity [14]. 

**Conjecture 6.11**.: _The stable proper frontal map germs \(f\colon(\mathbb{C}^{n},0)\to(\mathbb{C}^{n+1},0)\) of corank \(1\) are given by Ishikawa's \(A_{i,j}\) singularities, where_ \[i=\dim\frac{\tilde{f}^{*}\mathscr{O}_{2n+1}}{f^{*}\mathfrak{m}_{n+1}}\in\{2,\ldots,n\};\qquad\quad j+1=\dim\frac{\mathscr{O}_{n}}{\tilde{f}^{*}\mathfrak{m}_{2n+1}}\in\left\{1,\ldots,\left[\frac{n}{2}\right]\right\},\] _where square brackets denote the floor function. All stable frontal multigerms are obtained by transverse self-intersections of these mono-germs, as shown in Theorem 6.3._

The algebra \(\tilde{f}^{*}\mathscr{O}_{2n+1}/f^{*}\mathfrak{m}_{n+1}\) was introduced by Ishikawa in [14] in order to give a characterisation of Legendrian stability.

## 7. Acknowledgements 

We would like to thank M. E. Hernandes for his helpful contributions to §6.2.
2307.16187
Numerical Simulation of an Idealised Richtmyer-Meshkov Instability Shock Tube Experiment
The effects of initial conditions on the evolution of the Richtmyer-Meshkov instability (RMI) at early to intermediate times are analysed, using numerical simulations of an idealised version of recent shock tube experiments performed at the University of Arizona. The experimental results are bracketed by performing both implicit large-eddy simulations of the high-Reynolds-number limit as well as direct numerical simulations (DNS) at Reynolds numbers lower than those observed in the experiments. Various measures of the mixing layer width, based on both the plane-averaged turbulent kinetic energy and volume fraction profiles are used to explore the effects of initial conditions on $\theta$ and are compared with the experimental results. The decay rate of the total fluctuating kinetic energy is also used to estimate $\theta$ based on a relationship that assumes self-similar growth of the mixing layer. The estimates for $\theta$ range between 0.44 and 0.52 for each of the broadband perturbations considered and are in good agreement with the experimental results. Overall, the results demonstrate important differences between broadband and narrowband surface perturbations, as well as persistent effects of finite bandwidth on the growth rate of mixing layers evolving from broadband perturbations. Good agreement is obtained with the experiments for the different quantities considered; however, the results also show that care must be taken when using measurements based on the velocity field to infer properties of the concentration field.
Michael Groom, Ben Thornber
2023-07-30T09:50:32Z
http://arxiv.org/abs/2307.16187v1
# Numerical Simulation of an Idealised Richtmyer-Meshkov Instability Shock Tube Experiment ###### Abstract The effects of initial conditions on the evolution of the Richtmyer-Meshkov instability (RMI) at early to intermediate time are analysed, using numerical simulations of an idealised version of recent shock tube experiments performed at the University of Arizona (Sewell et al., _J. Fluid Mech._ (2021), **917**, A41). The experimental results are bracketed by performing both implicit large-eddy simulations (ILES) of the high-Reynolds number limit as well as direct numerical simulations (DNS) at Reynolds numbers lower than those observed in the experiments, both using the Flamenco finite-volume code. Various measures of the mixing layer width \(h\), known to scale as \(\sim t^{\theta}\) at late time, based on both the plane-averaged turbulent kinetic energy (TKE) and volume fraction (VF) profiles are used to explore the effects of initial conditions on \(\theta\) and are compared with the experimental results. The decay rate \(n\) of the total fluctuating kinetic energy is also used to estimate \(\theta\) based on a relationship that assumes self-similar growth of the mixing layer. The estimates for \(\theta\) range between 0.44 and 0.52 for each of the broadband perturbations considered and are in good agreement with the experimental results. Decomposing the mixing layer width into separate bubble and spike heights \(h_{b}\) and \(h_{s}\) shows that, while the bubbles and spikes initially grow at different rates, their growth rates \(\theta_{b}\) and \(\theta_{s}\) have equalised by the end of the simulations indicating that the mixing layer is approaching self-similarity. Anisotropy of the Reynolds stresses is also analysed and is shown to persist throughout each of the simulations. Outer-scale Reynolds numbers and various key length scales are calculated for the DNS cases, showing that fully developed turbulence is not obtained due to the challenges associated with performing DNS for broadband initial conditions. Overall the results demonstrate important differences between broadband and narrowband surface perturbations, as well as persistent effects of finite bandwidth on the growth rate of mixing layers evolving from broadband perturbations. Good agreement is obtained with the experiments for the different quantities considered, however the results also show that care must be taken when using measurements based on the velocity field to infer properties of the concentration field, as well as when it is appropriate to assume the mixing layer is growing self-similarly with a single growth rate \(\theta\). shock waves, turbulent mixing, transition to turbulence ## 1 Introduction This paper analyses the effects of initial conditions on the evolution of the Richtmyer-Meshkov instability (RMI), which occurs when an interface separating two materials of differing densities is accelerated impulsively, typically by an incident shock wave (Richtmyer 1960; Meshkov 1969). The instability evolves due to the deposition of baroclinic vorticity at the interface, caused by a misalignment of density and pressure gradients during the shock-interface interaction. This occurs either from surface perturbations on the interface, or when the shock wave is non-uniform or inclined relative to the interface. 
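As a purely illustrative aside, the deposition mechanism described above is quantified by the baroclinic source term \((\boldsymbol{\nabla}\rho\times\boldsymbol{\nabla}p)/\rho^{2}\) of the vorticity equation. The sketch below evaluates this term on a small synthetic two-dimensional field (a diffuse, perturbed density interface with a misaligned pressure gradient); the field shapes are assumptions made only for illustration and no shock dynamics are solved.

```python
# Baroclinic torque (grad(rho) x grad(p)) / rho^2 on a synthetic 2D field;
# illustrative only, no shock or interface dynamics are solved here.
import numpy as np

n = 128
x = np.linspace(0.0, 1.0, n)
y = np.linspace(0.0, 1.0, n)
X, Y = np.meshgrid(x, y, indexing='ij')

# Diffuse density interface with a small sinusoidal surface perturbation
rho = 1.0 + 0.5*np.tanh((Y - 0.5 - 0.05*np.sin(4*np.pi*X))/0.02)
# Pressure gradient misaligned with the density gradient
p = 1.0 + 0.2*X

drho_dx, drho_dy = np.gradient(rho, x, y, edge_order=2)
dp_dx, dp_dy = np.gradient(p, x, y, edge_order=2)

# Out-of-plane component of the baroclinic source term
baroclinic = (drho_dx*dp_dy - drho_dy*dp_dx)/rho**2
print(f"peak |baroclinic torque| = {np.abs(baroclinic).max():.3f}")
```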
The baroclinic vorticity that is deposited on the interface leads to the growth of surface perturbations and the development of secondary shear layer instabilities, which drive the transition to a turbulent mixing layer. Unlike the closely related Rayleigh-Taylor instability (RTI), the RMI is induced for both light to heavy and heavy to light configurations. In both cases the initial growth of the interface is linear in time and can be described by analytical expressions (Richtmyer 1960; Meyer & Blewett 1972; Vandenboomgaerde _et al._ 1998). However, as the amplitudes of modes in the perturbation become large with respect to their wavelengths the growth becomes nonlinear, whereby numerical simulation is required to calculate the subsequent evolution of the mixing layer. Another key difference between RTI and RMI is that, for the RMI, baroclinic vorticity is only deposited initially and not continuously generated, compared to the (classical) RTI where the interface is continuously accelerated. For a comprehensive and up-to-date review of the literature on both RTI, RMI and the Kelvin-Helmholtz instability (KHI), the reader is referred to Zhou (2017\(a\),_b_); Zhou _et al._ (2021), as well as Livescu (2020) for an excellent review on variable-density turbulence more generally. The understanding of mixing due to RMI is of great importance in areas such as inertial confinement fusion (ICF) (Lindl _et al._ 2014), where a spherical capsule containing thermonuclear fuel is imploded using powerful lasers with the aim of compressing the contents to sufficient pressures and temperatures so as to initiate nuclear fusion. The compression is performed using a series of strong shocks, which trigger hydrodynamic instabilities at the ablation front due to capsule defects and drive asymmetries (Clark _et al._ 2016). The subsequent mixing of ablator material and fuel that ensues can dilute and cool the hotspot, which reduces the overall efficiency of the implosion. As a contrast to ICF, in high-speed combustion such as in a scramjet or rotating detonation engine, RMI due to weak shocks improves the mixing of fuel and oxidiser leading to more efficient combustion (Yang _et al._ 1993, 2014). An understanding of mixing due to RMI is also important for many astrophysical phenomena such as supernovae and the dynamics of interstellar media (Arnett 2000). Note that in such applications RTI usually occurs alongside RMI and in general it is impossible to separate the effects of both instabilities. However, there is still great value in studying RMI independently, particularly when comparing with shock tube experiments that have been designed to isolate its effects using an RT-stable configuration. In the applications mentioned above, the most important statistical quantity one would like to know is typically the mixing layer width, denoted by \(h\). At late time \(h\) scales as \(\sim t^{2}\) for RTI and \(\sim t^{\theta}\) for RMI where the exponent \(\theta\leqslant 1\) has been shown to depend on initial conditions (Youngs 2004; Thornber _et al._ 2010). Various approaches have been taken to define \(h\), which fall into one of two categories. The first is to consider the distance between two cutoff locations based on a particular threshold of some spatially-averaged profile in the direction normal to the mixing layer (i.e. the direction of the shock-induced acceleration). 
Examples include the visual width (Cook & Dimotakis 2001) based on the 1% and 99% locations of the mean volume fraction profile (the choice of a 1% threshold is somewhat arbitrary; see Zhou & Cabot (2019) for a comparison of different thresholds in the context of RTI). Such measures have the advantage of being easily interpretable but can be sensitive to statistical fluctuations. The second approach is to define an integral measure by integrating a particular spatially-averaged profile in the normal direction, for example the integral width (Andrews & Spalding 1990). Integral measures are less susceptible to statistical fluctuations but are also less interpretable, as different profiles can give the same integrated value. The recently proposed mixed mass (Zhou _et al._ 2016) and integral bubble and spike heights (Youngs & Thornber 2020_a_) are attempts to combine the best aspects of both approaches. Over the last few decades, both shock tube experiments and numerical simulations have been performed in order to better understand the fundamentals of RMI, such as the value of \(\theta\) at late time. Previous numerical studies have typically used large-eddy simulation (LES) or implicit LES (ILES) to predict mixing at late time in the high Reynolds number limit (Youngs 1994; Hill _et al._ 2006; Thornber _et al._ 2010; Lombardini _et al._ 2012; Tritschler _et al._ 2014\(a\); Thornber _et al._ 2017; Soulard _et al._ 2018). Key findings include the dependence of \(\theta\) on the type of surface perturbation used to initiate the instability (Youngs 2004; Thornber _et al._ 2010). Narrowband perturbations, which include only a small, annular band of modes in wavenumber space, have been found to give values of \(\theta\) at late-time between 0.25 (Soulard & Griffond 2022) and 0.33 (Youngs & Thornber 2020_b_) whereas perturbations including additional long wavelength modes, known as broadband perturbations, have been found to give values of \(\theta\) as high as 0.75 (Groom & Thornber 2020). Studies of the effects of initial conditions in RTI have found similar results for the growth rate \(\alpha\) when additional long wavelength modes were included in the initial perturbation (Ramaprabhu _et al._ 2005; Banerjee & Andrews 2009). When only short wavelength perturbations are present the growth rate of RTI is limited by the nonlinear coupling of saturated short wavelength modes (bubble merger), while additional long wavelength perturbations cause the growth rate to become limited by the amplification and saturation of long wavelength modes (bubble competition). Futhermore, Aslangil _et al._ (2020) considered the case of RTI where the applied acceleration is completely withdrawn after initial development. The resulting mixing layer is closely related to an RMI-induced mixing layer, differing only by the mechanism of the initial acceleration, with the growth rate exponent for narrowband initial conditions shown to be within the bounds of 0.2 to 0.28 suggested by Weber _et al._ (2013). Early shock tube experiments made use of membranes to form the initial perturbation between the two gases (Vetter & Sturtevant 1995), however these tended to leave fragments that dampened the subsequent instability growth, inhibited mixing and interfered with diagnostics. 
In order to circumvent this, modern shock tube experiments use membraneless interfaces, for example by forming by a shear layer between counter-flowing gases (Weber _et al._ 2012, 2014; Reese _et al._ 2018; Mohaghar _et al._ 2017, 2019), using a gas curtain (Balakumar _et al._ 2008; Balasubramanian _et al._ 2012) or by using loudspeakers to generate Faraday waves at the interface (Jacobs _et al._ 2013; Krivets _et al._ 2017; Sewell _et al._ 2021). These methods of interface generation typically result in the formation of a broadband surface perturbation and as such these experiments have obtained values of \(\theta\) that are higher than the 0.25-0.33 expected for narrowband initial conditions. For example Weber _et al._ (2012, 2014) measured \(\theta\) in the range 0.43-0.58, while later experiments on the same facility by Reese _et al._ (2018) obtained \(\theta=0.34\pm 0.01\) once the concentration field was adjusted to remove larger-scale structures from the mixing layer prior to averaging in the spanwise direction. Jacobs _et al._ (2013) found that their measurements of mixing layer width prior to reshock could be partitioned into two groups with different power law exponents. The particular diagnostic used was the mixing layer half width, found by taking the distance between the 10% and 90% average concentration locations and halving this. Prior to reshock, both groups initially had growth rates close to 0.5 (\(\theta=0.51\) and \(\theta=0.54\)), while at later times the growth rates were smaller but also more different (\(\theta=0.38\) and \(\theta=0.29\) respectively). Krivets _et al._ (2017) also found a wide range of \(\theta\) for the integral width prior to reshock, ranging from \(\theta=0.18\) to \(\theta=0.57\), using a similar experimental setup. During these experiments the timing of the arrival of the shock wave relative to the phase of the forcing cycle was not controlled, which resulted in large variations in the initial amplitudes of the perturbation. More recent experiments by Sewell _et al._ (2021) took this into account and divided the results into a low-amplitude and high-amplitude group. Using a measure for the mixing layer width based on 5% threshold locations of the turbulent kinetic energy profile, they found \(\theta=0.45\pm 0.08\) and \(\theta=0.51\pm 0.04\) for the low- and high-amplitude groups prior to reshock. In this paper, both ILES and direct numerical simulations (DNS) are performed of 3D RMI with narrowband and broadband perturbations, using a setup that represents an idealised version of the shock tube experiments performed at the University of Arizona (Jacobs _et al._, 2013; Krivets _et al._, 2017; Sewell _et al._, 2021) to investigate the effects of long wavelength modes in the initial perturbation. A similar study was performed in Groom & Thornber (2020) but the main aim in that paper was to approximate the regime where there are always longer and longer wavelength modes in the initial condition that are yet to saturate (referred to as the infinite bandwidth limit). Of primary interest here is to explore the impacts of finite bandwidth broadband perturbations on the mixing layer growth over the length and time scales of a typical shock tube experiment and compare the results with those of both narrowband perturbations and broadband perturbations in the infinite bandwidth limit. 
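For reference, the growth-rate exponents quoted above are typically obtained by fitting a power law to the measured widths. The sketch below illustrates the idea on synthetic data using a simple log-log least-squares fit; the published values are of course obtained from the actual experimental or simulated widths, often with a fitted virtual time origin, which is omitted here.

```python
# Illustrative power-law fit h ~ t^theta on synthetic width data (log-log
# least squares); a virtual time origin, as often used in practice, is omitted.
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(1.0, 6.5, 30)                    # time after shock arrival (arbitrary units)
theta_true = 0.5
h = 2.0*t**theta_true*(1.0 + 0.03*rng.standard_normal(t.size))

theta_fit, _ = np.polyfit(np.log(t), np.log(h), 1)
print(f"fitted theta = {theta_fit:.2f}")         # close to 0.5 for this synthetic series
```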
While the main aim is not to match the experiments as closely as possible, it is anticipated that the results generated in this study could in principle be verified experimentally. Direct comparisons are also still able to be made through appropriate non-dimensionalisations, which has previously been difficult to do when comparing results between simulations and experiments. An assessment will also be made as to the validity of using measurements based on the velocity field to draw conclusions about the concentration field (and vice versa). The paper is organised as follows. In §2, an overview of the governing equations and numerical methods employed to solve these equations is given, as well as a description of the computational setup and initial conditions. This section also gives a brief discussion on some of the challenges associated with performing DNS with broadband surface perturbations. §3 details an analysis of many of the same quantities presented in Sewell _et al._ (2021), including turbulent kinetic energy profiles and spectra as well as various measures of the mixing layer width that are used to estimate the growth rate \(\theta\). The evolution of key length scales and Reynolds numbers is also given for the DNS cases. Finally, §4 gives a summary of the main findings, as well as directions for future work on this problem. 

## 2 Computational Setup 

### Governing Equations 

The computations presented in this paper all solve the compressible Navier-Stokes equations extended to a five-equation, quasi-conservative system of equations based on volume fractions rather than the conventional four-equation, fully-conservative model based on mass fractions for multicomponent flows. This ensures that pressure and temperature equilibrium is maintained across material interfaces when upwind discretisations are used and the ratio of specific heats varies across the interface, as is the case for air and SF\({}_{6}\), which greatly improves the accuracy and efficiency of the computation (Allaire _et al._, 2002; Massoni _et al._, 2002). This is a well-established approach for inviscid computations and was recently extended to include the effects of species diffusion, viscosity and thermal conductivity by Thornber _et al._ (2018), enabling accurate and efficient DNS to be performed for this class of problems. 

Figure 1: A schematic of the problem setup. The major ticks correspond to a grid spacing of \(\Delta x=1.0\) m. The interface is initially located at \(x=3.0\) m and the shock is initially located at \(x=2.5\) m in the light fluid and travels from light to heavy. 

The full set of equations for binary mixtures is 
\[\frac{\partial\rho}{\partial t}+\boldsymbol{\nabla}\boldsymbol{\cdot}(\rho\boldsymbol{u})=0 \tag{1a}\] 
\[\frac{\partial\rho\boldsymbol{u}}{\partial t}+\boldsymbol{\nabla}\boldsymbol{\cdot}(\rho\boldsymbol{u}\boldsymbol{u}^{t}+p\delta)=\boldsymbol{\nabla}\boldsymbol{\cdot}\boldsymbol{\sigma} \tag{1b}\] 
\[\frac{\partial\rho e}{\partial t}+\boldsymbol{\nabla}\boldsymbol{\cdot}([\rho e+p]\,\boldsymbol{u})=\boldsymbol{\nabla}\boldsymbol{\cdot}(\boldsymbol{\sigma}\boldsymbol{\cdot}\boldsymbol{u}-\boldsymbol{q}) \tag{1c}\] 
\[\frac{\partial\rho_{1}f_{1}}{\partial t}+\boldsymbol{\nabla}\boldsymbol{\cdot}(\rho_{1}f_{1}\boldsymbol{u})=\boldsymbol{\nabla}\boldsymbol{\cdot}\left(\rho D_{12}\boldsymbol{\nabla}\frac{W_{1}f_{1}}{W}\right) \tag{1d}\] 
\[\frac{\partial f_{1}}{\partial t}+\boldsymbol{u}\boldsymbol{\cdot}\boldsymbol{\nabla}f_{1}=\boldsymbol{\nabla}\boldsymbol{\cdot}(D_{12}\boldsymbol{\nabla}f_{1})-\mathcal{M}D_{12}\boldsymbol{\nabla}f_{1}\boldsymbol{\cdot}\boldsymbol{\nabla}f_{1}+D_{12}\boldsymbol{\nabla}f_{1}\boldsymbol{\cdot}\frac{\boldsymbol{\nabla}N}{N}. 
\tag{1e}\] In (1), \(\rho\) is the mass density, \(\boldsymbol{u}=[u,v,w]^{t}\) is the mass-weighted velocity vector, \(p\) is the pressure, \(f_{n}\) is the volume fraction of species \(n\) and \(e=e_{i}+e_{k}\) is the total energy per unit mass, where \(e_{k}=\frac{1}{2}\boldsymbol{u\cdot u}\) is the kinetic energy and the internal energy \(e_{i}\) is given by the equation of state. Note that only (1e) is in non-conservative form, hence the term quasi-conservative as conservation errors are negligible (only species internal energies are not conserved). All computations are performed using the ideal gas equation of state \[e_{i}=\frac{p}{\rho(\overline{\gamma}-1)} \tag{2}\] where \(\overline{\gamma}\) is the ratio of specific heats of the mixture. For the five-equation model this is given by \[\frac{1}{\overline{\gamma}-1}=\sum_{n}\frac{f_{n}}{\gamma_{n}-1} \tag{3}\] which is an isobaric closure (individual species temperatures are retained in the mixture). The viscous stress tensor \(\boldsymbol{\sigma}\) for a Newtonian fluid is \[\boldsymbol{\sigma}=-\overline{\mu}\big{[}\boldsymbol{\nabla u}+(\boldsymbol {\nabla u})^{t}\big{]}+\frac{2}{3}\overline{\mu}(\boldsymbol{\nabla\cdot u})\delta \tag{4}\] where \(\overline{\mu}\) is the dynamic viscosity of the mixture. Note that in (4) the bulk viscosity is assumed to be zero according to Stokes' hypothesis. The heat flux \(\boldsymbol{q}=\boldsymbol{q}_{c}+\boldsymbol{q}_{d}\), with the conductive heat flux \(\boldsymbol{q}_{c}\) given by Fourier's law \[\boldsymbol{q}_{c}=-\overline{\kappa}\boldsymbol{\nabla}T \tag{5}\] where \(\overline{\kappa}\) is the thermal conductivity of the mixture, and \(T\) is the temperature. The thermal conductivity of species \(n\) is calculated using kinetic theory as \(\kappa_{n}=\mu_{n}\left(\frac{5}{4}\frac{\mathcal{R}}{W_{n}}+c_{p,n}\right)\), while the thermal conductivity of the mixture (as well as the mixture viscosity) is calculated using Wilke's rule. The enthalpy flux \(\boldsymbol{q}_{d}\), arising from changes in internal energy due to mass diffusion, is given by \[\mathbf{q}_{d}=\sum_{n}h_{n}\mathbf{J}_{n} \tag{6}\] where \(h_{n}=c_{p,n}T\) is the enthalpy of species \(n\) and \(c_{p,n}\) the specific heat at constant pressure. The diffusion flux on the RHS of (1\(d\)) invokes Fick's law of binary diffusion, written in terms of volume fraction. \(W_{n}\) is the molecular weight of species \(n\), \(W\) is the molecular weight of the mixture and the binary diffusion coefficient \(D_{12}\) is calculated by assuming both species have the same Lewis number (\(Le_{1}=Le_{2}=Le\)), such that \[D_{12}=\frac{\overline{\kappa}}{Le\rho\bar{c}_{p}} \tag{7}\] with \(\bar{c}_{p}\) the specific heat at constant pressure for the mixture. Finally in (1\(e\)), \(\mathcal{M}=\frac{W_{1}-W_{2}}{W_{1}f_{1}+W_{2}f_{2}}\) and \(N=p/k_{b}T\) is the number density. ### Numerical method The governing equations presented in SS2.1 are solved using the University of Sydney code Flamenco, which employs a method of lines discretisation approach in a structured, multiblock framework. Spatial discretisation is performed using a Godunov-type finite-volume method, which is integrated in time via a second-order TVD Runge-Kutta method (Spiteri & Ruuth, 2002). 
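Before moving on to the spatial discretisation, the isobaric closure (2)-(3) is simple enough to state in a few lines of code. The sketch below is a minimal stand-alone illustration using the gas properties of table 1; it is not the Flamenco implementation.

```python
# Minimal illustration of the isobaric closure (2)-(3) for an air/SF6 mixture;
# not the Flamenco implementation, gamma values taken from table 1.
gamma_air, gamma_sf6 = 1.4, 1.1

def mixture_gamma(f_air):
    # Equation (3): 1/(gamma_bar - 1) = sum_n f_n/(gamma_n - 1)
    inv = f_air/(gamma_air - 1.0) + (1.0 - f_air)/(gamma_sf6 - 1.0)
    return 1.0 + 1.0/inv

def pressure(rho, e_internal, f_air):
    # Equation (2) rearranged: p = rho * e_i * (gamma_bar - 1)
    return rho*e_internal*(mixture_gamma(f_air) - 1.0)

print(mixture_gamma(0.5), pressure(rho=9.0, e_internal=2.0e5, f_air=0.5))
```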
The spatial reconstruction of the inviscid terms uses a fifth-order MUSCL scheme (Kim & Kim, 2005), which is augmented by a modification to the reconstruction procedure to ensure the correct scaling of pressure, density and velocity fluctuations in the low Mach number limit (Thornber _et al._, 2008). The inviscid flux component is calculated using the HLLC Riemann solver (Toro _et al._, 1994), while the viscous and diffusive fluxes are calculated using second-order central differences. Following Abgrall (1996), the non-conservative volume fraction equation is written as a conservative equation minus a correction term \[\frac{\partial f_{1}}{\partial t}+\mathbf{\nabla}\mathbf{\cdot}(\mathcal{U}f_{1})-f_{ 1}(\mathbf{\nabla}\mathbf{\cdot}\mathcal{U})=\mathbf{\nabla}\mathbf{\cdot}(D_{12}\mathbf{\nabla}f_ {1}) \tag{8}\] with \(\mathcal{U}=\mathbf{u}+\mathcal{M}D_{12}\mathbf{\nabla}f_{1}-D_{12}\frac{\mathbf{\nabla}N}{N}\). The additional terms in \(\mathcal{U}\) that arise from species diffusion must be included in the calculation of the inviscid flux component, as even though they are viscous in nature they modify the upwind direction of the advection of volume fraction in the solution to the Riemann problem at each cell interface. In the HLLC Riemann solver used in Flamenco this is achieved by modifying the wave speeds to incorporate the additional diffusion velocity, see Thornber _et al._ (2018) for further details. In the absence of viscosity and thermal conductivity the governing equations reduce to the inviscid five-equation model of Allaire _et al._ (2002), which has been used in previous studies of RMI (Thornber 2016; Thornber _et al._, 2017). The numerical algorithm described above has been extensively demonstrated to be an effective approach for both ILES and DNS of shock-induced turbulent mixing problems (see Thornber _et al._, 2010, 2011; Groom & Thornber, 2019, 2021). ### Problem Description and Initial Conditions The computational setup is similar to previous studies of narrowband and broadband RMI by Groom & Thornber (2019, 2020) but with a few key differences that will be described here. A Cartesian domain of dimensions \(x\times y\times z=L_{x}\times L\times L\) where \(L=2\pi\) m is used for all simulations. The extent of the domain in the \(x\)-direction is either \(L_{x}=1.5\pi\) for the ILES cases or \(L_{x}=0.75\pi\) for the DNS cases. Periodic boundary conditions are used in the \(y\)- and \(z\)-directions, while in the \(x\)-direction outflow boundary conditions are imposed very far away from the test section so as to minimise spurious reflections from outgoing waves impacting the flow field. The initial mean positions of the shock wave and the interface are \(x_{s}=2.5\) m and \(x_{0}=3.0\) m respectively and the initial pressure and temperature of both (unshocked) fluids is \(p=0.915\) atm and \(T=298\) K, equal to that in the experiments of Jacobs _et al._ (2013). All computations employ the ideal gas equation of state with a fixed value of \(\gamma\) for each species. A schematic of the initial condition is shown in Figure 1. The shock Mach number is \(M=1.5\), which is higher than the \(M=1.2\) shock used in Jacobs _et al._ (2013); Krivets _et al._ (2017) and the \(M=1.17\) shock used in Sewell _et al._ (2021). This is so that the initial velocity jump is larger, which makes more efficient use of the explicit time stepping algorithm, but not so large that it introduces significant post-shock compressibilty effects. 
Therefore the post-shock evolution of the mixing layer is still approximately incompressible in both the present simulations and the experiments (Jacobs _et al._, 2013; Krivets _et al._, 2017; Sewell _et al._, 2021). The initial densities of air and SF\({}_{6}\) are \(\rho_{1}=1.083\) kg/m\({}^{3}\) and \(\rho_{2}=5.465\) kg/m\({}^{3}\) and the post-shock densities are \(\rho_{1}^{+}=2.469\) kg/m\({}^{3}\) and \(\rho_{2}^{+}=15.66\) kg/m\({}^{3}\) respectively. This gives a post-shock Atwood number of \(A^{+}=0.72\), which is essentially the same as the value of 0.71 given in Jacobs _et al._ (2013), indicating that the effects of compressibility are minimal. The variation in \(\rho\) and \(f_{1}\) across the interface is computed based on the surface perturbation described in (2.8) below. The evolution of the interface is solved in the post-shock frame of reference by applying a shift of \(\Delta u=-158.08\) m/s to the initial velocities of the shocked and unshocked fluids. The initial velocity field is also modified to include an initial diffusion velocity at the interface, which is calculated as in previous DNS studies of RMI (Groom & Thornber, 2019, 2021). To improve the quality of the initial condition, three-point Gaussian quadrature is used in each direction to accurately compute the cell averages required by the finite-volume algorithm. 

Table 1 gives the thermodynamic properties of each fluid. The dynamic viscosities of both fluids are calculated using the Chapman-Enskog viscosity model at a temperature of \(T=298\) K, while the diffusivities are calculated under the assumption of Lewis number equal to unity (hence \(Pr_{l}=Sc_{l}\)). In the DNS calculations, the actual values of viscosity used are much higher, so as to give a Reynolds number that is able to be fully resolved, but are kept in the same proportion to each other. This is so that the same domain width \(L\) can be used for each calculation. 

\begin{table} \begin{tabular}{l c c} Property & Air & SF\({}_{6}\) \\ \(W_{l}\) & 28.964 & 146.057 \\ \(\gamma_{l}\) & 1.4 & 1.1 \\ \(\mu_{l}\) & 1.836 & 1.535 \\ \(Pr_{l}\) & 0.71 & 0.90 \\ \(Sc_{l}\) & 0.71 & 0.90 \\ \end{tabular} \end{table} Table 1: The molecular weight \(W_{l}\) (g/mol), ratio of specific heats \(\gamma_{l}\), dynamic viscosities \(\mu_{l}\) (\(\times 10^{5}\) Pa-s) and Prandtl and Schmidt numbers of air and SF\({}_{6}\). 

Based on the interface characterisation of the low-amplitude set of experiments performed in Sewell _et al._ (2021), four different initial surface perturbations of a planar interface are considered which follow an idealised power spectrum of the form 
\[P(k)=Ck^{m}. \tag{2.9}\] 
Three broadband initial conditions are simulated, containing length scales in the range \(\lambda_{max}=L/2\) to \(\lambda_{min}=L/32\) and with a spectral exponent \(m=-1\), \(-2\) and \(-3\) respectively. The choice of bandwidth \(R=\lambda_{max}/\lambda_{min}=16\) is based on estimates of the minimum initial wavelength performed in Jacobs _et al._ (2013) of \(\lambda_{min}=2.9\) to \(3.2\) mm, relative to a test section width of \(L=8.9\times 10^{-2}\) m. When scaled to the dimensions of the experiment, the perturbations in this study all have a minimum wavelength of \(\lambda_{min}=2.8\) mm. Note also that the diagnostic spatial resolution of the PIV method used in Sewell _et al._ (2021) is \(1.98\) mm, resulting in attenuation of the measured scales that are smaller than this. 
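The quoted value of \(\lambda_{min}=2.8\) mm follows directly from rescaling the simulation domain to the experimental test section; a quick check, where the rescaling factor is simply the ratio of the two widths:

```python
# Rescaling the initial perturbation wavelengths from simulation units
# (domain width L = 2*pi m) to the 8.9e-2 m experimental test section.
import numpy as np

L_sim, L_exp = 2*np.pi, 8.9e-2
lam_min_exp = (L_sim/32)*(L_exp/L_sim)   # = L_exp/32, roughly 2.8 mm
lam_max_exp = (L_sim/2)*(L_exp/L_sim)    # = L_exp/2, roughly 44.5 mm
print(lam_min_exp, lam_max_exp)
```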
The constant \(C\) dictates the overall standard deviation of the perturbations and is set such that all initial amplitudes are linear and each perturbation has the same amplitude in the band between \(k_{max}/2\) and \(k_{max}\), specifically \(a_{k_{max}}k_{max}=1\). See Groom & Thornber (2020) for further details, noting that unlike the broadband perturbations analysed in that study the perturbations considered here have different total standard deviations for the same bandwidth. The power spectra for these three perturbations are shown in Figure 2, along with the mean power spectrum of the low-amplitude experiments from Sewell _et al._ (2021). In Figure 2 it can be seen that the \(m=-3\) initial condition is the closest match to the experiments (with an estimated slope of \(m=-2.99\) over the same range of modes), with the other perturbations included to study the effects of varying \(m\). 

Figure 2: Power spectra of the broadband perturbations as well as the mean power spectrum of the low-amplitude experiments from Sewell _et al._ (2021). Note that the spectra are scaled to match the dimensions of the experiment. 

A fourth perturbation (not shown) is also considered: a narrowband perturbation with a constant power spectrum (i.e. \(m=0\)) and length scales in the range \(\lambda_{max}=L/16\) to \(\lambda_{min}=L/32\). This is used to study the effects of additional long wavelength modes in the initial condition and is essentially the same perturbation as the quarter-scale case in Thornber _et al._ (2017), however the initial amplitudes are larger and are defined such that \(a_{k_{max}}k_{max}=1\), which is at the limit of the linear regime. Note that in the experiments of Jacobs _et al._ (2013), \(a_{k_{max}}k_{max}\) ranged between \(2.82\) and \(3.14\), which is much more nonlinear. The choice of restricting the mode amplitudes such that all modes are initially linear is made so that the results may be easily scaled by the initial growth rate and compared with the results of the previous studies. The amplitudes and phases of each mode are defined using a set of random numbers that are constant across all grid resolutions and cases, thus allowing for a grid convergence study to be performed for each case. The interface is also initially diffuse for this same reason, with the profile given by an error function with characteristic initial thickness \(\delta=\lambda_{min}/4\). The volume fractions \(f_{1}\) and \(f_{2}=1-f_{1}\) are computed as 
\[f_{1}(x,y,z)=\frac{1}{2}\text{erfc}\left\{\frac{\sqrt{\pi}\left[x-S(y,z)\right]}{\delta}\right\} \tag{10}\] 
where \(S(y,z)=x_{0}+A(y,z)\), with \(A(y,z)\) being the amplitude perturbation satisfying the specified power spectrum and \(x_{0}\) the mean position of the interface. The amplitude perturbation \(A(y,z)\) is given by 
\[A(y,z)=\sum_{m,n=0}^{N_{max}}\left[a_{mn}\cos(mk_{0}y)\cos(nk_{0}z)+b_{mn}\cos(mk_{0}y)\sin(nk_{0}z)+c_{mn}\sin(mk_{0}y)\cos(nk_{0}z)+d_{mn}\sin(mk_{0}y)\sin(nk_{0}z)\right] \tag{11}\] 
where \(N_{max}=k_{max}L/(2\pi)\), \(k_{0}=2\pi/L\) and \(a_{mn}\ldots d_{mn}\) are selected from a Gaussian distribution. Crucially, the Mersenne Twister pseudorandom number generator is employed, which allows for the same random numbers to be used across all perturbations. This facilitates grid convergence studies for DNS and ensures that the phases of each mode are identical when comparing across perturbations with different values of \(m\); only the amplitudes are varied. 
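A schematic synthesis of such a perturbation is sketched below. It follows the structure of (11) with Gaussian coefficients whose variance follows \(P(k)\propto k^{m}\) over the band, and with a fixed seed so that the same modes are reproduced on every grid. The band restriction and the final normalisation to unit standard deviation are simplifying assumptions; the actual amplitude constraint \(a_{k_{max}}k_{max}=1\) and the exact scaling of Groom & Thornber (2020) are not reproduced here.

```python
# Schematic synthesis of a broadband surface perturbation A(y, z) in the
# spirit of (11); normalisation to unit standard deviation is an assumption,
# not the amplitude constraint used in the paper.
import numpy as np

def broadband_surface(N=256, L=2*np.pi, m_exp=-2.0, lam_max=np.pi, lam_min=np.pi/16, seed=1):
    k0 = 2*np.pi/L
    kmin, kmax = 2*np.pi/lam_max, 2*np.pi/lam_min
    rng = np.random.default_rng(seed)            # fixed seed: same modes on every grid
    y = z = np.arange(N)*L/N
    Y, Z = np.meshgrid(y, z, indexing='ij')
    A = np.zeros((N, N))
    for mm in range(int(kmax/k0) + 1):
        for nn in range(int(kmax/k0) + 1):
            k = k0*np.hypot(mm, nn)
            if k < kmin or k > kmax:
                continue                         # annular band of initial modes only
            a, b, c, d = rng.normal(0.0, np.sqrt(k**m_exp), 4)
            A += (a*np.cos(mm*k0*Y)*np.cos(nn*k0*Z) + b*np.cos(mm*k0*Y)*np.sin(nn*k0*Z)
                  + c*np.sin(mm*k0*Y)*np.cos(nn*k0*Z) + d*np.sin(mm*k0*Y)*np.sin(nn*k0*Z))
    return A/np.std(A)

A = broadband_surface()
print(A.shape, float(A.std()))
```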
For full details on the derivation of the surface perturbation see Thornber _et al._ (2010, 2017) and Groom & Thornber (2020). A visualisation of each initial perturbation is shown in figure 3. Whilst there is a noticeable difference between the narrowband and broadband surface perturbations, the differences between the \(m=-1\) and \(m=-2\) perturbations in particular are quite subtle. Nevertheless these subtle differences in the amplitudes of the additional, longer wavelengths are responsible for quite noticeable differences in the subsequent evolution of the mixing layer, as will be shown in the following sections. This highlights the importance of understanding the sensitivity to initial conditions in RMI-induced flows. 

Figure 3: Contours of volume fraction \(f_{1}\) for the ILES cases at \(t=0\) and \(z=0\). The major ticks on both axes correspond to a grid spacing of \(\Delta x=\Delta y=1\) m. 

For each perturbation, the weighted-average wavelength can be defined as \(\bar{\lambda}=2\pi/\bar{k}\), where 
\[\bar{k}=\frac{\sqrt{\int_{k_{min}}^{k_{max}}k^{2}P(k)\;\mathrm{d}k}}{\sqrt{\int_{k_{min}}^{k_{max}}P(k)\;\mathrm{d}k}}. \tag{12}\] 
Similarly, the initial growth rate of the perturbation standard deviation is given by 
\[\dot{\sigma}_{0}=\sigma_{0}^{+}A^{+}\Delta u\bar{k}/\psi \tag{13}\] 
where \(\sigma_{0}^{+}=C_{V}(1-\Delta u/U_{s})\sigma_{0}\) is the post-shock standard deviation, \(\sigma_{0}\) is the initial standard deviation and \(\psi\) is a correction factor to account for the diffuse interface (Duff _et al._, 1962; Youngs & Thornber, 2020_b_). Here \(C_{V}=(A^{-}+C_{R}A^{+})/(2C_{R}A^{+})\) is an additional correction factor that is applied to the Richtmyer compression factor \(C_{R}=(1-\Delta u/U_{s})\) to give the impulsive model of Vandenboomgaerde _et al._ (1998). For the present gas combination and configuration, \(C_{V}=1.16\) and is used to account for deficiencies in the original impulsive model of Richtmyer (1960) for certain cases. Thornber _et al._ (2017) showed that for a Gaussian height distribution, the integral width \(W=\int\langle f_{1}\rangle\langle f_{2}\rangle\ \mathrm{d}x\) is equal to \(0.564\sigma\) and therefore \(\dot{W}_{0}=0.564\dot{\sigma}_{0}\). For the DNS cases, the initial Reynolds number is calculated in line with previous studies as 
\[Re_{0}=\frac{\bar{\lambda}\dot{W}_{0}\overline{\rho^{+}}}{\overline{\mu}} \tag{14}\] 
where \(\overline{\rho^{+}}=9.065\) kg/m\({}^{3}\) is the mean post-shock density. Table 2 gives the initial growth rate and weighted-average wavelength for each perturbation. 

### Direct Numerical Simulations 

Prior to presenting results for each perturbation, it is important to discuss some of the challenges present when performing DNS of RMI with broadband perturbations. Previous DNS studies of 3D multi-mode RMI have focussed exclusively on narrowband perturbations (Olson & Greenough, 2014; Groom & Thornber, 2019; Wong _et al._, 2019; Groom & Thornber, 2021) or perturbations with a dominant single mode (Tritschler _et al._, 2014_b_). The present set of broadband DNS uses a perturbation with \(8\times\) the bandwidth of initial modes compared to the narrowband perturbation analysed in Groom & Thornber (2019, 2021), but still requires the same number of cells per initial minimum wavelength for a given Reynolds number in order to fully resolve the calculation. 
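As a consistency check, the weighted-average wavelengths in table 2 and the initial Reynolds numbers used for the two DNS cases follow directly from (12)-(14). The sketch below assumes the tabulated values of \(\dot{W}_{0}\) and the post-shock mean density and DNS viscosity quoted in the text, rather than recomputing \(\sigma_{0}^{+}\), \(C_{V}\) and \(\psi\) from the shock solution.

```python
# Weighted-average wavelength (12) and initial Reynolds number (14) for the
# broadband perturbations; W0_dot is taken from table 2, rho+ and mu from the text.
import numpy as np
from scipy.integrate import quad

L = 2*np.pi
k_min, k_max = 2*np.pi/(L/2), 2*np.pi/(L/32)     # lambda_max = L/2, lambda_min = L/32
rho_plus, mu_dns = 9.065, 0.3228                 # kg/m^3 and Pa s (DNS viscosity)

for m, W0_dot in [(-1, 20.03), (-2, 23.84), (-3, 34.32)]:
    num = quad(lambda k: k**2*k**m, k_min, k_max)[0]
    den = quad(lambda k: k**m, k_min, k_max)[0]  # constant C cancels in (12)
    lam_bar = 2*np.pi/np.sqrt(num/den)
    Re0 = lam_bar*W0_dot*rho_plus/mu_dns         # equation (14)
    print(f"m = {m}: lambda_bar = {lam_bar:.3f} m, Re0 = {Re0:.0f}")
# Reproduces lambda_bar = 0.463, 0.785 and 1.33 m, and Re0 of roughly 261 and 526
# for the two DNS cases (m = -1 and m = -2).
```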
To be considered fully resolved and thus qualify as "strict" DNS, grid convergence must be demonstrated for statistics that depend on the smallest scales in the flow, such as enstrophy and scalar dissipation rate. Of the previously cited studies, only Groom & Thornber (2019, 2021) fully resolve these gradient-dependent quantities and none of the studies mentioned (as well as the present study) resolve the internal structure of the shock wave. Demonstration of grid convergence for enstrophy and scalar dissipation rate in the present set of DNS cases is given in Appendix A, however this comes at the cost of limiting the Reynolds number that can be achieved, as discussed below. Regarding the Reynolds number, using the standard width-based definition \(Re_{h}=h\dot{h}/\nu\), where the width \(h\propto t^{\theta}\), the Reynolds number, and hence the grid resolution requirements, can either increase or decrease in time depending on the value of \(\theta\) since 
\[Re_{h}\propto\frac{\theta t^{\theta-1}t^{\theta}}{\nu}\propto t^{2\theta-1}. \tag{15}\] 
Therefore for \(\theta<1/2\) the Reynolds number is decreasing and vice versa for \(\theta>1/2\). 

\begin{table} \begin{tabular}{c c c c c} Quantity & \(m=0\) & \(m=-1\) & \(m=-2\) & \(m=-3\) \\ \(R\) & 2 & 16 & 16 & 16 \\ \(\bar{\lambda}\) & 0.278 & 0.463 & 0.785 & 1.33 \\ \(\dot{W}_{0}\) & 16.74 & 20.03 & 23.84 & 34.32 \\ \end{tabular} \end{table} Table 2: The bandwidth, weighted-average wavelength (m) and initial growth rate of integral width (m/s) for each of the four perturbations. 

Youngs (2004); Thornber _et al._ (2010) showed that the value of \(\theta\) depends on both the bandwidth and spectral slope \(m\) of the initial condition, which was recently demonstrated in Groom & Thornber (2020) using ILES for perturbations of the form given by (2.9) with \(m=-1\), \(-2\) and \(-3\). For the largest bandwidths simulated, these perturbations gave values of \(\theta=0.5\), \(0.63\) and \(0.75\) respectively, which for the \(m=-1\) and \(-2\) cases are quite close to the theoretical values of \(\theta=1/2\) and \(\theta=2/3\). What these results imply is that the Reynolds number of a broadband perturbation with \(m\leqslant-1\) will either be constant or increase with time as the layer develops, which makes performing fully grid-resolved DNS more challenging than for a narrowband layer where \(\theta\leqslant 1/3\) (Elbaz & Shvarts, 2018; Soulard _et al._, 2018). For DNS of narrowband RMI the number of cells per \(\lambda_{min}\) can be maximised, which sets the smallest scale that can be grid resolved and therefore the maximum Reynolds number that can be obtained on a given grid. For fully developed isotropic turbulence, it is well known that grid resolution requirements scale as \(Re^{9/4}\) and the total number of floating point operations required to perform a simulation to a given time scales as \(Re^{3}\) (Pope, 2000). For transitional RMI, empirically the scaling appears to be less severe (closer to \(Re^{2}\)), but available computing power still quickly limits the maximum Reynolds number that can be obtained. The simulations presented in Groom & Thornber (2021) represent the current state of the art in terms of maximum Reynolds number that can be achieved using the Flamenco algorithm. Even then, the highest Reynolds number simulation in that study was still short of meeting the mixing transition requirement for fully developed turbulence in unsteady flows (Zhou _et al._, 2003). 
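A short numerical illustration of the consequence of (15): the factor by which \(Re_{h}\) changes over one decade in time for representative values of \(\theta\), which is what drives the differing resolution requirements just described.

```python
# Relative change in Re_h implied by (15) over one decade in time,
# Re_h(10 t)/Re_h(t) = 10**(2*theta - 1), for representative growth exponents.
for theta in (0.25, 1.0/3.0, 0.5, 0.63, 0.75):
    print(f"theta = {theta:.2f}: Re_h changes by a factor {10.0**(2*theta - 1):.2f} per decade")
```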
For DNS of broadband RMI, assuming the same grid resolution is used, the larger bandwidth necessitates a smaller Reynolds number since the number of cells per \(\lambda_{min}\) required to resolve the shock-interface interaction and subsequent evolution is the same. This is before any considerations about whether additional grid resolution is required at later time due to increasing Reynolds number. The requirement that all initial amplitudes be linear also limits the initial velocity jump (and hence the Reynolds number) that can be obtained, and the diffuse profile across the interface that is required to properly resolve the shock-interface interaction in DNS also dampens the initial velocity jump (relative to if a sharp interface was used). All of this results in the fact that for the current maximum grid sizes simulated in this and previous studies (e.g. \(2048^{2}\) cross-sectional resolution), DNS can be performed at either a moderate Reynolds number but small bandwidth (i.e. too narrow to be indicative of real surface perturbations) as in Groom & Thornber (2021) or a moderate bandwidth but low Reynolds number (i.e. too diffuse to be indicative of fully-developed turbulence) as in the present study. These observations are not exclusive to DNS of RMI but also apply to RTI, Kelvin-Helmholtz instability and other flows where the effects of initial conditions are important and realistic initial perturbations need to be considered. In spite of all this, DNS is still a useful tool in the context of this study as it provides results that may be considered a plausible lower bound to the experimental results in a similar manner to which ILES results may be considered a plausible upper bound. It is also necessary for computing statistical quantities that depend on the smallest scales of motion being sufficiently resolved, such as the turbulent length scales and Reynolds numbers presented in SS3.6 as well as many other quantities that are important for informing modelling of these types of flows (see Groom & Thornber (2021); Wong _et al._ (2022) for some examples). Comments on how some of the limitations mentioned above might be resolved are given in SS4. ## 3 Results Using the initial conditions and computational setup described in SS2, six simulations are performed with Flamenco. These consist of four ILES corresponding to the four different initial conditions as well as two DNS; one for the \(m=-1\) initial condition and one for the \(m=-2\) initial condition. The viscosity used in these DNS is \(\overline{\mu}=0.3228\) Pa-s, which corresponds to initial Reynolds numbers of \(Re_{0}=261\) and \(Re_{0}=526\) for the \(m=-1\) and \(m=-2\) cases respectively. While this viscosity is much higher than would occur experimentally, it is equivalent to using a much smaller value of \(\overline{\lambda}\) to obtain the same Reynolds number due to the various simplifications employed in the governing equations, such as no variation in viscosity with temperature. For each simulation, grid convergence is assessed using the methodology outlined in Thornber _et al._ (2017) for ILES and Groom & Thornber (2019) for DNS. The simulations were run up to a physical time of \(t=0.1\) s, at which point some of the spikes were observed to have reached the domain boundaries in the \(m=-3\) ILES case. The complete set of simulations is summarised in table 3. Figure 4 shows visualisations of the solution at the latest time of \(t=0.1\) s for the four ILES cases. 
Bubbles of light fluid can be seen flowing into the heavy fluid on the lower side of the mixing layer, while heavy spikes are penetrating into the light fluid on the upper side. In the narrowband case the mixing layer has remained relatively uniform over the span of the domain, whereas in the broadband cases, particularly the \(m=-2\) and \(m=-3\) cases, large-scale entrainment is starting to occur at scales on the order of the domain width. Another noticeable phenomenon at this time is that in the narrowband case some spikes have penetrated much further away from the main mixing layer than in the broadband cases. This is shown in greater detail in figure 6, where isosurfaces of volume fraction \(f_{1}=0.001\) and \(f_{1}=0.999\) are plotted for both the \(m=0\) narrowband case and the \(m=-2\) broadband case to highlight the differences in spike behaviour. Note that in the narrowband case there are taller structures on the spike side that in some instances have been ejected from the main layer. See also Figure 5 from Youngs & Thornber (2020a) for a similar visualisation at a lower Atwood number. A plausible explanation for this is that the slower but more persistent growth of the low wavenumber modes in the broadband cases causes the main mixing layer to eventually disrupt the trajectory of any spikes that were initially ejected from high wavenumber modes. Future work will study this comparison of spike behaviour between narrowband and broadband perturbations at higher Atwood numbers that are more relevant to ICF. Figure 5 shows visualisations at the same physical time for the two DNS cases. As discussed in §2.4, these DNS are at quite low Reynolds number so as to be able to fully resolve the wide range of initial length scales. They are therefore quite diffuse; however, good agreement can still be observed in the largest scales of motion with the corresponding ILES cases. The fluctuating kinetic energy spectra presented in §3.5 also corroborate this observation.

\begin{table} \begin{tabular}{c c c c c c} Case & \(m\) & \(Re_{0}\) & Simulation time (s) & Domain size (m\({}^{3}\)) & Grid resolution \\ 1 & 0 & - & 0.1 & \(1.5\pi\times 2\pi\times 2\pi\) & \(384\times 512^{2}\) \\ 2 & -1 & - & 0.1 & \(1.5\pi\times 2\pi\times 2\pi\) & \(384\times 512^{2}\) \\ 3 & -2 & - & 0.1 & \(1.5\pi\times 2\pi\times 2\pi\) & \(384\times 512^{2}\) \\ 4 & -3 & - & 0.1 & \(1.5\pi\times 2\pi\times 2\pi\) & \(384\times 512^{2}\) \\ 5 & -1 & 261 & 0.1 & \(0.75\pi\times 2\pi\times 2\pi\) & \(384\times 1024^{2}\) \\ 6 & -2 & 526 & 0.1 & \(0.75\pi\times 2\pi\times 2\pi\) & \(384\times 1024^{2}\) \\ \end{tabular} \end{table} Table 3: The initial power spectrum slope, initial Reynolds number (DNS only), total simulation time, domain size and maximum grid resolution employed for each case.

Figure 4: Contours of volume fraction \(f_{1}\) for the ILES cases at \(t=0.1\) s and \(z=0\). The major ticks on both axes correspond to a grid spacing of \(\Delta x=\Delta y=\)1 m.

Figure 5: Contours of volume fraction \(f_{1}\) for the DNS cases at \(t=0.1\) s and \(z=0\). The major ticks on both axes correspond to a grid spacing of \(\Delta x=\Delta y=\)1 m.

### Non-dimensionalisation

The results in the following sections are appropriately non-dimensionalised to allow for direct comparisons with the experiments in Jacobs _et al._ (2013) and Sewell _et al._ (2021). All length scales are normalised by \(\lambda_{min}\), which is equal to 0.196 m in the simulations and is estimated to lie between 2.9 mm and 3.2 mm in the experiments.
As the effects of different initial impulses are of primary interest, it does not make sense to use \(\dot{W}_{0}\) as the normalising velocity scale; therefore all velocities are normalised by \(A^{+}\Delta u\) instead. In the simulations \(A^{+}=0.72\) and \(\Delta u=158.08\) m/s, while in the experiments \(A^{+}=0.71\) and \(\Delta u=74\) m/s. Therefore the non-dimensional time is given by \[\tau=\frac{(t-t_{0})A^{+}\Delta u}{\lambda_{min}} \tag{3.1}\] where \(t_{0}=0.0011\) s is the shock arrival time. This equates to a dimensionless time of \(\tau=57.4\) at the latest time considered in the simulations (\(t=0.1\) s), \(107\leqslant\tau\leqslant 118\) at the latest time prior to reshock in the experiments of Jacobs _et al._ (2013) (\(t-t_{0}=6.5\) ms) and \(73.9\leqslant\tau\leqslant 81.5\) at the latest time prior to reshock in the experiments of Sewell _et al._ (2021) (\(t-t_{0}=4.5\) ms), assuming the same range of values for \(\lambda_{min}\) of 2.9 to 3.2 mm. Figure 7 shows a subset of the image sequence taken from a typical vertical shock tube experiment in Jacobs _et al._ (2013) using the Mie diagnostic. For comparison with the present simulations, a dimensionless time of \(\tau=57.4\) corresponds to a physical time in the range of \(t=3.17\) ms to \(t=3.50\) ms, which may be compared with the images shown for times \(t=3.00\) ms and \(t=3.50\) ms in figure 7.

Figure 6: Isosurfaces of volume fraction \(f_{1}\) for the \(m=0\) (left) and \(m=-2\) (right) ILES cases at \(t=0.1\) s.

Figure 7: Image sequence taken from a typical vertical shock tube experiment using the Mie diagnostic. Times relative to shock impact are shown in each image. Reshock occurs at \(t=6.50\) ms. _Source_: Figure 3 of Jacobs _et al._ (2013).

### Turbulent Kinetic Energy and Mix Width

In this section, comparisons are made both between the present simulation results and those of the experiments, and between the measurement methods used in the experiments and methods that have been commonly employed in previous simulation studies of RMI. To measure the mixing layer width, Jacobs _et al._ (2013) used Mie scattering over a single plane, with each image then row-averaged to obtain the mean smoke concentration in the streamwise direction. For each concentration profile, the mixing layer width is defined as the distance between the 10% and 90% threshold locations. This is similar to the definition of visual width used in simulation studies of both RMI and RTI (see Cook & Dimotakis 2001; Cook & Zhou 2002; Zhou & Cabot 2019), where the plane-averaged mole fraction or volume fraction profile is used along with a typical threshold cutoff of 1% and 99%, e.g. \[h=x\left(\langle f_{1}\rangle=0.01\right)-x\left(\langle f_{1}\rangle=0.99\right). \tag{3.2}\] This is a useful definition of the outer length scale of the mixing layer; however, the choice of cutoff location is somewhat arbitrary and, when used to estimate growth rates, the results are influenced by both the choice of cutoff location and statistical fluctuations (Zhou & Cabot 2019). For that purpose, an integral definition is typically used, such as the integral width (Andrews & Spalding 1990) \[W=\int\langle f_{1}\rangle\langle f_{2}\rangle\;\mathrm{d}x. \tag{3.3}\] If \(f_{1}\) varies linearly with \(x\) then \(h=6W\) (Youngs 1994).
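The non-dimensionalisation and the two width definitions above can be summarised in the short Python sketch below. This is illustrative only: the flow parameters are those quoted in this section, while the volume-fraction profile is a hypothetical linear ramp used to demonstrate the \(h\approx 6W\) relation rather than simulation data.

```python
import numpy as np

A_plus, delta_u = 0.72, 158.08    # post-shock Atwood number and velocity jump (m/s)
lam_min, t0 = 0.196, 0.0011       # minimum wavelength (m) and shock arrival time (s)

def tau(t):
    """Dimensionless time, as in equation (3.1)."""
    return (t - t0) * A_plus * delta_u / lam_min

print(f"tau at t = 0.1 s: {tau(0.1):.1f}")    # approximately 57.4

def trapz(y, x):
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def visual_and_integral_width(x, f1_bar):
    """Visual width (1%/99% cutoffs, eq. 3.2) and integral width (eq. 3.3)."""
    mixed = (f1_bar > 0.01) & (f1_bar < 0.99)
    h = x[mixed].max() - x[mixed].min()
    W = trapz(f1_bar * (1.0 - f1_bar), x)
    return h, W

# Hypothetical linear mean profile, for which h is close to 6W (Youngs 1994).
x = np.linspace(-3.0, 3.0, 6001)
f1 = np.clip(0.5 - 0.5 * x, 0.0, 1.0)
h, W = visual_and_integral_width(x, f1)
print(f"h = {h:.3f}, 6W = {6.0 * W:.3f}")
```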
See also the recent paper by Youngs & Thornber (2020a), where integral definitions of the bubble and spike heights are proposed that are of similar magnitude to the visual width. These are presented in Appendix B and are discussed in §3.3 below. In the experiments of Sewell _et al._ (2021), PIV was used as the main diagnostic and therefore an alternate definition of the mixing layer width was required. In that study, the row-averaged turbulent kinetic energy was used and a mixing layer width defined as the distance between the \(x\)-locations at which the TKE is 5% of its peak value. This definition assumes that the turbulent velocity field spreads at the same rate as the mixing layer. Figure 8 shows streamwise profiles of mean turbulent kinetic energy for each of the four initial conditions, defined as \[\mathrm{TKE}=\frac{1}{2}\overline{u_{i}^{\prime}u_{i}^{\prime}} \tag{3.4}\] where \(\psi^{\prime}=\psi-\overline{\psi}\) indicates a fluctuating quantity and the ensemble average \(\overline{\psi}=\langle\psi\rangle\) is calculated as a plane average taken over the statistically homogeneous directions (in this case \(y\) and \(z\)). The volume fraction profile \(\langle f_{1}\rangle\langle f_{2}\rangle\) is also shown on the right axis of each plot, as well as the (outermost) \(x\)-locations at which the TKE is 5% of its peak value. An important feature worth noting when comparing the narrowband case with the broadband cases is that the 5% cutoff on the spike side (\(x<x_{c}\)) is further from the mixing layer centre \(x_{c}\) than in the \(m=-1\) and \(m=-2\) cases, despite these cases having a greater overall amplitude in the initial perturbation. There is also a greater amount of mixed material, as measured by the product \(\langle f_{1}\rangle\langle f_{2}\rangle\), at this location than in those two broadband cases, which is in line with the observations made in figure 4 about the greater penetration distances of spikes from the main layer in the narrowband case. In all cases the TKE profile is asymmetric, with the 5% cutoff on the spike side being located further away from the mixing layer centre than the corresponding 5% cutoff on the bubble side. This asymmetry, along with the implications it has for the growth rate exponent \(\theta\), is discussed in further detail in §3.3. In Sewell _et al._ (2021) a definition for the mixing layer centre is given as the centroid of the mean turbulent kinetic energy profile, i.e. \[x_{c}=\frac{\int xf(x)\;\mathrm{d}x}{\int f(x)\;\mathrm{d}x} \tag{3.5}\] where \(f(x)\) is the mean turbulent kinetic energy profile. This centroid is also shown in figure 8. This definition is compared with an alternate definition in terms of the \(x\)-location of equal mixed volumes, \[\int_{-\infty}^{x_{c}}\langle f_{2}\rangle\;\mathrm{d}x=\int_{x_{c}}^{\infty}\langle f_{1}\rangle\;\mathrm{d}x \tag{3.6}\] which has been used previously in both computational (Walchli & Thornber 2017; Groom & Thornber 2021) and experimental (Krivets _et al._ 2017) studies of RMI. Figure 9 plots the temporal evolution of both of these definitions for \(x_{c}\) for each initial condition, showing that the TKE centroid consistently drifts towards the spike side of the layer as time progresses. The definition in terms of position of equal mixed volumes is much more robust and remains virtually constant throughout the simulation.
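A minimal sketch of the two centre definitions (3.5) and (3.6) is given below; the TKE and volume-fraction profiles are synthetic stand-ins for the plane-averaged simulation data, chosen so that the TKE profile is skewed towards the spike side and the two definitions therefore disagree.

```python
import numpy as np

def trapz(y, x):
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def tke_centroid(x, tke):
    """Mixing layer centre as the centroid of the mean TKE profile, eq. (3.5)."""
    return trapz(x * tke, x) / trapz(tke, x)

def equal_mix_centre(x, f1_bar):
    """Mixing layer centre as the location of equal mixed volumes, eq. (3.6)."""
    f2_bar = 1.0 - f1_bar
    cum_f2 = np.concatenate(([0.0], np.cumsum(0.5 * (f2_bar[1:] + f2_bar[:-1]) * np.diff(x))))
    cum_f1 = np.concatenate(([0.0], np.cumsum(0.5 * (f1_bar[1:] + f1_bar[:-1]) * np.diff(x))))
    residual = cum_f2 - (cum_f1[-1] - cum_f1)   # monotonically increasing in x
    return x[np.searchsorted(residual, 0.0)]

x = np.linspace(-3.0, 3.0, 1201)
f1 = np.clip(0.5 - 0.5 * x, 0.0, 1.0)           # f1 -> 1 on the spike side x < 0
tke = np.exp(-((x + 0.3) / 0.8) ** 2)           # hypothetical TKE, skewed to the spike side
print(f"TKE centroid:        x_c = {tke_centroid(x, tke):+.3f}")
print(f"equal mixed volumes: x_c = {equal_mix_centre(x, f1):+.3f}")
```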
There is also little variation between cases for this definition, unlike the TKE centroid, which is more biased towards the spike side in the \(m=-3\) and \(m=0\) cases. The choice of definition for the mixing layer centre is important as it will influence the bubble and spike heights that are based on it (as well as their ratio), along with any quantities that are plotted at the mixing layer centre over various points in time. Figure 10 shows the temporal evolution of the mixing layer width, using both the visual width definition based on the mean volume fraction profile (referred to as the VF-based width) as well as the definition from Sewell _et al._ (2021) based on the distance between the 5% cutoff locations in the mean turbulent kinetic energy profile (referred to as the TKE-based width). The mean volume fraction \(f_{1}\) at these 5% cutoff locations is \(\geqslant 0.997\) on the spike side (\(x<x_{c}\)) and \(\leqslant 0.003\) on the bubble side (\(x>x_{c}\)) in all cases, which is why the TKE-based width is larger than the VF-based width in each of the plots, since the VF-based width is defined using a 1% and 99% cutoff in the volume fraction profile. Using nonlinear regression to fit a function of the form \(h=\beta(\tau-\tau_{0})^{\theta}\), the growth rate exponent \(\theta\) can be obtained for the TKE-based width, VF-based width and the integral width (not shown in figure 10) for each case. Following Sewell _et al._ (2021), the fit is performed only for times satisfying \(\overline{k}\sigma_{0}t>1\) so that the flow is sufficiently developed. The estimated value of \(\theta\) for each case is given in table 4. Note that the uncertainties reported are merely taken from the variance of the curve-fit and do not represent uncertainties in the true value of \(\theta\). Similar values of \(\theta\) are obtained from the visual and integral widths for all cases. This is mainly a verification that the results are not severely impacted by a lack of statistical resolution at the lowest wavenumbers, which would result in the visual width measurements being dependent on the specific realisation. The small differences in the values of \(\theta\) reported indicate that there is still some influence of statistical fluctuations; therefore, the estimates made using the integral width should be regarded as the most accurate. When comparing the TKE-based and VF-based threshold widths, there is good agreement for the broadband ILES cases and in particular for the \(m=-3\) ILES case. For the narrowband ILES case, however, the VF-based (and integral) width is growing at close to the theoretical value of \(\theta=1/3\) for self-similar decay proposed by Elbaz & Shvarts (2018), whereas the TKE-based width is growing at a much faster rate of \(\theta=0.589\). This is even faster than any of the broadband cases and is due to the sensitivity of the TKE-based width to spikes located far from the mixing layer centre in the narrowband case, which contain very little material but are quite energetic and which grow at a faster rate than the rest of the mixing layer. For the broadband DNS, the growth rate of the TKE-based width is slightly lower than that of the VF-based width for both cases, indicating that turbulent fluctuations are more confined to the core of the mixing layer. In the \(m=-1\) case, the value of \(\theta\) obtained from the integral width is Reynolds number independent, while for \(m=-2\) the value of \(\theta\) obtained from the integral width in the DNS case is converging towards the high Reynolds number limit given by the ILES case.
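The fitting procedure can be illustrated with the following sketch, which applies scipy's nonlinear least squares to synthetic width data; the noise level, fitting window and bounds are assumptions made for the example and are not the settings used in the study.

```python
import numpy as np
from scipy.optimize import curve_fit

def width_law(tau, beta, tau0, theta):
    """Power-law growth h = beta * (tau - tau0)**theta."""
    return beta * (tau - tau0) ** theta

rng = np.random.default_rng(0)
tau = np.linspace(5.0, 57.4, 200)
h = 0.9 * (tau - 1.5) ** 0.5 * (1.0 + 0.01 * rng.standard_normal(tau.size))

mask = tau > 10.0   # fit only sufficiently developed times, cf. the k*sigma_0*t > 1 criterion
popt, pcov = curve_fit(width_law, tau[mask], h[mask], p0=[1.0, 0.0, 0.5],
                       bounds=([0.0, -5.0, 0.1], [10.0, 5.0, 1.0]))
beta, tau0, theta = popt
print(f"theta = {theta:.3f} +/- {np.sqrt(pcov[2, 2]):.1e}")   # uncertainty from fit variance only
```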
Given that the broadband perturbations, specifically the \(m=-3\) perturbation, are the most relevant to the experiments in Jacobs _et al._ (2013) and Sewell _et al._ (2021), it is reassuring to note that estimates of \(\theta\) made using TKE-based widths measured with PIV correspond well with estimates based on the concentration field.

Figure 9: Temporal evolution of the mixing layer centre \(x_{c}\), comparing the definition based on the centroid of the mean turbulent kinetic energy profile with the definition based on the \(x\)-location of equal mixed volumes.

An alternative method for estimating \(\theta\) is also given in Sewell _et al._ (2021), which makes use of the decay rate of total fluctuating kinetic energy and a relationship between this decay rate \(n\) and the mixing layer growth rate \(\theta\) originally derived by Thornber _et al._ (2010). Assuming that \(h\propto t^{\theta}\) and that the mean fluctuating kinetic energy \(q_{k}\propto\dot{h}^{2}\) gives the relation \(q_{k}\propto t^{2\theta-2}\). Since the total fluctuating kinetic energy is proportional to the width of the mixing layer multiplied by the mean fluctuating kinetic energy, this gives \(\text{TKE}\propto t^{3\theta-2}\propto t^{n}\). Directly measuring the decay rate \(n\) therefore gives an alternative method for estimating \(\theta\), which is particularly useful in experimental settings where only velocity field data is available. This predicted value of \(\theta=(n+2)/3\) has been found to be in good agreement with the measured growth rate from the integral width in multiple studies of narrowband RMI (Thornber _et al._ 2010, 2017).

\begin{table} \begin{tabular}{c c c c c c c} \hline \hline Case & \(m\) & \(Re_{0}\) & TKE-based width \(\theta\) & VF-based width \(\theta\) & Integral width \(\theta\) & TKE decay rate \(\theta\) \\ 1 & 0 & - & \(0.589\pm 1.20\times 10^{-2}\) & \(0.323\pm 4.89\times 10^{-3}\) & \(0.330\pm 1.27\times 10^{-3}\) & \(0.253\pm 7.00\times 10^{-3}\) \\ 2 & -1 & - & \(0.460\pm 1.03\times 10^{-2}\) & \(0.450\pm 1.54\times 10^{-3}\) & \(0.442\pm 1.10\times 10^{-4}\) & \(0.429\pm 5.65\times 10^{-3}\) \\ 3 & -2 & - & \(0.479\pm 3.92\times 10^{-3}\) & \(0.522\pm 3.59\times 10^{-3}\) & \(0.514\pm 3.60\times 10^{-4}\) & \(0.512\pm 3.47\times 10^{-3}\) \\ 4 & -3 & - & \(0.493\pm 6.25\times 10^{-3}\) & \(0.492\pm 1.25\times 10^{-3}\) & \(0.510\pm 1.91\times 10^{-3}\) & \(0.562\pm 2.22\times 10^{-3}\) \\ 5 & -1 & 261 & \(0.444\pm 1.41\times 10^{-2}\) & \(0.501\pm 8.40\times 10^{-4}\) & \(0.441\pm 1.00\times 10^{-4}\) & \(0.492\pm 8.08\times 10^{-3}\) \\ 6 & -2 & 526 & \(0.456\pm 3.69\times 10^{-3}\) & \(0.556\pm 2.27\times 10^{-3}\) & \(0.549\pm 1.52\times 10^{-3}\) & \(0.576\pm 4.69\times 10^{-3}\) \\ \hline \hline \end{tabular} \end{table} Table 4: Estimates of the growth rate exponent \(\theta\) from curve-fits to the TKE-based, VF-based and integral widths, as well as from the decay rate of total turbulent kinetic energy.

Figure 10: Temporal evolution of mixing layer width \(h\) based on the distance between cutoff locations using either the mean turbulent kinetic energy or mean volume fraction profiles. Solid lines indicate ILES results and dotted lines indicate DNS results. Curve-fits to the data are also shown, with the relevant data points used given by the symbols in each plot.
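The decay-rate route to \(\theta\) amounts to a log-log fit, as in the short sketch below; the TKE history is synthetic and the decay exponent is chosen arbitrarily, purely to illustrate the inversion of \(n=3\theta-2\).

```python
import numpy as np

t = np.linspace(0.02, 0.1, 100)        # times after the initial transient (s)
tke = 4.0e-3 * t ** -0.46              # hypothetical TKE decay, n = -0.46

n = np.polyfit(np.log(t), np.log(tke), 1)[0]   # slope of the log-log fit gives n
theta = (n + 2.0) / 3.0                        # invert n = 3*theta - 2
print(f"n = {n:.3f}, theta = {theta:.3f}")
```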
However, Groom & Thornber (2020) showed that for RMI evolving from broadband perturbations with bandwidths as large as \(R=128\) the measured values of \(\theta\) do not agree with this theoretical prediction, indicating that longer periods of growth dominated by just-saturating modes are required than can currently be obtained in simulations. Figure 11 shows the temporal evolution of TKE, where the integration has been performed between the 5% cutoff locations used to define the TKE-based width. Nonlinear least squares regression is again used to estimate \(n\) for each case, with the fit performed for times greater than the point at which the curvature becomes convex. The corresponding value of \(\theta\) for each \(n\) using the relation \(n=3\theta-2\) is given in table 4. For the narrowband case the estimate of \(\theta\) from the TKE decay rate does not agree with the other estimates, indicating that the mixing layer growth is not sufficiently self-similar (a key assumption in the derivation) and lags the decay in TKE. This is still true even when the range of times used in the curve-fitting procedure is restricted to be the same as for the curve-fit to the decay rate (not shown). For the broadband cases there is better agreement, however, particularly in the \(m=-1\) and \(m=-2\) ILES cases. In all broadband cases the bandwidth of the initial perturbation is relatively small compared to the perturbations analysed in Groom & Thornber (2020) and the longest initial wavelength saturates early on in the overall simulation; therefore, the conclusions made in that study regarding the \(n=3\theta-2\) relation do not necessarily apply here as the current broadband cases are not in the self-similar growth regime. They are also likely not in full self-similar decay, however, especially if the narrowband case is not, yet the values of \(\theta\) are in better agreement than in the narrowband case. Further work is required to determine why this is indeed the case.

Figure 11: Temporal evolution of total fluctuating kinetic energy, integrated between the 5% cutoff locations. Solid lines indicate ILES results and dotted lines indicate DNS results. Curve-fits to the data are also shown, with the relevant data points used given by the symbols in each plot.

Comparing the estimates of \(\theta\) with those in Sewell _et al._ (2021) using both the TKE-based width and TKE decay rate, the \(m=-3\) simulation results are in between the results of the low-amplitude and high-amplitude experiments. For the low-amplitude experiments (prior to reshock), the TKE-based width measurements gave \(\theta=0.45\) and the TKE decay rate measurements gave \(\theta=0.68\) (which would correspond to no decay of TKE if the layer was homogeneous (Barenblatt _et al._, 1983)). The equivalent results in the \(m=-3\) simulation were \(\theta=0.493\) and \(\theta=0.562\), i.e. larger and smaller than the respective experimental results but both within the experimental margins of error. Similarly for the high-amplitude experiments, both the TKE-based width measurements and the TKE decay rate measurements gave \(\theta=0.51\), indicating that the turbulence in the mixing layer is more developed and closer to self-similar prior to reshock. The \(m=-3\) simulation results are also within the experimental margins of error for these results.
Overall, the combination of experimental and computational evidence indicates that, when broadband surface perturbations are present, the effects of initial conditions persist for a much greater period of time than just the time to saturation of the longest initial wavelength (as considered in previous simulation studies of broadband RMI) and last for the duration of the first-shock growth in a typical shock tube experiment. Furthermore, a consideration of the impact of finite bandwidth in the initial power spectrum (also referred to as confinement) is required when adapting theoretical results for infinite bandwidth (unconfined, see Youngs (2004); Thornber _et al._ (2010); Soulard _et al._ (2018); Soulard & Griffond (2022)) to a specific application.

### Bubble and Spike Heights

In order to help better explain the estimates for \(\theta\) given in table 4, it is useful to decompose the TKE-based and VF-based widths into separate bubble and spike heights, \(h_{b}\) and \(h_{s}\), defined as the distance from the mixing layer centre \(x_{c}\) to the relevant cutoff location on the bubble and spike side of the layer respectively. Given the drift in time for the centroid of the TKE profile shown in figure 9, the \(x\)-location of equal mixed volumes is used as the definition of the mixing layer centre for both the VF-based and TKE-based bubble and spike heights. Figures 12 and 13 show the evolution in time of \(h_{b}\) and \(h_{s}\) respectively for heights based on both the 5% TKE cutoff (referred to as TKE-based heights) and the 1% and 99% volume fraction cutoff (referred to as VF-based heights). Some important trends can be observed. Firstly, the VF-based heights are smoother than the corresponding TKE-based heights, indicating that they are less sensitive to statistical fluctuations. Secondly, the TKE-based \(h_{b}\) and \(h_{s}\) are greater than the corresponding VF-based heights in all cases, and for both measures the spike height is greater than the bubble height. This can also be seen in figure 14, which plots the ratio \(h_{s}/h_{b}\) vs. time and shows that \(h_{s}/h_{b}>1\) for all cases. The same trend was observed in Youngs & Thornber (2020a) for both \(At=0.5\) and \(At=0.9\) but in a heavy-light configuration where the heavy spikes are being driven into the lighter fluid in the same direction as the shock wave. Appendix B plots the same integral definitions of the bubble and spike heights used in Youngs & Thornber (2020a), verifying that the behaviour is very similar to the VF-based heights presented here. The ratio of spike to bubble heights using both threshold measures is also very similar at late time in all cases with the exception of the narrowband case. The ratio \(h_{s}/h_{b}\) also appears to be converging to the same value at late time in all cases except for the TKE-based heights in the narrowband case, suggesting it is only dependent on the Atwood number. Figure 14 shows that the ratio \(h_{s}/h_{b}\) is approximately constant by the end of the simulations. This indicates that a single \(\theta\) is appropriate for describing the growth of the mixing layer beyond this point. However, prior to that \(h_{b}\) and \(h_{s}\) do grow at different rates, as shown in table 5, where the bubble growth rate exponent is denoted by \(\theta_{b}\) and the spike growth rate exponent is denoted by \(\theta_{s}\).

Figure 12: Temporal evolution of the bubble height \(h_{b}\) based on the distance between cutoff locations using either the mean turbulent kinetic energy or mean volume fraction profiles. Solid lines indicate ILES results and dotted lines indicate DNS results. Curve-fits to the data are also shown, with the relevant data points used given by the symbols in each plot.
\begin{table} \begin{tabular}{c c c c c c c} \hline \hline Case & \(m\) & \(Re_{0}\) & TKE-based \(\theta_{b}\) & VF-based \(\theta_{b}\) & TKE-based \(\theta_{s}\) & VF-based \(\theta_{s}\) \\ 1 & 0 & - & \(0.493\pm 2.43\times 10^{-2}\) & \(0.441\pm 2.43\times 10^{-2}\) & \(0.615\pm 1.72\times 10^{-2}\) & \(0.277\pm 5.38\times 10^{-3}\) \\ 2 & -1 & - & \(0.350\pm 8.94\times 10^{-3}\) & \(0.514\pm 1.04\times 10^{-3}\) & \(0.509\pm 1.57\times 10^{-2}\) & \(0.425\pm 2.46\times 10^{-3}\) \\ 3 & -2 & - & \(0.355\pm 1.31\times 10^{-2}\) & \(0.466\pm 4.29\times 10^{-3}\) & \(0.543\pm 8.58\times 10^{-3}\) & \(0.550\pm 3.31\times 10^{-3}\) \\ 4 & -3 & - & \(0.280\pm 2.95\times 10^{-2}\) & \(0.282\pm 1.79\times 10^{-3}\) & \(0.586\pm 1.07\times 10^{-2}\) & \(0.606\pm 1.49\times 10^{-3}\) \\ 5 & -1 & 261 & \(0.338\pm 1.36\times 10^{-2}\) & \(0.461\pm 3.20\times 10^{-4}\) & \(0.509\pm 1.61\times 10^{-2}\) & \(0.523\pm 1.46\times 10^{-3}\) \\ 6 & -2 & 526 & \(0.284\pm 8.39\times 10^{-3}\) & \(0.458\pm 2.57\times 10^{-3}\) & \(0.561\pm 4.89\times 10^{-3}\) & \(0.613\pm 2.23\times 10^{-3}\) \\ \hline \hline \end{tabular} \end{table} Table 5: Estimates of the growth rate exponents \(\theta_{b}\) and \(\theta_{s}\) from curve-fits to the TKE-based and VF-based bubble and spike heights.

Two key trends can be observed: the VF-based \(\theta_{b}\) is greater than the TKE-based \(\theta_{b}\) in all cases other than the narrowband (\(m=0\)) case, while the VF-based \(\theta_{s}\) is greater than the TKE-based \(\theta_{s}\) in all cases other than the \(m=-1\) ILES case and the narrowband case. The \(m=-3\) case also has the smallest difference in \(\theta_{b}\) and \(\theta_{s}\) for both threshold measures. Comparing the DNS cases with their respective ILES cases, the VF-based \(h_{b}\) is almost independent of the Reynolds number in both the \(m=-1\) and \(m=-2\) cases. This is also true for the TKE-based \(h_{s}\) in the \(m=-2\) cases. A higher degree of Reynolds number dependence is observed for both definitions of \(h_{s}\), which is consistent with previous observations made about turbulence developing preferentially on the spike side of the mixing layer (Groom & Thornber 2021). This can also be observed for the integral definitions of \(h_{b}\) and \(h_{s}\) given in Appendix B. This analysis provides evidence that, prior to reshock, \(h_{b}\) and \(h_{s}\) do grow at different rates in a typical shock tube experiment. However, their growth rate exponents have equalised by the time reshock arrives. This is a complicating factor when estimating a single value for \(\theta\) at early times and points to the difficulties in obtaining self-similar growth for RMI in both experiments and simulations. This also suggests that the ratio of spike to bubble heights could be used to determine when it is appropriate to start curve-fitting for estimating a single value of \(\theta\), and that measurements based on the concentration field are likely more accurate in this regard than those made using the velocity field.

Figure 13: Temporal evolution of the spike height \(h_{s}\) based on the distance between cutoff locations using either the mean turbulent kinetic energy or mean volume fraction profiles. Solid lines indicate ILES results and dotted lines indicate DNS results. Curve-fits to the data are also shown, with the relevant data points used given by the symbols in each plot.
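The threshold-based heights can be illustrated with the sketch below; the profiles are synthetic and the centre is taken as \(x_{c}=0\), with the TKE profile deliberately skewed towards the spike side so that the TKE-based ratio \(h_{s}/h_{b}\) exceeds unity, mirroring the trend observed in the simulations.

```python
import numpy as np

def vf_heights(x, f1_bar, x_c):
    """VF-based heights: 99% cutoff on the spike side, 1% cutoff on the bubble side."""
    spike_edge = x[f1_bar < 0.99].min()
    bubble_edge = x[f1_bar > 0.01].max()
    return x_c - spike_edge, bubble_edge - x_c          # h_s, h_b

def tke_heights(x, tke, x_c):
    """TKE-based heights: outermost locations where TKE >= 5% of its peak."""
    above = x[tke >= 0.05 * tke.max()]
    return x_c - above.min(), above.max() - x_c          # h_s, h_b

x = np.linspace(-3.0, 3.0, 1201)
f1 = np.clip(0.5 - x / 2.4, 0.0, 1.0)                    # hypothetical mean volume fraction
tke = np.exp(-((x + 0.2) / 0.9) ** 2)                    # hypothetical TKE, skewed to spike side
x_c = 0.0                                                # assumed equal-mixed-volume centre
hs_vf, hb_vf = vf_heights(x, f1, x_c)
hs_tke, hb_tke = tke_heights(x, tke, x_c)
print(f"VF-based : h_s = {hs_vf:.2f}, h_b = {hb_vf:.2f}, ratio = {hs_vf / hb_vf:.2f}")
print(f"TKE-based: h_s = {hs_tke:.2f}, h_b = {hb_tke:.2f}, ratio = {hs_tke / hb_tke:.2f}")
```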
### Anisotropy

The anisotropy of the fluctuating velocity field is explored using the same two measures presented in Sewell _et al._ (2021). The first is a global measure of anisotropy, defined as \[\text{TKR}=\frac{2\times\text{TKX}}{\text{TKY}+\text{TKZ}} \tag{3.7}\] where \(\text{TKX}=\frac{1}{2}\overline{u^{\prime}u^{\prime}}\), \(\text{TKY}=\frac{1}{2}\overline{v^{\prime}v^{\prime}}\) and \(\text{TKZ}=\frac{1}{2}\overline{w^{\prime}w^{\prime}}\), with each quantity integrated between the cutoff locations based on 5% of the maximum TKE. The second measure is the Reynolds stress anisotropy tensor, whose components are defined by \[b_{ij}=\frac{\overline{u^{\prime}_{i}u^{\prime}_{j}}}{\overline{u^{\prime}_{k}u^{\prime}_{k}}}-\frac{1}{3}\delta_{ij}. \tag{3.8}\] This tensor, specifically the \(x\)-direction principal component \(\boldsymbol{b}_{11}\) for this particular flow, is a measure of anisotropy in the energy-containing scales of the fluctuating velocity field, with a value of 0 indicating isotropy in the direction of that component. The local version of TKR (i.e. with TKX, TKY and TKZ not integrated in the \(x\)-direction) can be written in terms of \(\boldsymbol{b}_{11}\) as \[\frac{2\overline{u^{\prime}u^{\prime}}}{\overline{v^{\prime}v^{\prime}+w^{\prime}w^{\prime}}}=\frac{2\boldsymbol{b}_{11}+2/3}{2/3-\boldsymbol{b}_{11}} \tag{3.9}\] allowing the two measures to be related to one another.

Figure 14: Temporal evolution of the ratio of spike to bubble height. Solid lines indicate ILES results and dotted lines indicate DNS results. Curve-fits to the data are also shown, with the relevant data points used given by the symbols in each plot.

Figure 15 shows the temporal evolution of the global anisotropy measure TKR for each case. Compared to the equivalent figure 13 in Sewell _et al._ (2021) the peak in anisotropy at early time is less pronounced; however, this is due to only integrating TKX, TKY and TKZ between the 5% cutoff locations. Figure 10 in Groom & Thornber (2019) shows the same measure without this limit on the integration for a similar case, with the peak in anisotropy much closer to that observed in Sewell _et al._ (2021). This indicates that much of the anisotropy observed at very early times is due to the shock wave. At an equivalent dimensionless time to the latest time simulated here, the anisotropy ratio presented in Sewell _et al._ (2021) is approximately 2 for the high-amplitude experiments and 3 for the low-amplitude experiments. For the \(m=-3\) perturbation that most closely matches those experiments the TKR at the latest time is 2.46, while for the other ILES cases the late-time TKR decreases as \(m\) increases. For the \(m=0\) narrowband case the late-time value is 1.55, which is within the range of 1.49-1.66 observed across codes on the \(\theta\)-group quarter-scale case (Thornber _et al._ 2017), a case which is essentially the same perturbation but at a lower Atwood number. For the DNS cases a very different trend is observed, where the anisotropy continually grows as time progresses. This is due to the very low Reynolds numbers of these simulations, with the lack of turbulence preventing energy from being transferred to the transverse directions. The spatial variation in anisotropy is shown in figure 16, plotted between the 5% cutoff locations for each case.
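A compact sketch of how these two measures could be evaluated from plane-averaged fluctuations is given below; random fields stand in for the simulation data, and the restriction of the integration to the 5% cutoff locations is omitted (the sums run over all \(x\)) for brevity.

```python
import numpy as np

rng = np.random.default_rng(1)
shape = (64, 64, 64)                         # (x, y, z) cells, purely illustrative
u = 1.5 * rng.standard_normal(shape)         # x-fluctuations carry more energy, as behind the shock
v = 1.0 * rng.standard_normal(shape)
w = 1.0 * rng.standard_normal(shape)

def plane_avg(q):
    """Average over the homogeneous y and z directions."""
    return q.mean(axis=(1, 2))

up, vp, wp = (q - plane_avg(q)[:, None, None] for q in (u, v, w))
tkx, tky, tkz = (0.5 * plane_avg(q * q) for q in (up, vp, wp))

TKR = 2.0 * tkx.sum() / (tky.sum() + tkz.sum())                              # global measure (3.7)
b11 = plane_avg(up * up) / plane_avg(up * up + vp * vp + wp * wp) - 1.0 / 3.0  # profile of (3.8)
print(f"TKR = {TKR:.2f}, b11 at the mid-plane = {b11[shape[0] // 2]:.2f}")
```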
For the broadband cases the anisotropy is slightly higher on the spike side of the layer, with the greatest increase in the \(m=-3\) case. This mirrors the results shown in Sewell _et al._ (2021) for \(\boldsymbol{b}_{11}\), with quite good agreement observed between the \(m=-3\) case at the latest time and the low-amplitude experiments just prior to reshock. In the narrowband case the increase in anisotropy from the mixing layer centre to the spike side is greater but the overall magnitude of \(\boldsymbol{b}_{11}\) is lower, consistent with what was observed for TKR. The DNS results show that the biggest increase in anisotropy at low Reynolds numbers is in the centre of the mixing layer; there is a smaller difference in anisotropy between the DNS and ILES cases at either edge. Figure 17 shows the temporal evolution of \(\boldsymbol{b}_{11}\) at the mixing layer centre, both for the definition of \(x_{c}\) in terms of the TKE centroid (shown in figure 16) as well as the alternate definition in terms of the position of equal mixed volumes. The results for both definitions are similar across all cases, with the anisotropy at the position of equal mix being slightly lower in all cases. In the DNS cases \(\boldsymbol{b}_{11}\) is approximately constant in time, indicating that the growth in anisotropy that was observed for TKR in figure 15 is occurring on either side of the mixing layer centre. The range of values is also comparable to those given in Wong _et al._ (2019) prior to reshock.

Figure 15: Temporal evolution of the global anisotropy measure, with each component integrated between the 5% cutoff locations. Solid lines indicate ILES results and dotted lines indicate DNS results.

Figure 16: Spatial distribution of the \(x\)-direction principal component of the Reynolds stress anisotropy tensor at time \(\tau=57.4\). Solid lines indicate ILES results and dotted lines indicate DNS results. Also shown is the mixing layer centre defined by the TKE centroid (black dashed lines).

### Spectra

The distribution of fluctuating kinetic energy per unit mass across the different scales of motion is examined using radial power spectra of the transverse and normal components, calculated as \[E_{i}(\kappa)=\widehat{u_{i}^{\prime}}^{\dagger}\widehat{u_{i}^{\prime}} \tag{3.10}\]
For all three broadband ILES cases there are two distinct ranges in both the normal and transverse spectra, which approximately correspond to wavenumbers lower and higher than \(\kappa_{max}=k_{max}(L/2\pi)=32\). Thornber _et al._ (2010) modified the analysis of Zhou (2001) to take into account the effects of the initial perturbation spectrum, resulting in an expected scaling for broadband perturbations of the form \(E(\kappa)\sim\kappa^{(m-6)/4}\). This scaling is observed for the transverse spectra at wavenumbers greater than \(\kappa_{max}\), while for the normal spectra a scaling of \(E(\kappa)\sim\kappa^{(m-5)/4}\) is observed, the reason for which is currently unclear. Figure 17: Temporal evolution of \(x\)-direction principal component of the Reynolds stress anisotropy tensor at the mixing layer centre plane. Solid lines indicate ILES results and dotted lines indicate DNS results. For wavenumbers less than \(\kappa_{max}\) the normal spectra scale as \(\kappa^{-3/2}\) in the \(m=-2\) and \(m=-3\) cases, which is in good agreement with previous calculations for narrowband perturbations (Thornber, 2016; Groom and Thornber, 2019). The narrowband case presented here has a slightly less steep scaling for both the normal and transverse spectra, although it has not been run to as late of a dimensionless time as in previous studies such as Thornber _et al._ (2017). The normal spectrum in the \(m=-1\) case also has a scaling that is less steep than \(\kappa^{-3/2}\). A possible explanation for this is that saturation occurs a lot later in this case than the other broadband cases and therefore it may still be transitioning between an \(E(\kappa)\sim\kappa^{(m+2)/2}\) and a \(\kappa^{-3/2}\) scaling. For the transverse spectra in each of the broadband cases at wavenumbers less than \(\kappa_{max}\) a similar trend is observed, with each spectrum having a scaling that is shallower than \(\kappa^{-3/2}\). The same argument of transition between an \(E(\kappa)\sim\kappa^{(m+2)/2}\) and a \(\kappa^{-3/2}\) scaling may also be applied here, however simulations to later time would be required to confirm this. Finally, for the DNS cases no inertial range is observed due to the low Reynolds numbers that are simulated. For the normal spectra there is quite good agreement between the DNS and ILES data in the energy-containing scales at low wavenumbers. The transverse spectra contain less energy at these wavenumbers in the DNS cases due to suppression of secondary instabilities that transfer energy from the normal to transverse directions. Sewell _et al._ (2021) did not observe an inertial range in their TKE spectra prior to reshock, however they noted that there is likely some attenuation of the spectra at scales smaller than the effective window size of their PIV method, which is equivalent to a dimensionless wavenumber of \(\kappa=47\). This makes it difficult to compare and verify the current findings with their existing experimental setup. ### Turbulent Length Scales and Reynolds Numbers In order to give a better indication of how the present set of results compare with the experiments of Jacobs _et al._ (2013) and Sewell _et al._ (2021), the outer-scale Reynolds numbers and key turbulent length scales used to evaluate whether a flow has transitioned to turbulence are computed using the DNS data. 
### Turbulent Length Scales and Reynolds Numbers

In order to give a better indication of how the present set of results compares with the experiments of Jacobs _et al._ (2013) and Sewell _et al._ (2021), the outer-scale Reynolds numbers and key turbulent length scales used to evaluate whether a flow has transitioned to turbulence are computed using the DNS data. For the purposes of comparison, both the TKE-based and VF-based threshold widths are used as the outer length scale \(h\) from which to compute the outer-scale Reynolds number as \[Re_{h}=\frac{\overline{\rho^{+}}h\dot{h}}{\overline{\mu}}. \tag{3.11}\] Figure 19 shows the temporal variation for both definitions of the outer-scale Reynolds number. The outer-scale Reynolds numbers using the TKE-based definition for \(h\) are roughly a factor of 2 larger, mostly due to the TKE-based width being a lot larger than the VF-based width in all cases, with neither definition close to reaching the critical value of \(Re_{h}\gtrsim 1\)-\(2\times 10^{4}\) for fully developed turbulence (Dimotakis, 2000). For both the \(m=-1\) and \(m=-2\) perturbations the VF-based Reynolds number is approximately constant in time, consistent with the measured values of \(\theta\) given in table 4. Dimotakis (2000) showed that for stationary flows, fully developed turbulence is obtained when \(\lambda_{L}/\lambda_{V}\geqslant 1\), where \(\lambda_{L}=5\lambda_{T}\) is the Liepmann-Taylor length scale and \(\lambda_{V}=50\lambda_{K}\) is the inner-viscous length scale, with \(\lambda_{T}\) and \(\lambda_{K}\) the Taylor and Kolmogorov length scales respectively. These length scales may be related to the outer-scale Reynolds number by \[\lambda_{L}=5Re_{h}^{-1/2}h \tag{3.12}\] \[\lambda_{V}=50Re_{h}^{-3/4}h \tag{3.13}\] from which it can be shown that \(Re_{h}\geqslant 10^{4}\) for fully developed turbulence. For a time-dependent flow, Zhou _et al._ (2003) showed that an additional length scale \(\lambda_{D}=5(\nu t)^{1/2}\) that characterises the growth rate of shear-generated vorticity must be considered, referred to as the diffusion layer scale. The condition for fully developed turbulence then becomes \[\min(\lambda_{L},\lambda_{D})>\lambda_{V}. \tag{3.14}\]

Figure 18: Transverse and normal components of fluctuating kinetic energy per unit mass at the mixing layer centre plane at time \(\tau=57.4\). Solid lines indicate ILES results and dotted lines indicate DNS results.

Figure 19: Outer-scale Reynolds numbers vs. time.

Figure 20: The Liepmann–Taylor (circles), inner-viscous (squares) and diffusion length scales vs. time for both definitions of the outer-scale Reynolds number.

Figure 20 shows the temporal variation of each length scale in (3.14), with \(\lambda_{L}\) and \(\lambda_{V}\) calculated from the outer-scale Reynolds number using both definitions for \(h\). In both cases there is good agreement between the length scales calculated from either definition of \(Re_{h}\). The inner-viscous length scale is greater than the Liepmann-Taylor scale at all times in both cases, consistent with other observations in this paper on the lack of fully developed turbulence in the DNS cases at the Reynolds numbers that can currently be simulated. Sewell _et al._ (2021) also observed \(\lambda_{L}<\lambda_{V}\) at all times prior to reshock in their low-amplitude experiments. The authors note that, because of the different dependence of each length scale on \(Re_{h}\), for \(\theta\leqslant 0.5\) the flow can never transition to turbulence as \(\lambda_{V}\) will grow faster than \(\lambda_{D}\). Furthermore, the definition for \(\lambda_{D}\) implies that it will be 0 at time \(t=0\), which would seem to imply that an RMI-induced flow with \(\theta\leqslant 0.5\) can never become turbulent. However, the virtual time origin is neglected in the original definition for \(\lambda_{D}\); if it is included then this allows for the possibility that \(\lambda_{V}<\lambda_{D}\) at early time. In that situation, transition to turbulence will occur provided the initial velocity jump is strong enough to produce \(\lambda_{L}>\lambda_{V}\) for some period of time. The turbulence will still be decaying over time if \(\theta\leqslant 0.5\) though, and will eventually no longer be fully developed, reflecting a fundamental difficulty in obtaining universal behaviour in experiments or numerical simulations of RMI.
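The transition criterion can be evaluated as in the following sketch; the numerical values of \(h\), \(\dot{h}\), \(\nu\) and \(t\) are representative placeholders rather than values taken from the DNS.

```python
import numpy as np

def length_scales(h, hdot, nu, t):
    """Outer-scale Reynolds number and the length scales in (3.12)-(3.14)."""
    Re_h = h * hdot / nu                  # outer-scale Reynolds number
    lam_L = 5.0 * Re_h ** -0.5 * h        # Liepmann-Taylor scale
    lam_V = 50.0 * Re_h ** -0.75 * h      # inner-viscous scale
    lam_D = 5.0 * np.sqrt(nu * t)         # diffusion layer scale
    return Re_h, lam_L, lam_V, lam_D

# Representative (assumed) values for a low-Reynolds-number layer.
Re_h, lam_L, lam_V, lam_D = length_scales(h=0.3, hdot=20.0, nu=0.036, t=0.05)
transitioned = min(lam_L, lam_D) > lam_V   # criterion of Zhou et al. (2003)
print(f"Re_h = {Re_h:.0f}, lambda_L = {lam_L:.3g}, lambda_V = {lam_V:.3g}, "
      f"lambda_D = {lam_D:.3g}, mixing transition: {transitioned}")
```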
## 4 Conclusions

This paper has presented simulations of an idealised shock tube experiment between air and sulphur hexafluoride that build upon the previous results and analysis presented in Groom & Thornber (2020, 2021). In particular, the effects of additional long wavelength modes in the initial perturbation were explored by comparing the results obtained using a narrowband surface perturbation (similar to the one presented in Groom & Thornber (2021)) and three broadband perturbations (similar to those presented in Groom & Thornber (2020)). Both implicit large-eddy simulations (ILES) of the high-Reynolds number limit and direct numerical simulations (DNS) at Reynolds numbers lower than those observed in the experiments were performed with the Flamenco finite-volume code. Various measures of the mixing layer width, based on both the plane-averaged turbulent kinetic energy and volume fraction profiles, were compared in order to explore the effects of initial conditions as well as the validity of using measurements based on the velocity field to draw conclusions about the concentration field (and vice versa), as is commonly done in experiments due to the difficulties of using diagnostics for both fields simultaneously. The effects of initial conditions on the growth rate exponent \(\theta\) were analysed by curve-fitting the expected power law behaviour for the mixing layer width \(h\) to two different definitions of \(h\): one based on a threshold of 5% of the peak turbulent kinetic energy (TKE) and the other based on 1% and 99% of the mean volume fraction (VF). A third method for estimating \(\theta\) was also considered, based on the relationship between the total fluctuating kinetic energy decay rate \(n\) and \(\theta\) that is derived under the assumption that the mixing layer growth is self-similar. In general, estimates of \(\theta\) using either definition for \(h\) were found to be in good agreement with one another, particularly for the \(m=-3\) broadband perturbation that is the most representative of the initial conditions used in the experiments of Sewell _et al._ (2021). The estimates of \(\theta\) based on \(h\) for all three broadband cases were between 0.44 and 0.52, which is in very good agreement with the experimental estimates in Sewell _et al._ (2021), who found \(\theta=0.45\pm 0.08\) for their low-amplitude cases and \(\theta=0.51\pm 0.04\) for their high-amplitude cases prior to reshock. When the TKE decay rate was used to estimate \(\theta\) the results were generally close to the estimates based on \(h\), indicating that the mixing layer growth is close to self-similar by the end of the simulation. Comparing the ILES and DNS results also shows that there is only a small Reynolds number dependence, which is consistent with previous observations in Groom & Thornber (2019) that the integral quantities are mostly determined by the largest scales of motion.
When the mixing widths were decomposed into individual bubble and spike heights \(h_{b}\) and \(h_{s}\), it was found that \(h_{b}\sim t^{\theta_{b}}\) and \(h_{s}\sim t^{\theta_{s}}\) with \(\theta_{b}\neq\theta_{s}\) at early time. However, it was shown that \(\theta_{b}\approx\theta_{s}\) by the end of each simulation by examining the ratio \(h_{s}/h_{b}\) and showing this to be tending towards a constant at late time. The particular regime being analysed here is different to the self-similar growth regime analysed in Groom & Thornber (2020), as the current set of broadband perturbations has a much smaller bandwidth and therefore saturates quite early relative to the total simulation time. The present findings, which are supported by the experiments, are that while the growth rate in the saturated regime is less sensitive to the specific power spectrum of the initial conditions, the effects of additional long wavelength modes are quite persistent over the duration of a typical shock tube experiment and give rise to growth rates much higher than for narrowband perturbations. Comparing \(\theta\) for the two definitions of \(h\) in the narrowband case also leads to some interesting observations. For the TKE-based mixing layer width the value of \(\theta\) that is measured is almost a factor of two higher than the value that is measured for the VF-based width. This is due to spikes that penetrate further into the lighter fluid and in some cases are ejected from the main layer. These spikes have been observed in previous studies of similar cases, such as Thornber & Zhou (2012); Youngs & Thornber (2020a), and are quite energetic but contain very little heavy material. Therefore they affect the TKE-based width much more than the VF-based width, which can be seen in the greater relative difference between the two measures for the spike height \(h_{s}\) than the bubble height \(h_{b}\). Presumably if such spikes are ejected at early time in the broadband cases then they get overtaken by the linear growth of the long wavelength modes; future work will investigate this in further detail as it is potentially quite an important phenomenon for applications where multiple interfaces are located in close proximity to one another. Future work will also aim to further quantify the effects of finite bandwidth on \(\theta\) and other important integral quantities; see Soulard & Griffond (2022) for an initial discussion in this direction. Analysing the anisotropy of the fluctuating velocity field showed that the mixing layer is persistently anisotropic in the direction of the shock wave in all cases, in good agreement with previous experiments (prior to reshock) as well as numerical studies. For the broadband ILES cases, the energy spectra in both the normal and transverse directions showed two distinct scalings either side of the highest wavenumber \(k_{max}\) in the initial perturbation, which were dependent on the specific initial condition. These scalings were also different for the normal vs. transverse energy spectrum in each case. This was also observed in the narrowband case but only for wavenumbers higher than \(k_{max}\).
Finally, calculations of outer-scale Reynolds numbers and turbulent length scales in the DNS cases showed that the outer-scale Reynolds numbers are approximately constant throughout the simulations, as expected from the estimates of \(\theta\approx 0.5\), and that good agreement was obtained between the turbulent length scales calculated using either the TKE-based or VF-based width as the outer length scale. Overall the results of this study show that, in general, care needs to be taken when using measurements based on the velocity field to infer properties of the concentration field such as the growth rate \(\theta\). This is particularly true when using thresholds rather than integral quantities to represent the mixing layer width. At early times (i.e. prior to reshock in a typical shock tube experiment) the mixing layer is not growing self-similarly, which makes it difficult to determine the value for the growth rate exponent \(\theta\), as a single value may not even be appropriate. However, at the latest time simulated here (just prior to reshock in the experiments of Jacobs _et al._ (2013); Sewell _et al._ (2021)) the mixing layer is tending toward self-similarity and good agreement was obtained with the experimental results across a wide range of quantities, providing additional insight on how to correctly interpret such results and when it is valid to use a single growth rate to describe the mixing layer.

**Acknowledgements.** The authors would like to acknowledge David Youngs for providing useful advice and insight on the formulation of the initial conditions, as well as Jeffrey Jacobs for helpful discussions on the computational setup and interpreting experimental results. The authors would also like to acknowledge the computational resources at the National Computational Infrastructure provided through the National Computational Merit Allocation Scheme, as well as the Sydney Informatics Hub and the University of Sydney's high performance computing cluster Artemis, which were employed for all cases presented here.

**Declaration of interests.** The authors report no conflict of interest.

**Author ORCID.** M. Groom, [https://orcid.org/0000-0003-2473-7229](https://orcid.org/0000-0003-2473-7229); B. Thornber, [https://orcid.org/0000-0002-7665-089X](https://orcid.org/0000-0002-7665-089X)

## Appendix A Grid Convergence of Direct Numerical Simulations

Following the methodology presented in Olson & Greenough (2014) for demonstrating grid convergence, two quantities are used that depend on gradients of the velocity and concentration fields and which are therefore sensitive to the smallest scales in the flow. These are the domain-integrated enstrophy \(\Omega\) and scalar dissipation rate \(\chi\), given by \[\Omega(t)=\iiint\rho\omega_{i}\omega_{i}\;\mathrm{d}x\;\mathrm{d}y\;\mathrm{d}z \tag{A1}\] and \[\chi(t)=\iiint D_{12}\frac{\partial Y_{1}}{\partial x_{i}}\frac{\partial Y_{1}}{\partial x_{i}}\;\mathrm{d}x\;\mathrm{d}y\;\mathrm{d}z \tag{A2}\] where \(\omega_{i}\) is the vorticity in direction \(i\) (summation over \(i\) is implied). Figures 21 and 22 demonstrate grid convergence in the domain-integrated enstrophy and scalar dissipation rate for both DNS cases. Each case is shown to be suitably converged for both of these integral quantities at the finest grid resolution considered, even during the early-time period prior to the shock exiting the domain.

Figure 21: Temporal evolution of domain integrated enstrophy for each grid resolution employed in the DNS cases.

Figure 22: Temporal evolution of domain integrated scalar dissipation rate for each grid resolution employed in the DNS cases.
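A discrete evaluation of these two diagnostics on a uniform grid could take the following form; second-order finite differences and random placeholder fields are used, so this is an illustration of the definitions (A1) and (A2) rather than the Flamenco implementation.

```python
import numpy as np

def enstrophy(rho, u, v, w, dx):
    """Domain-integrated enstrophy: integral of rho * omega_i omega_i dV."""
    dudy, dudz = np.gradient(u, dx, axis=1), np.gradient(u, dx, axis=2)
    dvdx, dvdz = np.gradient(v, dx, axis=0), np.gradient(v, dx, axis=2)
    dwdx, dwdy = np.gradient(w, dx, axis=0), np.gradient(w, dx, axis=1)
    om_x, om_y, om_z = dwdy - dvdz, dudz - dwdx, dvdx - dudy
    return np.sum(rho * (om_x ** 2 + om_y ** 2 + om_z ** 2)) * dx ** 3

def scalar_dissipation(D12, Y1, dx):
    """Domain-integrated scalar dissipation rate: integral of D12 |grad Y1|^2 dV."""
    g = np.gradient(Y1, dx)
    return np.sum(D12 * (g[0] ** 2 + g[1] ** 2 + g[2] ** 2)) * dx ** 3

rng = np.random.default_rng(3)
n, dx = 64, 1.0 / 64
rho = 1.0 + 0.1 * rng.random((n, n, n))
u, v, w = (rng.standard_normal((n, n, n)) for _ in range(3))
Y1 = rng.random((n, n, n))
print(enstrophy(rho, u, v, w, dx), scalar_dissipation(1e-4, Y1, dx))
```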
## Appendix B Integral Definitions of Bubble and Spike Heights

In Youngs & Thornber (2020a) novel definitions were given for the bubble and spike heights \(h_{b}\) and \(h_{s}\) as weighted average distances from the mixing layer centre, \[h_{s}^{(p)}=\left[\frac{(p+1)(p+2)}{2}\frac{\int_{-\infty}^{x_{c}}|x|^{p}(1-\langle f_{1}\rangle)\;\mathrm{d}x}{\int_{-\infty}^{x_{c}}(1-\langle f_{1}\rangle)\;\mathrm{d}x}\right]^{1/p} \tag{B1a}\] \[h_{b}^{(p)}=\left[\frac{(p+1)(p+2)}{2}\frac{\int_{x_{c}}^{\infty}|x|^{p}\langle f_{1}\rangle\;\mathrm{d}x}{\int_{x_{c}}^{\infty}\langle f_{1}\rangle\;\mathrm{d}x}\right]^{1/p}. \tag{B1b}\] Figures 23 and 24 plot the bubble and spike heights (with \(p=3\)), while figure 25 plots their ratio \(h_{s}/h_{b}\). The results are quite similar to the VF-based bubble and spike heights shown in figures 12 to 14, albeit smoother and therefore more suitable for estimating \(\theta_{b}\) and \(\theta_{s}\). While the main purpose of this paper is to compare the quantities typically measured in experiments based on thresholds of the TKE or VF profiles, it is recommended that future studies focus on using integral definitions such as the ones given here.
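The integral heights (B1) can be evaluated as in the sketch below; the volume-fraction profile is a hypothetical linear ramp with the centre assumed at \(x_{c}=0\), for which the definition (with \(p=3\)) recovers the distance from the centre to the edge of the layer, the property that makes these heights comparable in magnitude to the visual width.

```python
import numpy as np

def trapz(y, x):
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def integral_heights(x, f1_bar, x_c=0.0, p=3):
    """Integral bubble and spike heights h_b^(p), h_s^(p) of equation (B1)."""
    pref = (p + 1) * (p + 2) / 2.0
    s, b = x <= x_c, x >= x_c
    h_s = (pref * trapz(np.abs(x[s]) ** p * (1.0 - f1_bar[s]), x[s])
           / trapz(1.0 - f1_bar[s], x[s])) ** (1.0 / p)
    h_b = (pref * trapz(np.abs(x[b]) ** p * f1_bar[b], x[b])
           / trapz(f1_bar[b], x[b])) ** (1.0 / p)
    return h_b, h_s

x = np.linspace(-3.0, 3.0, 1201)
f1 = np.clip(0.5 - 0.5 * x, 0.0, 1.0)     # hypothetical linear profile with edges at x = -1, +1
h_b, h_s = integral_heights(x, f1)
print(f"h_b = {h_b:.3f}, h_s = {h_s:.3f}")   # both equal 1 for this symmetric linear profile
```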
2302.05043
A Review of Predictive and Contrastive Self-supervised Learning for Medical Images
Over the last decade, supervised deep learning on manually annotated big data has been progressing significantly on computer vision tasks. But the application of deep learning in medical image analysis was limited by the scarcity of high-quality annotated medical imaging data. An emerging solution is self-supervised learning (SSL), among which contrastive SSL is the most successful approach to rivalling or outperforming supervised learning. This review investigates several state-of-the-art contrastive SSL algorithms originally on natural images as well as their adaptations for medical images, and concludes by discussing recent advances, current limitations, and future directions in applying contrastive SSL in the medical domain.
Wei-Chien Wang, Euijoon Ahn, Dagan Feng, Jinman Kim
2023-02-10T04:12:11Z
http://arxiv.org/abs/2302.05043v2
# A Review of Predictive and Contrastive Self-supervised Learning for Medical Images ###### Abstract Over the last decade, supervised deep learning on manually annotated big data has been progressing significantly on computer vision tasks. But the application of deep learning in medical image analysis was limited by the scarcity of high-quality annotated medical imaging data. An emerging solution is self-supervised learning (SSL), among which contrastive SSL is the most successful approach to rivalling or outperforming supervised learning. This review investigates several state-of-the-art contrastive SSL algorithms originally on natural images as well as their adaptations for medical images, and concludes by discussing recent advances, current limitations, and future directions in applying contrastive SSL in the medical domain. Keywords:Self-supervised learning, contrastive learning, deep learning, medical image analysis, computer vision. ## 1 Introduction In recent years, deep learning networks, such as convolutional neural networks (CNNs), have seen massive progress in image analysis techniques. LeCun et al.[1] showed that CNNs achieved superior performance on diverse computer vision tasks, including semantic segmentation, image classification, object detection, and activity recognition. When a large amount of data and manually annotated labels are available, CNNs can automatically learn to approximate the relationship between the data and its labels. This type of deep learning algorithm is called supervised learning[2]. However, supervised learning can also be limited by large-scale labelled image data availability, where manual annotation is costly, labour-intensive, time-consuming, and prone to human subjectivity and error[3, 4, 5]. CNNs have also been broadly applied with medical imaging modalities and are considered state-of-the-art in many medical image analysis applications[6], such as with breast cancer classification[7], COVID-19 detection[8] and skin lesion analysis[9]. A variety of methods have been proposed to deal with the problem of limited training images and labels. Transfer learning has become the established method for this problem. With transfer learning, the model is pre-trained on a larger image dataset, such as the ImageNet dataset of labelled natural images, and is then fine-tuned on a smaller dataset in the target domain that does not need to be from the same image domain, such as with a type of medical imaging modality[10]. Although transfer learning has demonstrated promising results in various medical imaging analysis applications[11, 12], there are known limitations[10, 11]. The primary limitation is that the image features extracted from the natural image dataset are not directly relevant to medical imaging datasets. Thus, supervised learning methods optimally designed using natural images do not necessarily translate well when applied to medical imaging analysis[10]. There are several key differences between medical images and natural images. As an example, medical images typically involve the identification of a small part of the images related to its pathologies or abnormalities, also known as regions of interest (ROIs), by utilizing variations in local textures from the whole image; examples of these are small red dots in retinal fundus images which are signs of microaneurysms and diabetic retinopathy[14], and white opaque local patches in chest X-ray images indicate consolidation and pneumonia. 
Natural image datasets, however, often have a large and salient object of interest in each image. Another key difference is that, compared to natural images with diverse content and colours, a large variety of medical images, typically from X-ray, computed tomography (CT), and magnetic resonance imaging (MRI), are grayscale and have similar colours and content attributes across the dataset, with less diversity and contrast than natural images. Additionally, most medical image datasets have fewer image samples despite large variability in visual attributes between them; the number of images in medical image datasets varies from one thousand[13] to one hundred thousand[16, 17], whereas natural image datasets often have over 1 million images (e.g., ImageNet). Considering these differences between natural and medical images, transferring a model pre-trained on natural images to a medical image application is not always an effective solution. He et al.[18] demonstrated that pretraining on ImageNet merely accelerates model convergence early in the training process. To address the scarcity of medical image labels, researchers have been using other deep learning methods that do not entirely rely on labelled image data and instead utilize abundant unlabelled image data[19, 20]. In this context, Yann LeCun presented an early formulation of self-supervised learning (SSL) in 2017, and his talk at the AAAI 2020 conference[21] drew wide attention to its potential. He described it as follows: "In SSL, the system learns to predict part of its input from other parts of its input". SSL, as its name implies, creates supervisory information that is derived from the data itself. As represented in Fig. 1, examples of SSL include predicting future data (yellow) from past data (purple) and predicting past data from present data (blue). Taking sequential datasets as an example, the target objects or images can be seen as anchors: the objects or images before these anchors can be seen as past data, while those after them can be seen as future data. SSL has been widely employed in computer vision applications using natural images. For example, the Bootstrap Your Own Latent (BYOL)[22] method obtained better image classification results than some supervised learning approaches on the ImageNet dataset. Other experiments [23, 24] further demonstrated how SSL could efficiently learn generalizable visual representations from images. For example, Tendle and Hasan[25] analyzed SSL representations that were trained on the ImageNet source dataset and then fine-tuned on two different target datasets: one considerably different from the source dataset, and the other similar to it. By investigating invariance properties of the learned representations, such as rotation, scale change, translation (vertical and horizontal) and background change, their experiments demonstrated that SSL representations generalize better than supervised learning representations. Among SSL methods, contrastive self-supervised learning, or contrastive SSL, is the most successful approach, achieving performance close to, or even surpassing, its supervised learning counterparts [26].
Contrastive learning encourages learning feature representations with inter-class separability and intra-class compactness, which can assist classifier learning [3, 27]. More specifically, intra-class compactness refers to how closely image representations from the same class are related to one another, and inter-class separability refers to how widely apart image representations from different classes are; contrastive SSL can achieve this without labels and can therefore leverage large unlabelled datasets. Contrastive SSL has already been widely studied in both the natural and medical image domains. There are several comprehensive reviews on natural images, such as contrastive learning of visual representations [28], generative learning and contrastive learning[3], pre-trained language models[29], and self-supervised contrastive learning[30]. However, these reviews did not focus on medical images, which differ from natural ones and carry inherent, medical-image-specific challenges and requirements. In addition, there are some SSL reviews on medical images [31, 32]: some of them discussed three categories, including predictive, generative, and contrastive learning, but did not divide the contrastive learning category into subsections or provide a structured partitioning of the work. In contrast, our paper focuses exclusively on predictive and contrastive learning and uses subsections to describe the related background in more detail. In this study, we provide a state-of-the-art review of SSL research, focusing on predictive learning and contrastive SSL, and their adaptation and optimization for the medical imaging domain. Given the focus of our paper on medical images, where possible we have used medical images in our example figures. Our contributions are as follows: Section 2 introduces a systematic categorization of the state-of-the-art predictive learning and contrastive SSL methods and discusses their methodology; these methods are based on natural images. Section 3 presents a review of predictive learning and contrastive SSL methods applied to medical images and their unique adaptations from the natural image counterparts. Section 4 concludes the review, discusses the limitations of predictive learning and contrastive SSL on medical images, and makes suggestions for future research directions.

Fig. 1: The concept of self-supervised learning [1].

Fig. 2: Categorization of predictive learning (a) and categorization of contrastive SSL (b).

## 2 Predictive learning and Contrastive self-supervised learning (SSL)

### Predictive learning

Through predicting geometric transformations of images, predictive learning tasks learn structural and contextual semantics. Three types of spatially relevant position pretext tasks, as shown in Fig. 2(a), are described in this section: relative position, solving jigsaw puzzles, and rotation.

#### 2.1.1 Relative position

The relative position model[33] was trained to learn the relationships between a selected patch and the patches around it. The model selected an area of a particular size from an image sample and divided this area into a certain number of disconnected patches. The numbering and the area of each patch, as shown in Fig. 3, were used for learning the relationship between the centre patch, called the anchor, and the neighbouring patches. As a result, the model learned the relationships between the patches.
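To make the patch-based setup concrete, the sketch below illustrates how relative-position training pairs might be generated before turning to the caveats of the approach discussed next. It is a minimal sketch assuming a PyTorch-style pipeline; the patch size, gap, and jitter values are illustrative assumptions rather than settings from the original work.

```python
import random
import torch

def sample_relative_position_pair(image, patch=64, gap=8, jitter=4):
    """Sample an (anchor, neighbour, label) example for the relative-position
    pretext task. `image` is a (C, H, W) tensor; a 3x3 grid of cells is used,
    with gaps and random jitter so trivial boundary cues are less useful.
    Assumes 2 * jitter <= gap and that the 3x3 grid fits in the image."""
    _, h, w = image.shape
    cell = patch + gap                               # spacing between grid cells
    top = random.randint(0, h - 3 * cell)            # top-left corner of the grid
    left = random.randint(0, w - 3 * cell)

    def crop(row, col):
        dy = random.randint(-jitter, jitter)         # random displacement
        dx = random.randint(-jitter, jitter)
        y = top + row * cell + jitter + dy
        x = left + col * cell + jitter + dx
        return image[:, y:y + patch, x:x + patch]

    anchor = crop(1, 1)                              # centre cell = anchor patch
    neighbours = [(r, c) for r in range(3) for c in range(3) if (r, c) != (1, 1)]
    label = random.randrange(8)                      # which of the 8 neighbours
    neighbour = crop(*neighbours[label])
    return anchor, neighbour, torch.tensor(label)
```

A small CNN would then embed both patches and an 8-way classifier head would predict `label` from the pair of embeddings.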
It is worth noting that the gaps between patches and the random displacement of patches prevent the model from learning shortcuts. Such a shortcut might be provided by low-level cues like boundary patterns or textures that continue between patches. There were three disadvantages to the relative position approach. First, multiple different objects could be included in two individual patches. For example, one patch contained the left atrium and another consisted of the right atrium. There was no relevance between these two objects, each located in its own patch, so no information could be learned about the relationship between them. Second, in the relative position approach, CNNs could learn trivial features, such as the shared corners or edges of patches, instead of semantic feature representations that are beneficial to downstream discriminative tasks, including segmentation and classification. Although some measures, such as randomly jittering the patches, were designed to prevent the model from learning trivial features, patch positions could still be inferred from other cues, such as background patterns. Third, since the relative position approach only involves patches, it did not include the global information of images. This led to limited performance on downstream tasks that rely on global image information, such as image classification. Moreover, some of these tasks relied on ad hoc heuristics that might restrict the transferability and generalization of the learned feature representations for subsequent downstream tasks.

Fig. 3: An example of the predicting relative spatial position [33] pretext task on a CT lung image. The algorithm is trained to learn the relationships between a selected patch (blue centre) and the patches around it (red numbered patches).

#### 2.1.2 Solving Jigsaw puzzle

One additional type of relative position task was termed "solving the jigsaw puzzle" [14]. The principal idea of this pretext task was to learn positional relations among the divided patches of an input sample. In this approach, by solving jigsaw puzzles, the algorithm learned to recognize the elemental structure of the objects, including objects and their relative parts. As shown in Fig. 4, within an image sample, the jigsaw puzzle solution first selected an area of a particular size that was relevant to the topic of interest. Then, this area was divided into nine puzzle patches shuffled according to a predefined permutation set and used as inputs. The model was trained to learn feature representations by correcting the order of those nine patches; the sequence of the nine patches was used to train the model. The greatest challenge of the jigsaw puzzle was that the model required greater computational complexity and memory consumption. Noroozi et al.[1] also extended this to more complicated pretext tasks, such as a setting with 64 predefined permutations, demonstrating that more information on relative position can be learned.

Fig. 4: An example of the "solving the jigsaw puzzle"[14] pretext task on an X-ray pneumothorax image.

#### Rotation

Another context-based pretext task was designed for learning high-level semantic features by training the model to predict the degree to which the input images were rotated. The rotation angle can be seen as a pseudo label for training the model. This is exemplified in Fig. 5.
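As a concrete illustration of how such rotation pseudo-labels might be generated, the following minimal sketch (assuming PyTorch and square images; the 0/90/180/270 setting is the commonly used four-angle choice) rotates each image and uses the rotation index as the supervision signal.

```python
import torch
import torch.nn.functional as F

def rotation_pretext_batch(images):
    """Create a self-labelled batch for the rotation pretext task.
    `images` is a (B, C, H, W) tensor with H == W; each image is rotated by
    0, 90, 180 and 270 degrees, and the rotation index is the pseudo label."""
    rotated, labels = [], []
    for k in range(4):                                   # k * 90 degrees
        rotated.append(torch.rot90(images, k, dims=(2, 3)))
        labels.append(torch.full((images.size(0),), k, dtype=torch.long))
    return torch.cat(rotated), torch.cat(labels)

def rotation_loss(encoder, classifier, images):
    """Standard cross-entropy on the predicted rotation index."""
    x, y = rotation_pretext_batch(images)
    logits = classifier(encoder(x))                       # 4-way prediction
    return F.cross_entropy(logits, y)
```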
The result of [25] showed that the CT lung images rotated by angles of 0, 90, 180, or 270 degrees learn better feature representations than the other degrees rotations. Li et al.[26] also conducted research based on the rotation pretext task in which the angle was an expansion to 360 degrees. Lee et al.[17] trained the model with multiple pretext task learning strategies, including two types of transformations, rotations, and colour permutation, as those various self-supervised data augmentations enabled the reduction of the effects from the transformation invariant. ### Contrastive self-supervised learning (SSL) Contrastive learning is a method to learn feature representations via contrastive loss functions to distinguish between negative and positive image samples. Positive image samples are an augmentation of a target image (also called an anchor) while negative image samples are from other non-target samples within the training set. The contrastive learning approach encourages models to learn general-purpose feature representations that can be reused to enhance learning specifically in downstream tasks, e.g., segmentation and classification tasks, where the models are built using the learned features[38]. Contrastive learning methods typically vary in how they use unlabelled data to create or define negative and positive image pairs, and also in how they are sampled during training. Based on the idea of Liu et al.[3], contrastive learning categories are divided into two subcategories: context-instance contrast and instance-instance contrast. The context-instance contrast, also known as the global-local contrast, is concerned with modelling the relationship between a sample's local feature and its global context representation. Instance-instance contrast investigates the connections between the instance-level local representations of distinct samples. However, these two categories do not cater for the specific needs of sequential image or time series datasets. Any data that has elements that are arranged in sequences is referred to as sequential data[39]. Sequences of user actions, time series, and DNA sequences are a few examples. Yue et al.[40] mentioned that time-series medical images include rich spatial and temporal information. Therefore, we suggest a third category named temporal contrast, which is related to SSL designed for the sequential datasets. The three categorisations of contrastive SSL are shown in Fig. 2(b). To train on unlabelled data, SSL uses "pretext" tasks as an alternative way to extract useful latent representations. Through solving the pretext tasks, pseudo labels, as supervisory signals, are generated automatically based on the dataset's properties. For example, with the rotation pretext task, the supervisory signals of "rotation angles", are derived from the unlabelled input samples. There are two different application paradigms for downstream tasks using the pretext task results. Fig. 6(a) shows that the first paradigm is learning transferable features. After solving the pretext tasks, the model will try to learn feature representation which can then be further trained for, e.g., fine-tuning for different tasks such as classification and detection. In contrast, Fig. 6(b) illustrates an example of learning "applicable embeddings" that refers to the pretext tasks used to directly learn generalizable features for downstream tasks. 
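The two application paradigms in Fig. 6 can be summarised in a few lines of code: after pretext-task pretraining, the encoder is either fine-tuned end to end or frozen and used as a fixed feature extractor. The sketch below assumes PyTorch and a generic `encoder` that returns flat feature vectors; it is illustrative rather than the setup of any specific paper.

```python
import torch.nn as nn

def build_downstream_model(encoder, feat_dim, num_classes, fine_tune=True):
    """Paradigm (a): fine-tune the pretrained encoder together with a new head.
    Paradigm (b): freeze the encoder and train only a lightweight classifier."""
    if not fine_tune:
        for p in encoder.parameters():
            p.requires_grad = False           # frozen features / linear probe
    head = nn.Linear(feat_dim, num_classes)   # task-specific classification head
    return nn.Sequential(encoder, head)
```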
Various pretext tasks are designed with these different augmentation transformations to capture the expected semantic or structural characteristics of images for downstream tasks. Before diving into subcategories, the contrastive learning loss function is defined in Section 2.2.1 for a fundamental understanding of SSL. Then, context-instance contrast learning and instance-instance contrast learning are described in Sections 2.2.2 and 2.2.3, respectively. Finally, temporal contrast is introduced in Section 2.2.4.

Figure 5: An example of the predicting image rotations [25] pretext task on a CT lung image. The algorithm utilizes the rotation angle as a kind of supervision for training the model.

#### Contrastive learning

To learn meaningful features from the images, SSL uses "data augmentation" techniques to generate additional data by increasing the diversity of the data transformations. Data augmentation involves image manipulation techniques, i.e., image scaling, cropping, flipping, padding, rotation, translation, and colour augmentation, such as brightness, contrast, saturation, and hue. The fundamental concept of contrastive learning is to group an image with its augmentations closer together and place other images further away. This can be expressed as:

\[score\big(f(x),f(x^{+})\big)>score\big(f(x),f(x^{-})\big) \tag{1}\]

where \(f(\cdot)\) is an encoder. The target image (also called an anchor), \(x\), and the anchor's augmented sample, \(x^{+}\), are grouped as a positive pair, whereas the anchor and another sample from the training dataset, \(x^{-}\), are grouped as a negative pair. The score of the similar pair, \(x\) and \(x^{+}\), should be higher than that of the dissimilar pair, \(x\) and \(x^{-}\). This score is a metric that compares the similarity between two samples. Based on this concept, the following subsections discuss several common loss functions used in SSL.

#### 2.2.1.1 Triplet loss

Triplet loss[41] is a type of metric learning with a concept similar to Equation 1, differing in how distances are calculated in the embedding space. In detail, minimizing the triplet loss, as in Equation 2, encourages the distance between the anchor and the positive sample to approach 0, and the distance between the anchor and the negative sample to be greater than the distance between the anchor and the positive sample plus the margin. When the representations created for a negative pair are already distant enough, the purpose of the margin is to prevent effort being wasted on enlarging this distance further.

\[\mathcal{L}=\max\big(d(x,x^{+})-d(x,x^{-})+margin,\,0\big) \tag{2}\]

Here, \(d(x,x^{+})\) denotes the distance between the anchor and the positive sample, and \(d(x,x^{-})\) the distance between the anchor and the negative sample. The margin parameter is set to represent the minimum offset between the distances of the two pairs.

#### 2.2.1.2 Noise-contrastive estimation (NCE) loss[42] and InfoNCE loss[43]

To decrease the complexity of optimization, NCE was introduced to transform the calculation from a multiclass classification problem into a binary logistic regression that classifies data against noise. Inspired by the NCE loss, the InfoNCE loss uses a categorical cross-entropy loss to identify the positive sample among a collection of unrelated noisy samples. InfoNCE uses a similar data pattern for training, including one positive sample and many negative samples, but often yields higher accuracy due to the way the negative samples are handled.
This is explained by the fact that the NCE algorithm groups the negative samples as a single unit for calculating an approximate value, whereas InfoNCE treats each negative sample individually and can hence keep more information about each data point. InfoNCE is formulated as:

\[\mathcal{L}_{N}^{InfoNCE}=-\mathbb{E}_{X}\bigg[\log\frac{f_{k}(x_{t+k},c_{t})}{\sum_{x_{j}\in X}f_{k}(x_{j},c_{t})}\bigg] \tag{3}\]

where \(f_{k}\) represents a density ratio, \(t+k\) denotes a future time step after \(t\), and \(X=\{x_{1},\ldots,x_{N}\}\) is a set of samples containing one positive and \(N-1\) negative samples. Given the context \(c_{t}\), \(f_{k}(x_{t+k},c_{t})\) and \(f_{k}(x_{j},c_{t})\) can be seen as the positive sample pair and the negative sample pairs, respectively.

#### 2.2.1.3 Mutual information (MI)

Mutual information[44] is the concept of reducing uncertainty about one random sample after observing another sample. Simply put, MI is a measure for assessing the relationship between arbitrary variables[45]. There are several MI applications; for example, Linsker et al.[46] presented the InfoMax principle by using MI to quantify the relationship between the input and the output in the presence of processing noise. The relationship between InfoNCE and MI has been used in many state-of-the-art contrastive learning methods, and after optimizing Equation 3, it can be expressed as:

\[\mathrm{I}(x_{t+k}\,,\,c_{t})\geq\log(N)-\mathcal{L}_{N}^{opt} \tag{4}\]

where the MI, \(\mathrm{I}(x_{t+k}\,,\,c_{t})\), is equal to or larger than \(\log(N)\), with \(N\) the number of samples, minus the optimized InfoNCE loss, \(\mathcal{L}_{N}^{opt}\).

Figure 6: Two different application paradigms for downstream tasks. In (a), further training such as fine-tuning is needed; while in (b), no annotation is needed for downstream tasks.

#### Context-instance contrast learning

Spatial context from images can be used to learn feature representations. The idea originally comes from the skip-gram Word2Vec[47] algorithm used in natural language processing (NLP) and was later implemented for images by Doersch et al.[33] With spatial context, feature representations are learned by predicting the position of an image patch relative to other patches. Context-instance contrast learns the relationship between local and global image features; the idea is to capture local features that can adequately represent the global features. In this category, the most popular approach is maximizing MI.

#### 2.2.2.1 Maximizing MI

Unsupervised learning of feature representations can be achieved by maximizing MI between an input image and the output encoded by a deep neural network. The principle is that high MI captures useful information rather than low-level noise. Tschannen et al.[44] conducted research on MI maximization for unsupervised or self-supervised representation learning, including Deep InfoMax (DIM)[48], Contrastive Multiview Coding (CMC)[49], and Contrastive Predictive Coding (CPC)[43].

#### 2.2.2.1.1 Deep InfoMax (DIM)[48] and Augmented Multiscale DIM (AMDIM)[50]

Hjelm et al. [48] showed that, depending on the downstream task, it is often insufficient to learn effective representations by maximizing the MI between the encoder output (i.e., global MI) and the entire input. This is because global MI maximizes MI between global representation pairs, consisting of an entire image together with a single feature vector summarized from patches obtained by encoding the input images.
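For concreteness, the sketch below gives minimal PyTorch-style implementations of the triplet loss (Equation 2) and an InfoNCE-style loss (Equation 3); cosine similarity and the temperature value are common choices assumed here, not prescriptions from the cited works.

```python
import torch
import torch.nn.functional as F

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Equation 2: pull the positive towards the anchor and push the negative
    at least `margin` further away. Inputs are (B, D) embedding tensors."""
    d_pos = (anchor - positive).pow(2).sum(dim=1)
    d_neg = (anchor - negative).pow(2).sum(dim=1)
    return F.relu(d_pos - d_neg + margin).mean()

def info_nce_loss(query, positive, negatives, temperature=0.1):
    """InfoNCE as a categorical cross-entropy: the positive must be identified
    among the negatives. query/positive are (B, D); negatives is (B, K, D)."""
    query = F.normalize(query, dim=-1)
    positive = F.normalize(positive, dim=-1)
    negatives = F.normalize(negatives, dim=-1)
    l_pos = (query * positive).sum(dim=-1, keepdim=True)           # (B, 1)
    l_neg = torch.einsum("bd,bkd->bk", query, negatives)           # (B, K)
    logits = torch.cat([l_pos, l_neg], dim=1) / temperature
    labels = torch.zeros(query.size(0), dtype=torch.long, device=query.device)
    return F.cross_entropy(logits, labels)                         # index 0 = positive
```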
However, global InfoMax has the problem that the model captures undesirable information, such as trivial noise that is particular to local patches or pixels and useless for certain tasks such as image classification. This is because feature information belonging only to particular parts of the input does not increase the MI with the other patches that do not contain that trivial noise. Hence, this issue gave rise to the idea of local InfoMax, which encourages the encoders to learn feature representations that are shared across the patches of an input image. Hjelm et al.[48] showed that adding location information of the input into the objective considerably increases a representation's fitness for subsequent tasks. Hence, they proposed global DIM and local DIM to train the encoders by maximizing MI between global and local patch features. Local InfoMax maximizes MI between the summarized patch feature vector and each local patch feature, where both are extracted from different layers of the same convolutional network. Later, Bachman et al.[50] extended the idea of local DIM by maximizing MI between features generated through augmentations of each input image. The authors improved local DIM from three perspectives: data augmentation, multi-scale mutual information, and the encoder. For data augmentation, they first performed a random horizontal flip and then some common data augmentations, including random cropping, jitter in colour space, and grayscale transformation; the model learned features by maximizing MI between the global features and the augmented local features, identifying the parts shared by the two. For multi-scale mutual information, the model learned features by maximizing MI between features from different layers at different scales; the MI between multi-scale features of the same image was higher than between different images. For the encoder, AMDIM modified the ResNet-based encoder to control the receptive fields, since results degrade when there is too much overlap within the features of positive sample pairs.

#### 2.2.2.1.2 Contrastive Predictive Coding (CPC)[43, 51]

Contrastive Predictive Coding[52, 53] focuses on sequential data and utilizes useful information from previous sequential components of the data to predict the future sequential signal. During the predictive coding, the information of the image content is embedded. Using autoregressive models, CPC encodes the key shared information within different parts of the previous sequential signal into a high-level latent space, which is then used to predict future signals that conditionally rely on the same shared information. This results in keeping similar representations of the same images that encode more global and common features, while discarding low-level information and local variations, such as noise. Additionally, the use of a probabilistic contrastive loss for learning high-dimensional representations in the latent embedding space maximizes useful information for predicting future samples. Based on the ideas of NCE, CPC proposed InfoNCE and its relationship with MI: minimizing the InfoNCE loss maximizes a lower bound on the MI between the encoded representations.

#### 2.2.3 Instance-instance contrast learning

Under the instance-instance contrast learning[54] category, instance comparisons were used from two points of view.
The first was to design or modify contrastive loss functions and use specific structures for training SSL (see Section 2.3.1). The second was to directly compare instances to derive distinctive information within the instances (see Sections 2.3.2 and 2.3.3). #### SSI design on contrastive loss function-based variation and specific structures Within many strategies of designing SSL model, we discuss two ideas based on either the varied contrastive loss functions or specified structure in the subcategories. #### SSI design on contrastive loss function-based variation When contrastive loss functions are designed or modified based on the principle of Equation 1, they had been applied to many different tasks for specified learning approaches. The five learning approaches introduced in this section are (1) multimodal learning, (2) local representation learning, (3) multi-scale learning, (4) texture representation learning, and (5) structural representation learning. (1) For multimodal learning, most papers conducted SSL research on only one modality dataset. Hence, some studies have started working on multimodal SSL training to learn more meaningful semantic information that might compensate for each other. For computer vision, multimodality could group different types of resources, such as text and image, or different types of data formats, such as CT, X-ray, and MRI. (2) For local representation learning, most of the common instance-instance contrast learning methods concentrated only on extracting image-level global consistency between instances but neglect explicitly learning the distinctive local consistency within the instances. Distinctive local representations played a vital role in obtaining structural information for dense or per-pixel prediction tasks, including segmentation. (3) For multi-scale learning, some medical data were large, such as histology images. Such large images as input for training the network slowed down the calculation and increased the training time. Hence, for the domain of histopathology, some studies used relatively small areas or objects, such as nuclei, to predict whole histology images. However, some works utilized a variety of sizes of input for the training model and Yoo et al.[55] demonstrated how multi-scale local activations could enhance visual representation based on CNN activations. Finally, some SSL works designed the contrastive loss for learning (4) texture representation and (5) structural representation, respectively. #### SSI design on specific structures Except for the design and modification of contrastive loss functions and the selected sample strategies, some works focused on the specific structures for training SSL, such as Siamese-based learning, and teacher-student-based learning. For the Siamese network learning, a Siamese neural network included two or more identical subnetworks which were used to estimate the similarity between two samples by two feature extractors with shared weights, and were utilized in many applications, such as the prediction of camera poses[56] and lip poses[57]. A large number of batch sizes or negative pairs applied in common SSL methods made them more difficultly be implemented on 3D medical datasets. Chen et al.[58] proved that the Siamese network could be used to avoid such problems on a 2D network. And, without relying on larger batch sizes or negative pairs, the Siamese network enabled to keep the spatial relationship in the embedding space through contrastive loss. 
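The sketch below illustrates the Siamese idea just described: two augmented views pass through weight-shared encoders and are pulled together, here with a stop-gradient on one branch so that no negative pairs or large batches are needed. This is a SimSiam-style simplification written for illustration; the `encoder` and `predictor` are assumed generic modules, not components of any specific cited method.

```python
import torch.nn.functional as F

def siamese_similarity_loss(encoder, predictor, view1, view2):
    """Weight-shared Siamese branches: maximise cosine similarity between the
    predicted embedding of one view and the (detached) embedding of the other."""
    z1, z2 = encoder(view1), encoder(view2)      # same encoder, shared weights
    p1, p2 = predictor(z1), predictor(z2)

    def neg_cos(p, z):
        # stop-gradient on the target branch avoids representation collapse
        return -F.cosine_similarity(p, z.detach(), dim=-1).mean()

    return 0.5 * (neg_cos(p1, z2) + neg_cos(p2, z1))
```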
For the Teacher-student-based learning, Teacher-student learning was a transfer learning approach in which the student network was taught by the teacher's network to predict the same result as the teacher's. A small network, the student network, could be learned by the labels produced by a complex model, the teacher network. Moreover, the Mean Teacher model, an extended model based on the teacher-student, was implemented for the medical image analysis tasks to average model weights to aggregate information after every step instead of every epoch. The Mean Teacher model also provided more robust intermediate representations since the weight averages captures all layer outputs, not just the top output. #### SSI.2.3.2 Instance-based discrimination There were a variety of techniques designed for collecting negative samples to compare with a positive sample in the training process, such as Memory Bank, Momentum Encoder PIRL[59], SimCLR[20], MoCo[19, 60, 61], and BYOL[22]. Though for different purposes, these methods could be considered to create dynamic dictionaries. In these dictionaries, the "queries" and "keys" were obtained from data, e.g., patches or images, which were embedding representations created through the query and key encoder networks, respectively. These encoders could be any CNNs[62]. SSL trained encoders to execute dictionary look-up: an encoded "query" should be comparable to its corresponding key while being distinct from others. The definition of query and key could be different. For example, Wu et al.[63] grouped a key and a query as a negative pair if they come from a different image and otherwise as a positive sample pair. However, Ye et al.[64] selected two random "views" of the same image using random data augmentation to create a positive pair. It is worth to notice that inconsistency was a big challenge in this method. Inconsistency existed between the query and key embedding representation. Specifically, inconsistency occurred when calculating the contrastive loss between the positive features from the query encoder that was updated each epoch and the negative features saved in the memory that was updated from several previous epochs. Hence, many approaches were proposed to solve this inconsistency. He et al.[19] hypothesized that it was possible to create consistent and large dictionaries during the training process and that in the dictionary, the keys should be represented through the similar or same encoder to provide consistency in comparisons to the query. Based on the principle of contrastive loss, the number of negative samples significantly affected the accuracy, which was proven by Nozawa et al.[65]. In one batch, it included an original image, its augmented example, and many negative samples. The numbers of negatives sampled depended on the batch size and the large batch size means we could contain more negative samples. However, the batch size was limited by the GPU memory size. The memory bank was designed to address this problem by accumulating and regularly updating many embeddings of negative samples that resulted from the key encoder without increasing the batch size but with less gradient calculation from the encoded key query during training. Pretext-Invariant Representation Learning (PIRL) learned invariant representations by using a memory bank based on a pretext task related to solving the jigsaw puzzle. 
Although memory banks can contain a larger number of negative samples, inconsistency exists between the query and key embedding representations produced by the query and key encoders, respectively. To address this inconsistency problem, MoCo decoupled the number of negative samples from the batch size by replacing the memory bank with a moving-averaged encoder called the momentum encoder. This momentum encoder maintains a dictionary-like queue that progressively replaces samples by enqueueing the current mini-batch and dequeuing the oldest mini-batch in the queue. The benefit of removing the oldest, outdated mini-batch is to maintain consistency with the newest samples from the query encoder. By doing this, the number of negative samples can be increased without expanding the batch size. In brief, MoCo decreased the dependency on mini-batch size and utilized a momentum encoder to update the queue of previously processed samples used to create contrastive pair encodings. The update is defined as follows:

\[\theta_{k}\leftarrow m\theta_{k}+(1-m)\theta_{q} \tag{5}\]

where the momentum coefficient \(m\) makes the key encoder, \(\theta_{k}\), progress slowly, driven by the query encoder, \(\theta_{q}\), with weight \((1-m)\). He et al.[19] showed that performance was best when \(m\) is 0.99, because this setting updates the key encoder slowly, retaining a large part of the previous key encoder and a small part of the newest query encoder. This keeps a large and consistent dictionary that facilitates contrastive learning of a visual representation encoder. Based on MoCo, the same team further designed MoCo v2[60] by adding an MLP projection head, stronger data augmentation, and a cosine learning rate schedule.

#### 2.2.3.2.2 SimCLR[20]

SimCLR was an end-to-end learning architecture that learned feature representations by maximizing the agreement between differently augmented views of the same input via a contrastive loss calculation[66]. Through experiments, the SimCLR results identified four components that affect the quality of contrastive representation learning. The combination of the random cropping and colour distortion augmentations was shown to be better than other combinations or single transformations. Moreover, compared to supervised learning, unsupervised contrastive learning obtained greater advantages from longer training, larger batch sizes, and stronger data augmentation. At the same time, similar to supervised learning, contrastive learning benefited from a deeper and wider framework. It is worth noting that the introduction of the nonlinear projection head significantly improved the learned representations during training. Based on SimCLR, the same team further modified three steps to design a semi-supervised learning framework called SimCLR v2[67].

#### 2.2.3.2.3 Contrastive Multiview Coding (CMC)[49]

Unlike DIM, CPC, and AMDIM, which use one view of the image, CMC works on images that are acquired in more than one view. The goal of CMC is to learn feature representations with information shared between various sensory channels obtained from the same image. Specifically, CMC uses an NCE-based softmax cross-entropy loss to learn feature embeddings by maximizing MI between various views of the same scene. In the 4-view dataset NYU RGBD[68], views from the same scene are brought together in embedding space as positive samples, while views from different scenes are pushed apart as negative samples. CMC also proposed "core view" and "full graph" paradigms.
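Returning to MoCo, a minimal sketch of the momentum update in Equation 5 and the queue-based dictionary follows; the momentum value and the circular-queue layout are assumptions based on the commonly reported settings rather than an exact reproduction of the published implementation.

```python
import torch

@torch.no_grad()
def momentum_update(key_encoder, query_encoder, m=0.99):
    """Equation 5: the key encoder trails the query encoder as an exponential
    moving average, keeping the dictionary keys consistent over training."""
    for k_param, q_param in zip(key_encoder.parameters(), query_encoder.parameters()):
        k_param.data.mul_(m).add_(q_param.data, alpha=1.0 - m)

@torch.no_grad()
def update_queue(queue, ptr, new_keys):
    """Dictionary as a queue: enqueue the newest mini-batch of keys and
    overwrite (dequeue) the oldest entries. `queue` is a (K, D) circular buffer."""
    batch = new_keys.size(0)
    idx = (ptr + torch.arange(batch)) % queue.size(0)
    queue[idx] = new_keys
    return (ptr + batch) % queue.size(0)      # new write pointer
```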
In CMC, the full-graph paradigm outperforms the core-view paradigm, not only because learning across more view pairs yields better representations, but also because the full graph can cope with missing views.

#### 2.2.3.2.4 Bootstrap Your Own Latent (BYOL)[22]

Some contrastive learning methods in Section 2.3.2, such as SimCLR and MoCo, rely heavily on many negative samples for learning discriminative features. Hence, those methods are sensitive to the choice of data augmentation policies and require many trials to determine good augmentations[69, 70]. Moreover, SimCLR required a long training time on large datasets, up to 3200 epochs on the 1.2 million ImageNet images[71], to obtain improved performance. Unlike SimCLR, BYOL used a mean squared error (MSE) rather than a contrastive loss, so as to rely less on the availability of large-scale negative samples.

#### 2.2.3.3 Cluster-based discrimination

In computer vision, clustering algorithms are a class of unsupervised learning techniques that have been extensively researched and applied. Although clustering techniques were an early success in classifying images, relatively few papers applied them to end-to-end training of CNNs on large-scale datasets[72, 73]. A problem is that clustering techniques were primarily built as linear models computed on top of fixed features, and they seldom work when the features must be learned simultaneously. Based on the clustering technique, DeepCluster was designed to simultaneously learn the features' cluster assignments and the neural network's parameters. More specifically, it iteratively clusters the features with a standard clustering algorithm, _k_-means, and utilizes the cluster assignments as supervision signals to learn the parameters of the network. Unlike context-instance contrast, clustering has the benefits of needing little domain knowledge and no particular signal from the inputs. In addition, some contrastive learning methods depend heavily on the online calculation of many pairwise feature comparisons. Hence, the authors of SwAV[74] designed an online algorithm with a cluster-based idea to reduce the amount of computation. SwAV employs a "swapped" prediction technique in which the cluster assignment of one view is predicted from the representation of another view. This method works with large and small batch sizes without needing a momentum encoder or a large memory bank. A multi-crop technique was also designed that makes use of smaller-sized images to increase the number of views without raising the memory or processing demands of training.

#### Temporal contrast

Medical imaging datasets, of CT or MR images, sometimes have follow-up scans with spatial or structural information. A sequence of CT or MR images, such as from left to right or from top to bottom of the patient's body, assists in learning more semantic representations. Compared to 2D data, videos or image sequences contain richer information that allows better feature representations to be learned through SSL. There are three common types of such SSL: finding the similarity of adjacent frames, tracking objects, and correcting the temporal order.

#### Finding similarities of adjacent frames

First, adjacent frames should share similar features[75]. By training CNNs to learn the similarities within neighbouring frames, contextual semantic representations can be learned. Moreover, temporal continuity[76] holds in sports activities, such as playing table tennis, where the sequence of frames expressing a swing action should be smooth.
In this case, within the same sequence, adjacent frames selected within a small designed range are closer in embedding space than frames selected from distant timesteps, as shown in **Fig. 7**. In addition to learning from the same video, Sermanet et al.[77] also learned from multi-view (multiple modalities) videos to obtain viewpoint- and agent-invariant feature representations. In this case, positive image pairs obtained simultaneously from different viewpoints are closer in the embedding space than negative image pairs obtained from a different time in the same sequence.

Fig. 7: The selection of positive samples and negative samples from a set of adjacent frames.

#### Tracking the objects

Second, based on supervision provided by visual tracking, Wang et al.[78] learned visual representations by unsupervised tracking within thousands of unlabelled videos of moving objects. More specifically, two frames connected by a track, for example of a person cycling, should share a visual representation in feature space, because they probably correspond to the same moving object or a part of it. Based on this idea, Walker et al.[79] utilized CNNs to learn similar objects that share similar visual representations, and [80, 81] researched human poses. In this case, [78] designed a ranking loss function to encourage, in feature space, the two frames connected through a track to be much closer than the first frame and a random frame.

#### Correcting for the temporal order

Third, visual representations can be learned through an unsupervised sequential verification task, which corrects the frame order of a sequence of video frames[82, 83, 84]. In this case, the correct order is a positive sample, and a wrong order is a negative sample, as shown in **Fig. 8**.

Figure 8: The positive example (correct order) and the negative example (incorrect order) from a sequence of video frames are used to learn semantic representations.

## 3 Predictive and Contrastive SSL applied to medical images

Contrastive SSL has been broadly applied and optimized for medical images. There are four forms of contrastive SSL commonly applied to medical images: contrastive learning estimation, context-instance contrast learning, instance-instance contrast learning and temporal contrast SSL.

### Predictive learning for medical image analysis

#### Relative position

SSL based on the relative position approach was also used in the medical area[85] for learning useful semantic features by utilizing image context restoration. An architecture combining multiple SSL methods was used, including relative position prediction[33], colourization[86], exemplar CNNs[87], and inpainting[88]. In particular, the relative position was used to find the relationship between the central patch and the eight nearby patches within a selected 3\(\times\)3 patch grid. Inspired by the work on context prediction of adjacent patches[33], Blendowski et al.[89] proposed self-supervised 3D context feature learning, which included a new idea of image-intrinsic spatial offset relations with a heatmap regression loss. Jana et al.[90] used image context restoration[85] as the pretext task for assessing non-alcoholic fatty liver disease, which leads to granular textural changes in the liver and can progress to liver cancer. Since one of the signs of non-alcoholic fatty liver disease is texture change in the liver, Chen et al.[85] encouraged the network to learn neighbouring pixel information for downstream tasks, including fibrosis and NAS score prediction.
Based on [33], Li et al.[91] analysed the issue of COVID-19 severity assessment by training the SSL model to predict the relative location between two patches of the same CT slice. Fashi et al.[92] utilized the primary site information as pseudo-labels and modified the histopathology patch order for training the feature extractor; the added supervised contrastive learning loss produced more robust feature representations for WSI classification.

#### Solving jigsaw puzzles

Based on solving jigsaw puzzles, SSL was applied to learn useful semantic features by blending patches from various medical imaging modalities[93]. This multimodal jigsaw puzzle task first drew random puzzle patches from different medical imaging modalities and combined them into the same puzzle. Combining these medical imaging modalities at the data level encouraged the model to derive modality-agnostic representations of the images and modality-invariant views of the objects, including tissues and organ structures. The feature representations learned from many medical imaging modalities can contain cross-modal information, which combines complementary information across the modalities. Taleb et al.[93] augmented multimodal data using cross-modal generation techniques to address modality imbalance problems in real-world clinical situations. In addition, their two-modality experiments, one on prostate segmentation from two MRI modalities and another on liver segmentation from both CT and MRI, showed that the proposed multimodal puzzles learn powerful representations even when the modalities are non-registered. By improving performance on downstream tasks and data efficiency, they concluded that the multimodal jigsaw puzzle creates better semantic representations than training on each modality independently. Later, the same team proposed multimodal puzzle solving as a proxy task to assist feature representation learning from multiple image modalities[94]. Navarro et al.[95] compared and assessed the robustness and generalizability of both SSL and fully supervised learning networks on downstream tasks, including pneumonia detection in X-ray images and segmentation of various organs in CT images. By solving jigsaw puzzles on those medical datasets, they concluded that SSL efficiently learned a feature mapping of object parts and their spatial arrangement. Based on the idea of a jigsaw puzzle-solving strategy, Manna et al.[96] learned spatial context-invariant features from magnetic resonance video clips to check knee medical conditions; they described this as the first work applying SSL to class-imbalanced multilabel MR video datasets. Based on the jigsaw puzzle transformation[34], Li et al.[97] designed a self-supervised network by modifying two processes: the first was to increase the variety of permutations, and the second was to merge the jigsaw puzzle pretext task into an end-to-end semi-supervised framework. They applied the proposed semi-supervised learning method to two medical image segmentation tasks, nuclei[98, 99, 100] segmentation and skin lesion[101, 102, 103] segmentation. To classify cervix images as normal versus cancerous, Chae et al.[104] presented a new patch-based SSL approach built on puzzle pretext tasks to predict relative positions.
This choice reflected their finding that the pivotal area of the image in which to search for cervical cancer is most likely around the centre, while the irrelevant parts lie near the periphery. In the domain of histopathology, based on the relative patch algorithm, Santilli et al.[105] implemented domain adaptation from the skin to the breast spectrum because of the low-level resemblance in outline between skin tissue and breast cancer. They applied a relative patch pretext task for training on skin data to learn positional relations among the divided patches of an input sample and then transferred the learned weights to the downstream task of breast cancer classification. Zhuang et al.[106, 107] and Tao et al.[108], inspired by the jigsaw puzzle, proposed a novel 3D proxy task of playing a Rubik's cube, called Rubik's cube recovery. Since the jigsaw puzzle was designed for 2D data, Rubik's cube recovery was introduced for 3D volumetric data. During the Rubik's cube recovery process, rich feature information from 3D medical images was obtained through cube rearrangement and cube rotation. This forced the model to learn features invariant from both translational and rotational perspectives. It is worth noting that the difficulty increased when the cube rotation operation was added to Rubik's cube recovery, as it encouraged networks to exploit more spatial information. Li et al.[109] extended the Rubik's cube by adding a random masking operation to obtain feature representations from COVID-19 and negative CT volumes.

#### Rotation

Li et al.[110] observed that each fundus image includes obvious structures, such as the optic disc and blood vessels, that are sensitive to orientation. Hence, they proposed a rotation-oriented collaborative approach to learning complementary information, including rotation-related and rotation-invariant features. With these two pretext tasks, vessel structures in fundus images and discriminative features for retinal disease diagnosis were learnt. In addition to the rotation pretext task, Yang et al.[111] applied elastic transformation prediction[112] to cross-modality liver segmentation from CT to MR. Inspired by [113, 114, 115], Liu et al.[115] presented SSL based on a 3D feature pyramid network for assisting multi-scale pulmonary nodule detection. Dong et al.[116] classified focal liver lesions by utilizing several relative position pretext tasks, such as predicting the relative position between patches of an input, predicting the rotation, or solving a jigsaw puzzle. Imran et al.[117] presented a new semi-supervised multiple-task model utilizing self-supervision and adversarial training to classify and segment anatomical structures on spine X-ray images. Several pretext tasks have also been used simultaneously for medical imaging analysis, such as in studies that combined rotation prediction[35] and jigsaw puzzle assembly[34]. For example, Tajbakhsh et al.[118] combined two different types of SSL, rotation (contrastive SSL) with reconstruction[119] and colorization[120] (generative SSL), on retinal images for diabetic retinopathy classification. In histopathology, Koohbanani et al.[121] utilized and combined various self-supervised tasks for domain-specific and domain-agnostic purposes to obtain contextual, multiresolution, and semantic features in pathology images. Vats et al.[122] adopted those two pretext tasks for wireless capsule endoscopy diagnosis.
### Contrastive learning estimation for medical image analysis To focus on abnormalities, Liu et al.[123] introduced a learnable alignment module into contrastive learning to alter all input samples to be geometrically canonical. More specifically, after extracting high-level feature representations of the image pair, the highly structured character of inputs was used to calculate the L1 distance between corresponding pixels on the positive and negative images. The result could be seen as an indication of possible lesion location on the latter. Their model could alleviate the difference in scales, angles, and displacements of X-ray samples created under bad scan conditions. They demonstrated that the learned features represent localization information that enabled better identification and localization of downstream tasks, including infiltration, mass and pneumothorax diagnosis. #### Contrastive learning ##### 3.2.1.1 Triplet loss for medical application Xie et al.[124] proposed a novel SSL framework with scale-wise triplet loss and count ranking loss, to encourage neural network to automatically learn the information of nuclei quantity and size from the raw data for nuclei segmentation. #### 3.2.1.2 Noise-contrastive estimation (NCE) loss[42] and InfoNCE[43] for medical image analysis Sun et al.[125] presented a context-aware self-supervised representation learning approach for learning antonym-specific and subject-specific representations at the patch and graph levels, respectively. Interestingly, they utilized InfoNCE loss to learn patch-level textural features and contrastive learning objectives for learning graph-level representation. They also took advantage of MoCo, including a queue of data samples and a momentum update scheme to enhance the number of negative samples during training. The features learned through the proposed method demonstrated better performance in staging lung tissue abnormalities associated with COVID-19 than those learned by other unsupervised baselines, such as MedicalNet, Models Genesis, and MoCo. Most existing methods that used the maximization of MI as contrastive loss utilized image pairs for training; however, Zhang et al.[126] made use of image-text pairs. Their work enhanced visual representation learning of medical images by taking advantage of the combined information from textual data and image pairs. Through a bidirectional contrastive objective loss between those two different modalities, this approach depended on maximizing the agreement between real medical representation image-text pairs and randomly chosen pairs. More specifically, bidirectional contrastive objective losses were utilized similarly to the InfoNCE loss. Minimizing this loss encourages encoders to reserve the MI between real representation image-text pairs. Punn et al.[127] utilized the Barlow Twins framework to pre-train an encoder through redundancy reduction, similar to the InfoNCE objective, to learn feature representation over four biomedical imaging segmentation tasks, including cell nuclei, breast tumour, skin lesion, and brain tumour. Except for InfoNCE-based contrastive loss based on the MoCo framework, Kaku et al.[128] added additional two losses, mean squared error (MSE) and Barlow Twins (BT). 
By minimizing the MSE of feature representations between the intermediate layer or using BT to make their cross-correlation matrix closer to an identity matrix, the model was encouraged to learn augmentation-invariant feature representations that were not only focused on the final layer of the encoder but also extracting the intermediate layers. Their results showed performance was better than MoCo on three medical datasets, including breast cancer histopathology, NIH chest X-ray and diabetic retinopathy. Taher et al.[129] found instance-based objectives learned the most discriminative global feature representations, which might not be sufficient to discriminate medical images. Hence, inspired by the integration of generative and discriminative approaches, Preservational Contrastive Representation Learning (PCRL)[130], Taher et al.[129] developed an SSL framework, context-aware instance discrimination, to encourage instance discrimination learning with context-aware feature representations. #### 3.2.2 Context-instance contrast learning for medical image analysis ##### 3.2.2.1 Maximizing MI for medical image analysis ##### 3.2.2.1.1 Deep Infomax (DIM)[48] and Augmented Multiscale DIM (AMDIM)[50] Chen et al.[131] combined two different types of self-supervised methods, one from the context-instance category, DIM, and another from the instance-instance category, SimCLR[20], for learning disease concept embedding. They utilized the proposed model to extract medical information from electronic health records and disease retrieval. ##### 3.2.2.1.2 Contrastive Predictive Coding (CPC)[43] Stacke et al.[132] implemented and evaluated CPC on histopathology. After experimenting with some model and data-specific parameters on CPC models on histopathology images, those models were estimated for linear tumour classification on three tissue types. This work summarized the restriction of the learned representation for linear tumour classification on histopathology images because only low-level features in the first CPC layers were used. The diversity of distribution of the histology dataset makes little difference for linear tumour classification on histopathology images. Taleb et al.[133] extended this idea to a 3D CPC version. Instead of the time sequence dataset used in CPC, 3D CPC utilized a feature representation set obtained from patches cropped from the upper or left part of the 2D image sample to predict the encoded feature representations of the remaining part, lower or right part. In addition, they also developed a 3D version for rotation prediction, relative patch location, jigsaw puzzles, and exemplar networks. They demonstrated that the feature representations learned from 3D models were more accurate and efficient for solving downstream tasks than training the models from scratch and pretraining them on 2D slices. Zhu et al.[134] investigated the feature complementarity within multiple SSL approaches and presented a greedy algorithm to add multiple proxy tasks. More specifically, based on the assumption that a weaker correlation indicated a higher complementarity between two features, they calculated the correlation measure between the features created by different proxy tasks and then utilized the greedy algorithm to iteratively include a proxy task in the current task pool to form a multitask SSL framework. They applied it to the 3D medical volume brain haemorrhage dataset by adding multiple proxy tasks, including 3D rotation, Models Genesis[135], 3D CPC, and the Rubik's cube. 
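The greedy selection idea just described can be sketched as follows: features produced by each candidate proxy task are compared with a correlation measure, and tasks are added to the pool in order of lowest correlation with the tasks already selected (weaker correlation being taken as higher complementarity). This is a schematic reconstruction of the procedure, with a simple Pearson-style correlation assumed as a stand-in for the measure used in the paper.

```python
import torch

def greedy_task_selection(task_features, num_tasks):
    """`task_features` maps task name -> (N, D) feature matrix computed on the
    same N samples. Tasks are added greedily, preferring low correlation
    (assumed high complementarity) with the already selected pool."""
    def correlation(a, b):
        # mean absolute correlation between standardised feature dimensions
        a = (a - a.mean(0)) / (a.std(0) + 1e-8)
        b = (b - b.mean(0)) / (b.std(0) + 1e-8)
        return (a.T @ b / a.size(0)).abs().mean().item()

    remaining = dict(task_features)
    pool = [remaining.popitem()[0]]   # seed with an arbitrary first task
    while remaining and len(pool) < num_tasks:
        scores = {t: max(correlation(task_features[t], task_features[p]) for p in pool)
                  for t in remaining}
        best = min(scores, key=scores.get)   # least correlated with the pool
        pool.append(best)
        remaining.pop(best)
    return pool
```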
After locating the potential lesions through super voxel estimation utilizing simple linear iterative clustering, Zhu et al.[136] calibrated CPC to learn 3D visual representation. More specifically, calibrating the CPC scheme on the sub volumes cropped from super voxels embedded the rich contextual lesion information into 3D neural networks. Cerebral haemorrhage classification and benign and malignant nodule classification were implemented using the proposed method on the brain haemorrhage and lung cancer datasets, respectively. ### Instance-instance contrastive learning for medical image analysis 2.3.1 SSL design on contrastive loss function-based variation and specific structures for medical image analysis #### 3.2.3.1 SSL design on contrastive loss function-based variation Based on the principle of the contrastive learning loss function, some papers worked on selecting positive and negative samples. For example, Jian et al.[137] combined a multi-layer network and VGG-16 to discriminate images with helicobacter pylori infection from images without helicobacter pylori infection well. However, some papers modified the principle of the contrastive learning loss function for particular applications, such as the following five applications. (1) Learning multimodality for medical applications--Holmberg et al.[138] proposed a new large-scale and cross-modality SSL in the field of ophthalmology. This SSL pretext task encoded shared information between two high-dimensional modalities, including infrared fundus photography and optical coherence tomography. The fundus representation learned from the SSL pretext task contains disease-relevant features that were efficient for downstream diabetic retinopathy classification and retinal thickness measurement. However, the audio and video data used for training SSL could be seen, e.g., in [139]. In detail, by assuming that there was dense correspondence between the ultrasound video and the relevant narrative diagnosis/interpretation speech audio of the sonographer, Jiao et al.[139] proposed SSL with multimodal input, including ultrasound video-speech raw data. Interestingly, to learn domain-agnostic feature representation, Tamkin et al.[140] designed the model architecture and objective to pretrain on six unlabelled datasets. Those datasets from various domains include text, natural images, medical imaging, multichannel sensor data, speech recordings and paired text and images. (2) Learning local representation for medical applications--Xie et al.[141] also focused on local regions by utilizing spatial transformation to create dissimilar augmented views of the same input. This encouraged consistent latent feature representations of the same region from different views of the same input image and assured such consistency by minimizing a local consistency loss. The proposed algorithm was for pretraining to initialize a downstream network and improve four publicly available CT datasets, including two tumours and 11 different types of primary human organs. Chaitanya et al.[142, 143] not only used global contrastive learning but also proposed a local version of contrastive learning. In particular, the local version of contrastive learning loss encouraged feature representations of local areas in an image to be similar with different transformations but dissimilar to different local areas in the same image. The combination of global and local contrastive learning benefited the downstream MRI segmentation task. 
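The local variant of the contrastive loss described above can be sketched as follows: feature maps of two augmented views are compared region by region, treating the same spatial location under different transformations as the positive and other locations of the same image as negatives. This is a simplified, assumed formulation for illustration, not the exact loss of the cited papers.

```python
import torch
import torch.nn.functional as F

def local_contrastive_loss(feat_a, feat_b, temperature=0.1):
    """feat_a, feat_b: (B, D, H, W) feature maps of two views of the same images,
    assumed spatially aligned. Each location is contrasted against all locations
    of the same image; matching locations form the positive pairs."""
    b, d, h, w = feat_a.shape
    za = F.normalize(feat_a.flatten(2), dim=1)                     # (B, D, HW)
    zb = F.normalize(feat_b.flatten(2), dim=1)
    logits = torch.einsum("bdi,bdj->bij", za, zb) / temperature    # (B, HW, HW)
    labels = torch.arange(h * w, device=feat_a.device).expand(b, -1)
    return F.cross_entropy(logits.reshape(-1, h * w), labels.reshape(-1))
```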
A similar work, proposed by Ouyang et al.[144, 145], employed superpixel pseudo-labels and was devised for tuning-free few-shot segmentation tasks, including cardiac segmentation on an MRI dataset and organ segmentation on abdominal MRI and CT datasets. Furthermore, the same team[146] designed a local pixel-wise contrastive loss to learn discriminative pixel-level feature representations. This enabled the model to learn better inter-class separability and intra-class compactness for the segmented classes on three public medical datasets covering two anatomies, cardiac and prostate. Yan et al.[147] proposed a pixel-level contrastive learning framework with a coarse-to-fine architecture to learn both local and global information and designed customized negative sampling strategies. More specifically, the global embedding was trained to discriminate various body parts on a coarse scale, assisting the local embedding to concentrate on a smaller region to distinguish finer features. The learned embeddings were applied in different downstream areas, such as landmark detection and lesion matching, on various radiological image modalities, including 3D CT and 2D X-ray of varying body parts, such as the chest, hand, and pelvis.

(3) Learning multi-scale information for medical applications--in histopathology, Sahasrabubde et al.[148] proposed a self-supervised method for nuclei segmentation on whole-slide histopathology images. They utilized scale classification as a self-supervision signal under the hypothesis that the texture and size of nuclei indicate the level of magnification at which a patch was obtained. Sun et al.[149] introduced a multi-scale SSL framework to precisely segment tissues for a multi-site paediatric brain MR dataset with motion/Gibbs artifacts.

(4) Learning texture representation for medical applications--Chen et al.[150] proposed a new computer-aided diagnosis approach with a contrastive texture learning loss to learn the texture features of cervical optical coherence tomography images.

(5) Learning structural representation for medical applications--Tang et al.[151] estimated the similarity between original and augmented images through a designed structural similarity loss to enhance medical image classification.

##### 3.2.3.1.2 SSL design on specific structures

Recently, the Siamese network and the teacher-student framework have been popular structures applied in the medical area. Siamese network learning for medical applications--Spitze et al.[152] utilized a Siamese network to calculate spatial distances between image patches sampled randomly from the cortex in random sections of the same brain. Learning to discriminate several cortical brain areas through their model implicitly indicated that the designed pretext task was suitable for high-resolution cytoarchitectonic mapping. Owing to the benefit of decreasing the computational expense of 3D medical imaging, Li et al.[153] extended a 2D Siamese network to a 3D Siamese network to avoid using negative pairs or large batch sizes. Their proposed SSL coped with a class imbalance problem, which benefited the learned radiomics features in two downstream classification tasks: discriminating the level of brain tumours on an MRI dataset and the stage of lung cancer on a CT dataset. Ye et al.[154] applied a Siamese network on stereo images for assessing depth in robotic surgery.
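The Siamese-network approaches above compare pairs of inputs through a shared encoder. A minimal sketch of such a setup with a margin-based pairwise contrastive loss is given below; the encoder, margin, and pair labels are illustrative assumptions rather than the losses of the cited works (some of which, as noted, deliberately avoid negative pairs).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SiamesePairLoss(nn.Module):
    """Pull embeddings of 'similar' pairs together and push 'dissimilar'
    pairs at least `margin` apart (classic pairwise contrastive loss)."""
    def __init__(self, margin=1.0):
        super().__init__()
        self.margin = margin

    def forward(self, emb_a, emb_b, same):                # same: 1.0 if the pair is similar
        dist = F.pairwise_distance(emb_a, emb_b)
        pos = same * dist.pow(2)
        neg = (1 - same) * F.relu(self.margin - dist).pow(2)
        return (pos + neg).mean()

# Weight sharing is obtained simply by applying the same encoder to both inputs.
encoder = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 128))
x_a, x_b = torch.randn(16, 1, 64, 64), torch.randn(16, 1, 64, 64)
same = torch.randint(0, 2, (16,)).float()
loss = SiamesePairLoss()(encoder(x_a), encoder(x_b), same)
```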
For kidney segmentation from abdominal CT volumes, Dhere et al.[155] used a Siamese CNN to classify whether a given pair of kidneys belonged to the same side. They designed a proxy task by exploiting the anatomical asymmetry of the kidneys, i.e., the slight variation in shape, size, and spatial location between the left and right kidneys. Moreover, some patients are scanned many times in a so-called longitudinal manner to track therapy or to estimate changes in the disease state. Hence, some studies used the longitudinal information of the scans for training a Siamese network to compare the embeddings of scans from the same person or from different persons. To pre-train on the example of T2-weighted sagittal lumbar MRIs, Jamaludin et al.[156] utilized SSL with a Siamese CNN trained through two losses: (1) a contrastive loss on pairs of images scanned from the same patient (i.e., longitudinal information) at different points in time and on pairs of images of different patients, and (2) a classification loss to predict vertebral body level and disc degeneration radiological grading. Rivail et al.[157] presented a self-supervised method based on a Siamese network for modelling disease progression from longitudinal data, such as longitudinal retinal optical coherence tomography. Taking advantage of a generic time-specific task, this self-supervised model learned to estimate the time interval between pairs of scans obtained from the same patient.

Teacher-student learning for medical applications--Li et al.[158] designed a new SSL approach based on the teacher-student architecture to learn discriminative representations from gastric X-ray images for a downstream task, gastritis detection. One of the student-teacher frameworks, Mean Teacher[159], was integrated by Liu et al.[160] into the pretraining process, followed by semi-supervised fine-tuning for thorax disease multilabel classification. Park et al.[161] used information distillation between a teacher and a student network, together with a vision transformer model, for chest X-ray diagnosis, including tuberculosis, pneumothorax, and COVID-19. You et al.[162, 163] also demonstrated that the distillation framework improved medical image synthesis, registration and enhancement on the Left Atrial Segmentation Challenge (LA) and the NIH pancreas CT dataset. Later, they proposed another semi-supervised approach that used stronger data augmentation and exploited nearest neighbours whose anatomical characteristics were homogeneous within the same class but distinct across other classes, in unlabelled and clinically unbalanced circumstances [164].

##### 3.2.3.2 Instance-based discrimination for medical image analysis

##### 3.2.3.2.1 Memory bank, momentum encoder and Momentum Contrast (MoCo)[19]

The model of [165], which incorporated PIRL and transfer learning, could learn the invariance property for skin lesion analysis, and the results outperformed those obtained using only transfer learning or only SSL. Taking advantage of MoCo while reducing the dependency on batch size, Sowrirajan et al.[166] utilized it as a fundamental framework for addressing two constraints that arise when training on X-ray images: large X-ray image sizes and high computational requirements.
The proposed MoCo-CXR model, which adjusted the data augmentation strategy used in MoCo, obtained high-quality feature representations and transferable initializations for the subsequent detection of pathologies on chest X-ray images and across different chest X-ray datasets. Several works used MoCo for COVID-19 diagnosis. Sriram et al.[167] applied MoCo to COVID-19 adverse event prediction from both single and multiple images, as well as to oxygen requirement prediction. To learn meaningful and unbiased visual representations and decrease the risk of overfitting, He et al.[168] integrated contrastive SSL training on a similar dataset into transfer learning. Zhu et al.[169] utilized the combination of rotation and division as the supervisory signal in an SSL framework for COVID-19 classification in the few-shot scenario. Based on the MoCo v2 algorithm, hierarchical pretraining, applied by Reed et al.[170], converged consistently and learned useful representations on 15 of the 16 diverse datasets, spanning visual domains including medical, driving, aerial, and simulated images. For the medical datasets, they checked whether any of five conditions was present in each image of the CheXpert dataset[171] and performed 4-way pneumonia classification on the Chest-X-ray-kids dataset[172]. Hierarchical pretraining is a way to train models on datasets that are gradually more similar to the target dataset. Liang et al.[173] also employed MoCo v2 as the base for conducting a neural architecture search for an optimal local architecture from the data. They applied it to CheXpert-14[171] and ModelNet40[174] for five classification tasks, including pleural effusion, atelectasis, consolidation, edema, and cardiomegaly. Interestingly, to obtain an encoder that could extract feature representations from panoramic radiographs of the jaw, Hu et al.[175] utilized MoCo v2 to train the feature extractor on a large number of healthy samples. Jointly using a localization consistency loss and a patch-covering data augmentation strategy further improved the model's reliability. Wu et al.[176, 177] integrated contrastive learning with federated learning[178, 179, 180] to collaboratively learn a shared image-level representation. Federated learning trains an algorithm across decentralized edge devices to learn a shared model, while each device keeps its local data samples without exchanging them. They experimented on 3D cardiac MRI images using the MoCo architecture for local contrastive learning. Dong et al.[181] also combined federated learning with MoCo-based SSL for COVID-19 detection. He et al.[182] combined a new surrogate loss proposed by Yuan et al.[183] with MoCo-based SSL for computer-aided screening of COVID-19-infected patients utilizing radiography images. This surrogate loss maximized the area under the receiver operating characteristic curve (AUC), and the combination improved key metrics while maintaining model trustworthiness. Saillard et al.[184] implemented MoCo v2 on histology images from The Cancer Genome Atlas dataset for microsatellite instability prediction in gastric and colorectal cancers. Tomar et al.[185] applied a style encoder to an SSL framework utilizing a volumetric contrastive loss through Momentum Contrast[19]. The style encoder was designed to encourage content-invariant image-level feature representations that gather similarly styled images together and disperse dissimilarly styled ones.
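A minimal sketch of the two mechanisms the MoCo-based works above build on: a momentum (exponential-moving-average) key encoder and a first-in-first-out queue of negative keys scored with an InfoNCE objective. The momentum coefficient, temperature, and queue handling are illustrative, not the exact settings of the cited works.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def momentum_update(encoder_q, encoder_k, m=0.999):
    """The key encoder trails the query encoder as an exponential moving average."""
    for p_q, p_k in zip(encoder_q.parameters(), encoder_k.parameters()):
        p_k.data.mul_(m).add_(p_q.data, alpha=1.0 - m)

def moco_step(q, k, queue, temperature=0.07):
    """q: query embeddings, k: key embeddings from the momentum encoder,
    queue: (K, dim) buffer of earlier keys acting as negatives."""
    q, k = F.normalize(q, dim=1), F.normalize(k, dim=1)
    l_pos = (q * k).sum(dim=1, keepdim=True)            # (N, 1) positive logits
    l_neg = q @ queue.T                                   # (N, K) negative logits
    logits = torch.cat([l_pos, l_neg], dim=1) / temperature
    labels = torch.zeros(q.shape[0], dtype=torch.long)   # positive is index 0
    loss = F.cross_entropy(logits, labels)
    new_queue = torch.cat([k.detach(), queue], dim=0)[: queue.shape[0]]  # FIFO update
    return loss, new_queue

q, k = torch.randn(32, 128), torch.randn(32, 128)
queue = torch.randn(4096, 128)
loss, queue = moco_step(q, k, queue)
```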
##### 3.2.3.2.2 SimCLR[20]

Azizi et al.[186] proposed a new method, Multi-Instance Contrastive Learning (MICLe), to classify two kinds of medical images: dermatology camera images and multilabel chest X-ray images. Unlike the traditional pretrained model, this work first pretrained the model on unlabelled ImageNet using SimCLR. Then, MICLe was used to perform self-supervised pretraining on unlabelled medical images to create more informative positive pairs. Finally, supervised fine-tuning was performed on labelled medical images. Gazda et al.[187] proposed a self-supervised deep neural network that combined SimCLR and MoCo to first pretrain on the unlabelled CheXpert dataset of chest X-ray images and then transfer the pretrained representations to downstream tasks, including COVID-19 and pneumonia detection, that is, the classification of respiratory diseases. In the histopathology domain, based on SimCLR, Ciga et al.[188] discovered that combining multiple multi-organ datasets with several types of staining and resolution properties enhanced the quality of the learned features. Li et al.[189] addressed whole-slide image classification by training the feature extractor with SimCLR. Interestingly, for SimCLR training, they used as inputs patches densely cropped from the whole-slide image without overlap, each treated as an individual instance. Ciga et al.[190] also implemented SimCLR for breast cancer detection in histopathology. Mojab et al.[191] verified the proposed model, a SimCLR-based framework with transfer learning, on real-world ophthalmic imaging datasets for glaucoma detection. Schirris et al.[192] utilized a SimCLR-based feature extractor pre-trained on histopathology tiles and extended the DeepMIL[193] classification framework for Homologous Recombination Deficiency (HRD) and Microsatellite Instability (MSI) classification on colorectal and other cancer datasets. Zhao et al.[194] added the Fast Mixed Hard Negative Sample Strategy to rapidly synthesise more hard negative samples[195] through a convex combination for training. The proposed model was pre-trained in a self-supervised way on the Chest X-ray of Pneumonia dataset and fine-tuned in a supervised way on the COVID-CT dataset. Wicaksono et al.[196] combined two types of contrastive learning, namely rotation and jigsaw puzzles from the context-instance category and SimCLR v1 from the instance-instance category, for the human embryo image classification task. Based on SimCLR, Manna et al.[197] also presented an asymptotic study of the lower bound of their newly designed loss function, evaluated on the MRNet dataset, which comprises magnetic resonance videos of the human knee. You et al.[198] presented two learning strategies for the volumetric medical image segmentation task. One used a voxel-to-volume contrastive algorithm to obtain global information from 3D images, and the other used local voxel-to-voxel distillation to better utilize local signals in the embedding space. Yao et al.[199] were motivated by contrastive learning[200]; they localized object landmarks with only one labelled image available, in a coarse-to-fine fashion, to create pseudo-annotations for training a final landmark detector. The proposed model demonstrated high-performance cephalometric landmark detection, comparable to popular fully supervised approaches that utilize more than one training image.
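Most of the SimCLR-based works above start from the NT-Xent objective, a rough sketch of which is shown below; the temperature and batch size are illustrative. MICLe's variation, as described above, is that the two "views" fed to this loss can be two different images of the same patient rather than two augmentations of a single image.

```python
import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.1):
    """NT-Xent: each view's positive is the other view of the same image;
    all remaining samples in the batch act as negatives."""
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)    # (2N, dim)
    sim = z @ z.T / temperature                            # cosine similarities
    n = z1.shape[0]

    # Mask self-similarity so it is never treated as a positive or a negative.
    sim.fill_diagonal_(float('-inf'))

    # Row i (view 1) is paired with row i + n (view 2), and vice versa.
    targets = torch.cat([torch.arange(n) + n, torch.arange(n)])
    return F.cross_entropy(sim, targets)

z1, z2 = torch.randn(32, 128), torch.randn(32, 128)
loss = nt_xent_loss(z1, z2)
```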
Ali et al.[201] used 3D SimCLR during pretraining and Monte Carlo dropout during prediction on two tasks: 3D CT pancreas tumour segmentation and 3D MRI brain tumour segmentation. Inglese et al.[202] followed an optimization method similar to that of SimCLR to train an SSL network for distinguishing between two diagnostically different systemic lupus erythematosus patient groups. To learn task-agnostic properties, such as texture and intensity distribution, from heterogeneous data, Zheng et al.[203] first aggregated a dataset from various medical challenges. Then, they presented hierarchical SSL based on SimCLR with contrasting and classification strategies to provide supervision signals for image-, task-, and group-level pretext tasks. In the downstream tasks, they segmented the heart, prostate, and knee on MRI datasets and the liver, pancreas, and spleen on CT datasets.

##### 3.2.3.3 Cluster-based discrimination for medical applications

Abbas et al.[204] proposed a new SSL mechanism, 4S-DT, which assisted coarse-to-fine transfer learning based on a self-supervised sample decomposition of unannotated chest X-ray inputs. Super sample decomposition[205] is a pretext task that trains networks using cluster assignments as pseudo labels. The coarse transfer learning utilized an ImageNet pre-trained CNN model for classifying pseudo-labelled chest X-ray images, creating chest X-ray-related convolutional features. Fine transfer learning was then used for the downstream training, moving from chest X-ray recognition tasks to COVID-19 detection. In histopathology, Abbet et al.[206] conducted research on learning cancerous tissue areas that could be utilized to enhance prognostic stratification for colorectal cancer. They presented an SSL method that combined the learning of tissue region representations with a clustering metric to extract their underlying patterns. Mahapatra et al.[207] utilized one of the deep clustering methods[208], named SwAV, without using the class attribute vectors commonly used for natural images. They demonstrated the effectiveness of the proposed model across different datasets with at least three disease classes. Chaves et al.[209] evaluated five SSL methods, including InfoMin, MoCo, SimCLR, BYOL, and SwAV, for diagnosing skin lesions; they compared those SSL methods and three self-supervised pipelines on five test datasets with in-distribution and out-of-distribution scenarios. They concluded that self-supervision is competitive both in increasing accuracy and in decreasing the variability of outcomes. Chen et al.[210] developed an SSL strategy to perform joint deep embedding and cluster assignment for fMRI tractography white matter fiber clustering. Ciga et al.[211] utilized a two-step pretraining with three popular contrastive techniques, SimCLR, BYOL and SwAV, to validate better performance on two natural and three medical imaging datasets, including ChestX-ray8, breast ultrasound, and brain tumour MRI. Islam et al.[212] pre-trained and compared models across fourteen different SSL approaches for pulmonary embolism classification on CT pulmonary angiography scans.

#### 3.2.4 Temporal contrastive SSL for medical image analysis

Temporal contrastive SSL learns feature representations by capturing the spatial or structural information between adjacent frames. Sequential images provide two kinds of self-supervision for training the model: the objects shown in adjacent frames, or the process of correcting frame order.
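As a toy illustration of the frame-order-correction mechanism just mentioned (detailed in Section 3.2.4.3 below), the following sketch shuffles a few adjacent slices of a 3D volume and trains a small network to classify which permutation was applied; the number of slices, permutation set, and network are illustrative assumptions.

```python
import itertools
import random
import torch
import torch.nn as nn

N_SLICES = 3
PERMS = list(itertools.permutations(range(N_SLICES)))    # 6 possible orderings

def make_ordering_sample(volume):
    """volume: (D, H, W). Pick N_SLICES consecutive slices, shuffle them,
    and return (shuffled slices, index of the applied permutation)."""
    d = volume.shape[0]
    start = random.randint(0, d - N_SLICES)
    slices = volume[start:start + N_SLICES]               # (N_SLICES, H, W)
    label = random.randrange(len(PERMS))
    shuffled = slices[list(PERMS[label])]
    return shuffled, label

# Toy classifier: slices stacked as channels -> permutation id.
net = nn.Sequential(nn.Conv2d(N_SLICES, 16, 3, padding=1), nn.ReLU(),
                    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                    nn.Linear(16, len(PERMS)))

vol = torch.randn(40, 64, 64)
x, y = make_ordering_sample(vol)
logits = net(x.unsqueeze(0))                               # (1, 6)
loss = nn.functional.cross_entropy(logits, torch.tensor([y]))
```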
##### 3.2.4.1 Finding similarities of adjacent frames for medical image analysis

One of the most common applications of temporal contrastive SSL was finding the similarity between adjacent frames, which enabled the model to learn contextual semantic representations. In histopathology, Gildenblat et al.[213] utilized the characteristic that spatially adjacent histopathological tissue image slices are more similar to one another than distant slices, and used this property to train a Siamese network for learning image similarity. In another application, because cardiac MR scans are composed of differently angulated planes relative to the heart, Bai et al.[214] learned feature representations, through the proposed model, from information automatically defined by the heart chamber view planes. That information, which included anatomical positions and the relative orientation of long-axis and short-axis views, could be used to create a pretext task for SSL training. Kragh et al.[215] implemented a self-supervised video alignment method, temporal cycle consistency[216], to obtain temporal similarities between embryo videos, and used this information to predict pregnancy likelihood. By utilizing the position information of slices in volumetric medical images, Zeng et al.[217] provided a new positional contrastive learning framework to produce contrastive data pairs. The framework could successfully eliminate the false negative pairings that arise in commonly used contrastive learning techniques for medical segmentation.

##### 3.2.4.2 Tracking objects for medical image analysis

Lu et al.[218, 219] designed a pretext task to predict the density map of fibre streamlines, which represent generic white matter pathways, for white matter tracts. They took advantage of two characteristics of the fibre streamlines: the streamlines can be calculated automatically with tractography-based fibre tracking, and their density map can be acquired as the number of streamlines crossing each voxel. In short, fibre streamlines are jointed line segments with directions and can be seen as white matter pathways that provide supervision. To segment white matter tracts on diffusion magnetic resonance imaging scans, the features learned through the designed pretext task could predict the density map of fibre streamlines from training data obtained through tractography.

##### 3.2.4.3 Correcting frame orders from 3D medical images

The process of correcting the order of shuffled frames assisted the model in learning feature representations. Zhang et al.[220] utilized spatial context information in 3D CT and MR volumes as a source of supervision, created by solving a transversal 2D slice-ordering task, for fine-grained body part recognition. Nguyen et al.[221] also demonstrated that predicting the 2D slice order in a sequence could yield both spatial and semantic features for downstream tasks such as organ segmentation and intracranial haemorrhage detection. Jiao et al.[222] corrected the order of a reshuffled fetal ultrasound video. By utilizing the tube-like structure of axons, Klinghoffer et al.[223] learned feature representations by training the model to predict the permutation that had been used to reshuffle the slices of each input 3D microscopy subvolume, for axon segmentation. The design of the pretext task, resolution sequence prediction[224], was inspired by the approach in which a pathologist looked for cancerous regions in whole-slide images.
More specifically, a pathologist zoomed in and out several times to inspect the tissue from high to low resolution to acquire the details of individual cells and the surrounding area. Srinidhi et al.[224] utilized this multiresolution contextual information as a supervisory signal to train a designed SSL network. This network learned visual representations by predicting the order of resolution sequences generated from multiresolution histology whole-slide image patches.

## 4 Conclusions and future directions

This study reviews the state-of-the-art contrastive SSL algorithms on natural images, along with their novel adaptations for medical imaging data. We cover fundamental problems in implementing SSL in medical areas and its future directions.

**4.1** Pretext tasks of SSL can create implicit supervisory signals from unlabelled datasets to perform unsupervised learning that comes close to, or even matches, what is achieved with human labelling. The pretext tasks we survey are all manually created by experts and require both domain and machine learning skills, together with a comprehensive set of experiments. We believe there is an opportunity to frame pretext task creation as an optimisation problem, which is conceptually comparable to the pursuit of the best architecture for a deep learning challenge. Furthermore, learning a reliable representation from medical images will not be optimal by simply adopting pretext tasks that have been developed on natural images. Hence, such methods need to be further modified and improved to suit the nature of medical images and to enable the extraction of robust representations.

**4.2** Similar to pretext tasks, augmentation techniques used in contrastive SSL methods that are designed and optimised for natural images may not be suitable for medical images. As an example, medical images that are already grayscale would not be transformed in a meaningful way by colour jittering or random grayscale conversion, which are common techniques applied to natural images. The effects of various additional augmentations and their combinations should be studied in further research.

**4.3** Sampling strategies are one of the reasons for the success of mutual information-based systems, as noted by Tschannen et al.[44]. Sampling strategies may affect contrastive SSL methods, such as MoCo and SimCLR, that need huge numbers of negative samples. Hence, how to decrease the reliance on sampling strategies is still an appealing and unsolved problem. Suitable negative samples can be built based on the properties of medical images, and from there, more valuable data features can be extracted[225, 226]. Further investigation is needed into how to create negative samples and how to better adapt SSL to downstream tasks in order to enhance the performance of SSL approaches in the medical imaging domain. Moreover, along with data augmentation, the redesign of the contrastive loss function plays a crucial role in performance. Some researchers design contrastive loss functions for particular purposes in medical areas, related to, e.g., multimodal learning[138, 139, 227], local representation learning[141], multi-scale learning, and texture[150] or structural[151] representation learning.
## Appendix

| Study | Dataset(s) | Downstream task(s) |
| --- | --- | --- |
| 2021[96] | | (abnormality, ACL tear, and meniscus tear) |
| Li et al. | (1) MoNuSeg dataset (2) ISIC dataset | [Histopathological images] (1) Nuclei segmentation (2) Skin lesion segmentation |
| Chae et al., 2021[104] | Cervix image dataset | Cervical cancer classification |
| Santilli et al. | | |

## Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution, and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third-party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit [http://creativecommons.org/licenses/by/4.0/](http://creativecommons.org/licenses/by/4.0/).
2310.05252
Equivalence between individual and group strategy-proofness under stability
This paper studies the (group) strategy-proofness aspect of two-sided matching markets under stability. For a one-to-one matching market, we show an equivalence between individual and group strategy-proofness under stability. We obtain this equivalence assuming the domain satisfies a richness condition. However, the result cannot be extended to the many-to-one matching markets. We further consider a setting with single-peaked preferences and characterize all domains compatible for stability and (group) strategy-proofness.
Pinaki Mandal
2023-10-08T18:02:06Z
http://arxiv.org/abs/2310.05252v1
# Equivalence between individual and group strategy-proofness under stability+ ###### Abstract This paper studies the (group) strategy-proofness aspect of two-sided matching markets under stability. For a one-to-one matching market, we show an equivalence between individual and group strategy-proofness under stability. We obtain this equivalence assuming the domain satisfies a richness condition. However, the result cannot be extended to the many-to-one matching markets. We further consider a setting with single-peaked preferences and characterize all domains compatible for stability and (group) strategy-proofness. **Keywords:** Two-sided matching; Stability; Strategy-proofness; Group strategy-proofness; Single-peaked preferences **JEL Classification:** C78; D71; D82 Introduction The theory of two-sided matching markets has interested researchers for its relevance to the design of real-world institutions, such as assigning graduates to residency programs (National Resident Matching Program) or students to schools (Boston Public Schools). In this paper, we deal with the simplest case - the _marriage problem_(Gale and Shapley, 1962), a well-known one-to-one matching market. In this market, there are two finite disjoint sets of agents, "men" and "women". Each agent on one side of the market has a strict preference over the agents on the other side and the _outside option_, where the outside option denotes the possibility of remaining unmatched. A matching between men and women is selected based on the agents' preferences, where each agent on one side of the market can be matched with at most one agent on the other side. The _deferred acceptance (DA) rule_(Gale and Shapley, 1962) is the salient rule for such a market due to its theoretical appeal. 1. It is a _stable_ matching rule (see Gale and Shapley (1962)).1 A matching is stable if it is _individually rational_ and no pair of agents, one on each side, would rather be matched to each other than to their present match. Footnote 1: In real-world applications, empirical studies have shown that stable mechanisms often succeed whereas unstable ones often fail. For a summary of this evidence, see Roth (2002). 2. For the proposing side, not only it is _strategy-proof_ but also _group strategy-proof_(see Dubins and Freedman (1981)). A matching rule is strategy-proof if truthful revelation of preferences is a weakly dominant strategy for the agents. Group strategy-proofness, a stronger condition than strategy-proofness, ensures that no group of coordinated agents can be strictly better off by misreporting their preferences. However, the DA rule is not strategy-proof for all agents (in fact, no stable matching rule is; see Roth (1982)), and consequently, is not group strategy-proof for all agents. Our motivation behind this paper is twofold. First, analyze the structure of manipulative coalitions for the DA rule, and second, identify the conditions for the DA rule to be group strategy-proof. We are focusing on group strategy-proofness, not just on strategy-proofness; what use would it be to guarantee that no single agent could cheat if a few of them could jointly manipulate? As we have mentioned earlier, Dubins and Freedman (1981) show that no coalition of men can manipulate the _men-proposing DA (MPDA) rule_, while Roth (1982) shows that the MPDA rule is not even strategy-proof for women. Therefore, whenever the MPDA rule is manipulable by a coalition, there must be at least one woman in that coalition. 
In Theorem 1, we show that the manipulative coalition not only contains at least one woman but must be a group of women, extending the result of Dubins and Freedman (1981). We further show that whenever a coalition manipulates the MPDA rule, every woman in the market weakly benefits while every man in the market weakly suffers (Proposition 1), and the set of unmatched agents remains the same (Proposition 2). One key implication of Proposition 2 is that an unmatched agent cannot be a part of manipulating the DA rule. We next identify the conditions for the DA rule to be group strategy-proof. Alcalde and Barbera (1994) identify a restriction on the domain, called _top dominance_, and show that top dominance for women is a sufficient condition for the MPDA rule to be strategy-proof.2 In Proposition 3, we show that it is also sufficient for the MPDA rule to be group strategy-proof. As it turns out, this coincidence is not an implication of top dominance but rather a property of the DA rule. In particular, we find an equivalence between strategy-proofness and group strategy-proofness for the DA rule. We obtain this equivalence assuming the domain satisfies a richness condition, called _unrestricted top pairs_(Alva, 2017), for at least one side of the market. The richness condition roughly requires that for every ordered pair of outcomes for an agent, there is an admissible preference that ranks them first and second. For example, if every strict preference is admissible for every man, the corresponding domain satisfies unrestricted top pairs for men. Another result of ours shows that on a domain satisfying unrestricted top pairs for at least one side of the market, if there exists a stable and strategy-proof matching rule, it must be unique and is the DA rule (Lemma C.1). Combining all these results, we have our main result - the equivalence between strategy-proofness and group strategy-proofness under stability (Theorem 2). Barbera et al. (2016) show such an equivalence for general private good economies (which also encompasses the marriage problem). They obtain their result assuming a richness condition of the domain and two conditions of the rule (neither of them is stability). However, there is no connection between our richness condition and theirs (i.e., neither of them implies the other), and they assume (group) strategy-proofness only for one side of the market. Therefore, our equivalence result cannot be deduced from their result. Footnote 2: Top dominance for women is also a necessary domain restriction for the MPDA rule to be strategy-proof under two domain conditions (see Theorem 4 in Alcalde and Barberá (1994)). So far, all the results assume that the agents can have strict but otherwise arbitrary preferences. However, in many circumstances, preferences may well be restricted. A natural restriction is as follows: the agents on each side are ordered based on a certain criterion, say age, and the preferences respect these orders in the sense that as one moves away from his/her most preferred choice, his/her preference declines. Such restriction is known as _single-peakedness_(Black, 1948). In Theorem 3, we characterize all single-peaked domains compatible for stability and (group) strategy-proofness under two domain conditions; namely, _cyclical inclusion_ and _anonymity_. 
Cyclical inclusion roughly requires that for every pair of outcomes for an agent, if there exists an admissible preference that prefers the first outcome to the second, then there is another admissible preference that prefers the second outcome to the first. Anonymity requires every man to have the same set of admissible preferences, and so for the women. Unlike Theorem 2, Theorem 3 does not show equivalence between strategy-proofness and group strategy-proofness under stability. It rather shows that under cyclical inclusion and anonymity, the set of single-peaked domains compatible for stability and strategy-proofness is the same as that for stability and group strategy-proofness. Finally, in Section 5, we discuss to which extent we can generalize our results to many-to-one matching markets. We briefly describe a well-studied market - the _college admissions problem_(Gale and Shapley, 1962), and show the impossibility of extending our results to this market by means of an example. The layout of the paper is as follows. In Section 2, we introduce basic notions and notations that we use throughout the paper, describe our model, define matching rules and discuss their standard properties, and present the DA rule. Section 3 presents our results. Section 4 considers a setting with single-peaked preferences. In Section 5, we discuss to which extent we can generalize our results to many-to-one matching markets. Finally, the Appendix contains the proofs. ## 2 Preliminaries ### Basic notions and notations For a finite set \(X\), let \(\mathbb{L}(X)\) denote the set of all strict linear orders over \(X\).3 An element of \(\mathbb{L}(X)\) is called a _preference_ over \(X\). For a preference \(P\in\mathbb{L}(X)\) and distinct \(x,y\in X\), \(x\;P\;y\) is interpreted as "\(x\) is preferred to \(y\) according to \(P\)". For \(P\in\mathbb{L}(X)\), let \(R\) denote the weak part of \(P\), i.e., for any \(x,y\in X\), \(x\;R\;y\) if and only if \(\big{[}x\;P\;y\;\text{or}\;x=y\big{]}\). Furthermore, for \(P\in\mathbb{L}(X)\) and non-empty \(X^{\prime}\subseteq X\), let \(\tau(P,X^{\prime})\) denote the most preferred element in \(X^{\prime}\) according to \(P\), i.e., \(\tau(P,X^{\prime})=x\) if and only if \(\big{[}x\in X^{\prime}\;\text{and}\;x\;P\;y\;\text{for all}\;y\in X^{\prime} \setminus\{x\}\big{]}\). For ease of presentation, we denote \(\tau(P,X)\) by \(\tau(P)\). Footnote 3: A _strict linear order_ is a semiconnex, asymmetric, and transitive binary relation. ### Model There are two finite disjoint sets of agents, the set of _men_\(M=\{m_{1},\ldots,m_{p}\}\) and the set of _women_\(W=\{w_{1},\ldots,w_{q}\}\). Let \(A=M\cup W\) be the set of all agents. Throughout this paper, we assume \(p,q\geq 2\). Let \(\emptyset\) denote the _outside option_ - the null agent. Each man \(m\) has a preference \(P_{m}\) over \(W\cup\{\emptyset\}\), the set of all women and the outside option. The position in which he places the outside option in the preference has the meaning that the only women he is willing to be matched with are those whom he prefers to the outside option. Similarly, each woman \(w\) has a preference \(P_{w}\) over \(M\cup\{\emptyset\}\). We say that woman \(w\) is _acceptable_ to man \(m\) if \(w\;P_{m}\;\emptyset\), and analogously, man \(m\) is _acceptable_ to woman \(w\) if \(m\;P_{w}\;\emptyset\). We denote by \(\mathcal{P}_{a}\) the set of admissible preferences for agent \(a\in A\). 
Clearly, \(\mathcal{P}_{m}\subseteq\mathbb{L}(W\cup\{\emptyset\})\) for all \(m\in M\) and \(\mathcal{P}_{w}\subseteq\mathbb{L}(M\cup\{\emptyset\})\) for all \(w\in W\). A _preference profile_, denoted by \(P_{A}=(P_{m_{1}},\ldots,P_{m_{p}},P_{w_{1}},\ldots,P_{w_{q}})\), is an element of the Cartesian product \(\mathcal{P}_{A}:=\prod\limits_{i=1}^{p}\mathcal{P}_{m_{i}}\times\prod\limits_ {j=1}^{q}\mathcal{P}_{w_{j}}\), that represents a collection of preferences - one for each agent. Furthermore, as is the convention, \(P_{-a}\) denotes a collection of preferences of all agents except for \(a\). Also, for \(A^{\prime}\subseteq A\), let \(P_{A^{\prime}}\) denote a collection of preferences of all agents in \(A^{\prime}\) and \(P_{-A^{\prime}}\) a collection of preferences of all agents not in \(A^{\prime}\). ### Matching rules and their stability A _matching_ (between \(M\) and \(W\)) is a function \(\mu:A\to A\cup\{\emptyset\}\) such that * \(\mu(m)\in W\cup\{\emptyset\}\) for all \(m\in M\), * \(\mu(w)\in M\cup\{\emptyset\}\) for all \(w\in W\), and * \(\mu(m)=w\) if and only if \(\mu(w)=m\) for all \(m\in M\) and all \(w\in W\). Here, \(\mu(m)=w\) means man \(m\) and woman \(w\) are matched to each other under the matching \(\mu\), and \(\mu(a)=\emptyset\) means agent \(a\) is unmatched under the matching \(\mu\). We denote by \(\mathcal{M}\) the set of all matchings. A matching \(\mu\) is _individually rational_ at a preference profile \(P_{A}\) if for every \(a\in A\), we have \(\mu(a)\ R_{a}\ \emptyset\). A matching \(\mu\) is _blocked_ by a pair \((m,w)\in M\times W\) at a preference profile \(P_{A}\) if \(w\ P_{m}\ \mu(m)\) and \(m\ P_{w}\ \mu(w)\). A matching is _stable_ at a preference profile if it is individually rational and is not blocked by any pair at that preference profile. A _matching rule_ is a function \(\varphi:\mathcal{P}_{A}\rightarrow\mathcal{M}\). For a matching rule \(\varphi:\mathcal{P}_{A}\rightarrow\mathcal{M}\) and a preference profile \(P_{A}\in\mathcal{P}_{A}\), let \(\varphi_{a}(P_{A})\) denote the match of agent \(a\) by \(\varphi\) at \(P_{A}\). **Definition 1**.: A matching rule \(\varphi:\mathcal{P}_{A}\rightarrow\mathcal{M}\) is _stable_ if for every \(P_{A}\in\mathcal{P}_{A}\), \(\varphi(P_{A})\) is stable at \(P_{A}\). ### Incentive properties of matching rules In practice, matching rules are often designed to satisfy incentive properties. Two well-studied such requirements are _strategy-proofness_ and _group strategy-proofness_. **Definition 2**.: A matching rule \(\varphi:\mathcal{P}_{A}\rightarrow\mathcal{M}\) is * _strategy-proof_ if for every \(P_{A}\in\mathcal{P}_{A}\), every \(a\in A\), and every \(\tilde{P}_{a}\in\mathcal{P}_{a}\), we have \(\varphi_{a}(P_{A})\ R_{a}\ \varphi_{a}(\tilde{P}_{a},P_{-a})\). * _group strategy-proof_ if for every \(P_{A}\in\mathcal{P}_{A}\), there do not exist a set of agents \(A^{\prime}\subseteq A\) and a preference profile \(\tilde{P}_{A^{\prime}}\) of the agents in \(A^{\prime}\) such that \(\varphi_{a}(\tilde{P}_{A^{\prime}},P_{-A^{\prime}})\ P_{a}\ \varphi_{a}(P_{A})\) for all \(a\in A^{\prime}\). If a matching rule \(\varphi\) on \(\mathcal{P}_{A}\) is not group strategy-proof, then there exist \(P_{A}\in\mathcal{P}_{A}\), a set of agents \(A^{\prime}\subseteq A\), and a preference profile \(\tilde{P}_{A^{\prime}}\) of the agents in \(A^{\prime}\) such that \(\varphi_{a}(\tilde{P}_{A^{\prime}},P_{-A^{\prime}})\ P_{a}\ \varphi_{a}(P_{A})\) for all \(a\in A^{\prime}\). 
In such cases, we say that \(\varphi\)_is manipulable at \(P_{A}\) by coalition \(A^{\prime}\) via \(\tilde{P}_{A^{\prime}}\)_. Note that a coalition can be a singleton, and thus, group strategy-proofness implies strategy-proofness. Notice that all agents in the manipulative coalition should be strictly better off from misreporting. We consider this requirement compelling, since it leaves no doubt regarding the incentives for each member of the coalition to participate in a collective deviation from truthful revelation. ### Deferred acceptance _Deferred acceptance (DA) rule_(Gale and Shapley, 1962) is the salient rule in our model for its theoretical appeal. 1. It is a stable matching rule. In fact, it is the stable matching rule optimal for the agents on the "proposing" side (see Gale and Shapley (1962) for details). 2. For the "proposing" side of the market, not only it is strategy-proof but also group strategy-proof (see Dubins and Freedman (1981) for details). There are two types of the DA rule: the _men-proposing DA (MPDA) rule_ - denoted by \(D^{M}\), and the _women-proposing DA (WPDA) rule_. In the following, we provide a description of the MPDA rule at a preference profile \(P_{A}\). The same of the WPDA rule can be obtained by interchanging the roles of men and women in the MPDA rule. _Step 1_.: Each man \(m\) proposes to his most preferred acceptable woman (according to \(P_{m}\)).4 Every woman \(w\), who has at least one proposal, tentatively keeps her most preferred acceptable man (according to \(P_{w}\)) among these proposals and rejects the rest. Footnote 4: That is, if the most preferred woman of a man is acceptable to that man, he proposes to her. Otherwise, he does not propose to anybody. _Step 2_.: Every man \(m\), who was rejected in the previous step, proposes to his next preferred acceptable woman. Every woman \(w\), who has at least one proposal including any proposal tentatively kept from the earlier steps, tentatively keeps her most preferred acceptable man among these proposals and rejects the rest. This procedure is then repeated from Step 2 till a step such that for each man, one of the following two happens: (i) he is accepted by some woman, (ii) he has proposed to all acceptable women. At this step, the proposal tentatively accepted by women becomes permanent. This completes the description of the MPDA rule. **Remark 1**(Gale and Shapley, 1962).: On the unrestricted domain \(\mathbb{L}^{p}(W\cup\{\emptyset\})\times\mathbb{L}^{q}(M\cup\{\emptyset\})\), both the DA rules are stable. Results ### Structure of manipulative coalitions for the MPDA rule Dubins and Freedman (1981) show that no coalition of men can manipulate the MPDA rule on the unrestricted domain, while Roth (1982) shows that no stable matching rule on the unrestricted domain is strategy-proof.5 In view of these results, it follows that whenever the MPDA rule is manipulable by a coalition, at least one woman must be in that coalition. It turns out that the coalition not only contains at least one woman, but must be a group of women. Footnote 5: Roth (1982) proves this result in a setting without outside options and with an equal number (at least three) of men and women. However, the result can be extended to our setting (i.e., with outside options and with arbitrary values (at least two) of the number of men and the number of women). See Example 1 for a stronger result. **Theorem 1**.: _Suppose a coalition \(A^{\prime}\subseteq A\) manipulates the MPDA rule at some preference profile. 
Then, \(A^{\prime}\subseteq W\)._ The result of Dubins and Freedman (1981) follows from Theorem 1. **Corollary 1** (Dubins and Freedman, 1981).: _On the unrestricted domain \(\mathbb{L}^{p}(W\cup\{\emptyset\})\times\mathbb{L}^{q}(M\cup\{\emptyset\})\), no coalition of men can manipulate the MPDA rule._ Our next result concerns how a manipulation affects the MPDA rule from the agents' point of view. While every woman in the market weakly benefits from a successful misreporting, each man weakly suffers. Notice that Theorem 1 follows from this result. **Proposition 1**.: _On an arbitrary domain \(\mathcal{P}_{A}\), suppose the MPDA rule \(D^{M}\) is manipulable at \(P_{A}\in\mathcal{P}_{A}\) by coalition \(A^{\prime}\subseteq A\) via \(\tilde{P}_{A^{\prime}}\in\prod\limits_{a\in A^{\prime}}\mathcal{P}_{a}\). Then,_ * \(D^{M}_{m}(P_{A})\;R_{m}\;D^{M}_{m}(\tilde{P}_{A^{\prime}},P_{-A^{\prime}})\) _for all_ \(m\in M\)_, and_ * \(D^{M}_{w}(\tilde{P}_{A^{\prime}},P_{-A^{\prime}})\;R_{w}\;D^{M}_{w}(P_{A})\) _for all_ \(w\in W\)_._ Our last result in this subsection says that the set of unmatched agents does not get affected by manipulation. **Proposition 2**.: _On an arbitrary domain \(\mathcal{P}_{A}\), suppose the MPDA rule \(D^{M}\) is manipulable at \(P_{A}\in\mathcal{P}_{A}\) by coalition \(A^{\prime}\subseteq A\) via \(\tilde{P}_{A^{\prime}}\in\prod\limits_{a\in A^{\prime}}\mathcal{P}_{a}\). Then, for every \(a\in A\),_ \[D^{M}_{a}(P_{A})=\emptyset\iff D^{M}_{a}(\tilde{P}_{A^{\prime}},P_{-A^{\prime}})=\emptyset.\] Footnote 6: Throughout, "\(A\implies B\)" means that "\(A\) implies \(B\)", and "\(A\iff B\)" means that "\(A\) if and only if \(B\)". A key implication of Proposition 2 is that an unmatched agent cannot be a part of a manipulation. Note that by symmetry, Proposition 2 also holds for the WPDA rule. Proposition 2 cannot be deduced from a result of McVitie and Wilson (1970), where they show that the set of unmatched agents remains the same across the stable matchings at a preference profile. To see this, note that in Proposition 2, the matching \(D^{M}(P_{A})\) is not stable at \((\tilde{P}_{A^{\prime}},P_{-A^{\prime}})\), and the matching \(D^{M}(\tilde{P}_{A^{\prime}},P_{-A^{\prime}})\) is not stable at \(P_{A}\) in general.
If \(\mathcal{P}_{A}\) satisfies top dominance for women, then the MPDA rule is stable and group strategy-proof on \(\mathcal{P}_{A}\)._ As we can see, for the MPDA rule, strategy-proofness coincides with group strategy-proofness when the domain satisfies top dominance for women. This coincidence is not an implication of top dominance, but rather a property of any stable matching rule. This is what we show next. We first introduce a richness condition, called _unrestricted top pairs_ (Alva, 2017), of the domain that features in our result. **Definition 4** (Unrestricted top pairs).: A domain of preference profiles \(\mathcal{P}_{A}\) satisfies _unrestricted top pairs for men_ if for every \(m\in M\), 1. for every \(w,w^{\prime}\in W\), there exists \(P\in\mathcal{P}_{m}\) such that \(w\;P\;w^{\prime}\;P\;z\) for all \(z\in(W\cup\{\emptyset\})\setminus\{w,w^{\prime}\}\), 2. for every \(w\in W\), there exists \(\tilde{P}\in\mathcal{P}_{m}\) such that \(w\;\tilde{P}\;\emptyset\;\tilde{P}\;z\) for all \(z\in W\setminus\{w\}\), and 3. there exists \(P^{\prime}\in\mathcal{P}_{m}\) such that \(\tau(P^{\prime})=\emptyset\). For example, whenever the sets of admissible preferences for men are unrestricted, the corresponding domain satisfies unrestricted top pairs for men. We define _unrestricted top pairs for women_ in an analogous manner. We now present the main result of this paper. It shows the equivalence between strategy-proofness and group strategy-proofness for any stable matching rule under our richness condition. **Theorem 2**.: _Let \(\mathcal{P}_{A}\) satisfy unrestricted top pairs for at least one side of the market. Then, any stable matching rule on \(\mathcal{P}_{A}\) is strategy-proof if and only if it is group strategy-proof._ Note that the domain satisfying unrestricted top pairs for at least one side of the market is a sufficient condition for the equivalence between strategy-proofness and group strategy-proofness under stability, not for the existence of a stable and (group) strategy-proof matching rule. In fact, if the domain satisfies unrestricted top pairs for both sides, no stable matching rule is strategy-proof (see Example 1), and therefore, Theorem 2 is vacuously satisfied. **Example 1**.: Suppose the domain \(\mathcal{P}_{A}\) satisfies unrestricted top pairs for both sides. Consider the preference profiles presented in Table 1. For instance, \(w_{1}w_{2}\ldots\) denotes a preference that ranks \(w_{1}\) first and \(w_{2}\) second (the dots indicate that all preferences for the corresponding parts are irrelevant and can be chosen arbitrarily). Here, \(m_{k}\) denotes a man other than \(m_{1},m_{2}\) (if any), and \(w_{l}\) denotes a woman other than \(w_{1},w_{2}\) (if any). Note that only man \(m_{1}\) changes his preference from \(P_{A}^{1}\) to \(P_{A}^{2}\) and only woman \(w_{1}\) changes her preference from \(P_{A}^{1}\) to \(P_{A}^{3}\). For ease of presentation, let \(\mu\) denote the matching \(\big{[}(m_{1},w_{1}),(m_{2},w_{2}),(a,\emptyset)\;\;\forall\;\;a\in A\setminus\{m_{1},m_{2},w_{1},w_{2}\}\big{]}\) and \(\tilde{\mu}\) the matching \(\big{[}(m_{1},w_{2}),(m_{2},w_{1}),(a,\emptyset)\;\;\forall\;\;a\in A\setminus\{m_{1},m_{2},w_{1},w_{2}\}\big{]}\) in this example. The sets of stable matchings at \(P_{A}^{1}\), \(P_{A}^{2}\), and \(P_{A}^{3}\) are \(\{\mu,\tilde{\mu}\}\), \(\{\mu\}\), and \(\{\tilde{\mu}\}\), respectively. Fix a stable matching rule \(\varphi\) on \(\mathcal{P}_{A}\).
If \(\varphi(P_{A}^{1})=\mu\), then \(w_{1}\) can manipulate at \(P_{A}^{1}\) via \(P_{w_{1}}^{3}\). If \(\varphi(P_{A}^{1})=\tilde{\mu}\), then \(m_{1}\) can manipulate at \(P_{A}^{1}\) via \(P_{m_{1}}^{2}\). This implies \(\varphi\) is not strategy-proof on \(\mathcal{P}_{A}\). \(\Diamond\)
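For concreteness, the men-proposing deferred acceptance procedure described above can be sketched in a few lines of Python. Preferences are given as ranked lists in which None plays the role of the outside option \(\emptyset\) (agents ranked below None are unacceptable); the toy profile at the end is purely illustrative and is not the profile of Table 1.

```python
def mpda(men_prefs, women_prefs):
    """Men-proposing deferred acceptance.

    men_prefs[m] / women_prefs[w]: list ranking the other side plus None,
    where None is the outside option. Returns a dict mapping every agent
    to a partner or None."""
    def acceptable(pref):                      # agents ranked above None
        return pref[:pref.index(None)]

    next_choice = {m: 0 for m in men_prefs}    # next woman each man will propose to
    engaged = {w: None for w in women_prefs}   # tentative acceptances
    free_men = list(men_prefs)

    while free_men:
        m = free_men.pop()
        targets = acceptable(men_prefs[m])
        if next_choice[m] >= len(targets):
            continue                           # m has exhausted his acceptable women
        w = targets[next_choice[m]]
        next_choice[m] += 1
        rank = women_prefs[w]
        current = engaged[w]
        if m not in acceptable(rank):
            free_men.append(m)                 # w rejects an unacceptable proposer
        elif current is None or rank.index(m) < rank.index(current):
            engaged[w] = m                     # w tentatively keeps her preferred proposal
            if current is not None:
                free_men.append(current)
        else:
            free_men.append(m)                 # w rejects m

    matching = {w: m for w, m in engaged.items()}
    matching.update({m: w for w, m in engaged.items() if m is not None})
    for m in men_prefs:
        matching.setdefault(m, None)
    return matching

# Illustrative profile (not the one in Table 1): both men rank w1 over w2,
# both women rank m1 over m2, and everyone is acceptable.
men = {'m1': ['w1', 'w2', None], 'm2': ['w1', 'w2', None]}
women = {'w1': ['m1', 'm2', None], 'w2': ['m1', 'm2', None]}
print(mpda(men, women))   # {'w1': 'm1', 'w2': 'm2', 'm1': 'w1', 'm2': 'w2'}
```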
2306.08090
Geometric Active Disturbance Rejection Control of Rotorcraft on $SE(3)$ with Fast Finite-Time Stability
This article presents a tracking control framework enhanced by an extended state observer for a rotorcraft aerial vehicle modeled as a rigid body in three-dimensional translational and rotational motions. The system is considered as an underactuated system on the tangent bundle of the six-dimensional Lie group of rigid body motions, $SE(3)$. The extended state observer is designed to estimate the resultant external disturbance force and disturbance torque acting on the vehicle. It guarantees stable convergence of disturbance estimation errors in finite time when the disturbances are constant and finite time convergence to a bounded neighborhood of zero errors for time-varying disturbances. This extended state observer design is based on a H\"{o}lder-continuous fast finite time stable differentiator that is similar to the super-twisting algorithm, to obtain fast convergence. A tracking control scheme that uses the estimated disturbances from extended state observer for disturbance rejection, is designed to achieve fast finite-time stable tracking control. Numerical simulations are conducted to validate the proposed extended state observer and tracking control scheme with disturbance rejection. The proposed extended state observer is compared with other existing research to show its supremacy.
Ningshan Wang, Reza Hamrah, Amit K. Sanyal, Mark N. Glauser
2023-06-13T19:14:58Z
http://arxiv.org/abs/2306.08090v1
Geometric Active Disturbance Rejection Control of Rotorcraft on \(\mathrm{SE}(3)\) with Fast Finite-Time Stability ###### Abstract This article presents a tracking control framework enhanced by an extended state observer for a rotorcraft aerial vehicle modeled as a rigid body in three-dimensional translational and rotational motions. The system is considered as an underactuated system on the tangent bundle of the six-dimensional Lie group of rigid body motions, \(\mathrm{SE}(3)\). The extended state observer is designed to estimate the resultant external disturbance force and disturbance torque acting on the vehicle. It guarantees stable convergence of disturbance estimation errors in finite time when the disturbances are constant and finite time convergence to a bounded neighborhood of zero errors for time-varying disturbances. This extended state observer design is based on a Hölder-continuous fast finite time stable differentiator that is similar to the super-twisting algorithm, to obtain fast convergence. A tracking control scheme that uses the estimated disturbances from the extended state observer for disturbance rejection is designed to achieve fast finite-time stable tracking control. Numerical simulations are conducted to validate the proposed extended state observer and tracking control scheme with disturbance rejection. The proposed extended state observer is compared with other existing research to show its supremacy. Geometric Control, Extended State Observer, Fast Finite-Time Stability, Unmanned Aerial Vehicle ## I Introduction Small-scale rotorcraft unmanned aerial vehicles (UAVs) have become increasingly popular in various applications, such as security and monitoring, infrastructure inspection, agriculture, wildland management, package delivery, and remote sensing. However, these UAVs are frequently exposed to dynamic uncertainties and disturbances caused by turbulence induced by airflow around structures or regions. Therefore, it is crucial to ensure robust flight control performance in such challenging environments, with guaranteed stability margins even in the presence of dynamic disturbances and uncertainties. To this end, this article describes robust tracking control schemes for a rotorcraft UAV under disturbances and uncertainties. Recent research articles on rotorcraft UAV tracking control schemes use different methods to tackle the adverse effects of disturbances and uncertainties during the flight. Torrente et al. [1] use Gaussian processes to complement the nominal dynamics of the multi-rotor in a model predictive control (MPC) pipeline. Hanover et al. [2] use an explicit scheme to discretize the dynamics for the nonlinear MPC solved by optimization. Bangura et al. [3] use the propeller aerodynamics as a direct feedforward term on the desired thrust to re-regulate the thrust command of the rotors. Craig et al. [4] implement a set of pitot tubes onto the multi-rotor aircraft to directly sense the aircraft's airspeed. With the knowledge of propeller aerodynamic characteristics, the airspeed is then utilized to obtain the disturbance forces and torques as feedforward terms to enhance control performance. Bisheban et al. [5] use artificial neural networks to obtain disturbance forces and torques with the kinematics information of the aircraft and then use the baseline control scheme based on the work by Lee et al. [6] in their tracking control scheme design.
The methods used in these research articles either need high computational efforts [1, 2, 5] or require precise modeling of the aerodynamic characteristics of the rotorcraft propellers [3, 4], to obtain satisfactory control performance against disturbances. A promising control technique to maintain the control performance against disturbances and uncertainties is active disturbance rejection control (ADRC), which can be traced back to the dissertation by Hartlieb [7]. In an ADRC scheme, we first obtain an estimation of the unknown disturbance from a disturbance observer (DO) or an extended state observer (ESO) and then utilize it in the control design to reject the disturbance. ADRC and ESO are formally introduced in combination in [8], where the ESO is used to obtain disturbance estimates for disturbance rejection. Other than ESO, disturbance observer (DO) [9], and unknown input observer (UIO) [10] can also give disturbance estimates for a disturbance rejection control scheme. ADRC schemes are widely used for rotorcraft UAV control. In the research articles by Shao et al. [11], the disturbance estimation from asymptotically stable (AS) ESOs is employed to enhance surface trajectory tracking control scheme for a multi-rotor UAV in the presence of parametric uncertainties and external disturbances. Liu et al. [12] propose fixed-time stable disturbance observers (FxTSDO) and fault-tolerance mechanisms and utilize them in their translation and attitude control scheme. Mechali et al. [13] present FxTS ESOs for the same purpose. Wang et al. [14] implement incremental nonlinear dynamics inversion (INDI) control combined with a sliding-mode observer (SMO) for disturbance estimation and rejection. Jia et al. [15] employ the disturbance model obtained by Faessler et al. [16], and then estimate the drag coefficient as a parameter. This disturbance model is also employed by Moeini et al. [17]. Cui et al. [18] use an adaptive super-twisting ESO for the disturbance estimation. Bhale et al. [19] carry out disturbance estimation with the discrete-time finite-time stable (FTS) disturbance observer by Sanyal [20].
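To make the ADRC idea described above concrete, the following sketch simulates a standard linear extended state observer for a scalar double integrator, with the lumped disturbance treated as an extended state and cancelled in the control law. This is a generic textbook construction for illustration only; it is not the Hölder-continuous FTS/FxTS observers proposed in the cited works, and the gains, time step, and disturbance signal are illustrative assumptions.

```python
import numpy as np

# Plant: x1' = x2, x2' = u + d(t); the ESO estimates (x1, x2, d).
dt, T = 1e-3, 5.0
l1, l2, l3 = 60.0, 1200.0, 8000.0        # observer gains (poles placed at -20, illustrative)

x = np.zeros(2)                           # true state
z = np.zeros(3)                           # ESO state: [x1_hat, x2_hat, d_hat]
for k in range(int(T / dt)):
    t = k * dt
    d = 1.5 + 0.5 * np.sin(2.0 * t)       # unknown lumped disturbance
    u = -2.0 * z[0] - 3.0 * z[1] - z[2]   # ADRC: PD feedback plus estimated-disturbance cancellation

    # True plant integration (Euler).
    x += dt * np.array([x[1], u + d])

    # Linear ESO driven by the measurement error e = x1 - x1_hat.
    e = x[0] - z[0]
    z += dt * np.array([z[1] + l1 * e,
                        z[2] + u + l2 * e,
                        l3 * e])

print("disturbance estimation error:", abs(z[2] - d))
```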
2310.15041
Manipulation Mask Generator: High-Quality Image Manipulation Mask Generation Method Based on Modified Total Variation Noise Reduction
In artificial intelligence, any model that aims to achieve good results depends on a large amount of high-quality data. This is especially true in the field of tamper detection. This paper proposes a modified total variation noise reduction method to acquire high-quality tampered images. We automatically crawl original and tampered images from the Baidu PS Bar, a website where internet users post countless tampered images. Subtracting the tampered image from the original image can highlight the tampered area. However, there is also substantial noise in the resulting difference image, so these images can't be directly used in a deep learning model. Our modified total variation noise reduction method is aimed at solving this problem. Because text strokes are slender, it is easy to lose text information after morphological opening and closing operations. We use MSER (Maximally Stable Extremal Regions) and NMS (Non-maximum Suppression) techniques to extract text information, and then use the modified total variation noise reduction technique to process the subtracted image. Finally, we can obtain an image with little noise by adding the denoised image and the text information, so the approach also largely retains the text information. Datasets generated in this way can be used in deep learning models and will help the models achieve better results.
Xinyu Yang, Jizhe Zhou
2023-10-23T15:40:00Z
http://arxiv.org/abs/2310.15041v1
Manipulation Mask Generator: High-Quality Image Manipulation Mask Generation Method Based on Modified Total Variation Noise Reduction ###### Abstract In artificial intelligence, any model that aims to achieve good results depends on a large amount of high-quality data. This is especially true in the field of tamper detection. This paper proposes a modified total variation noise reduction method to acquire high-quality tampered images. We automatically crawl original and tampered images from the Baidu PS Bar, a website where internet users post countless tampered images. Subtracting the tampered image from the original image can highlight the tampered area. However, there is also substantial noise in the resulting difference image, so these images can't be directly used in a deep learning model. Our modified total variation noise reduction method is aimed at solving this problem. Because text strokes are slender, it is easy to lose text information after morphological opening and closing operations. We use MSER (Maximally Stable Extremal Regions) and NMS (Non-maximum Suppression) techniques to extract text information, and then use the modified total variation noise reduction technique to process the subtracted image. Finally, we can obtain an image with little noise by adding the denoised image and the text information, so the approach also largely retains the text information. Datasets generated in this way can be used in deep learning models and will help the models achieve better results. total variation, automated data crawling, maximally stable extremal regions ## I Introduction With the advances in image editing techniques and user-friendly editing software, low-cost tampered or manipulated image generation processes have become widely available [4]. Non-professional users can easily edit and tamper with images without leaving obvious visual traces [5]. Some tampered images are amusing and harmless, but some can damage the reputation of others and even spread rumors that cause panic. Meanwhile, traditional detection methods play a limited role in processing these images. Therefore, deep learning is increasingly needed in this field. On the other hand, deep learning models have demonstrated their power in many applications. For example, convolutional neural networks (CNNs) achieve promising performance in many computer vision and natural language processing applications [6]. Deep learning provides a novel approach to identifying features of tampered regions, which inherently represent characteristics of the tampered regions appearing in the dataset [14]. Compared to traditional image detection methods, deep learning models can automatically learn the features and patterns of images from a large amount of data and can more accurately recognize and classify these features and patterns, thus achieving higher accuracy. They can also learn more complex features and patterns, thus better coping with various deformations and distortions in image tampering and improving the robustness and stability of the algorithm. Also, deep learning models can increase the number of layers and parameters to improve the complexity and performance of the algorithm, thus better adapting to different image tampering scenarios and application requirements. Besides, deep learning models can automatically adjust their parameters according to different tampering scenarios and datasets, thus better adapting to various image tampering detection tasks. 
Meanwhile, faster processing speed is also one of their advantages: deep learning models can use high-performance hardware such as GPUs to accelerate computing, thus achieving faster tampering detection to meet real-time and efficiency requirements. In image tampering detection, deep learning requires large, high-quality datasets for training and validation. Large-scale, high-quality datasets play an essential role in the deep learning era, acting as the catalyst that stimulates and accelerates technique development [18]. Some databases for CMFD (copy-move forgery detection) already exist, but they are not suited for evaluating post-processing methods [15]. Firstly, current datasets suffer from homogeneity and insufficient sample size, making it difficult for deep learning algorithms to generalize to new tampering techniques in practical applications. Secondly, existing datasets are often generated through simulated tampering behavior, which may only partially reflect the tampering situations found in real-world scenarios. Therefore, collecting and annotating real-world datasets is vital but requires significant time and cost. We list the five main datasets and their shortcomings in Table 1. DEFACTO is a novel dataset for image and face manipulation detection and localization [17]. CASIA v1.0 is one of the more commonly used datasets for image tampering detection, containing a small number of manually tampered images and synthetically tampered images generated by copying and pasting. The database is made publicly available for researchers to compare and evaluate their proposed tampering detection techniques [12]. However, the sample size is relatively small, and covering all possible tampering scenarios is difficult. NIST 2016 was used in the image tampering detection competition held by the National Institute of Standards and Technology (NIST) in 2016. It contains a large number of tampered images and real-world images, but it is private, and only competition participants can use it, making it difficult for researchers to access. Although COVERAGE includes both manually tampered images and real-world images, the dataset is still being expanded, and the sample size is relatively small, making it challenging to meet the needs of deep learning algorithms. CASIA v2.0 is a large-scale face dataset released by the Chinese Academy of Sciences. The dataset still has some problems, such as an imbalanced and limited subject distribution, issues with image quality and metadata, and potential for overfitting. These issues may limit the generalization ability of machine learning models trained solely on CASIA v2.0. We show our results in Fig. 1. Considering the increasing demand for public databases for image forensics [12], we crawl images from websites that host tampered images, such as the Baidu PS Bar. It is a perfect source of original and tampered images. Most users will ask others to help them modify the picture they offer; in that case, under their posts, there are often a large number of tampered images. We save the original and tampered images from different posts, which allows us to collect a large amount of data in a short time. The usual way to find the difference between the tampered image and the original image is to subtract the tampered image from the original one. However, in the process of modifying the image, users often make changes across the entire image. This makes the result of the subtraction contain a lot of noise, sometimes even full-screen noise. 
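To make the crawl-then-subtract step concrete, a minimal sketch of the differencing operation is given below. It assumes OpenCV is available, and the file names and output are placeholders rather than the exact pipeline of this paper.

```python
import cv2
import numpy as np

# Illustrative sketch of the subtraction step: highlight candidate tampered
# regions by differencing an original/tampered pair of equal size.
original = cv2.imread("original.jpg", cv2.IMREAD_GRAYSCALE)
tampered = cv2.imread("tampered.jpg", cv2.IMREAD_GRAYSCALE)

if original is None or tampered is None:
    raise FileNotFoundError("put an original/tampered pair next to this script")
if original.shape != tampered.shape:
    raise ValueError("only pairs of identical size are processed")

# Absolute difference; untouched pixels are (close to) zero, edited pixels are not.
diff = cv2.absdiff(original, tampered)

# A quick look at how noisy the raw difference is (the motivation for the
# total variation step described next).
print("nonzero pixels in raw difference:", int(np.count_nonzero(diff)),
      "of", diff.size)
cv2.imwrite("raw_difference.png", diff)
```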
Using total variation noise reduction can effectively solve this problem. This method combines character recognition technology and morphological opening and closing operations, which significantly retain text information while reducing noise. In general, the contributions of our paper are: * We propose a modified total variation noise reduction method. Total variation noise reduction can remove various types of noise, which helps us obtain a high-quality mask. * We design a way to automatically obtain a large number of datasets, which helps us quickly collect a large amount of usable data. ## II Related Works ### _Automated Data Crawling_ Datasets in the image tampering detection field are usually small because manual annotation is required. A growing wealth of information and increasingly sophisticated interfaces have necessitated automated processing in recent years [7]. Conventional web scraping methods for data collection include web crawlers, API calls, RSS subscriptions, and database access. An API, or Application Programming Interface, is a set of software tools that allows different applications to interact with each other. Through an API, we can obtain the desired data from other applications and receive the data in a machine-readable format. RSS is a standard protocol for obtaining updates from blogs, news websites, and other sources; updated content can be received in real time by subscribing to RSS feeds. What we have applied is a web crawler. A web crawler is a program or piece of software that traverses the Web and downloads web documents in a methodical, automated manner [8]. Crawlers usually access the target website's pages according to predetermined rules, extract the data, and save it locally or in a database. On the network, there are plenty of tampered images. We need to collect tampered images and their original images to get data that can be used in a deep learning model. PS websites can meet this need, and the PS bar in Baidu Post is our first choice.
Fig. 1: We list four groups of images; from left to right: the original image, the tampered image, the noise image after subtracting the two images, and finally, our processing result. We can see that our method accurately finds the tampered area. The meaning of the Chinese characters: pay 50% to remove the watermark.
### _Total Variation_ The total variation was introduced in computer vision by Rudin, Osher, and Fatemi [3] as a regularizing criterion for solving inverse problems [9]. It has many applications in image processing, computer vision, and image analysis. The total variation can be understood as the L1 norm of the image gradient, that is, the sum of the absolute values of the differences between adjacent pixel values. The smaller the total variation, the smoother the image, that is, the smaller the changes between pixel values; conversely, a larger total variation indicates more detail and texture information in the image. In image processing, total variation is often used as a regularization term to constrain the solution of optimization problems, such as image noise reduction, image segmentation, and image enhancement. Total variation regularization can be achieved by adding a total variation term to the objective function being optimized. Introducing the total variation regularization term can effectively suppress noise while protecting image edge details, thereby obtaining better image processing results. 
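As a deliberately simplified illustration of total variation used as a regularization term, the following sketch runs a few explicit gradient steps on an ROF-style objective (a data-fidelity term plus a smoothed TV term). The weight, step size, and iteration count are illustrative assumptions and not the exact settings used later in this paper.

```python
import numpy as np

def tv_denoise(noisy, weight=0.1, step=0.1, n_iter=100, eps=1e-8):
    """A few explicit gradient steps on 0.5*||u - noisy||^2 + weight * TV(u),
    using a smoothed (differentiable) total variation term."""
    u = noisy.astype(np.float64).copy()
    for _ in range(n_iter):
        # Forward differences (discrete image gradient).
        gx = np.diff(u, axis=1, append=u[:, -1:])
        gy = np.diff(u, axis=0, append=u[-1:, :])
        norm = np.sqrt(gx ** 2 + gy ** 2 + eps)
        px, py = gx / norm, gy / norm
        # The gradient of the smoothed TV term is minus the divergence of (px, py).
        div = (np.diff(px, axis=1, prepend=px[:, :1])
               + np.diff(py, axis=0, prepend=py[:, :1]))
        grad = (u - noisy) - weight * div
        u -= step * grad
    return u

# Tiny synthetic test: a noisy step edge becomes smoother while the edge survives.
rng = np.random.default_rng(0)
clean = np.zeros((64, 64)); clean[:, 32:] = 1.0
noisy = clean + 0.2 * rng.standard_normal(clean.shape)
denoised = tv_denoise(noisy)
print("mean squared error before/after:",
      round(float(np.mean((noisy - clean) ** 2)), 4),
      round(float(np.mean((denoised - clean) ** 2)), 4))
```

The modified method described in Section III additionally combines the denoised result with MSER-extracted text regions and applies morphological post-processing.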
In addition to two-dimensional images, total variation can be extended to three-dimensional images, video, and other signal processing problems with wide application value. Arguably, the success of TV-based regularization lies in a good balance between the ability to model piece-wise smooth images and the difficulty of the resulting optimization problems [10]. ## III Proposed Methods ### _Maximally Stable Extremal Regions_ Our general working pipeline is depicted in Fig. 2. To begin, we crawl large amounts of raw data from websites containing tampered images. Only tampered images of the same size as the original can be processed. After the two images are subtracted and converted into a gray image, we obtain an image with considerable noise. Usually, words in the image are visible but thin, and they are critical modified parts. However, processing them directly with total variation noise reduction will lead to the loss of this information, so we must pre-process the image with text recognition to retain it. As the intensity contrast of text against its background is typically significant, and a uniform intensity or color within every letter can be assumed, MSER is a natural choice for text detection [11]. The SIFT and SURF algorithms proposed by Lowe and Bay efficiently achieve feature detection with scale and rotation invariance, but these features are not affine-invariant. For various image regions with different shapes, affine invariance is achieved by region rotation and size normalization. MSER is one of the most influential algorithms in region detection. The concept can be explained as follows. Imagine all possible thresholdings of a gray-level image I. We will refer to the pixels below a threshold as 'black' and to those above or equal as 'white.' If we were shown a movie of thresholded images, with each frame corresponding to a threshold, we would see first a white image. Subsequently, black spots corresponding to local intensity minima will appear and grow. At some point, regions corresponding to two local minima will merge. Finally, the last image will be black. The set of all connected components of all frames of the movie is the set of all maximal regions; minimal regions can be obtained by inverting the intensity of I and running the same process [2]. Its mathematical principle is as follows: \[v(i)=\frac{|Q_{i+\Delta}-Q_{i-\Delta}|}{|Q_{i}|} \tag{1}\]
Fig. 2: Modified total variation noise reduction architecture to obtain a high-quality mask. A large amount of raw data is crawled from websites containing tampered images. The original and tampered images are converted into gray images and then subtracted. Contour detection is used to extract the text part, and the gray image is iterated one hundred times by total variation noise reduction. Then the image is binarized and processed by morphological opening and closing operations. The text part and the binary image add up to a high-quality mask.
### _Non-maximum Suppression_ MSER allows us to enclose most of the suspected text information in bounding boxes. However, dozens of boxes may surround the same region. These boxes overlap, which seriously interferes with our selection of the best effective region, so we use the Non-maximum Suppression method to choose the best one. In 2009, Pedro F. Felzenszwalb [1] described an object detection system that represents highly variable objects using mixtures of multiscale deformable part models. 
These models are trained using a discriminative procedure that only requires bounding boxes for the objects in a set of images [1]. The proposed technique is a component-based object detection system that uses a hybrid representation of multiscale deformable component models. In this detection system, NMS removes overlaps between multiple components, and we use NMS to remove overlaps as well. In target detection, NMS is a post-processing method for removing overlapping bounding boxes. After applying the bounding box prediction method described above, we have a set of detections D for a particular object category in an image. Each detection is defined by a bounding box and a score. We sort the detections in D by score and greedily select the highest-scoring ones while skipping detections whose bounding boxes are at least 50% covered by a bounding box of a previously selected detection [1]. Following this approach, we can remove overlaps as shown in Fig. 3. We can obtain many bounding boxes from MSER. In terms of evaluation metrics for bounding box regression, Intersection over Union (IoU) is the most popular metric [16]. \[\mathrm{IoU}=\frac{|A\cap B|}{|A\cup B|} \tag{2}\] We set an IoU threshold of 0.4: any box whose overlap with a higher-scoring box reaches this ratio is deleted, and the highest-scoring box is kept. The boxes below the threshold remain, the remaining boxes are sorted, the box with the highest confidence value is selected, and the intersection-over-union comparison is repeated. ### _Modified Total Variation Noise Reduction_ Rudin et al. [3] observed in 1990 that the total variation of noise-contaminated images is significantly larger than that of noise-free images. Total variation (TV) methods are very effective for recovering "blocky," possibly discontinuous, images from noisy data [13]. The total variation is defined as the integral of the gradient magnitude, so limiting the total variation limits the noise. The total variation of the image is minimized subject to constraints involving the noise statistics. The constraints are imposed using Lagrange multipliers, and the solution is obtained using the gradient-projection method. This amounts to solving a time-dependent partial differential equation on a manifold determined by the constraints; as \(t\rightarrow\infty\), the solution converges to a steady state, which is the denoised image. The numerical algorithm is simple and relatively fast [3]. In the data we obtain, signal that is likely to be spurious detail has a high total variation; that is, the integral of the absolute gradient of the signal is high. According to this principle, the total variation of the signal is reduced while keeping it closely matched to the original signal. Unwanted details are removed while important details, such as edges, are retained. The mathematical definition of the total variation problem is as follows: \[\inf_{u}I(u),\quad I(u):=\int_{\Omega}|\nabla u|\,\mathrm{d}x\,\mathrm{d}y\] However, a digital image is not a continuous function: the whole image is composed of pixels, and they are discrete. We therefore replace the gradient by the difference between a pixel \(x_{t}\) and its neighbor \(x_{t+1}\). The formula is modified as follows: \[V(y)=\sum_{i,j}\sqrt{\left|y_{i+1,j}-y_{i,j}\right|^{2}+\left|y_{i,j+1}-y_{i,j}\right|^{2}}\] In this way, we can process the masks that have massive noise. For the processed image, we calculate each pixel's first-order and second-order partial derivatives in the x and y directions in a loop. 
To keep the result from deviating too much from the original image, we add a fidelity term to the objective being minimized. The value of each point is calculated according to the formula, and we iterate the above operation one hundred times. The resulting image still can't be used directly because it is even more blurred. Applying two erosion operations to this image and then binarizing it with a threshold of 15, we can get an image that has less noise. There are still some sporadic pixels we don't need; we can get an image with far less noise by processing them with 8 erosion operations and 2 dilation operations. Besides, choosing different values for the constants in the fidelity term results in different processed images. Adding up the white parts of the total-variation-denoised image and the text image obtained by MSER, we obtain a high-quality mask that can be used in a deep learning model.
Fig. 3: The results of NMS. MSER gives us several rectangular frames, some of which are too large, some too small, and some overlapping too much, as shown in the left diagram. Our NMS step reduces these multiple frames to the single most representative frame, as the right image shows.
## IV Experiments and Results ### _Maximally Stable Extremal Regions Experiments and Results_ Experiments are conducted on the tampered image dataset crawled from Baidu Post. The dataset has more than 45000 images. We group images crawled from the same post into one category. We regard the first image as the original; the remaining images are mostly tampered versions of it. We discard images that are not the same size as the original, since most of these are irrelevant emoticons. MSER can help us easily box out the words in the mask. There are 71 bounding boxes before NMS and 41 bounding boxes after NMS; the result can be seen in Fig. 4. The words we boxed out are clearly visible, and in OCR image text recognition, most of the characters can be identified. Because of the lack of tampered images containing words, we randomly selected 50 images, ran OCR text detection on them, and tabulated the results. The text sizes in the pictures are divided into five categories in Fig. 5, and the text recognition rates are shown in Table II. The rates for tiny and small text are low because, in the tampered image, small words are greatly affected by tampering. Such words are significantly deformed after the subtraction operation and are therefore not recognized by OCR. For larger fonts, only a few extremely deformed characters cannot be recognized. However, all the characters are converted into white parts in the final mask, so even the fonts that cannot be identified are still marked by our algorithm. ### _Modified Total Variation Noise Reduction Experiments and Results_ After extracting the required text information, we continue to perform total variation noise reduction on the subtracted image. We combine the graphic part obtained by the total variation noise reduction with the text part obtained by the MSER processing. The result is eroded twice to obtain the final mask. We deploy the program on our server for 24-hour automatic image acquisition and processing to get high-quality masks. Part of the final results are shown in Fig. 6. Our operation greatly reduces the noise and retains the text information, so that the deep learning model can use the mask. ### _Advantages compared to existing methods_ * Large-Scale. 
Our raw data contains 46509 images, while the commonly used tampering detection datasets CASIA v1.0, CASIA v2.0, Columbia, COLUMB, and FORENSICS contain only 1725, 12323, 1845, 358, and 288 images, respectively. Such a large scale can help better exploit the full potential of more advanced model architectures [19]. * Continuous Regeneration. Commonly used datasets rely on random Gaussian noise, JPEG compression, and random flipping for augmentation, which restricts how much the data can grow. In contrast, internet users supply a virtually unlimited amount of raw data; all we need to do is crawl their images from the internet. * Covering a Wide Range. Many datasets have particular limitations: the splicing forgery regions of the CASIA dataset are small and delicate objects, while the splicing forgery regions of COLUMB are superficial, large, and meaningless. The dataset we build, in contrast, covers a wide range; the types of tampering depend only on what netizens offer. * High-Speed Generation. Our experiments run on a machine with 16 GB of RAM, an i7-11800H CPU, and an RTX 3060 GPU. An average-sized image takes around 3 minutes to process into our mask; large images need more time, and small images need less than 1 minute. In this way, we can automatically obtain a lot of data in a very short time, and our throughput increases further if multiple devices process images in parallel.
Fig. 4: MSER can accurately frame the parts with the same gray value, most of which are text; the few non-text parts are other content that has been tampered with. We can see 71 bounding boxes in the left graph. After NMS, there are only 41 boxes, as the right graph shows. The meaning of the Chinese characters: pay 50% to remove the watermark.
Fig. 5: The sizes of the characters. Different font sizes are affected to varying degrees by the subtraction of the two images. As shown above, we divide the font sizes into five categories and calculate the accuracy of our method for each.
Fig. 6: The final result, obtained by combining the text part and the graphic part. The meaning of the Chinese characters: pay 50% to remove the watermark.
## V Conclusions In this paper, we build the Modified Total Variation Noise Reduction architecture to automatically crawl raw data from the Internet and turn it into high-quality masks for deep learning. This technology, which can obtain many high-quality masks quickly, is expected to help deep learning models achieve better results. We want to apply this technology to tampering detection, which requires large datasets. If deep learning models have enough data, the harm caused by tampered images may be significantly reduced in the future. ## VI Acknowledgement Numerical computations in this paper are supported by the Hefei Advanced Computing Center.
2309.00249
Suicidal Pedestrian: Generation of Safety-Critical Scenarios for Autonomous Vehicles
Developing reliable autonomous driving algorithms poses challenges in testing, particularly when it comes to safety-critical traffic scenarios involving pedestrians. An open question is how to simulate rare events, not necessarily found in autonomous driving datasets or scripted simulations, but which can occur in testing, and, in the end may lead to severe pedestrian related accidents. This paper presents a method for designing a suicidal pedestrian agent within the CARLA simulator, enabling the automatic generation of traffic scenarios for testing safety of autonomous vehicles (AVs) in dangerous situations with pedestrians. The pedestrian is modeled as a reinforcement learning (RL) agent with two custom reward functions that allow the agent to either arbitrarily or with high velocity to collide with the AV. Instead of significantly constraining the initial locations and the pedestrian behavior, we allow the pedestrian and autonomous car to be placed anywhere in the environment and the pedestrian to roam freely to generate diverse scenarios. To assess the performance of the suicidal pedestrian and the target vehicle during testing, we propose three collision-oriented evaluation metrics. Experimental results involving two state-of-the-art autonomous driving algorithms trained end-to-end with imitation learning from sensor data demonstrate the effectiveness of the suicidal pedestrian in identifying decision errors made by autonomous vehicles controlled by the algorithms.
Yuhang Yang, Kalle Kujanpaa, Amin Babadi, Joni Pajarinen, Alexander Ilin
2023-09-01T04:44:49Z
http://arxiv.org/abs/2309.00249v1
# Suicidal Pedestrian: Generation of Safety-Critical Scenarios for Autonomous Vehicles ###### Abstract Developing reliable autonomous driving algorithms poses challenges in testing, particularly when it comes to safety-critical traffic scenarios involving pedestrians. An open question is how to simulate rare events, not necessarily found in autonomous driving datasets or scripted simulations, but which can occur in testing, and, in the end may lead to severe pedestrian related accidents. This paper presents a method for designing a suicidal pedestrian agent within the CARLA simulator, enabling the automatic generation of traffic scenarios for testing safety of autonomous vehicles (AVs) in dangerous situations with pedestrians. The pedestrian is modeled as a reinforcement learning (RL) agent with two custom reward functions that allow the agent to either arbitrarily or with high velocity to collide with the AV. Instead of significantly constraining the initial locations and the pedestrian behavior, we allow the pedestrian and autonomous car to be placed anywhere in the environment and the pedestrian to roam freely to generate diverse scenarios. To assess the performance of the suicidal pedestrian and the target vehicle during testing, we propose three collision-oriented evaluation metrics. Experimental results involving two state-of-the-art autonomous driving algorithms trained end-to-end with imitation learning from sensor data demonstrate the effectiveness of the suicidal pedestrian in identifying decision errors made by autonomous vehicles controlled by the algorithms. ## I Introduction Autonomous driving (AD) is a captivating field of research that holds great potential for enhancing household mobility, optimizing traffic efficiency, and ensuring safety. In recent years, AD has gained considerable attention, and remarkable advancements have been made. Two approaches have emerged: modular driving systems that design and train each sub-module separately according to its functions [1, 2], and end-to-end models that directly perform decisions based on raw sensor inputs [3, 4]. However, despite these advancements, deploying AD on a large scale remains a significant challenge. A crucial reason for this is the difficulty, danger, and time-consuming nature of testing and validating autonomous vehicles (AVs), particularly in scenarios involving pedestrian safety. Several datasets [5, 6, 7] have been provided for AV testing. However, most of these manually collected data contain few safety-critical scenarios, rendering a severe overestimation of the safety performance of the testing vehicle. Other popular practices for AV testing focus on generating traffic scenarios [8, 9, 10]. While these practices enrich the range of test scenarios and expedite the validation process, they are often limited to specific scenes, such as highways or intersections, and do not adequately consider pedestrian interactions. In this paper, we propose a method for automatically generating pedestrian-related, safety-critical traffic scenarios specifically for AV testing. By optimizing the pedestrian's behavior in the scene, we guide the pedestrian to exhibit suicidal actions with the intention of colliding with the moving car, thereby forcing the vehicle to take emergency actions. To achieve this, we formulate the suicidal pedestrian as a reinforcement learning (RL) agent and train it using a model-free RL algorithm. 
Additionally, to enable the pedestrian to adapt to various scenes, we design an observation space based on pedestrian characteristics and impose constraints on the initial distance between the test vehicle and the pedestrian. To demonstrate the effectiveness of our approach in generating suicidal pedestrian-based scenarios, we conduct extensive experiments in different environments, employing diverse driving policies. The main contributions of this paper are as follows: 1. Proposing a method for generating pedestrian-related safety-critical traffic scenarios dedicated to AV testing. 2. Designing a suicidal pedestrian as an RL agent that aims to collide with the AV under test and training the agent using a model-free RL algorithm. 3. Generalizing the suicidal pedestrian to test various driving policies in different environments after training it against a specific driving agent in limited situations. 4. Experimentally demonstrating the effectiveness of our suicidal pedestrian in identifying AV decision failures through testing with two state-of-the-art AD algorithms.
Fig. 1: Overview of the proposed suicidal pedestrian traffic scenario. The autonomous vehicle (AV) is controlled by an autonomous driving (AD) algorithm, which takes inputs from various sensors, producing low-level control commands to drive the vehicle safely. The pedestrian, modeled as a reinforcement learning (RL) agent, observes the location and velocity of the vehicle, and tries to hit the car with an adversarial policy learned from reward feedback.
## II Related work ### _Traffic Scenario Generation for Vehicle Testing_ Traffic scenario generation for vehicle testing aims to construct diverse traffic situations using simulators in order to expedite and streamline the AV testing process, because real-world testing can be dangerous and expensive, particularly for safety-critical scenarios. Recently, a lot of research has been presented on traffic scenario generation [8, 9, 10, 11]. In [8], an adversarial driving scenario for AV testing is proposed, which involves training an adversarial car using Bayesian optimization and modeling the unknown cumulative performance of the test agent as a Gaussian process. Another work [11] models a three-agent environment to test AVs for detecting decision errors and improve their performance through a two-step training framework. The first step involves training an adversarial vehicle to identify failures in the test cars, while the second step focuses on retraining the autonomous car based on these failure states to enhance its robustness. Furthermore, authors in [9] propose an intelligent testing environment to validate the statistical capacity boundary of AVs in an accelerated mode. By removing non-safety-critical states and reconnecting critical ones, the Markov decision process (MDP) is modified to contain only relevant information, thereby densifying the training data and reducing the time required for AV testing. However, these studies pay little attention to pedestrians, limiting their applicability. More recent works have further studied the behavior of pedestrians. In [12], pedestrians are trained to cross roads through crosswalks when a test vehicle approaches. However, the pedestrian trajectory is pre-scripted, constraining the proposed method from being generalized to other environments.
Drawing inspiration from various existing pedestrian models [13, 14], a pedestrian-placement model [15] learns to adversarially synthesize test scenarios to increase the likelihood of collisions with a test AV given a fixed driving policy and pedestrian behavior model. However, this approach was not evaluated against state-of-the-art AD algorithms. ### _Reinforcement Learning_ RL algorithms guide agents to interact with the environment and to learn behaviors through a trial-and-error style without explicit human supervision. The RL problems are modeled as MDPs and the objective of RL is to maximize the rewards in the MDP by learning how to act. Specifically, for a given MDP, RL algorithms aim to learn an optimal policy \(\pi^{*}(s)\) that maximizes the expectation of the cumulative discounted return for every state \(s\in\mathcal{S}\): \[\max_{\pi}\mathbb{E}\bigg{[}\sum_{t=0}^{T}\gamma^{t}R_{t+1}\bigg{|}s\bigg{]}, \tag{1}\] where \(T\) is the time horizon, \(\gamma\) the discount factor, and \(R\) the reward function that at each step \(t\) depends on the action \(a_{t}\) taken by the policy \(\pi\) in state \(s_{t}\). In the AD area, RL algorithms have been widely used either for developing new driving systems [16, 17] or for generating traffic scenarios [18, 19, 20]. We model the traffic scenario as an MDP and train our suicidal pedestrian using a model-free RL algorithm, PPO [21]. ## III Method Our work focuses on the generation of safety-critical traffic scenarios involving pedestrians to facilitate the testing of AVs in urban settings. As illustrated in Fig. 1, the generated scenarios contain two agents: the AV being tested and the suicidal pedestrian (Section III-A). The AV is controlled by some state-of-the-art driving algorithms, which take sensor observations as inputs and produce low-level commands such as steering angle and acceleration to ensure safe driving. On the other hand, the pedestrian, modeled as an RL agent and trained with a model-free RL algorithm (Section III-B), observes the location and motion of the AV and attempts to collide with the car, thereby causing the AV failure. Generating the training scenarios involving the pedestrian and the target vehicle is a non-trivial process. If the pedestrian and vehicle are too close to each other, causing a collision can be very easy, and if they are far away or move in opposite directions, it is very difficult. To address this issue, we constrain the set of initial states (Section III-B). ### _Walking as a Markov Decision Process_ One of the central challenges addressed in this paper is formulating the testing scenario as a Markov decision process (MDP). Given that the testing AV is already well-trained with fixed driving policies, our focus lies on modeling the pedestrian. Consequently, it becomes crucial to precisely define the state space \(S\), action space \(A\), and reward function \(R\) for the pedestrian. The state transition dynamics are implicitly determined by the simulator once the aforementioned three elements are established. #### Iii-B1 State Space The state input for our suicidal pedestrian agent captures how the agent perceives the environment. Since the pedestrian can successfully collide with the car by knowing the vehicle's position and velocity information, a finite-dimensional vector containing this information suffices for our collision-seeking suicidal pedestrian. Additionally, we consider how to represent the position and velocity. 
It can either be directly represented in world coordinates or in a relative form by describing it in the pedestrian coordinate system via coordinate transformation. In this paper, we adopt the relative representation due to its rotation and translation invariance, which enhances the generalization capability of our suicidal pedestrian. Therefore, we use the following state space: \[s=[\alpha,d,\beta,v] \tag{2}\] where \(\alpha\) is the angle of direction and \(d\) the distance to the target vehicle from the pedestrian, \(\beta\) is the relative direction in which the target vehicle is moving and \(v\) is the relative scalar speed. #### Iii-A2 Action Space The action of our suicidal pedestrian is determined by the forward direction angle and the scalar velocity. The forward direction angle, ranging from \([-\pi,\pi]\), specifies what direction the pedestrian will walk toward, while the scalar velocity ranging from [0, 3.5] in \(m/s\) describes how fast the pedestrian is. Notably, since the input state is represented in the pedestrian coordinate, the output action is also represented in this coordinate. However, both the pedestrian agent and the AV move in the environment defined in the world coordinate. Therefore, the pedestrian action, especially for its forward direction angle, must be transformed back to the world coordinates. #### Iii-A3 Reward Functions The reward function plays an essential role in training the pedestrian policy. We have considered two types of rewards: * Reward \(R_{1}\) which aims to maximize the collision rate without considering the velocity of the vehicle: \[R_{1}=\begin{cases}1,&\text{if hit the vehicle}\\ 0,&\text{otherwise}\end{cases}\] * Reward \(R_{2}\) which encourages the pedestrian to generate the most hazardous collisions by encouraging collisions when the vehicle is driving at high speeds: \[R_{2}=\begin{cases}\max(3,1.5v_{c}),\,\text{if hit the front of vehicle}\\ \max(1,0.5v_{c}),\,\text{if hit other parts of vehicle}\\ 0,\,\text{otherwise}\end{cases}\] where \(v_{c}\) is the velocity of the vehicle when the collision happens. The shaped reward forms a natural curriculum and helps learn complex and unpredictable behaviors, such as exploiting occluded areas, required to fool the AD algorithms into dangerous frontal collisions. ### _Policy Optimization_ We use a continuous-action model-free RL algorithm Proximal Policy Optimization (PPO) [21] to train the suicidal pedestrian. We estimate the advantage function with GAE [22], and use the hyperparameters described in Table I. At the beginning of each episode, we spawn the pedestrian close to the vehicle within a sector area from \(-60^{\circ}\) to \(60^{\circ}\) based on the forward direction of the vehicle, with the distance varying from 7m to 30m. This creates a task of a suitable difficulty level. When the distance is lower, it is easier to hit the vehicle. On the other hand, a larger distance allows the pedestrian to learn more complex behaviors, thus enhancing the diversity of the generated traffic scenarios. ## IV Experiments The experiments aim to demonstrate the effectiveness of our designed suicidal pedestrian used for generating safety-critical traffic scenarios for AV testing. To this end, we first train our suicidal pedestrian against one simple but effective rule-based AV. Later we evaluate the trained pedestrian in different environments to verify its ability to create collisions. 
Finally, we test two state-of-the-art AD algorithms with our trained suicidal pedestrian, exposing their decision errors when dealing with pedestrian-related traffic scenarios. ### _Experimental Setup_ We use CARLA [23] open-source urban driving simulator to train and validate the designed suicidal pedestrian, as well as evaluate some state-of-the-art AD algorithms. We use Town 1 and Town 2 provided by CARLA to build our training and test environments. These towns contain T-intersections and two-lane roads. We chose these towns because T-intersections can provide more complex traffic scenarios and two-lane roads are the main road structure in residential areas where pedestrians are more likely to appear. We train our suicidal pedestrian against the default CARLA AV (behavior agent) with the two different reward functions, \(R_{1}\) and \(R_{2}\), and perform three training runs. We set the episode length to 600 timesteps and run the simulator at a speed of 20 timesteps per second. This means each episode lasts 30s unless the suicidal pedestrian collides with the vehicle. Moreover, considering the speed difference, each control command for the pedestrian is repeated for 20 timesteps, equal of 1s of simulation. At the same time, we update the command for AV every timestep to avoid accidents caused by delayed controls. We use the OpenAI Gym [24] framework to wrap up our designed suicidal pedestrian. We train the pedestrian with the PPO [21] implementation from stable-baselines3 [25]. We evaluate the performance using the following metrics: * _Collision rate_: an overall performance metric which specifies how often the pedestrian can result in a collision with the target vehicle. * _Moving collision rate_: collision rate when the target vehicle is moving. * _Front collision rate_: collision rate with the front part of the target vehicle. ### _Training Against the CARLA AV Agent_ Fig. 2 shows that training of the pedestrian policy converges after 70000 steps when trained using both rewards \(R_{1}\) and \(R_{2}\). The value of the mean episode length indicates that using \(R_{2}\) results in a policy that is more aggressive in searching for and hitting the vehicle. Table II shows the performance of our suicidal pedestrian. One can see that both reward functions perform well in \begin{table} \begin{tabular}{l l} \hline **Parameter** & **Value** \\ \hline No. total training steps & 70000 \\ No. epochs when optimizing the surrogate loss & 10 \\ No. env. steps to run per update & 150 \\ Batch size & 64 \\ Learning rate for actor and critic networks & \(3\times 10^{-4}\) \\ Discount factor & 0.98 \\ \(\lambda\) for Generalized Advantage Estimate (GAE) & 0.95 \\ Objective clipping value & 0.2 \\ Value loss coefficient & 0.5 \\ Entropy regularization coefficient & 0.01 \\ \hline \end{tabular} \end{table} TABLE I: The PPO hyperparameters guiding the pedestrian to hit the target AV. \(R_{2}\) generally yields better performance than \(R_{1}\). Note that more than half of all collisions happen when the AV does not stop in time, which corresponds to more hazardous scenarios. As for the collision areas, the front part of the vehicle receives almost \(80\%\) of collisions. We can also see that the pedestrian agent generalizes well to a new town. The performance of both reward functions declines only slightly, by approximately \(5\%\), when the suicidal pedestrian is deployed to a previously unknown environment. Note that the collision rate does not reach 100% and we see two reasons for that. 
First, the pedestrian uses a coordinate-based state and it may be blocked by environmental objects that are not included in the state. Second, sometimes the pedestrian fails to predict the future trajectory of the AV. In Fig. 3, we visualize some typical behaviors of the suicidal pedestrian. ### _Testing SOTA AD Algorithms with the Suicidal Pedestrian_ We test two state-of-the-art AD algorithms with our suicidal pedestrian, LAV [26] that plans using predicted future trajectories for all traffic participants, and InterFuser [27] that has a safety controller relying on a predicted object density map to avoid collisions. Note that the pedestrian was trained against the CARLA behavior agent, and it is evaluated with LAV and InterFuser without any adaptations. Table III describes the performance of LAV and InterFuser against the suicidal pedestrian. The results show that the pedestrian can generate collisions both with LAV and InterFuser, showing potential weaknesses of these two driving algorithms. InterFuser has a much lower moving collision rate than LAV and CARLA, which suggests most crashes of InterFuser are not severe, while LAV is more likely to cause hazardous consequences when a collision happens. We visualize some failures of LAV and InterFuser when \begin{table} \begin{tabular}{c|c c c|c c c} \hline & \multicolumn{3}{c|}{**Train town (Town 2)**} & \multicolumn{3}{c}{**Test town (Town 1)**} \\ \hline **Reward** & **Collision rate** & **Front collision rate** & **Moving collision rate** & **Collision rate** & **Front collision rate** & **Moving collision rate** \\ \hline \(R_{1}\) & \(0.84\pm 0.05\) & \(0.77\pm 0.04\) & \(0.42\pm 0.09\) & \(0.79\pm 0.00\) & \(0.73\pm 0.05\) & \(0.43\pm 0.04\) \\ \(R_{2}\) & \(\textbf{0.90}\pm 0.03\) & \(\textbf{0.82}\pm 0.05\) & \(\textbf{0.55}\pm 0.01\) & \(\textbf{0.86}\pm 0.02\) & \(\textbf{0.80}\pm 0.04\) & \(\textbf{0.54}\pm 0.05\) \\ \hline \end{tabular} \end{table} TABLE II: Performance of the suicidal pedestrian against the CARLA AV agent. The best results from the point of view of the suicidal pedestrian are shown in bold. Fig. 3: Typical behaviors of the pedestrian trained with \(R_{2}\). Red arrows represent the pedestrian direction. Top row: The pedestrian directly hits the vehicle from the central front, as the car fails to predict the pedestrian’s movement. Bottom row: The pedestrian crashes into the car from the side. Fig. 2: Average rewards (left) and average episode lengths (right) during training of the suicidal pedestrian using \(R_{1}\) (green) and \(R_{2}\) (blue). The solid line represents the mean return, and the light-colored area represents the standard deviation. All plots are smoothed by the moving average over 9 data points. 
\begin{table} \begin{tabular}{c|c c c|c c c} \hline & \multicolumn{3}{c|}{**Train town (Town 2)**} & \multicolumn{3}{c}{**Test town (Town 1)**} \\ \hline **Method** & **Pedestrian reward** & **Collision rate** & **Moving collision rate** & **Pedestrian reward** & **Collision rate** & **Moving collision rate** \\ \hline CARLA behavior & \(4.57\pm 0.15\) & \(\textbf{0.90}\pm 0.03\) & \(0.55\pm 0.02\) & \(4.29\pm 0.29\) & \(\textbf{0.86}\pm 0.02\) & \(0.54\pm 0.05\) \\ LAV [26] & \(\textbf{3.56}\pm 0.26\) & \(0.93\pm 0.02\) & \(0.47\pm 0.04\) & \(3.94\pm 0.26\) & \(0.90\pm 0.03\) & \(0.61\pm 0.01\) \\ InterFuser [27] & \(3.77\pm 0.38\) & \(0.94\pm 0.02\) & \(\textbf{0.32}\pm 0.06\) & \(\textbf{3.82}\pm 0.16\) & \(0.92\pm 0.02\) & \(\textbf{0.37}\pm 0.02\) \\ \hline \end{tabular} \end{table} TABLE III: Evaluation results of two state-of-the-art AD algorithms using the suicidal pedestrian trained with reward \(R_{2}\). The best results from the point of view of the driving policy are shown in bold. Fig. 4: Visualization of two collision episodes with LAV. We present three (concatenated) camera images (top), detection and motion predictions (bottom left), and predicted road geometries (bottom right) for the two episodes. Left: LAV detects the pedestrian as a vehicle. Right: LAV fails to find the pedestrian due to insufficient fusion of images. Fig. 5: Visualization of a failure case of InterFuser in which the AV does not perform any actions to avoid collisions due to failing to predict the trajectory of the pedestrian. We present camera images (top row), detected traffic scenes at the current timestep (middle row) and predicted traffic scenes at the next two timesteps (bottom row). The yellow rectangle in the last two rows represents the ego vehicle, while white rectangles represent other detected objects. Green dots are the future trajectory of the ego vehicle. dealing with our suicidal pedestrian to understand their weaknesses better. Fig. 4 illustrates two typical errors of LAV: incorrect detection and unsuccessful detection. In incorrect detection, LAV detects the pedestrian as a vehicle, thus applying unreasonable dynamic models to the pedestrian to predict the corresponding trajectories. In unsuccessful detection, LAV fails to detect the pedestrian due to vision failure. Interestingly, both errors can happen at different stages of one episode. Fig. 5 illustrates a typical failure case of InterFuser, in which the vehicle does not perform any actions to avoid collisions with the suicidal pedestrian even if the pedestrian is detected. This failure suggests that Interfuser performs well in detection, but potential improvements should be applied to its prediction and decision-making modules. ## V Discussion & Future work This paper proposes a suicidal pedestrian model to generate safety-critical traffic scenarios for AD testing. We model the pedestrian as an RL agent and train it using a model-free PPO algorithm. Furthermore, we perform experiments to validate its effectiveness in generating collision scenarios. Finally, testing results of two state-of-the-art AD algorithms illustrate our suicidal pedestrian can significantly help find driving algorithm decision errors. Our work can be extended to having more pedestrians and cars in the simulations. Another direction would be to consider different goal-conditioned pedestrians to generate more varying behaviors to address the limitation of only using the suicidal pedestrian with limited behavior diversity. 
Moreover, we can augment our state representation with the locations of other objects, or we could use image inputs or object-based representations to replace the hand-crafted state vector, thus allowing the pedestrian to plan movements according to the surroundings, avoid obstacles, or take advantage of obstacles to surprise the drivers.
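To make the training setup of Sections III and IV concrete, the following is a minimal, hypothetical sketch of how the reward \(R_{2}\) and the PPO hyperparameters of Table I could be wired together with stable-baselines3 [25] and an OpenAI Gym [24] environment. The environment class and its internals (e.g. the name SuicidalPedestrianEnv) are illustrative stubs, since the CARLA-specific plumbing is omitted; only the reward shape and the hyperparameters are taken from the paper.

```python
import numpy as np
import gym
from gym import spaces
from stable_baselines3 import PPO

def reward_r2(hit_front, hit_other, v_c):
    """Reward R2 from the paper: higher reward for frontal, high-speed collisions."""
    if hit_front:
        return max(3.0, 1.5 * v_c)
    if hit_other:
        return max(1.0, 0.5 * v_c)
    return 0.0

class SuicidalPedestrianEnv(gym.Env):
    """Skeleton of the pedestrian environment (CARLA plumbing omitted).
    Observation s = [alpha, d, beta, v] in the pedestrian frame; the action is
    (heading angle in [-pi, pi], walking speed in [0, 3.5] m/s)."""
    def __init__(self):
        super().__init__()
        self.observation_space = spaces.Box(
            low=np.array([-np.pi, 0.0, -np.pi, -np.inf], dtype=np.float32),
            high=np.array([np.pi, np.inf, np.pi, np.inf], dtype=np.float32))
        self.action_space = spaces.Box(
            low=np.array([-np.pi, 0.0], dtype=np.float32),
            high=np.array([np.pi, 3.5], dtype=np.float32))

    def reset(self):
        # Here one would respawn the pedestrian 7-30 m from the vehicle, within
        # a +/-60 degree sector, and return the initial relative observation.
        return np.zeros(4, dtype=np.float32)

    def step(self, action):
        # Here one would transform the action to world coordinates, apply the
        # pedestrian control in CARLA for 20 simulator ticks, and read back the
        # collision flags and the vehicle speed v_c.
        obs = np.zeros(4, dtype=np.float32)
        reward = reward_r2(hit_front=False, hit_other=False, v_c=0.0)
        done = False
        return obs, reward, done, {}

env = SuicidalPedestrianEnv()
model = PPO("MlpPolicy", env, learning_rate=3e-4, n_steps=150, batch_size=64,
            n_epochs=10, gamma=0.98, gae_lambda=0.95, clip_range=0.2,
            vf_coef=0.5, ent_coef=0.01)
model.learn(total_timesteps=70_000)
```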
2308.00606
Determining the ability for universal quantum computing: Testing controllability via dimensional expressivity
Operator controllability refers to the ability to implement an arbitrary unitary in SU(N) and is a prerequisite for universal quantum computing. Controllability tests can be used in the design of quantum devices to reduce the number of external controls. Their practical use is hampered, however, by the exponential scaling of their numerical effort with the number of qubits. Here, we devise a hybrid quantum-classical algorithm based on a parametrized quantum circuit. We show that controllability is linked to the number of independent parameters, which can be obtained by dimensional expressivity analysis. We exemplify the application of the algorithm to qubit arrays with nearest-neighbour couplings and local controls. Our work provides a systematic approach to the resource-efficient design of quantum chips.
Fernando Gago-Encinas, Tobias Hartung, Daniel M. Reich, Karl Jansen, Christiane P. Koch
2023-08-01T15:33:41Z
http://arxiv.org/abs/2308.00606v2
Determining the ability for universal quantum computing: Testing controllability via dimensional expressivity ###### Abstract Operator controllability refers to the ability to implement an arbitrary unitary in \(SU(N)\) and is a prerequisite for universal quantum computing. Controllability tests can be used in the design of quantum devices to reduce the number of external controls. Their practical use is hampered, however, by the exponential scaling of their numerical effort with the number of qubits. Here, we devise a hybrid quantum-classical algorithm based on a parametrized quantum circuit. We show that controllability is linked to the number of independent parameters, which can be obtained by dimensional expressivity analysis. We exemplify the application of the algorithm to qubit arrays with nearest-neighbour couplings and local controls. Our work provides a systematic approach to the resource-efficient design of quantum chips. ## I Introduction Universal quantum computing [1] requires controllability on the quantum processing unit, so that every quantum logic gate can be implemented. A common layout in hardware platforms such as those based on superconducting circuits achieves this by combining two-qubit couplings with local drives for each qubit of the array [2; 3]. While effective, this approach becomes demanding for larger arrays, due to both the physical space needed for each control as well as the associated calibration. Controllability tests can help identify less resource-intensive architectures that are still capable of performing the same quantum gates [4]. Controllability in general studies the dynamics that can be implemented in a quantum system driven by a set of controls [5; 6; 7]. In particular, a system is pure-state controllable if it can reach all final states. Alternatively, an (evolution) operator controllable system is capable of implementing every unitary gate, a necessary feature for universal quantum computing. Tests for these two different types of controllability rely on computing the rank of the dynamical Lie algebra of the Hamiltonian [5] or utilize methods based graph theory [8; 9; 4; 10]. For small system sizes, the tests can be carried out analytically [11; 12; 13; 14]. For some high- and infinite-dimensional systems, controllability can be determined using induction arguments [9; 15; 16; 17]. Beyond these special cases, a numerical approach is possible in principle [4], but is limited by the exponential scaling of the Hilbert space dimension with respect to the number of qubits. In other words, the accuracy and feasibility of controllability tests for increasing system size suffer from the curse of dimensionality. Here, we present a hybrid quantum-classical controllability test, for both pure-state and operator controllability of qubit arrays. The hybrid method we propose evaluates the controllability of the qubit array by measurements on a quantum device, either the system to be studied with an extra ancilla qubit or one that mimics the dynamics of the original system. This opens up a new way of designing controllable qubit arrays with fewer resources, helping to address the issue of scalability. To do so, we harness the computational power of quantum circuits to extract information directly from the qubit array under study. Parametric quantum circuits constitute the basis of many algorithms, for example variational algorithms for solving computationally hard optimization problems [18; 19]. 
The circuits consist of a set of parametric gates that can be used to measure a cost function. After a classical optimization, the parameters are updated to give a new cost value, continuing the feedback loop of the algorithm. It is necessary to include enough independent optimization parameters to reach the best possible solution. However, minimizing the number of parametric gates and circuit depth is also key in the era of noisy quantum devices [20]. In order to reduce the noise of the circuit while maintaining its optimization capability, every redundant parameter should be identified and removed from the circuit. This goal is related to the dimensional expressivity of the circuit and can be achieved with dimensional expressivity analysis [21; 22], a hybrid quantum-classical algorithm to systematically find redundant parameters. In order to leverage dimensional expressivity analysis to test for controllability, we define a parametric quantum circuit based on the architecture of a given qubit array with local controls and qubit couplings. We then use dimensional expressivity analysis to quantify the number of independent parameters which is related to the controllability of the original qubit array. We provide a complete description of how to carry out the hybrid controllability test on a quantum circuit, opening the possibility of obtaining information of the controllability of a quantum device before it is built. The manuscript is organized as follows. The basic concepts of controllability analysis and parametric quantum circuits are briefly reviewed in section II. The pure-state controllability test is presented in section III, including its derivation, definition and showcase examples. Section IV extends the test to operator controllability, making use of the Choi-Jamiolkowski isomorphism. Section V concludes. ## II Theoretical background To define controllability tests for qubit arrays, we combine the notions of system controllability and circuit expressivity. For the sake of a self-contained presentation, we briefly recap the basic concepts in this section. ### Controllability We consider quantum systems linearly coupled to external controls. They are described by traceless Hamiltonians of the form \[\hat{H}(t)=\hat{H}(t;u_{1},...u_{m})=\hat{H}_{0}+\sum_{j=1}^{m}u_{j}(t)\hat{H}_ {j}, \tag{1}\] where \(u_{j}(t)\) are the controls and \(\hat{H}_{j}\) are the control operators. The Hamiltonian (1) generates the time evolution operator \(\hat{U}(t)\) such that for any state \(\ket{\psi(0)}\) in the Hilbert space \(\mathcal{H}\), \(\ket{\psi(t)}=\hat{U}(t)\ket{\psi(0)}.\) Given an initial state \(\ket{\psi_{0}}\), the set of all final states \(\ket{\psi(T)}\) that can be reached in finite time \(0<T<\infty\) with controls \(u_{j}(t)\) is called the reachable set \(\mathcal{R}_{\ket{\psi_{0}}}\) of the system. The system is said to be pure-state controllable if \(\mathcal{R}_{\ket{\psi_{0}}}=\mathcal{S}^{\mathcal{H}}\) (with \(\mathcal{S}^{\mathcal{H}}\) the unit sphere on \(\mathcal{H}\)), i.e., if every normalized state is reachable from any initial state \(\ket{\psi_{0}}\)[5]. For physical systems this condition is not dependent on the initial state, which means that pure-state controllability is also independent of the state in which the system is initialized. Indeed, the evolution operators \(\hat{U}\) that can be implemented on such systems form a group. 
Since the feasible evolutions form a group, for every evolution \(\hat{U}\) in the group, \(\hat{U}^{-1}\) must also be contained in the group of feasible evolutions. If we assume that every state \(\ket{\phi}\in\mathcal{H}\) can be reached from a certain initial state \(\ket{\psi_{0}}\), then for every state \(\ket{\phi}\) there exists an evolution \(\hat{U}_{\ket{\psi_{0}},\ket{\phi}}\) such that \(\ket{\phi}=\hat{U}_{\ket{\psi_{0}},\ket{\phi}}\ket{\psi_{0}}\). Therefore, given any initial state \(\ket{\phi_{i}}\) and final state \(\ket{\phi_{f}}\) we can always generate an evolution \[\ket{\phi_{f}}=\hat{U}_{\ket{\psi_{0}},\ket{\phi_{f}}}\hat{U}_{\ket{\psi_{0}},\ket{\phi_{i}}}^{-1}\ket{\phi_{i}}. \tag{2}\] In particular this proves that if all states are reachable from a certain initial state in a closed system, every state is reachable from any other state. Pure-state controllability is the relevant type of controllability when we are interested in state transfers, i.e., evolving the system from an initial state to a certain target state. It is equivalent to proving that all state transfers are possible in a system. This, however, is not the strongest type of controllability that can be defined. Pure-state controllability is sufficient to guarantee that there will always be evolution operators \(\hat{U}_{\ket{\psi_{0}},\ket{\psi_{f}}}\) to connect any two states \(\ket{\psi_{0}}\) and \(\ket{\psi_{f}}\), yet not enough to ensure that it is possible to generate every operation \(\hat{U}\) in the special unitary group \(SU\left(d\right)\), where \(d=\dim(\mathcal{H})\). Pure-state controllability does not guarantee that simultaneous state-to-state transfers are always possible. To study this property we consider the so-called operator controllability. A system with controls as defined in (1) and Hilbert space dimension \(d\) is operator controllable if for every unitary evolution \(\hat{U}_{target}\in SU(d)\) there exist a final time \(T\geq 0\), a phase angle \(\varphi\in[0,2\pi)\) and a set of controls \(\{u_{j}\}_{j=1}^{m}\) such that \(\hat{U}_{target}=e^{i\varphi}\hat{U}(T;u_{1},...u_{m})\). Note that for both types of controllability there are no restrictions on the final time \(T<\infty\) at which state transfers, respectively unitary operations, are implemented. Consequently, this time \(T\), while always finite, can be arbitrarily large. The question of controllability only inquires whether it is possible at all to perform the desired dynamics. Similarly, it does not impose any restrictions on the maximum amplitude that the controls \(u_{j}(t)\) from (1) can take. Finite amplitude is a physical restriction that impacts the final time required to perform the different operations, but does not mathematically change the controllability of the system. If the Hamiltonian of the system is known, there exist algebraic and numerical tests tailored for both types of controllability [4; 5; 23; 24; 25].

### Dimensional expressivity

Parametric quantum circuits have multiple applications, as they constitute the basis for variational quantum algorithms [26]. Their design and study are pivotal factors in the efficiency of the algorithms. In particular, parameter dependence and the set of final states that can be produced are two key topics that determine the capability of the algorithms. Lacking some necessary parametric gates leads to unsuccessful algorithms, whereas including too many dependent parameters is detrimental for the purpose of optimization. 
We introduce here notions and definitions related to these issues that are relevant for the controllability tests. A parametric quantum circuit is a protocol implemented on a set of qubits that are initialized in a state \(\ket{\psi_{0}}\). It consists of a sequence of logic gates \(\hat{G}_{j}\), some of which depend on real parameters \(\vartheta_{k}\). We consider a parametric quantum circuit as the map \(C(\vec{\vartheta})\) that identifies an array of parameters \(\vec{\vartheta}\) in the parameter space \(\mathcal{P}\ni\vec{\vartheta}\) with \[C(\vec{\vartheta})=\hat{G}_{m}(\vec{\vartheta})...\hat{G}_{0}(\vec{\vartheta}) \ket{\psi_{0}}. \tag{3}\] \(C(\vec{\vartheta})\) implicitly depends on the circuit's initial state \(|\psi_{0}\rangle\)[27]. An example of a parametric quantum circuit is found in Figure 1. Note that the amount of parameters on which each gate \(\hat{G}_{j}(\vec{\vartheta})\) depends may vary from zero to the total number of parameters, e.g. \[\hat{G}_{0}(\vartheta_{1},\vartheta_{2})=\hat{P}(\vartheta_{1})\exp\left(-i \frac{\vartheta_{2}}{2}\hat{X}\right)\hat{H}\hat{P}(-\vartheta_{1}), \tag{4}\] with the phase gate \(\hat{P}\) and the Hadamard gate \(\hat{H}\). For the sake of simplicity, we have chosen units such that \(\hbar=1\). The expressivity of a parametric quantum circuit is its ability to produce states that are representative of the full Hilbert space of the system [28; 29]. Here, we focus on the dimensional expressivity \(expr_{dim}\), i.e. the dimension of \(C\left(\mathcal{P}\right)\) as a real differentiable manifold [21]. As such, the maximal dimensional expressivity for a circuit with complex Hilbert space dimension \(d\) is \(\max(expr_{dim})=2d-1\), which accounts for the real variables of the complex \(d\)-dimensional Hilbert space minus the normalization constraint. Another important point is the concept of redundant parameters. In a quantum circuit \(C(\vec{\vartheta})\), a parameter \(\vartheta_{j}\) is considered redundant if small perturbations on \(\vartheta_{j}\) produce final states on \(C(\vec{\vartheta})\) that can also be achieved by keeping \(\vartheta_{j}\) constant and varying the rest of the parameters \(\vartheta_{k}\) as needed [22]. Minimizing the number of redundant parameters is therefore a relevant matter in the design of parametric quantum circuits. Fewer redundant parameters may result in more resource-efficient circuits that can produce the same manifold of states. If a parameter \(\vartheta_{1}\) is redundant with another parameter \(\vartheta_{2}\), then the converse is also true. We are free to choose one of the two parameters to remain constant while varying the other one at will. The latter is then called independent. Mathematically, the dimensional expressivity of a circuit \(C(\vec{\vartheta})\) is also equal to the number of elements in the maximal set of independent parameters in the circuit. While the cardinality of the maximal set for a certain circuit \(C(\vec{\vartheta})\) is fixed, there may exist multiple maximal sets. Locating and eliminating redundant parameters results in a minimal circuit with the same local dimension in the manifold of reachable states. Redundant parameters and dimensional expressivity are studied through the real Jacobian \(J_{C}\) of \(C(\vec{\vartheta})\). 
Assuming a total of \(N\) parameters, it takes the form \[J_{C}(\vec{\vartheta})=\left(\begin{array}{ccc}|&&|\\ \mathfrak{R}\partial_{1}C(\vec{\vartheta})&\cdots&\mathfrak{R}\partial_{N}C(\vec{\vartheta})\\ |&&|\\ |&&|\\ \mathfrak{I}\partial_{1}C(\vec{\vartheta})&\cdots&\mathfrak{I}\partial_{N}C(\vec{\vartheta})\\ |&&|\end{array}\right), \tag{5}\] where the elements \(\partial_{k}C\) represent the partial derivatives of \(C\) with respect to \(\vartheta_{k}\). By definition, the dimensional expressivity is equal to the rank of \(J_{C}(\vec{\vartheta})\). In terms of \(J_{C}\), a parameter \(\vartheta_{j}\) is redundant with respect to the other parameters \(\{\vartheta_{i}\}_{i\neq j}\) at a point \(\vec{\vartheta}\) if the \(j\)-th column of \(J_{C}(\vec{\vartheta})\) is linearly dependent with respect to the set of all the other columns of \(J_{C}(\vec{\vartheta})\), i.e. if the rank of \(J_{C}(\vec{\vartheta})\) as a matrix remains the same after removing the \(j\)-th column. A systematic approach, for an ordered array of parameters \(\vec{\vartheta}\), relies on the partial real Jacobians \(J_{C,n}(\vec{\vartheta})\), \[J_{C,n}(\vec{\vartheta})=\left(\begin{array}{ccc}|&&|\\ \mathfrak{R}\partial_{1}C(\vec{\vartheta})&\cdots&\mathfrak{R}\partial_{n}C(\vec{\vartheta})\\ |&&|\\ |&&|\\ \mathfrak{I}\partial_{1}C(\vec{\vartheta})&\cdots&\mathfrak{I}\partial_{n}C(\vec{\vartheta})\\ |&&|\end{array}\right), \tag{6}\] containing only the first \(n\) columns of \(J_{C}(\vec{\vartheta})\). If \(\partial_{1}C(\vec{\vartheta})\neq 0\) then \(\vartheta_{1}\) is independent and we initialize the set of independent parameters as \(\mathcal{N}_{1}:=\{\vartheta_{1}\}\); otherwise, \(\mathcal{N}_{1}:=\emptyset\). Then we can iterate over the following step. If \(\mathrm{rank}(J_{C,k+1}(\vec{\vartheta}))>\mathrm{rank}(J_{C,k}(\vec{\vartheta}))\), then \(\vartheta_{k+1}\) is independent and we update the set of independent parameters \(\mathcal{N}_{k+1}=\mathcal{N}_{k}\cup\{\vartheta_{k+1}\}\). Else, \(\vartheta_{k+1}\) is redundant and \(\mathcal{N}_{k+1}=\mathcal{N}_{k}\). After all \(N\) parameters have been checked, the set \(\mathcal{N}_{N}\) is a maximal set of independent parameters and its cardinality is the dimensional expressivity of the circuit. The redundant parameters can then be removed from the circuit by setting them to a suitably chosen constant value. The dimensional expressivity analysis follows this approach and provides an efficient method to find a maximal set of independent parameters on a quantum circuit [21; 22]. As a hybrid quantum-classical algorithm, it mixes measurements on the actual circuit and classical computations for the ranks. Instead of calculating the ranks of \(J_{C,n}\), this method retrieves the entries of the matrices \[S_{C,n}(\vec{\vartheta})=J_{C,n}^{T}(\vec{\vartheta})J_{C,n}(\vec{\vartheta}), \tag{7}\] which are \(n\times n\) matrices whose rank equals that of \(J_{C,n}(\vec{\vartheta})\). The elements of \(S_{C,n}(\vec{\vartheta})\) can be determined via measurements on the circuit with the inclusion of a single ancilla qubit, no matter the number of qubits in the original circuit [21].

## III Pure-state controllability test using dimensional expressivity

This section introduces the novel connection between the dimensional expressivity of quantum circuits and the pure-state controllability of quantum systems. We present the design of a circuit associated with a controlled system that allows us to check its pure-state controllability. 
We include two examples to showcase its functionality. ### Circuit expressivity and pure-state controllability We consider a qubit array with Hamiltonian (1). We identify the drift \(\hat{H}_{0}\) as the time-independent part, which includes the local free-qubit Hamiltonians and some time-independent couplings between them. Similarly, the operators \(\hat{H}_{j}\) with \(1\leq j\leq m\) are coupled to the \(m\) different external controls acting on the system. In order to use dimensional expressivity analysis to determine controllability of a qubit array, it is necessary to define a parametric quantum circuit that can be run on the system, according to the different controls at disposal. If we can show that all normalized states in the Hilbert space are reachable from a certain initial state using only gates generated by the system's controls, we have proven pure-state controllability. A straightforward choice for the possible parametric gates in the circuit is \[\hat{R}_{j}(\alpha):=\exp\left(-i\,\frac{\alpha}{2}\hat{H}_{j}\right),\qquad 0 \leq j\leq m, \tag{8}\] i.e. rotations around either the drift \(\hat{H}_{0}\) or the control operators \(\hat{H}_{j}\) (1). The gates \(\hat{R}_{0}(\alpha)\) can be implemented by letting the system evolve under its time-independent drift Hamiltonian \(\hat{H}_{0}\) for a certain time \(t=\frac{\alpha}{2}\). For the other gates, \(\hat{R}_{j}(\alpha)\) with \(j\geq 1\), we make use of the local controls. In these gates the \(\hat{H}_{0}\) contribution can be neglected by assuming that the controls can be chosen such that \(\|u_{j}(t)\hat{H}_{j}\|\gg\|\hat{H}_{0}\|\). A realistic approach to the \(\hat{R}_{j}(\alpha)\) implementation is to consider short rotations with intense controls \(u_{j}(t)\), so that the \(\hat{H}_{0}\) contribution is insignificant in comparison. The amplitude of \(u_{j}(t)\) is usually adjusted externally and it has no imposed restriction. We want to design a parametric quantum circuit \(C_{PSC}(\vec{\vartheta})\), starting with an arbitrary initial state \(|\psi_{0}\rangle\in\mathcal{H}\) and exclusively composed of the rotation gates \(\hat{R}_{j}(\vartheta_{k})\). We then use dimensional expressivity analysis to measure the dimensional expressivity of the system. If it is maximal, i.e. \(expr_{dim}=2d-1\) for \(\dim(\mathcal{H})=d\), we have a manifold of reachable states with local real dimension \(2d-1\). This manifold is a subset of \(\mathcal{H}\). We now prove that it is in fact the whole unit sphere of \(\mathcal{H}\). If we assume that the gates \(\hat{R}_{j}(\alpha)\) are cyclic and that every parameter \(\vartheta_{k}\) is used in a single rotation gate in the circuit, we can treat each \(\vartheta_{k}\) as if it had periodic boundaries, i.e. \(\vartheta_{k}\in\mathbb{S}^{1}\). For an array of \(n\) parameters \(\vec{\vartheta}\) the parameter space verifies \[\mathcal{P}\cong\underbrace{\mathbb{S}^{1}\times\cdots\times\mathbb{S}^{1}}_{ n}\cong\mathbb{T}^{n}. \tag{9}\] This implies that \(\mathcal{P}\) is a connected, compact set without boundary. Assume a circuit \(C_{PSC}(\vec{\vartheta})\) that has maximal dimensional expressivity. Then, the manifold of reachable states \(C_{PSC}(\mathcal{P})\subseteq\mathcal{H}\) is a connected, compact manifold without boundary and with maximal local real dimension. Consequently \(C_{PSC}(\mathcal{P})=\mathcal{S}^{\mathcal{H}}\subset\mathcal{H}\). Thus, the system is pure-state controllable. So far, we have found a sufficient condition for pure-state controllability. 
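To make the procedure above concrete, here is a small classical emulation of it: the rotation gates of Eq. (8) are built with matrix exponentials, the derivatives entering Eq. (5) are approximated by finite differences rather than measured on hardware as in [21; 22], and the rank of the \(S_{n}\) matrices of Eq. (7) counts the independent parameters. This is only an illustrative sketch (the hybrid algorithm obtains the entries of \(S_{n}\) from ancilla-assisted measurements); it reuses the numpy conventions of the earlier sketch.

```python
import numpy as np
from scipy.linalg import expm

def R(H, alpha):
    """Rotation gate of Eq. (8), generated by the drift or a control operator."""
    return expm(-1j * alpha / 2 * H)

def dimensional_expressivity(circuit, theta, tol=1e-8, eps=1e-6):
    """Classical emulation of dimensional expressivity analysis:
    build the real Jacobian of Eq. (5) column by column (central finite differences),
    form S_n = J_n^T J_n as in Eq. (7), and track which parameters add rank."""
    cols, independent, rank = [], [], 0
    theta = np.asarray(theta, dtype=float)
    for k in range(len(theta)):
        tp, tm = theta.copy(), theta.copy()
        tp[k] += eps
        tm[k] -= eps
        dC = (circuit(tp) - circuit(tm)) / (2 * eps)       # approximates partial_k C
        cols.append(np.concatenate([dC.real, dC.imag]))
        J_n = np.column_stack(cols)
        S_n = J_n.T @ J_n                                  # Eq. (7)
        new_rank = np.linalg.matrix_rank(S_n, tol=tol)
        if new_rank > rank:
            independent.append(k)                          # theta_k is independent
            rank = new_rank
        else:
            cols.pop()                                     # theta_k is redundant
    return rank, independent

# pure-state controllability is signalled by rank == 2*d - 1, with d the Hilbert space dimension
```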
We now want to identify a condition for non-controllable systems. To this end, we need to prove that there are some states that are not reachable by any of the possible dynamics that we can implement with the different operators \(\hat{H}_{j}\) and their nested commutators. Hypothetically, we could do a sequence of the rotation gates (8) around the drift, the control operators and their nested commutators and test if all of them are linearly independent. However, generating the exponential of the commutator of two control operators (or one control operator and the drift) \(\exp\!\left\{i\,\beta[\hat{H}_{j},\hat{H}_{k}]\right\}\) is no trivial task. It may require optimal control to generate a specific rotation for the exact angle \(\beta\) and the chosen commutator \([\hat{H}_{j},\hat{H}_{k}]\). Instead, we access the different commutators by concatenating a series of multiplications, as in the Baker-Campbell-Hausdorff formula: \[\exp\left(i\,\alpha\hat{A}\right)\exp\left(i\,\beta\hat{B}\right)=\exp\left(i\alpha\hat{A}+i\beta\hat{B}-\frac{1}{2}\alpha\beta[\hat{A},\hat{B}]-\frac{i\,\alpha^{2}\beta}{12}[\hat{A},[\hat{A},\hat{B}]]+\frac{i\,\alpha\beta^{2}}{12}[\hat{B},[\hat{A},\hat{B}]]+\cdots\right). \tag{10}\] Assume that we have a parametric quantum circuit consisting of a sequence of \(n\) rotations, \[C_{seq}^{n}(\vec{\vartheta}):=\exp\left(-i\,\vartheta_{n}\hat{A}_{n}\right)\cdots\exp\left(-i\,\vartheta_{1}\hat{A}_{1}\right)|\psi_{0}\rangle \tag{11}\] with \(\hat{A}_{j}\in\{\hat{H}_{k}\}_{k=0}^{m}\,\forall 1\leq j\leq n\).

Figure 1: Three-qubit example of the parametric circuit \(C_{PSC}(\vec{\vartheta})\) (14) for testing pure-state controllability with initial state \(|000\rangle\) in the qubits’ logical basis. Each layer (only two displayed in the diagram) includes an entangling gate \(\hat{R}_{0}\) and a sequence of local gates \(\hat{R}_{j}\) (with \(j\geq 1\)), one for every control present in the qubit array.

We can use Eq. (10) multiple times on the exponential sequence on the right-hand side of Eq. (11) to express it as a single exponential dependent on \(\vec{\vartheta}\), the different operators \(\hat{A}_{j}\) and their nested commutators. Assume as well that the dimensional expressivity in the circuit \(expr_{\mathrm{dim}}(C_{seq}^{n}(\vec{\vartheta}))=d_{n}\) is less than the maximum possible. We define a new parametric circuit by adding one more rotation to the chain of operations, \[C_{seq}^{n+1}(\vec{\vartheta},\vartheta_{n+1}):=\exp\left(-i\,\vartheta_{n+1}\hat{A}_{n+1}\right)C_{seq}^{n}(\vec{\vartheta}). \tag{12}\] If the dimensional expressivities of \(C_{seq}^{n+1}\) and \(C_{seq}^{n}\) are the same for every \(\vartheta_{n+1}\in\mathbb{R}\) and every \(\hat{A}_{n+1}\in\{\hat{H}_{k}\}_{k=0}^{m}\), then the number of linearly independent \(\partial_{j}C(\vec{\vartheta})\) remains the same. In other words, we are not able to find more linearly independent operators and thus, the dimensional expressivity of the system cannot be increased. This means that the manifold of reachable states does not have a maximal local dimension and hence there will be some states to which our initial state cannot evolve. Therefore the system is not pure-state controllable. 
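The criterion just described can be emulated classically along the following lines, reusing `R` and `dimensional_expressivity` from the sketch above; whether appending one more rotation as in Eq. (12) can raise the rank is probed at a few randomly drawn extra angles. The trial count is an arbitrary choice of this sketch.

```python
import numpy as np

def appending_raises_expressivity(circuit_n, A, theta, n_trials=5, rng=None):
    """Check whether exp(-i*theta_{n+1}*A) appended to the sequence of Eq. (11)
    increases the dimensional expressivity, cf. Eq. (12), sampling random angles."""
    rng = rng or np.random.default_rng(0)
    base_rank, _ = dimensional_expressivity(circuit_n, theta)
    def extended(th):
        # exp(-i*th*A) written via the R helper, which carries a factor 1/2
        return R(A, 2 * th[-1]) @ circuit_n(th[:-1])
    for _ in range(n_trials):
        th_ext = np.append(theta, rng.uniform(0, 2 * np.pi))
        new_rank, _ = dimensional_expressivity(extended, th_ext)
        if new_rank > base_rank:
            return True
    return False
```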
There may be cases where, for given \(C_{seq}^{n}(\vec{\vartheta})\) and \(\hat{A}_{n+1}\), there exist two different parameters \(\vartheta_{n+1}\) and \(\tilde{\vartheta}_{n+1}\) such that \[expr_{\mathrm{dim}}\left(C_{seq}^{n+1}(\vec{\vartheta},\vartheta_{n+1})\right)>expr_{\mathrm{dim}}\left(C_{seq}^{n+1}(\vec{\vartheta},\tilde{\vartheta}_{n+1})\right). \tag{13}\] This is common in cases where \(\tilde{\vartheta}_{j}=0\) for every \(1\leq j\leq n+1\). Looking at Eq. (10), note that using repeated parameters (e.g. \(\alpha=\beta\)) will make the coefficients preceding the commutators have the same absolute value (e.g. \(\alpha^{2}\beta=\alpha\beta^{2}\)). This is evidently unfavorable for generating more linearly independent \(\partial_{j}C(\vec{\vartheta})\) due to the symmetries created. In principle, it would be necessary to prove that the expressivity of \(C_{seq}^{n+1}\) does not increase for any \(\vartheta_{n+1}\in\mathbb{R}\). However, as long as there exists one \(\vartheta_{n+1}\) that increases the dimensional expressivity for an operator \(\hat{A}_{n+1}\), the set of \(\{\tilde{\vartheta}_{n+1}\}\subset\mathbb{R}\) that would not raise the expressivity will have measure zero. This can be justified as follows. Assume that the first \(n\) parameters are independent (i.e. \(\det\left(S_{n}\right)\neq 0\)), with \(n\) less than the maximal dimensional expressivity, and that there exist some parameters that can increase the expressivity. This implies that the analytic function \(f(\vec{\vartheta}):=\det\left(S_{n+1}\right)\) is not identically zero. The set of parameters that would not increase the expressivity belongs to \(f^{-1}(0)\). By the regular level set theorem [30], \(f^{-1}(0)\) is an \(n\)-dimensional manifold in the \((n+1)\)-dimensional parameter space \(\mathcal{P}\). Thus, the set of parameters that would not increase the expressivity has Lebesgue measure zero in \(\mathcal{P}\). In other words, by choosing \(\vartheta_{n+1}\) randomly we increase the dimensional expressivity with probability \(1\). The next section uses these ideas to systematically design quantum circuits that can be used to determine for a controlled quantum system whether it is pure-state controllable or not.

### Controllability test

Given a system with operators \(\hat{H}_{j}\) with \(0\leq j\leq m\) (cf. Eq. (1)), we define the parametric quantum circuit \[\begin{split} C_{PSC}(\vec{\vartheta})=&\Bigg{(}\prod_{j=0}^{n_{l}-1}\hat{R}_{m}(\vartheta_{j(m+1)+m})...\\ &\hat{R}_{1}(\vartheta_{j(m+1)+1})\hat{R}_{0}(\vartheta_{j(m+1)})\Bigg{)}\ket{\psi_{0}},\end{split} \tag{14}\] where \(\ket{\psi_{0}}\) is the initial state of the circuit, \(m\) the total number of controls in the system and \(n_{l}\) the number of layers in the circuit. A diagram of this circuit is shown in Figure 1 for a three-qubit example. The initial state \(\ket{\psi_{0}}\), chosen and fixed at the start of the circuit, can be any pure state. The number of layers \(n_{l}\) should be decided at the start of the algorithm. All gates in \(C_{PSC}(\vec{\vartheta})\) are parametric with different parameters \(\vartheta_{k}\), ranging from \(\vartheta_{0}\) to \(\vartheta_{n_{l}(m+1)-1}\). Each of the \(n_{l}\) layers in the circuit has a similar architecture: It starts with the rotation \(\hat{R}_{0}\) around the drift Hamiltonian, an entangling gate if it includes time-independent qubit couplings, and then a sequence of local gates, from \(\hat{R}_{1}\) to \(\hat{R}_{m}\), that use all the different controls sorted by a chosen order. 
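A direct transcription of this layer structure, Eq. (14), again as a classical sketch (on hardware each gate would be a drift evolution or a control pulse), reads:

```python
import numpy as np

def C_PSC(theta, H_ops, psi0):
    """Layered circuit of Eq. (14). H_ops = [H0, H1, ..., Hm]; each layer applies
    R_0 (drift) followed by R_1 ... R_m, one rotation per control, each with its own angle."""
    step = len(H_ops)                       # m + 1 gates (and parameters) per layer
    psi = np.asarray(psi0, dtype=complex)
    for j in range(len(theta) // step):
        for k in range(step):
            psi = R(H_ops[k], theta[j * step + k]) @ psi
    return psi
```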
The pure-state controllability test for a system evolving under the Hamiltonian (1) is then defined as follows: If the circuit (14) reaches maximal expressivity, the system is controllable. A schematic flowchart of the pure-state controllability test is shown in Figure 2.

Figure 2: Flowchart for the pure-state controllability algorithm. The yellow rhomboids show the initial inputs necessary to define the circuit \(C_{PSC}(\vec{\vartheta})\).

If the maximum expressivity of \(2d-1\) for a Hilbert space with \(\dim(\mathcal{H})=d\) has not been met with \(n_{l}\) layers, another layer can be added (encompassing a full set of rotation gates with their respective new parameters) and the test can be repeated for the new circuit with \(n_{l}+1\) layers. By definition, the dimensional expressivity can increase by at most one per parameter \(\vartheta_{j}\). For a system with \(m\) controls, there are a total of \(m+1\) parameters per layer. Therefore, the minimum number of layers needed to reach maximum expressivity for \(m\) controls is \[n_{l,\,\min}=\left\lceil\frac{2d-1}{m+1}\right\rceil. \tag{15}\] Since layers may have some redundant parameters, the dimensional expressivity may not necessarily rise at the maximum rate and more layers may have to be included. Consequently, the algorithm is best started with the minimum number of layers required to achieve maximum expressivity, and additional layers shall be concatenated as needed. It may as well happen that the dimensional expressivity remains the same even with the inclusion of a new layer. In this case the test stops, as the dimensional expressivity will not further increase. In instances where the dimensional expressivity reaches a plateau, it is necessary to double-check using a different array of random parameters \(\vec{\vartheta}\) and repeat this comparison with the \(n_{l}\)- and \((n_{l}+1)\)-layered circuits, following the reasoning explained in section III.1. Using a random set of parameters will yield an answer on whether the expressivity can be increased or not with probability 1. If the dimensional expressivity remains at a value less than \(2d-1\) for a sufficiently large set of different random parameters, then the system is labelled not pure-state controllable and the test concludes. The algorithm will always end with an affirmative or negative result regarding pure-state controllability. The loop in Figure 2 will be exited under one of the following conditions: Either maximal dimensional expressivity is reached or a last layer exclusively composed of redundant parameters is found. In other words, the method ends when the finite upper bound of the dimensional expressivity has been reached or when the expressivity before and after the addition of a new layer remains the same. Since the dimensional expressivity is always an integer, the loop must conclude in a finite number of iterations. Parameters with repeated values in the same rotation gates (e.g. \(\vartheta_{p}=\vartheta_{q}\) on gates \(\hat{R}_{j}(\vartheta_{p})\) and \(\hat{R}_{j}(\vartheta_{q})\) for a certain \(j\)) are usually detrimental to reaching maximum expressivity. A trivial example is the case of \(\vec{\vartheta}=\vec{0}\), where the maximum possible dimensional expressivity of \(C_{PSC}(\vec{0})\) is always \(m+1\), with \(m\) the number of local controls. 
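Putting the pieces together, the loop of Figure 2 can be emulated classically along the following lines, reusing `C_PSC` and `dimensional_expressivity` from the sketches above; the layer budget and the number of random re-checks per layer count are arbitrary choices of this sketch, not prescriptions of the paper.

```python
import numpy as np

def pure_state_controllability_test(H_ops, psi0, extra_layers=5, n_checks=3, seed=1):
    """Sketch of the pure-state controllability loop of Fig. 2 (classical emulation)."""
    d = len(psi0)
    target = 2 * d - 1                                   # maximal dimensional expressivity
    step = len(H_ops)                                    # m + 1 parameters per layer
    n_l = int(np.ceil(target / step))                    # Eq. (15)
    rng = np.random.default_rng(seed)
    prev = -1
    for _ in range(extra_layers):
        # evaluate the expressivity for a few random parameter sets (measure-zero argument)
        rank = max(
            dimensional_expressivity(
                lambda th: C_PSC(th, H_ops, psi0),
                rng.uniform(0, 2 * np.pi, n_l * step))[0]
            for _ in range(n_checks))
        if rank >= target:
            return True                                  # pure-state controllable
        if rank == prev:
            return False                                 # plateau: not pure-state controllable
        prev, n_l = rank, n_l + 1                        # add one more layer and repeat
    return None                                          # inconclusive within the layer budget
```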
### Examples

To illustrate the described algorithm, we consider a four-qubit array with the following Hamiltonian: \[\hat{H}_{4q}(t)=\sum_{j=0}^{3}-\frac{\omega_{j}}{2}\hat{\sigma}_{z}^{j}+\sum_{k=0}^{2}J_{k,k+1}\hat{\sigma}_{x}^{k}\hat{\sigma}_{x}^{k+1}+\hat{H}_{ctrl}(t). \tag{16}\] The first term encompasses the free-qubit Hamiltonians and the second one contains the time-independent couplings. The qubit frequencies \(\omega_{j}\) and the coupling strengths \(J_{k,k+1}\) have been chosen to fit the ones normally used in superconducting circuits [31] and their exact values can be found in Table 1.

\begin{table}
\begin{tabular}{c c c} \hline \hline \multicolumn{3}{c}{Coupling strengths (MHz)} \\ \hline \(J_{0,1}\) & \(J_{1,2}\) & \(J_{2,3}\) \\ \(170\) & \(220\) & \(150\) \\ \hline \hline \end{tabular}
\begin{tabular}{c c c c} \hline \hline \multicolumn{4}{c}{Qubit frequencies (GHz)} \\ \hline \(\omega_{0}\) & \(\omega_{1}\) & \(\omega_{2}\) & \(\omega_{3}\) \\ \(5.40\) & \(5.30\) & \(5.42\) & \(5.37\) \\ \hline \hline \end{tabular}
\end{table} Table 1: Parameters for the Hamiltonian (16). The frequencies and the coupling strengths have been chosen in a range that is common for superconducting circuits.

The last operator, \(\hat{H}_{ctrl}(t)\), contains all the relevant information about the controls, including their number and type. We choose two configurations of controls to study two separate systems with Hamiltonian (16), one that is pure-state controllable and one that is not. First, we assume the controls from Eq. (16) to be \[\hat{H}_{ctrl}(t)=u_{1}(t)\hat{\sigma}_{x}^{1}+u_{2}(t)\hat{\sigma}_{x}^{2}. \tag{17}\] This system is operator controllable, as proven by the Lie algebra rank condition [5] and the graph method [4]. This in particular implies that it is also pure-state controllable. A diagram of the system may be found in Figure 3. Since the system only has two controls, each layer of the circuit will have exactly 3 gates: the entangling gate involving the drift and the two related to the local controls coupling to \(\hat{\sigma}_{x}^{1}\) and \(\hat{\sigma}_{x}^{2}\), respectively. We have chosen \(|\psi_{0}\rangle=|0000\rangle\) (in the logical basis of the free qubits) as the initial state of the circuit and \(n_{l}=11\), matching the minimal number of layers to obtain maximum dimensional expressivity (cf. Eq. (15)). For a circuit acting on a four-qubit array, the maximum dimensional expressivity is \(expr_{dim}=31\). We have generated a random set of parameters \(\vec{\vartheta}\in[0,2\pi]^{33}\) (since in this case \((m+1)\cdot n_{l}=33\)). We have classically simulated the parametric quantum circuit and calculated the \(S_{C_{PSC},n}(\vec{\vartheta})\) matrices from Eq. (7). We have both determined the redundant parameters in the circuit and estimated the dimensional expressivity. In these simulations, the maximum dimensional expressivity is steadily reached, with every layer raising it by 3. The maximum value of \(expr_{dim}=31\) is achieved with the first parameter of the last layer, proving that the system is pure-state controllable. In this example the minimum number of layers that we had chosen was enough to reach maximum expressivity. The same behaviour has been observed for all the different random sets of parameters \(\vec{\vartheta}\) tested. The same configuration of gates was further tested using different random initial states \(\ket{\psi_{0}}\), yielding similar results. 
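For completeness, the textbook Lie-algebra rank condition quoted above can also be checked numerically for this example. The sketch below assembles Eq. (16) with the Table 1 values (expressed in GHz, \(\hbar=1\)) and the controls of Eq. (17), reusing `embed` and the Pauli matrices from the first sketch, and closes the generator set under commutators. This is the purely classical criterion rather than the hybrid test of this paper, and it scales exponentially with the number of qubits.

```python
import numpy as np

def lie_algebra_dimension(H_list, tol=1e-9, max_rounds=20):
    """Real dimension of the dynamical Lie algebra generated by {i*H_j}."""
    basis = []                              # orthonormal basis of the algebra found so far
    def add(A):
        v = A.reshape(-1)
        for b in basis:
            v = v - np.vdot(b, v) * b       # Gram-Schmidt against known directions
        nrm = np.linalg.norm(v)
        if nrm > tol:
            basis.append(v / nrm)
            return True
        return False
    ops = [1j * H for H in H_list]
    for A in ops:
        add(A)
    for _ in range(max_rounds):             # close under nested commutators
        new = [A @ B - B @ A for A in ops for B in ops]
        new = [C for C in new if add(C)]
        if not new:
            break
        ops += new
    return len(basis)

# Eq. (16) with the Table 1 values (GHz) and the controls of Eq. (17)
w4 = [5.40, 5.30, 5.42, 5.37]
J4 = [0.170, 0.220, 0.150]
H0_4q = sum(-w4[j] / 2 * embed(sz, j, 4) for j in range(4)) \
      + sum(J4[k] * embed(sx, k, 4) @ embed(sx, k + 1, 4) for k in range(3))
H_ctrl_4q = [embed(sx, 1, 4), embed(sx, 2, 4)]
# operator controllability <=> dimension equals dim su(16) = 255; the call is left
# commented because closing the algebra can be slow:
# lie_algebra_dimension([H0_4q] + H_ctrl_4q)
```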
Second, we present a system that is not pure-state controllable, whose control operators are \[\hat{H}_{ctrl}(t)=u_{1}(t)\hat{\sigma}_{x}^{0}+u_{2}(t)\hat{\sigma}_{y}^{2}+u_{3}(t)\hat{\sigma}_{z}^{3}, \tag{18}\] cf. Figure 4. Its dynamical Lie algebra has a dimension of \(\dim(\mathcal{L})=120<\dim(\mathfrak{su}(16))=255\), which only proves that the system is not operator controllable. The system would be pure-state controllable if and only if \[\dim\left(\mathrm{Lie}\left(i\rho_{0},\mathcal{L}\right)\right)=2\dim(\mathcal{H})-2 \tag{19}\] with \(\rho_{0}=\ket{0000}\bra{0000}\)[5]. We confirm that the system is not pure-state controllable since \(\dim\left(\mathrm{Lie}\left(i\rho_{0},\mathcal{L}\right)\right)=28<30\) for the current system. Even though there are more local controls than in the first example, the system is not controllable due to their positions. Similarly to before, we create a circuit with four gates (related to the drift and the three local controls) per layer. We choose a minimum number of layers \(n_{l}=8\) (different from before due to the different number of controls), \(\ket{\psi_{0}}=\ket{0000}\) and a set of random parameters \(\vec{\vartheta}\in[0,2\pi]^{32}\). At the end of the last layer the dimensional expressivity yields a total of 29 out of the 31 that would imply pure-state controllability. Following the flowchart depicted in Figure 2 we have added a new layer (\(n_{l}=9\)) with a new set of random parameters and repeated the dimensional expressivity analysis. According to our simulation, the new layer contains only redundant parameters (i.e. the expressivity remains at 29), which stops the algorithm and means that the system is not pure-state controllable. To verify the validity of this outcome, we have repeated the test for multiple different random sets of parameters. In every instance the same result is reached, which leads to the conclusion that the system is indeed not pure-state controllable, as discussed in section III.1.

## IV Operator controllability test using dimensional expressivity analysis

Operator controllability is the relevant type of controllability for a qubit array in order to perform all quantum logic gates. Its connection to the dimensional expressivity of a circuit is less evident, since dimensional expressivity is related to the different states that can be reached. The Choi-Jamiolkowski isomorphism [32; 33] allows one to bridge the gap with a map between operators on a Hilbert space \(\mathcal{H}\) and states in \(\mathcal{H}\otimes\mathcal{H}\). It is used, for example, in quantum process tomography, allowing techniques from state tomography to be applied to operators [34]. Similarly, by doubling the number of qubits, we can exploit the channel-state duality between operators in the original system and states in the bipartite extended system for controllability analysis.

### Lifting pure-state to operator controllability via the Choi-Jamiolkowski isomorphism

Let us assume a qubit array with Hamiltonian (1) for which we seek to determine operator controllability. This system with Hilbert space \(\mathcal{H}\) and dimension \(\dim(\mathcal{H})=d\) will henceforth be referred to as the original system. We then define a bipartite extended system in \(\mathcal{H}\otimes\mathcal{H}\) composed of the original system and the same number of auxiliary qubits. To simplify the argument, we first assume no dynamics over the ancilla qubits. Later we extend our discussion to include some local Hamiltonians on the auxiliary qubits. 
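Anticipating the construction spelled out in what follows (the maximally entangled state and the lifted rotations of Eqs. (22)-(24)), the extended-system ingredients are easy to write down classically. The sketch below reuses `R` from the earlier helpers and is purely illustrative; no claim is made about how the state preparation would be realised on hardware.

```python
import numpy as np

def psi_ME(d):
    """Maximally entangled state (1/sqrt(d)) * sum_i |e_i> (x) |e_i>, cf. Eq. (22)."""
    psi = np.zeros(d * d, dtype=complex)
    for i in range(d):
        psi[i * d + i] = 1 / np.sqrt(d)
    return psi

def R_A(H, alpha, d):
    """Lifted rotation, cf. Eq. (24): the generator acts only on the original register."""
    return R(np.kron(H, np.eye(d)), alpha)

# Applying only such lifted rotations to psi_ME(d) produces a state that encodes the
# implemented unitary U(theta) through the channel-state duality, cf. Eq. (25), so the
# dimensional expressivity of this lifted circuit probes the reachable unitaries.
```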
Given any operator \(\hat{O}\in L(\mathcal{H}\otimes\mathcal{H})\), we write \(\hat{O}^{A}\) to indicate that the operator only acts non-trivially on the partition of the original system \((A)\), i.e. \[\hat{O}^{A}=\hat{Q}\otimes\mathds{1}_{d} \tag{20}\] for some operator \(\hat{Q}\). Analogously, we write \(\hat{O}^{AB}\) for operators that act non-trivially on both partitions (the original system and the auxiliary qubits). Neglecting the local contributions of the ancilla qubits, the Hamiltonian of the extended system is given by \[\hat{H}^{A}(t)=\hat{H}(t;u_{1},...u_{m})\otimes\mathds{1}_{2}^{\otimes q} \tag{21}\] where \(q\) is the number of qubits in the original system. Figure 4: Four-qubit system that is not pure-state controllable, cf. equations (16) and (18). Figure 3: Four-qubit system that is pure-state controllable, cf. Eqs. (16) and (17). We assume that the extended system can be prepared in a maximally entangled state, \[\ket{\psi_{ME}}=\sum_{i=0}^{d-1}\frac{1}{\sqrt{d}}\ket{e_{i}}\otimes\ket{e_{i}}, \tag{22}\] where \(\{\ket{e_{i}}\}_{0}^{d-1}\) is an orthonormal basis of \(\mathcal{H}\). We define the circuit on the extended system \[\begin{split} C^{A}_{OC}(\vec{\vartheta})&:=\prod_{j =0}^{k}\left(\hat{R}^{A}_{m}(\vartheta_{j(m+1)+m})...\right.\\ &\left.\hat{R}^{A}_{1}(\vartheta_{j(m+1)+1})\hat{R}^{A}_{0}( \vartheta_{j(m+1)})\right)\ket{\psi_{ME}}.\end{split} \tag{23}\] The rotations \(\hat{R}^{A}_{k}(\alpha)\) are given by the drift (\(k=0\)) and the control operators (\(1\leq k\leq m\)) of the original subsystem: \[\hat{R}^{A}_{k}(\alpha):=\exp\left(-i\,\frac{\alpha}{2}\hat{H}_{k}\otimes \mathds{1}_{2}^{\otimes q}\right),\qquad 0\leq k\leq m, \tag{24}\] with \(\hat{H}_{k}\) given in Eq. (1). A visual representation of the circuit is found in Figure 5. The parameter space \(\mathcal{P}\ni\vec{\vartheta}\) is assumed to be connected and compact without boundary (e.g. with every coordinate \(\vartheta_{i}\) being cyclic). The final state of the circuit will always be of the form \[C^{A}_{OC}(\vec{\vartheta})=\frac{1}{\sqrt{d}}\sum_{i=0}^{d-1}\ket{e_{i}} \otimes\left(\hat{U}(\vec{\vartheta})\ket{e_{i}}\right), \tag{25}\] with \(\hat{U}(\vec{\vartheta})\) a unitary operator depending on the circuit's parameters. Our goal is to prove that dimensional expressivity of the extended system is enough to determine operator controllability of the original system. To this end, we make use of the Choi-Jamiolkowski isomorphism [32; 33; 35]. The map it describes is written as \[\begin{split}\Lambda(\hat{A})&:=\left(\mathds{1}_{ \mathcal{L}_{\mathcal{H}}}\otimes\hat{A}\right)(\ket{\phi}\bra{\phi})\\ &=\sum_{i,j}\ket{\psi_{i}}\bra{\psi_{j}}\otimes\hat{A}\Big{(} \ket{\psi_{i}}\bra{\psi_{j}}\Big{)}\end{split} \tag{26}\] for any operator \(\hat{A}\) in the Hilbert space of linear operators on the Liouville space and the unnormalized state \(\ket{\phi}=\sum_{i}\ket{\psi_{i}}\otimes\ket{\psi_{i}}\), with \(\{\ket{\psi_{i}}\}_{i=0}^{d-1}\) an orthonormal basis of \(\mathcal{H}\). Identifying \(\hat{A}\) in Eq. (26) with \(\hat{U}(\vec{\vartheta})\) in Eq. (25), we know that \[\begin{split}\hat{U}(\mathcal{P})&\cong\Lambda( \hat{A})\\ &=\sum_{i,j=0}^{d-1}\ket{e_{i}}\bra{e_{j}}\otimes\left(\hat{U}( \mathcal{P})\ket{e_{i}}\bra{e_{i}}\hat{U}(\mathcal{P})^{\dagger}\right).\end{split} \tag{27}\] The operators \(\hat{U}(\vec{\vartheta})\) are unitary for every \(\vec{\vartheta}\in\mathcal{P}\), hence purity-preserving. We transform the density matrix representation from Eq. 
(27) into a pure-state representation, resulting in \[\hat{U}(\mathcal{P})\cong\sum_{i=0}^{d-1}\ket{e_{i}}\otimes\hat{U}(\mathcal{ P})\ket{e_{i}}\cong C^{A}_{OC}(\mathcal{P}). \tag{28}\] Therefore, there exists an embedding between the evolutions \(\hat{U}(\mathcal{P})\) that are generated using a combination of rotations given by the controls and the final states of the circuit \(C^{A}_{OC}(\mathcal{P})\). A system with traceless operators as in Eq. (1) and \(\dim(\mathcal{H})=d\) is operator-controllable if and only if the manifold of the unitary evolutions that can be generated \(\hat{U}_{\hat{H}}\) is isomorphic to \(SU(d)\). Evidently, \(\hat{U}(\mathcal{P})\subseteq\hat{U}_{\hat{H}}\subseteq SU(d)\). Since the parameter space \(\mathcal{P}\) is connected and compact without boundary, \(\hat{U}(\mathcal{P})=SU(d)\) if and only if \(\dim(\hat{U}(\mathcal{P}))=\dim(SU(d))\). Thus, using Eq. (28), the system will be operator-controllable if \(\dim(C^{A}_{OC}(\mathcal{P}))=\dim(SU(d))\), i.e., if the dimensional expressivity of the circuit \(C^{A}_{OC}(\vec{\vartheta})\) is \(d^{2}-1\). From here we proceed analogously as the pure-state controllability test from section III.1. We present the outline of the operator controllability test in Figure 7. If the dimensional expressivity is less than \(d^{2}-1\), we inspect the parameters in the last circuit layer. If they all are redundant, the test ends and the system is deemed not controllable. Indeed, if all parameters in the last layer are redundant, we are unable to find more linearly independent operators in the dynamical Lie algebra of the system. If the number of linearly independent elements of the algebra (i.e. number of independent parameters) is less than \(\dim(SU(d))\), there exist some unitary operations that cannot be implemented. Therefore, the system is not operator controllable. This step must be checked with multiple arrays of random parameters \(\vec{\vartheta}\), as there may be a set of arrays of parameters with measure zero over \(\mathcal{P}\) that yield a lower value for the dimensional expressivity. The same arguments we used in section III.1 apply here, as \(C^{A}_{OC}(\mathcal{P})\) is a manifold of states in \(\mathcal{H}\otimes\mathcal{H}\). Figure 5: Parametric circuit for the extended system required to perform the operator controllability test (23) for a three-qubit system. The qubits \(q_{i}\) with \(i=0,1,2\) constitute the original system, whereas \(q_{j}\) with \(j=3,4,5\) are the ancilla qubits. If at least one parameter in the last circuit layer is independent, the test continues. We iterate by adding a new layer and calculating the circuit's expressivity. The algorithm will eventually come to an end, either with maximal value for the dimensional expressivity or with a layer of redundant parameters at the end of the circuit. We now move to a more realistic setting that incorporates dynamics in the ancilla qubits. We undertake this by including the drift of the auxiliary partition. The new Hamiltonian of the bipartite system is then \[\hat{H}^{AB}(t)=\hat{H}(t;u_{1},...u_{m})\otimes\mathds{1}_{2}^{\otimes q}+ \sum_{j=0}^{q-1}-\frac{\omega_{j}}{2}\hat{\sigma}_{z}^{j+q}, \tag{29}\] with \[\hat{\sigma}_{z}^{k}:=\mathds{1}\otimes...\otimes\mathds{1}\otimes\underbrace {\hat{\sigma}_{z}}_{k\text{ position}}\otimes\mathds{1}\otimes...\mathds{1}. 
\tag{30}\] It results in the following circuit to test operator controllability: \[\begin{split} C_{OC}^{AB}(\vec{\vartheta}):=&\prod_{j=0}^{k}\left(\hat{R}_{m}^{A}(\vartheta_{j(m+1)+m})...\hat{R}_{1}^{A}(\vartheta_{j(m+1)+1})\right.\\ &\left.\hat{R}_{0}^{B}(\vartheta_{j(m+1)})\hat{R}_{0}^{A}(\vartheta_{j(m+1)})\right)\left|\psi_{ME}\right>,\end{split} \tag{31}\] where \[\hat{R}_{0}^{B}(\alpha):=\exp\left(i\,\frac{\alpha}{2}\,\sum_{j=0}^{q-1}\frac{\omega_{j}}{2}\hat{\sigma}_{z}^{j+q}\right). \tag{32}\] Note that the parameters \(\vartheta_{j(m+1)}\) of the gates \(\hat{R}_{0}^{A}\) and \(\hat{R}_{0}^{B}\) in the same layer \(j\) are always the same because there is no active control over these operators: they are due to the time-independent part of the Hamiltonian. In other words, these gates are implemented by letting the system evolve a certain amount of time \(t=\vartheta_{j(m+1)}/2\). The number of parameters per layer for a system with \(m\) controls remains equal to \(m+1\), despite having an extra rotation gate per layer. A diagram of the new circuit is found in Figure 6. If we choose an orthonormal basis for the \(B\) partition consisting of the eigenstates of the ancilla qubits, then \[C_{OC}^{AB}(\mathcal{P})\cong\sum_{i=0}^{d-1}\left(\hat{U}(\mathcal{P})e^{i\varphi_{i}(\vec{\vartheta})}\left|e_{i}\right>\right)\otimes\left|e_{i}\right>. \tag{33}\] The only difference between equations (28) and (33) is the local phases \(\varphi_{i}(\vec{\vartheta})\), which are uniquely determined for any array of parameters \(\vec{\vartheta}\). These do not change the value of the dimensional expressivity since for any array \(\vec{\vartheta}\) there exists a neighborhood in which \[C_{OC}^{A}(\vec{\vartheta})\cong C_{OC}^{AB}(\vec{\vartheta}). \tag{34}\] This implies the local dimension of the manifold of reachable states to be identical, i.e., the dimensional expressivity to be the same. Therefore, we can include the local Hamiltonians of the ancilla qubits in our calculations to describe a more realistic model and still use the Choi-Jamiolkowski isomorphism to design the parametric quantum circuit (31).

### Controllability test

Once again we consider a qubit array with traceless Hamiltonian (1) and the corresponding extended system, composed of the original \(q\)-qubit array and \(q\) more auxiliary qubits. We assume the extra qubits to have arbitrary natural frequencies \(\omega_{j}\), such that the Hamiltonian of the extended system is given by Eq. (29) and the parametric quantum circuit by Eq. (31). As shown in Figure 6 for a three-qubit example, for a system with \(m\) controls the circuit has exactly \(m+1\) parameters per layer.

Figure 6: Circuit on the extended system required to perform the operator controllability test (31) for a three-qubit system. The qubits \(q_{i}\) with \(i=0,1,2\) constitute the original system, whereas \(q_{j}\) with \(j=3,4,5\) are the ancilla qubits. The rotations \(\hat{R}_{0}^{B}\) (cf. Eq. (32)) include the free-qubit dynamics of the ancilla qubits.

Figure 7: Flowchart for the algorithm testing operator controllability. The yellow rhomboids show the initial inputs necessary to define the circuit \(C_{OC}^{AB}(\vec{\vartheta})\).

As for pure-state controllability, it is advisable to choose a number of layers \(n_{l}\) that would a priori be sufficient to reach the maximum dimensional expressivity. 
In the case of operator controllability, the maximal dimensional expressivity is \(\dim(\mathfrak{su}(d))=d^{2}-1\), with \(d\) the Hilbert space dimension of the original system, \(d=2^{q}\)[36]. Thus, the condition for the minimum number of layers to obtain the maximal dimensional expressivity is \[n_{l,\,\min}=\left\lceil\frac{d^{2}-1}{m+1}\right\rceil. \tag{35}\] With the dimensional expressivity we find the maximum number of linearly independent states in \(\mathcal{H}\otimes\mathcal{H}\) that can be generated in a neighborhood of \(C^{AB}_{OC}(\vec{\vartheta})\). This in turn yields information about the maximum number of linearly independent operators on \(\mathcal{H}\) that can be generated by the original system around the identity. Since we know that these operators belong to the Lie algebra \(\mathfrak{su}(d)\), we simply want to determine whether we can span all the \(d^{2}-1\) dimensions in the algebra, i.e. whether we have operator controllability, or not. The operator controllability of a system evolving under the Hamiltonian (1) is determined as follows: If the circuit (31) has dimensional expressivity equal to \(d^{2}-1\), then the system is operator controllable. Analogously to the pure-state controllability test, if this value for the dimensional expressivity is not reached, another layer should be concatenated at the end of the circuit. If all the new parameters in the last layer are redundant, then the system is not operator controllable (with probability 1); otherwise, the process of concatenating layers shall be repeated. The main steps of the algorithm are displayed in Figure 7. Similarly to section III.1, it is important to ensure the validity of a result of "not operator controllable" by repeating the test for different arrays of random parameters.

### Examples

In the following we consider a three-qubit array with Hamiltonian \[\hat{H}_{3q}(t)=\sum_{j=0}^{2}-\frac{\omega_{j}}{2}\hat{\sigma}_{z}^{j}+\sum_{k=0}^{1}J_{k,k+1}\hat{\sigma}_{z}^{k}\hat{\sigma}_{z}^{k+1}+\hat{H}_{ctrl}(t). \tag{36}\] The second term, containing the time-independent two-qubit couplings, has been modified to \(\hat{\sigma}_{z}^{k}\hat{\sigma}_{z}^{k+1}\) simply to showcase a qubit interaction different from the one in the previous examples. The qubit frequencies \(\omega_{j}\) and the coupling strengths \(J_{k,k+1}\) are listed in Table 2. We take two different \(\hat{H}_{ctrl}(t)\) to study an example that is operator controllable and one that is not. The first one is given by \[\hat{H}_{ctrl}(t)=u_{1}(t)\hat{\sigma}_{x}^{0}+u_{2}(t)\hat{\sigma}_{y}^{1}+u_{3}(t)\hat{\sigma}_{x}^{2}, \tag{37}\] see Figure 8. It is operator controllable as can easily be proven by the Lie algebra rank condition [5] and the graph method [4]. Since we have 3 controls in the original three-qubit system, the minimum number of layers needed to reach the maximum value of dimensional expressivity for the bipartite system, \(expr_{dim}=63\), is \(n_{l}=16\) according to Eq. (35). The orthonormal basis used to define the maximally entangled state \(|\psi_{ME}\rangle\) is the logical basis of the free qubits. Lastly, we generate a random set of parameters \(\vec{\vartheta}\in[0,2\pi]^{64}\). The maximum dimensional expressivity of 63 is found for the last parameter of the last layer, confirming that the system is operator controllable. 
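As an illustration of how this first example would be set up in a classical emulation, the sketch below assembles Eq. (36) and the controls of Eq. (37), lifts them to the six-qubit extended register, and evaluates the circuit via the earlier helpers (`embed`, `C_PSC`, `psi_ME`, `dimensional_expressivity`). The frequencies and couplings are placeholder numbers (GHz, \(\hbar=1\)), not the Table 2 values, and the ancilla drift of Eq. (32) is omitted for brevity (its parameters are shared with \(\hat{R}_{0}^{A}\) in any case).

```python
import numpy as np

# Eq. (36) with placeholder frequencies/couplings (GHz); controls of Eq. (37)
w3 = [5.1, 5.3, 5.2]          # hypothetical values; Table 2 is not reproduced here
J3 = [0.20, 0.15]
H0_3q = sum(-w3[j] / 2 * embed(sz, j, 3) for j in range(3)) \
      + sum(J3[k] * embed(sz, k, 3) @ embed(sz, k + 1, 3) for k in range(2))
H_ctrl_3q = [embed(sx, 0, 3), embed(sy, 1, 3), embed(sx, 2, 3)]

d = 8
H_ops_lifted = [np.kron(H, np.eye(d)) for H in [H0_3q] + H_ctrl_3q]

def C_OC(theta):
    """Lifted layered circuit, cf. Eq. (31) (ancilla drift omitted in this sketch)."""
    return C_PSC(theta, H_ops_lifted, psi_ME(d))

# target expressivity d**2 - 1 = 63; minimum number of layers ceil(63/4) = 16, cf. Eq. (35)
# rank, _ = dimensional_expressivity(
#     C_OC, np.random.default_rng(2).uniform(0, 2 * np.pi, 16 * 4))   # slow; left commented
```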
For the second example, we choose a different set of controls, \[\hat{H}_{ctrl}(t)=u_{1}(t)\hat{\sigma}_{x}^{0}+u_{2}(t)\hat{\sigma}_{y}^{1}+u_{3}(t)\hat{\sigma}_{z}^{2}, \tag{38}\] see Figure 9, making the system not controllable. We repeat the same procedure as before, since the number of controls is again \(m=3\). At the end of 16 layers the circuit only reaches \(expr_{dim}=31\), which is less than the 63 needed for operator controllability. We could add another layer to verify that every new rotation gate will have a redundant parameter. However, in this case it is sufficient to inspect the rank of the matrices \(S_{n}\) from Eq. (7) in the last layers. We find that the last independent parameter appears at the end of the tenth layer, with all the remaining layers consisting exclusively of redundant parameters. This is a sufficient condition to determine that the system is not operator controllable (as long as it is verified with multiple sets of random parameters). We emphasize that it is important to corroborate every "not controllable" result with different arrays \(\vec{\vartheta}\) chosen at random. Selecting \(\vec{\vartheta}\) in a non-randomized fashion may lead to cases where the dimensional expressivity is lower than the maximum value reached with other different parameters. This would yield wrong results in terms of controllability. It is easily rationalized in terms of symmetries of the commutators \([\hat{H}_{i},\hat{R}_{k}^{A}(\vartheta_{j})]\). These are linked to the partial derivatives of the circuit \(\partial_{i}C_{OC}^{AB}(\vec{\vartheta})\) and to the dimensional expressivity of the circuit. Performing further numerical tests on the previously discussed examples, we have experimented with selecting parameters by hand instead of choosing them at random. Wrong results with lower dimensional expressivity arose when all the parameters were chosen to be the same, e.g. \(\vartheta_{j}=1\) for every \(j\). In every instance, these problems vanished as soon as we generated a new set of random parameters. Another important issue concerns the minimum tolerance \(\tau\) used to determine the rank of the \(S_{n}\) matrices. More precisely, \(\tau\) represents the threshold below which the singular values of \(S_{n}\) are considered zero. \(\tau\) is crucial to determine the different redundant parameters and the expressivity of the circuit. If \(\tau\) is too high, then some linearly independent vectors might be deemed dependent by mistake, which would result in an incorrectly low value of the circuit expressivity, potentially making a controllable system appear non-controllable. Conversely, if \(\tau\) is too small, numerical errors might add up to make linearly dependent vectors look as if they were independent, falsely showing some parameters as independent. This would in turn raise the dimensional expressivity, usually above the \(d^{2}-1\) threshold that we know to be valid for the case of the operator controllability test. To avoid these cases, it is advisable to use operators with similar orders of magnitude and try different ranges for \(\tau\) depending on the order of magnitude of the operators \(\hat{H}_{j}\) from Eq. (1). If the dimensional expressivity analysis is performed on quantum hardware, the tolerance \(\tau\) will also depend on the device noise. Indeed, the accuracy of the measurements and the circuit dynamics will take a toll on the accuracy of the rank of the matrices \(S_{n}\). 
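The role of the tolerance \(\tau\) can be made concrete with a few lines of numpy; the matrix and singular values below are generic placeholders, not data from the examples above.

```python
import numpy as np

def rank_with_tolerance(S_n, tau):
    """Number of singular values of S_n above the threshold tau, i.e. the
    estimated rank used to decide which parameters are redundant."""
    return int(np.sum(np.linalg.svd(S_n, compute_uv=False) > tau))

# the same matrix can change rank with tau, which is why tau should be matched to the
# scale of the operators H_j (and, on hardware, to the measurement noise)
S = np.diag([1.0, 1e-3, 1e-12])
print(rank_with_tolerance(S, 1e-8))   # -> 2
print(rank_with_tolerance(S, 1e-15))  # -> 3
```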
Inevitably, noisier devices will require higher tolerances to determine whether there are redundant parameters (i.e. whether \(\det(S_{n})=0\)) or not.

## V Discussion and Conclusions

We have introduced two hybrid quantum-classical algorithms to test pure-state and operator controllability of qubit arrays. As opposed to usual Lie rank and graph methods, the presented algorithms are run directly on a quantum circuit designed to mimic the dynamics of the quantum system to be studied. We have showcased the capabilities of the procedure with four paradigmatic examples that cover all scenarios for pure-state and operator controllability. A useful application of these tests is the resource-efficient design of quantum chips. Our algorithm provides a systematic way to deduce the minimal number of local controls and qubit couplings required to maintain controllability, as a prerequisite of universal quantum computation. In other words, it allows one to identify redundant controls and thus to ease scaling up the quantum chip size. Importantly, the tests allow one to obtain this information before the devices are built, as long as the associated quantum circuit can be implemented on a different device. Note that while the rank analysis of the \(S_{n}\) matrices scales with the size of the system Hilbert space, this does not pose a fundamental limitation. It can be overcome by mapping the rank computation to a quantum device. More precisely, the quantum device would then be used to find the lowest eigenvalue of \(S_{n}\) in order to determine whether a parameter is redundant or not. This permits the efficient identification of redundant parameters and the removal of their parametric gates in the circuit. Noise in the device running our hybrid algorithm will limit the accuracy of the lowest eigenvalue and thus determine the minimum threshold for an eigenvalue to be considered zero. In addition to its practical aspects, at the conceptual level, our work has revealed the close connection between the controllability of quantum systems and the dimensional expressivity of quantum circuits. In particular, this insight arises from the relation between the states that can be reached in a controllable system and the final states that can be produced in a parametric quantum circuit. The dimensional expressivity analysis allowed us to efficiently quantify the circuit expressivity. Its search for redundant parameters was essential in determining which controls contributed to reaching more states in the Hilbert space. The link between the pure-state and operator controllability tests is the inclusion of the Choi-Jamiolkowski isomorphism that creates a map between operators in a Hilbert space and the states of the extended bipartite space. Variational quantum algorithms have previously been used to improve the design of optimal pulses in quantum systems [37]. Quantum optimal control theory in general [6; 7] encompasses both the design of the pulse shapes, i.e., control synthesis, and controllability analysis. The controllability tests described here thus extend the use of parametric quantum circuits to the second pillar of quantum optimal control. Quantum optimal control is also closely related to system characterization where controls can be interleaved with free evolutions [38; 39] or applied continuously [40]. In future work, it will be interesting to study systems with non-local controls, e.g. tunable two-qubit couplings. Moreover, it may be possible to expand our approach to systems other than qubit arrays. 
To this end, the key task will be to find a mapping from the non-qubit system to the associated quantum circuit that runs on a qubit array. The problem of mapping certain dynamics to a quantum circuit has already been a subject of extensive research, for example, when using parametric variational algorithms for calculating the electronic structure of molecules [41; 42] or their quantum dynamics [43]. Finally, an intriguing question is how the removal of redundant controls affects the minimum time at which certain dynamics can be implemented, i.e., the quantum speed limit of the system. A controllable system with a new control added can have the same or a lower minimum time for a state transfer or unitary gate. Conversely, removing redundant controls might incur a higher minimum time. Most likely, quantum device design will have to balance the requirements for controllability and operation speed. ###### Acknowledgements. We gratefully acknowledge financial support from the Einstein Research Foundation (Einstein Research Unit on Near-Term Quantum Devices) and CRC 183 (project C05).
2307.10290
Dynamical dark energy from spacetime-symmetry breaking -- late-time behaviour and phantom crossing
We investigate the late-time cosmological dynamics in a simple case of explicit spacetime-symmetry breaking. By expanding in a small symmetry-breaking coefficient we are able to write the Friedmann equations as $\Lambda$CDM + dynamical dark energy, which we show contains logarithmic dependence of the scale factor. We find that the dark energy equation of state displays divergencies and phantom behaviour for certain values of the symmetry-breaking coefficient, where the NEC is also broken. We discuss the adiabatic sound speed of dark energy and compare the model to current constraints using the Chevallier-Polarski-Linder parametrisation. Remarkably, although the constraints on the same symmetry-breaking coefficient from e.g. gravitational-wave propagation are orders of magnitude stronger than what we obtain in this paper, we are able to cut those constraints, which are more or less symmetric around zero, in half by showing that same coefficient must be negative (or zero) if one wishes to keep the NEC intact.
Nils A. Nilsson
2023-07-18T14:26:48Z
http://arxiv.org/abs/2307.10290v2
Dark-energy properties of a spacetime-symmetry breaking cosmological solution - late-time behaviour and phantom crossing ###### Abstract We investigate the late-time cosmological dynamics in a simple case of explicit spacetime-symmetry breaking. By expanding in a small symmetry-breaking coefficient we are able to write the Friedmann equations as \(\Lambda\)CDM + dynamical dark energy, which we show contains logarithmic dependence of the scale factor. We find that the dark energy equation of state displays divergences and phantom behaviour for certain values of the symmetry-breaking coefficient, where the Null Energy Condition is also broken. We also discuss the adiabatic sound speed of dark energy and compare the model to current constraints using the Chevallier-Polarski-Linder parametrisation. ## I Introduction The accelerating expansion of the Universe was first discovered using type-Ia supernovae [1; 2], and was awarded the Nobel prize in physics in 2011. Since then, significant effort has been put towards revealing the microphysics responsible for the acceleration, which to this day is not understood; this has lead to the term Dark Energy (DE). In the standard \(\Lambda\) Cold-Dark-Matter (\(\Lambda\)CDM) model, the effects of DE are described through the cosmological constant \(\Lambda\), which has negative pressure and becomes dominant once other cosmological fluids have decayed sufficiently, causing the acceleration. There is significant disagreement in the value of the cosmological constant: the difference between values obtained from the Cosmic Microwave Background [3] and quantum field theory calculations of the vacuum energy currently lies around 55 orders of magnitude, which is known as the cosmological constant problem [4]; within the \(\Lambda\)CDM model, DE makes up around 68% of the energy content of the Universe. As with other cosmic fluids, DE can be described using the barotropic index or equation of state parameter \(w\) through \(p=w\rho\), where \(p\) and \(\rho\) is the pressure and energy density, respectively. In the case of a cosmological constant, the equation of state parameter is exactly \(w=-1\), but for more general models, \(w\) may be a function of redshift. In addition to the cosmological constant problem, there is also an issue of fine-tuning of initial conditions known as the coincidence problem [5]. In order to address these outstanding issues, a number of Effective-Field Theories (EFT's) have been proposed throughout the years, usually attempting to replace the cosmological constant with a dynamical scalar field responsible for the effects of DE; amongst these EFT's, the most widely known are the quintessence [6; 7] and k-essence models [8; 1; 9], but many others exist1. DE with \(w<-1\) is known as _phantom dark energy_, the energy density of which increases with time (i.e. it has strongly negative pressure, and thus propagates against the direction of momentum) [11; 12]. If this is actually the case, our Universe may eventually end up in one of several possible future singularities [13]. We may obtain a phantom fluid by reversing the sign of the kinetic term of a scalar field Lagrangian, but it also shows up naturally in certain higher-order theories of gravity [14], Brans-Dicke theories, and scalar-field theories with non-minimal coupling [15]. 
Generally, phantom fields exhibit a number of undesirable features, such as classical or quantum instabilities [16], anisotropy and superluminal propagation [17], or the lack of a Lorentz-invariant vacuum [18; 19]. In theories where Lorentz invariance is allowed to be broken, it may however be possible to render superluminal modes and instabilities unobservable [20]. On the other hand, it has been shown that DE EFT's with \(w>-1\) in the local Universe generally lead to determinations of the Hubble constant which are lower than that of \(\Lambda\)CDM [21; 22], thus exacerbating the mismatch of the Hubble parameter as measured with local probes as compared to its cosmological value, known as the Hubble tension (see for example [23; 24; 25; 26]). Footnote 1: See for example [10] for a review of DE EFT’s. It has been proposed in the literature that theories which break the foundational symmetries of General Relativity (GR) may provide solutions to some of the current cosmological puzzles, including the cosmological constant problem. For example, it was proposed in [26] that dark energy may emerge naturally as a Goldstone field of a broken symmetry in the context of khronometric theories, a notion which was later tested in [27]. A related approach is that of Horava-Lifshitz gravity, which breaks Lorentz symmetry explicitly and which has been shown to contain dynamical DE with a phantom regime (for certain parameter values) [28; 29; 30]. Therefore, we investigate in this paper the DE properties of a simple case of _explicit spacetime-symmetry breaking_ in the form of a correction to the Einstein-Hilbert action [31]. This cosmological solution was found using a generic EFT framework used for testing spacetime symmetries in all sectors of the Standard Model as well as gravity [32; 33; 34], which has been extensively studied in the past decades (see [35] for an annually updated list of constraints). On the level of cosmology, this EFT has been used to study inflation [36; 37], background evolution [38; 31], the Hubble parameter tension [39], metric anisotropies [40], and more. In weak gravity, constraints on the EFT coefficients have been found using solar-system tests [41; 42; 43], short-range gravity [44; 45; 46; 47], pulsar tests [48; 49], gravitational waves [50; 51; 52], and many more. This paper is organised as follows: in Section II we introduce the field theory and the resulting cosmology; in Section III we isolate the effects of the resulting dynamical DE and study its properties; we discuss our results and conclude in Section IV. Throughout this paper we use a standard flat FLRW cosmology with mostly-plus signature, and we fix the cosmological parameters to \(H_{0}=70\) km s\({}^{-1}\) Mpc\({}^{-1}\), \(\Omega_{m}^{0}=0.3\), \(\Omega_{r}^{0}=10^{-4}\). ## II Cosmology with spacetime-symmetry breaking We can write the Lagrange density using the vierbein formalism as \[\mathcal{L}=\frac{e}{2\kappa}[R-2\Lambda+a^{\lambda\mu\nu}T_{\lambda\mu\nu}+b^ {\kappa\lambda\mu\nu}R_{\kappa\lambda\mu\nu}]+\dots. \tag{1}\] where \(R\) is the curvature scalar, \(\Lambda\) is the cosmological constant, \(e\) is the determinant of the vierbein, \(R_{\kappa\lambda\mu\nu}\) is the Riemann tensor, \(T_{\lambda\mu\nu}\) is the torsion tensor, and all dynamical terms are contained within the ellipsis. 
Also present are the quantities \(a^{\lambda\mu\nu}\) and \(b^{\kappa\lambda\mu\nu}\), which are the coefficients parameterising the symmetry breaking; these transform as scalars under so-called particle rotations [34]. We note that when working with some background tensor \(k_{\mu\nu}\) in the spacetime frame, using the vierbein to transform \(k_{\mu\nu}\) to the locally Lorentz frame as \(k_{ab}=e^{\mu}_{a}e^{\nu}_{\ b}k_{\mu\nu}\) results in a _different theory_ compared to using \(k_{\mu\nu}\) to contract directly with fields in the local frame. In the Riemannian limit, the torsion vanishes and we can express the theory using the metric tensor as \[\mathcal{L}=\frac{\sqrt{-g}}{2\kappa}[R-2\Lambda+b^{\kappa\lambda\mu\nu}R_{ \kappa\lambda\mu\nu}]+\dots, \tag{2}\] where the symmetry-breaking term can be decomposed according to the symmetry properties of the Riemann tensor as \[b^{\kappa\lambda\mu\nu}R_{\kappa\lambda\mu\nu}=\underbrace{-uR+s_{\mu\nu}^{( T)}R^{(T)\mu\nu}}_{=s_{\mu\nu}R^{\mu\nu}}+t^{\kappa\lambda\mu\nu}C_{\kappa \lambda\mu\nu}, \tag{3}\] where \(R^{(T)\mu\nu}\) denotes the trace-free Ricci tensor and \(C_{\kappa\lambda\mu\nu}\) the Weyl tensor. The term \(-uR\) represents the trace part of the second term; in this paper, we will consider the term \(s_{\mu\nu}R^{\mu\nu}\) as the source of symmetry breaking, i.e. with the trace intact. We arrive at the field equations by varying the action \(S=\int d^{4}x\mathcal{L}\) using the Lagrange density (2), after which we find \[\begin{split} R_{\mu\nu}&-\tfrac{1}{2}g_{\mu\nu}R+ \Lambda g_{\mu\nu}-\tfrac{1}{2}g_{\mu\nu}R^{\alpha\beta}s_{\alpha\beta}+2R_{( \mu}^{\ \alpha}s_{\nu)\alpha}\\ &+\tfrac{1}{2}\Box s_{\mu\nu}-\nabla_{\alpha}\nabla_{(\mu}s_{\nu )}^{\ \alpha}+\tfrac{1}{2}g_{\mu\nu}\nabla_{\alpha}\nabla_{\beta}s^{\alpha\beta}\\ &=\kappa T_{\mu\nu},\end{split} \tag{4}\] where \(\Box=\nabla_{\lambda}\nabla^{\lambda}\) is the covariant d'Alembertian and parentheses denote symmetrisation of indices; we also emphasise that the quantity \(T_{\mu\nu}\) denotes the stress-energy tensor for the standard matter fields and does not contain any symmetry-breaking terms. The traced Bianchi identities read \[\kappa\nabla_{\mu}T_{\ \nu}^{\mu}=-\tfrac{1}{2}R^{\alpha\beta}\nabla_{\nu}s_{ \alpha\beta}+R^{\alpha\beta}\nabla_{\beta}s_{\alpha\nu}+\tfrac{1}{2}s_{\alpha \nu}\nabla^{\alpha}R, \tag{5}\] which we can write as \(\nabla_{\mu}[kT_{\ \nu}^{\mu}-(T_{s})_{\ \nu}^{\mu}]=0\) where \((T_{s})_{\ \nu}^{\mu}\) is the contribution to the stress energy from terms proportional to \(s_{\mu\nu}\) and its derivatives. By demanding that the total right-hand side of the modified Einstein equations be conserved, i.e. _not_ imposing the usual \(\nabla_{\mu}T_{\ \nu}^{\mu}=0\), we are modifying the cosmological evolution of the matter fields proportional to the coefficients of spacetime-symmetry breaking. It should be noted that if we had imposed the on-shell conservation of \(T_{\ \nu}^{\mu}\) and \((T_{s})_{\ \nu}^{\mu}\) separately, the resulting solution would have contained divergences and other pathological behaviour [31]. It can be shown that the spatial parts of the Bianchi identities can be satisfied by assuming that the symmetry-breaking coefficient \(s_{00}\) is spatially constant in the chosen coordinate system, and we will therefore adopt \(\partial_{i}s_{00}=0\) from now on; for simplicity, we will also assume that it is a constant in time, \(\partial_{0}s_{00}=0\). 
We will further restrict our attention to the case when only one component of the coefficient tensor is non-zero, so we choose the ansatz \[s_{\mu\nu}=\begin{pmatrix}s_{00}&0&0&0\\ 0&0&0&0\\ 0&0&0&0\\ 0&0&0&0\end{pmatrix}. \tag{6}\] Introducing the flat Friedmann-Lemaitre-Robertson-Walker (FLRW) metric as \[ds^{2}=-dt^{2}+a(t)^{2}\left[dx^{2}+dy^{2}+dz^{2}\right], \tag{7}\] we find the modified continuity equation of the form \[\dot{\rho}+3Hf(s_{00},w)\rho=0, \tag{8}\] where \(H\equiv\dot{a}/a\) is the Hubble parameter and \(f(s_{00},w)\) is an auxiliary function defined as \[f(s_{00},w)=\frac{2(1+w-s_{00})}{2+s_{00}(3w-2)}. \tag{9}\] This modification leads to non-standard cosmological evolution of radiation (\(w=1/3\)) and the cosmological constant (\(w=-1\)), which can be seen by plugging the corresponding values of the barotropic index into Eq. (8)2. In terms of the normalised energy densities \(\Omega_{r}^{0}\) and \(\Omega_{\Lambda}^{0}\), the evolution is modified as \[\Omega_{r}^{0}a^{-4}\to\Omega_{r}^{0}a^{-4x_{r}},\quad\Omega_{\Lambda}^{0}\to\Omega_{\Lambda}^{0}a^{-x_{\Lambda}}, \tag{10}\] where \(x_{r}\) and \(x_{\Lambda}\) are functions of \(s_{00}\) and read \(x_{r}=(1-\frac{3}{2}s_{00})/(1-\frac{1}{2}s_{00})\), \(x_{\Lambda}=-3s_{00}/(1-\frac{5}{2}s_{00})\). The change in the evolution is "small", since \(|s_{00}|\) must be much smaller than unity3; nevertheless, the symmetry breaking induces evolution in \(\Lambda\) where previously there was none. An interesting phenomenological consequence of this modification might be its effect on the Hubble tension. Such a tension actually emerges naturally in symmetry-breaking models, as was first discussed in [54]; however, as was shown in [39], the approach we take in this paper does not affect the present Hubble tension. We find the following Friedmann equations Footnote 3: As experiment has determined that Lorentz symmetry holds to very high precision. \[\begin{split} H^{2}&=H_{0}^{2}\left[\Omega_{m}^{0}a^{-3}+\Omega_{r}^{0}a^{-4x_{r}}+\Omega_{\Lambda}^{0}a^{-x_{\Lambda}}+\Omega_{k}^{0}a^{-2}\right],\\ \dot{H}&+H^{2}=H_{0}^{2}\Big{[}-\frac{1}{2}\Omega_{m}^{0}a^{-3}-\Omega_{r}^{0}\frac{2(1-s_{00})}{2-s_{00}}a^{-4x_{r}}\\ &+\Omega_{\Lambda}^{0}\frac{2(1-s_{00})}{2-5s_{00}}a^{-x_{\Lambda}}\Big{]},\end{split} \tag{11}\] where \(H_{0}\) is the value of the Hubble parameter at the present time, and the quantities \(\Omega_{\chi}^{0}\) denote the normalised densities for matter, radiation, cosmological constant, and curvature, respectively. 
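The modified scalings of Eq. (10) and the expansion rate of Eq. (11) are straightforward to evaluate numerically. The following minimal Python sketch is not part of the original analysis; it assumes the fiducial parameters \(H_{0}=70\) km s\({}^{-1}\) Mpc\({}^{-1}\), \(\Omega_{m}^{0}=0.3\), \(\Omega_{r}^{0}=10^{-4}\) quoted in the Introduction and a flat universe (so \(\Omega_{k}^{0}=0\) and \(\Omega_{\Lambda}^{0}=1-\Omega_{m}^{0}-\Omega_{r}^{0}\)), and the function names are ours. It computes the auxiliary function \(f(s_{00},w)\) of Eq. (9), the exponents of Eq. (10), and the resulting \(H(a)\):

```python
import numpy as np

# Fiducial parameters quoted in the Introduction (flatness assumed, Omega_k = 0).
H0 = 70.0                      # km s^-1 Mpc^-1
Om_m, Om_r = 0.3, 1.0e-4
Om_L = 1.0 - Om_m - Om_r

def f(s00, w):
    """Auxiliary function of Eq. (9) entering the modified continuity equation."""
    return 2.0 * (1.0 + w - s00) / (2.0 + s00 * (3.0 * w - 2.0))

def exponents(s00):
    """Modified scaling exponents x_r and x_Lambda of Eq. (10)."""
    x_r = (1.0 - 1.5 * s00) / (1.0 - 0.5 * s00)
    x_L = -3.0 * s00 / (1.0 - 2.5 * s00)
    return x_r, x_L

def hubble(a, s00):
    """Expansion rate from the first Friedmann equation in Eq. (11), flat case."""
    x_r, x_L = exponents(s00)
    return H0 * np.sqrt(Om_m * a**-3 + Om_r * a**(-4.0 * x_r) + Om_L * a**(-x_L))

if __name__ == "__main__":
    s00 = 1.0e-2   # illustrative value only; real bounds on s00 are far tighter
    print("f(s00, w=-1)  =", f(s00, -1.0))      # negative: Lambda-like fluid grows
    print("f(s00, w=1/3) =", f(s00, 1.0 / 3.0)) # radiation still dilutes
    print("x_r, x_Lambda =", exponents(s00))
    for a in (0.5, 1.0, 2.0):
        print(f"H(a={a}) = {hubble(a, s00):.2f} km/s/Mpc")
```

For the illustrative value \(s_{00}=10^{-2}\) the sketch gives \(f(s_{00},-1)<0\), i.e. the \(\Lambda\)-like component slowly gains energy as the Universe expands, which is precisely the behaviour behind the NEC discussion of Section III.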
## III Dark energy Since any spacetime-symmetry breaking must be small, we expand the Friedmann equations (11) to second order in \(s_{00}\), after which they read \[\begin{split} H^{2}&=H_{0}^{2}\Big{[}\Omega_{m}^{0}a^{-3}+\Omega_{r}^{0}a^{-4}+\Omega_{\Lambda}^{0}+\Omega_{k}^{0}a^{-2}\\ &+s_{00}(\Omega_{r}^{0}a^{-4}\ln a+3\Omega_{\Lambda}^{0}\ln a)+s_{00}^{2}(\frac{1}{2}\Omega_{r}^{0}(\ln a\\ &+\ln a^{2})a^{-4}+\frac{3}{2}\Omega_{\Lambda}^{0}(5\ln a+3\ln a^{2}))\Big{]}\\ \dot{H}&+H^{2}=H_{0}^{2}\Big{[}-\frac{1}{2}\Omega_{m}^{0}a^{-3}-\Omega_{r}^{0}a^{-4}+\Omega_{\Lambda}^{0}\\ &+s_{00}(\frac{1}{2}\Omega_{r}^{0}(1-2\ln a)a^{-4}+\frac{3}{2}\Omega_{\Lambda}^{0}(1+2\ln a))\\ &+s_{00}^{2}(\frac{1}{4}\Omega_{r}^{0}(1-2\ln a^{2})a^{-4}+\frac{3}{4}\Omega_{\Lambda}^{0}(5+16\ln a\\ &+6\ln a^{2}))\Big{]},\end{split} \tag{12}\] and we see that the effects of the modified continuity equation can be represented by standard \(\Lambda\)CDM cosmology plus a dynamical dark-energy term with logarithmic dependence on the scale factor; for example, we can write the first equation as \[H^{2}=H_{0}^{2}[\Omega_{m}^{0}a^{-3}+\Omega_{r}^{0}a^{-4}+\Omega_{k}^{0}a^{-2}+\Omega_{\rm DE}(a)], \tag{13}\] where \(\Omega_{\rm DE}(a)\) represents all symmetry-breaking terms along with the cosmological constant. We find the energy density \(\rho_{\rm DE}\) and pressure \(p_{\rm DE}\) of the dark energy as \[\begin{split}\rho_{\rm DE}&=\frac{H_{0}^{2}}{\kappa}\Big{[}\Omega_{\Lambda}^{0}+s_{00}(\Omega_{r}^{0}a^{-4}\ln a+4\Omega_{\Lambda}^{0}\ln a)\\ &+s_{00}^{2}(\frac{1}{2}\Omega_{r}^{0}a^{-4}(\ln a+\ln a^{2})\\ &+\frac{3}{2}\Omega_{\Lambda}^{0}(5\ln a+3\ln a^{2}))\Big{]}\\ p_{\rm DE}&=-\frac{H_{0}^{2}}{\kappa}\Big{[}3\Omega_{\Lambda}^{0}+s_{00}(\Omega_{r}^{0}a^{-4}+3\Omega_{\Lambda}^{0}-\Omega_{r}^{0}a^{-4}\ln a\\ &+9\Omega_{\Lambda}^{0}\ln a)+s_{00}^{2}\frac{1}{2}(\Omega_{r}^{0}a^{-4}+15\Omega_{\Lambda}^{0}+(\Omega_{r}^{0}a^{-4}\\ &+63\Omega_{\Lambda}^{0})\ln a-(\Omega_{r}^{0}a^{-4}-27\Omega_{\Lambda}^{0})\ln a^{2})\Big{]},\end{split} \tag{14}\] from which we obtain the dark-energy equation of state parameter \(w_{\rm DE}=p_{\rm DE}/\rho_{\rm DE}\), which can be found in Appendix A4; it can easily be checked that \(w_{\rm DE}\to-1\) as \(s_{00}\to 0\). For small values of \(a\), \(w_{\rm DE}\) mimics that of radiation, with a formal limit \(w_{\rm DE}\to 1/3\) as \(a\to 0\). As \(a\) increases, we see that there exists a divergence when \(s_{00}\) is positive, after which \(w_{\rm DE}\) settles down to a value close to minus one, i.e. almost pure cosmological constant. For negative values of \(s_{00}\), the transition in \(w_{\rm DE}\) is smooth, and shows no divergent behaviour. We plot the behaviour of \(w_{\rm DE}\) in Figure 1 for different values of \(s_{00}\). Footnote 4: Similar equations of state, with logarithmic dependence on the scale factor, were found in [55] and [56]. Although not easily visible in Figure 1, \(w_{\rm DE}\) only reaches Figure 1: The dark-energy equation of state for different values of the coefficient \(s_{00}\). Negative values can be seen to ensure a smooth transition to \(w_{\rm DE}\approx-1\). ### The CPL parametrisation The Chevallier-Polarski-Linder parametrisation of the DE equation of state is one of the standard tools used to represent the unknown properties of DE in the late Universe [57; 58]. 
For values of the scale factor close to the value at the present day (\(a_{0}=1\)), we may expand the equation of state for dark energy as \[w_{\rm DE}=w_{0}+w_{a}(1-a)+w_{b}(1-a)^{2}+\dots, \tag{15}\] and by expanding \(w_{\rm DE}\) (given in Appendix A) around \(a_{0}=1\) we find that \[\begin{split} w_{0}=&-1-s_{00}\left(1+\frac{\Omega_{r}^{0}}{3\Omega_{\Lambda}^{0}}\right)-s_{00}^{2}\frac{\Omega_{r}^{0}+15\Omega_{\Lambda}^{0}}{6\Omega_{\Lambda}^{0}}\\ w_{a}=&-s_{00}\frac{8\Omega_{r}^{0}}{\Omega_{\Lambda}^{0}}-s_{00}^{2}\frac{\Omega_{r}^{0}(\Omega_{r}^{0}+9\Omega_{\Lambda}^{0})}{3(\Omega_{\Lambda}^{0})^{2}},\end{split} \tag{16}\] where in \(w_{a}\) we have excluded terms of higher than second order in \(s_{00}\). These expressions can be used to place direct limits on \(s_{00}\) from data. One of the more recent analyses [59] used a combination of the final Planck 2018 data release, Baryon Acoustic Oscillation measurements (BAO), and the Cosmic Distance Ladder (CDL) calibrated with Cepheid variable stars. The results revealed that to \(1\sigma\), \(w_{0}\) is distinctly negative (\(w_{0}=-1\) and \(w_{a}=0\) gives a pure cosmological constant), and that \(w_{a}\) is consistent with zero. The exact values are given in Table 1, and given these values of the CPL parameters together with our choice of fiducial cosmology we find that the upper limits on \(w_{0}\) are all consistent only with complex values of \(s_{00}\) (i.e. no real solution exists), and we find the same for \(w_{a}\), except for the case of Planck+CDL, where we find two solutions of the order \(s_{00}\sim\pm 10\), which lies beyond the accuracy of our approximations. We find similar results for all other cases, with the exception of the lower limit of \(w_{0}\) Planck+BAO, which yields \(s_{00}\gtrsim-0.01\). These results show little sensitivity to a \(\pm 10\%\) change in \(\Omega_{m}^{0}\). ### Adiabatic sound speed Assuming for a moment that the pressure \(p_{\rm DE}\) depends on the entropy \(S\) and energy density \(\rho_{\rm DE}\), a generic variation can be written as \(\delta p_{\rm DE}(S,\rho_{\rm DE})=(\partial p/\partial S)\delta S+(\partial p/\partial\rho_{\rm DE})\delta\rho_{\rm DE}\). We can rewrite this as \(\delta p_{\rm DE}=\delta p_{\rm na}+c_{a}^{2}\delta\rho_{\rm DE}\), where \(\delta p_{\rm na}\) is the non-adiabatic perturbation related to a variation in the entropy \(S\), and \(c_{a}^{2}\) is the adiabatic sound speed, which can be written as \[c_{a}^{2}=\frac{\dot{p}_{\rm DE}}{\dot{\rho}_{\rm DE}}. \tag{17}\] This definition follows naturally by considering the behaviour of \(\delta p_{\rm DE}\) and \(\delta\rho_{\rm DE}\) under the gauge transformation \(t\to t-\delta t\), \(\delta\rho_{\rm DE}\to\delta\rho_{\rm DE}+\dot{\rho}_{\rm DE}\delta t\), \(\delta p_{\rm DE}\to\delta p_{\rm DE}+\dot{p}_{\rm DE}\delta t\), where only the definition of \(c_{a}^{2}\) leaves \(\delta p_{\rm na}\) gauge invariant [60; 61]. Since the adiabatic sound speed can be written using only background quantities, we can find it without resorting to perturbation theory; the result is rather lengthy and can be found in Appendix A. We find that for small values of the scale factor, \(c_{a}^{2}\to 1/3\), and then increases slightly before relaxing smoothly down to a value close to minus unity for larger values of \(a\), as can be seen in Figure 3. This behaviour is similar to that of IR-modified Horava-Lifshitz gravity, which has also been shown to contain a type of dynamical dark energy, but where \(c_{a}^{2}\) flows from \(+1/3\to-1/3\)[28; 30]. 
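The mapping of Eq. (16) between \(s_{00}\) and the CPL parameters is simple enough to evaluate and invert numerically. The Python sketch below is illustrative only (it uses the fiducial densities quoted in the Introduction, and the function names are ours); inverting the quadratic \(w_{0}(s_{00})\) also makes explicit the statement above that some of the quoted limits on \(w_{0}\) correspond to complex, i.e. unphysical, values of \(s_{00}\):

```python
import numpy as np

# Fiducial densities quoted in the Introduction.
Om_r = 1.0e-4
Om_L = 1.0 - 0.3 - Om_r

def cpl_from_s00(s00):
    """CPL parameters (w0, wa) from Eq. (16), to second order in s00."""
    w0 = (-1.0 - s00 * (1.0 + Om_r / (3.0 * Om_L))
          - s00**2 * (Om_r + 15.0 * Om_L) / (6.0 * Om_L))
    wa = (-s00 * 8.0 * Om_r / Om_L
          - s00**2 * Om_r * (Om_r + 9.0 * Om_L) / (3.0 * Om_L**2))
    return w0, wa

def s00_from_w0(w0):
    """Invert the quadratic w0(s00) of Eq. (16); complex roots signal that
    no real s00 reproduces the quoted value of w0."""
    a = (Om_r + 15.0 * Om_L) / (6.0 * Om_L)
    b = 1.0 + Om_r / (3.0 * Om_L)
    c = w0 + 1.0
    return np.roots([a, b, c])   # solves a*s00^2 + b*s00 + c = 0

if __name__ == "__main__":
    print(cpl_from_s00(-1.0e-2))   # small negative coefficient
    print(s00_from_w0(-0.67))      # Planck+BAO central value from Table 1 -> complex
    print(s00_from_w0(-1.21))      # Planck central value from Table 1 -> real roots
```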
We note that a negative adiabatic sound speed is not a problem, as it does not describe the propagation speed, but rather the relative change between the pressure and the density. Figure 3: The adiabatic sound speed for different values of the coefficient \(s_{00}\). \begin{table} \begin{tabular}{l c c c} \hline \hline Parameter & Planck & Planck+BAO & Planck+CDL \\ \hline \(w_{0}\) & \(-1.21^{+0.33}_{-0.60}<-0.37\) & \(-0.67\pm 0.32\) & \(-0.89^{+0.32}_{-0.16}\) \\ \(w_{a}\) & \(<-0.85<0.71\) & \(-1.05^{+0.99}_{-0.77}\) & \(<-1.04<0.47\) \\ \hline \hline \end{tabular} \end{table} Table 1: Constraints on the CPL parameters from [59]. Figure 2: The dark energy equation of state as a function of the coefficient \(s_{00}\) with the scale factor fixed to \(a=50\). A negative value of \(s_{00}\) is necessary in order to avoid phantom behaviour. ### Null-energy condition The Null-Energy Condition (NEC) plays an important role in general relativity, where it is an ingredient in the Hawking-Penrose singularity theorem and the positive mass theorem. For a causal and Lorentz-invariant scalar-field theory, imposing the NEC is sufficient to guarantee stability, and NEC violation is often used as an indication of phantom behaviour. It states that for any null vector \(k^{\mu}\), the stress-energy tensor should satisfy \[T_{\mu\nu}k^{\mu}k^{\nu}\geq 0, \tag{18}\] and can be interpreted as a condition of causality in the theory; the equivalent condition reads \(\rho+p\geq 0\). Since we are working with a model where local Lorentz invariance is broken explicitly, there is no guarantee that the dynamical dark energy discussed here will uphold causality, and therefore the NEC may be in jeopardy. We check this explicitly by plotting \(\rho_{\rm DE}+p_{\rm DE}\) for different values of the coefficient \(s_{00}\), which can be seen in Figure 4, after which it becomes clear that \(s_{00}\)_needs to be negative_ for the NEC to hold. We can come to the same conclusion by studying the modified continuity equation (8), where the NEC implies that the energy density cannot increase as long as the Universe is expanding. In order for the NEC to hold here, the auxiliary function \(f(s_{00},w)\) must have the correct sign, which can easily be shown to occur only for \(s_{00}<0\) when \(w=-1\) and for all values of \(s_{00}\) when \(w=1/3\). ## IV Conclusions In this paper, we have investigated the DE properties of a cosmological solution featuring a simple type of spacetime-symmetry breaking. By writing down the modified Friedmann equations and considering the non-\(\Lambda\)CDM contributions as a type of dynamical DE, we identified the effective DE equation of state \(w_{\rm DE}\). We found that \(w_{\rm DE}\) is singular at small values of the scale factor \(a\) when the spacetime-symmetry breaking coefficient \(s_{00}\) is positive, but exhibits smooth evolution for negative values of the same; we also found that the DE is phantom for large \(a\) if \(s_{00}>0\). Further, we identified the CPL parameters of our DE model and concluded that current bounds on \(w_{0}\) and \(w_{a}\) cannot be used to place competitive constraints on spacetime-symmetry breaking. We also investigated the adiabatic sound speed, which shows no discontinuities for any value of \(s_{00}\), but we concluded that the NEC is broken for \(s_{00}>0\). 
The main take-home message from this analysis is that in this specific realisation of explicit breaking, the EFT coefficient \(s_{00}\) needs to be negative if one wants to avoid the issues normally present in phantom-like cosmological fluids. A negative \(s_{00}\) would have consequences in other contexts: for example, \(s_{00}\) is related to the propagation speed of gravitational waves, as was discussed in [37], where the bound \(-6\cdot 10^{-15}<s_{00}<+7\cdot 10^{-16}\) was obtained from the observation of GW170817 and GRB170817A, and restricting \(s_{00}\) to negative values implies that the propagation speed of tensor modes is less than unity, i.e. _slower_ than light. Generalising the initial ansatz (6) will necessarily alter these predictions. ###### Acknowledgements. The author was financed by CNES and acknowledges support by PSL/Observatoire de Paris. The author also thanks Eoin O Colgain and Quentin G. Bailey for useful comments. ## V Appendices ## Appendix A Explicit expressions for \(w_{\rm DE}\) and \(c_{a}^{2}\) The expressions for the equation of state and adiabatic sound speed are \[w_{\rm DE}=\frac{-3a^{4}(s_{00}(5s_{00}+2)+2)\Omega_{\Lambda}^{0}+s_{00}\ln(a)\left(-9a^{4}(7s_{00}+2)\Omega_{\Lambda}^{0}+s_{00}\ln(a)\left(\Omega_{r}^{0}-27a^{4}\Omega_{\Lambda}^{0}\right)-(s_{00}-2)\Omega_{r}^{0}\right)-s_{00}(s_{00}+2)\Omega_{r}^{0}}{3s_{00}\ln(a)\left(3a^{4}(5s_{00}+2)\Omega_{\Lambda}^{0}+s_{00}\ln(a)\left(9a^{4}\Omega_{\Lambda}^{0}+\Omega_{r}^{0}\right)+(s_{00}+2)\Omega_{r}^{0}\right)+6a^{4}\Omega_{\Lambda}^{0}}, \tag{19}\] and \[c_{a}^{2}=\frac{1}{3}-\frac{2\left(-39a^{4}s_{00}\Omega_{\Lambda}^{0}-36a^{4}s_{00}\Omega_{\Lambda}^{0}\ln(a)-12a^{4}\Omega_{\Lambda}^{0}+4s_{00}\Omega_{r}^{0}\ln(a)+s_{00}\Omega_{r}^{0}+4\Omega_{r}^{0}\right)}{3\left(-15a^{4}s_{00}\Omega_{\Lambda}^{0}-18a^{4}s_{00}\Omega_{\Lambda}^{0}\ln(a)-6a^{4}\Omega_{\Lambda}^{0}+4s_{00}\Omega_{r}^{0}\ln^{2}(a)+2s_{00}\Omega_{r}^{0}\ln(a)+8\Omega_{r}^{0}\ln(a)-s_{00}\Omega_{r}^{0}-2\Omega_{r}^{0}\right)}. \tag{20}\] Figure 4: The Null Energy Condition (NEC) in arbitrary units, which can be seen to be violated for positive values of the coefficient \(s_{00}\).
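As a quick sanity check on the closed-form expressions above, the following Python sketch (not from the original work; the densities and coefficient values are illustrative, and the function names are ours) evaluates Eqs. (19) and (20) as printed. It reproduces the limits quoted in Section III, namely \(w_{\rm DE}\to-1\) as \(s_{00}\to 0\) and \(w_{\rm DE}\to 1/3\) for small \(a\), and locates the divergence of \(w_{\rm DE}\) that appears when \(s_{00}>0\):

```python
import numpy as np

Om_r = 1.0e-4
Om_L = 1.0 - 0.3 - Om_r   # fiducial densities from the text

def w_de(a, s00):
    """Dark-energy equation of state, Eq. (19), evaluated as printed."""
    L = np.log(a)
    num = (-3 * a**4 * (s00 * (5 * s00 + 2) + 2) * Om_L
           + s00 * L * (-9 * a**4 * (7 * s00 + 2) * Om_L
                        + s00 * L * (Om_r - 27 * a**4 * Om_L)
                        - (s00 - 2) * Om_r)
           - s00 * (s00 + 2) * Om_r)
    den = (3 * s00 * L * (3 * a**4 * (5 * s00 + 2) * Om_L
                          + s00 * L * (9 * a**4 * Om_L + Om_r)
                          + (s00 + 2) * Om_r)
           + 6 * a**4 * Om_L)
    return num / den

def c_a2(a, s00):
    """Adiabatic sound speed, Eq. (20), evaluated as printed."""
    L = np.log(a)
    num = (-39 * a**4 * s00 * Om_L - 36 * a**4 * s00 * Om_L * L
           - 12 * a**4 * Om_L + 4 * s00 * Om_r * L + s00 * Om_r + 4 * Om_r)
    den = (-15 * a**4 * s00 * Om_L - 18 * a**4 * s00 * Om_L * L
           - 6 * a**4 * Om_L + 4 * s00 * Om_r * L**2 + 2 * s00 * Om_r * L
           + 8 * Om_r * L - s00 * Om_r - 2 * Om_r)
    return 1.0 / 3.0 - 2.0 * num / (3.0 * den)

if __name__ == "__main__":
    print(w_de(1.0, 1e-12))    # ~ -1: cosmological-constant limit as s00 -> 0
    print(w_de(1e-6, 1e-2))    # approaches the radiation value 1/3 for small a
    print(c_a2(1.0, 1e-12))    # ~ -1 today, after flowing from ~1/3 at early times
    # For s00 > 0 the denominator of w_DE changes sign at some scale factor,
    # producing the divergence discussed in Section III; a crude scan finds it.
    aa = np.logspace(-4, 0, 4000)
    w = w_de(aa, 1e-2)
    jumps = np.where(np.abs(np.diff(w)) > 10.0)[0]
    if jumps.size:
        print("w_DE diverges near a ~", round(float(aa[jumps[0]]), 3))
```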
2310.15727
Towards Assume-Guarantee Verification of Strategic Ability
Formal verification of strategic abilities is a hard problem. We propose to use the methodology of assume-guarantee reasoning in order to facilitate model checking of alternating-time temporal logic with imperfect information and imperfect recall.
Łukasz Mikulski, Wojciech Jamroga, Damian Kurpiewski
2023-10-24T11:02:39Z
http://arxiv.org/abs/2310.15727v1
# Towards Assume-Guarantee Verification of Strategic Ability ###### Abstract. Formal verification of strategic abilities is a hard problem. We propose to use the methodology of assume-guarantee reasoning in order to facilitate model checking of alternating-time temporal logic with imperfect information and imperfect recall. model checking, assume-guarantee reasoning, strategic ability + Footnote †: journal: Information Systems ## 1. Introduction _Alternating-time temporal logic_\(\mathbf{ATL}^{*}\)(Becker, 1998) **Composition of Modules**. The model of a MAS is given by the asynchronous composition \(M=M^{(1)}|\ldots|M^{(n)}\) that combines modules \(M^{(1)},\ldots,M^{(n)}\) into a single module \(M\)(Makarov and Krakulov, 2015). The composition is standard; it only requires the compliance of the valuations. **Traces and Words.** A trace of a module \(M\) is an infinite sequence of alternating states and transitions \(\sigma=q_{0}\alpha_{0}q_{1}\alpha_{1}\ldots\), where \(q_{0}\) is the initial state and \((q_{i},\alpha_{i},q_{i+1})\in T\) for every \(i\in\mathbb{N}\). An infinite word \(w=v_{0}v_{1}\ldots\in(D^{X})^{\omega}\) is _derived_ by module \(M\) with trace \(\sigma=q_{0}\alpha_{0}q_{1}\alpha_{1}\ldots\) if \(v_{i}=\lambda(q_{i})\) for all \(i\in\mathbb{N}\). An infinite word \(u=u_{0}u_{1}\ldots\in(D^{I})^{\omega}\) is _admitted_ by \(M\) with \(\sigma=q_{0}\alpha_{0}q_{1}\alpha_{1}\ldots\). Finally, \(w\) (resp. \(u\)) is derived (resp. admitted) by \(M\) if there exists a trace of \(M\) that derives (resp. admits) it. ## 3. What Agents Can Achieve _Alternating-time temporal logic_ ATL\({}^{*}\)(Bach et al., 2010; Chen et al., 2011) introduces _strategic modalities_\(\langle\!\langle C\rangle\!\rangle\gamma\), expressing that coalition \(C\) can enforce the temporal property \(\gamma\). In this paper, we use the _imperfect information/imperfect recall_ variant without next step operator X and nested strategic modalities, denoted \(\mathsf{sATL}^{*}\) ("simple \(\mathsf{ATL}^{*}\)"). **Syntax**. Formally, the syntax of \(\mathsf{sATL}^{*}\) is defined by: \[\phi:=p(Y)\mid\neg\phi\mid\phi\wedge\phi\mid\langle\!\langle C\rangle\!\rangle\gamma;\quad\gamma:=p(Y)\mid\neg\gamma\mid\gamma\wedge\gamma\mid\gamma\,\mathsf{U}\,\gamma\] where \(p:Y\to D\) for some subset of domain variables \(Y\subseteq X\). That is, each atomic statement refers to the valuation of a subset of variables used in the system. U is the "strong until" operator of LTL. 
The "sometime" and "always" operators F and G can be defined as usual by \(\mathsf{F}\gamma\sqsubseteq\top\,\mathsf{U}\gamma\) and \(\mathsf{G}\gamma\sqsubseteq\neg(\top\,\mathsf{U}\neg\gamma)\). **Semantics**. A _memoryless imperfect information strategy_ for agent \(i\) is a function \(s_{i}:Q_{i}\to T_{i}\). We say that a trace \(\sigma\) (word derived with \(\sigma\)) _implements_ a strategy \(s_{i}\) if for any \(j\) where \(q_{j}^{(i)}\neq q_{j+1}^{(i)}\) we have \(s_{i}(q_{j}^{(i)})=(q_{j}^{(i)},\alpha_{j},q_{j+1}^{(i)})\), where \(\alpha_{j}:I_{i}\to D\) and \(\alpha_{j}(x)=\lambda(q_{j})(x)\). Let \(C\subseteq\{1,\ldots,n\}\) be a set of agent indices. We define _joint strategies_ for \(C\) as tuples of individual strategies, one per \(i\in C\). The semantics of strategic operators is given by the following clause: \[M,q\models\langle\!\langle C\rangle\!\rangle_{Y}\] if there exists a joint strategy \(s_{C}\) for \(C\) such that, for any word \(w\) that implements \(s_{C}\), we have \(M,w\models\gamma\). ## 4. Assumptions and Guarantees We propose an assume-guarantee scheme, where one can reduce the complexity of model checking \(\mathsf{sATT}^{*}\) by verifying individual strategic abilities of single agents against overapproximating abstractions of its environment, i.e., the rest of the system. The general idea is that if an agent has a successful strategy in a more nondeterministic environment, then it can use the same strategy to succeed in the original model. Moreover, it often suffices to prepare the abstraction based only of the modules that are connected with the agent by at most \(k\) synchronization steps. **Assumptions and Guarantees**. The environmental abstractions are formalized by _assumptions_\(A=(M_{A},F)\), where \(M_{A}\) is a module and \(F\) is a set of accepting states that provide Buchi-style accepting rules for infinite traces derived by \(M\). The assumption should be constructed so that it _guarantees_ that the set of computations accepted by \(A\) covers the sequences of changes in the input variables \(I_{M}\) of module \(M\). We capture those changes by the notion of _curtailment_. Formally, a sequence \(v=v_{1}v_{2}\ldots\) over \(D^{Y}\) is a curtailment of sequence \(u=u_{1}u_{2}\ldots\) over \(D^{X}\) (where \(Y\subseteq X\)) if there exists an infinite sequence of indices \(j_{1}<j_{2}<\ldots\) with \(j_{1}=1\) such that \(\forall_{i}\forall_{j_{1}\leq k<j_{1}}v_{i}=u_{k}\|_{Y}\). **The Scheme.** Let \(M=M_{1}|M_{2}|\ldots|M_{n}\) be a system composed from modules \(M_{1},M_{2},\ldots,M_{n}\), where \(X_{M_{i}}\cap X_{M_{j}}=\varnothing\) for \(i\neq j\). By \(Comp_{i}^{1}\) we denote the composition of all modules directly related to \(M_{i}\). Moreover, \(Comp_{i}^{k}\) denotes the composition of the modules in \(Comp_{i}^{k-1}\) and the modules directly related to them (except for \(M_{i}\)). Further, let \(\psi_{i},i\in C\) be path formulas of \(\mathsf{sATT}^{*}\), one for each agent in \(C\). Simple assume-guarantee reasoning for strategic ability is provided by the following inference rule: \[\begin{array}{c c}&\forall_{i\in C}\,M_{i}|_{A}\models_{i\prime}\langle\! \langle i\rangle\!\rangle_{\psi_{i}}\\ \mathbf{R_{k}}&\forall_{i\in C}\,Comp_{i}^{k}\models A_{i}\\ \hline M_{1}|...|M_{n}\models_{i\prime}\langle\!\langle C\rangle\!\rangle_{ \bigwedge_{i\in C}\psi_{i}}\end{array}\] ## 5. 
Experiments Here, we present preliminary experimental results for the assume-guarantee rule proposed in Section 4, using the voting scenario of Example 2.2 as the benchmark. The assumptions are provided by a simplified module of the coercer, where he only waits for the value reported by \(Voter_{1}\), no matter how he reacts to other voters' choices. The algorithms have been implemented in Python, and run on a server with 2.40 GHz Intel Xeon Platinum 8260 CPU, 991 GB RAM, and 64-bit Linux. The verified formula was \(\varphi\equiv(\langle Voter_{1}\rangle)\mathsf{G}(\neg\mathsf{status}_{1} \lor\mathsf{voted}_{1}=1)\). The results are presented in Table 1. The first column describes the configuration of the benchmark, i.e., the number of the voters. Then, we report the performance of model checking algorithms that operate on the explicit model of the whole system vs. assume-guarantee verification. _DFS_ is a straightforward implementation of depth-first strategy synthesis. _Approx_ refers to the method of fixpoint-approximation (Krishnan et al., 2015); besides the time, we also report if the approximation was conclusive. ## 6. Conclusion In this paper, we sketch how assume-guarantee reasoning can be extended for verification of strategic abilities. The main idea is to factorize coalitional abilities by the abilities of the coalition members, and to verify the individual abilities against Buchi-style abstractions of the agents' environment of action. Preliminary experimental evaluation has produced very promising results, showing noticeable improvement in the verification of large models consisting of asynchronous agents with independent goals. \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{**V**} & \multicolumn{3}{c|}{**Monolithic model checking**} & \multicolumn{3}{c|}{**Assume-guarantee verification**} \\ \cline{2-9} & **\#st** & **\#tr** & **DFS** & **Approx** & **\#st** & **\#tr** & **DFS** & **Approx** \\ \hline 2 & 529 & 2216 & \textless{}0.1 & \textless{}0.1/\textless{}Yes & 161 & 528 & \textless{}0.1 & \textless{}0.1/\textless{}Yes \\ \hline 3 & 1.22e4 & 1.28e5 & \textless{}0.1 & \textless{}0.8/\textless{}Yes & 1127 & 7830 & \textless{}0.1 & \textless{}0.1/\textless{}Yes \\ \hline 4 & 2.79e5 & 6.73e6 & \textless{}0.1 & \textless{}0.30/\textless{}Yes & 7889 & \textless{}1.08e5 & \textless{}0.1 & \textless{}0.5/\textless{}Yes \\ \hline 5 & 6.43e6 & 3.42e8 & timeout & 5.52e4 & 1.45e6 & \textless{}0.1 & \textless{}0.1 & \textless{}0.7/\textless{}Yes \\ \hline \hline \multirow{2}{*}{**V**} & \multicolumn{3}{c|}{timeout} & \multicolumn{3}{c|}{timeout} & \multicolumn{3}{c|}{timeout} \\ \cline{2-9} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} \\ \hline \end{tabular} \end{table} Table 1. Results of assume-guarantee verification for simple voting (times given in seconds; timeout-2h) ## Acknowledgments We acknowledge the support of the National Centre for Research and Development, Poland (NCBR), and the Luxembourg National Research Fund (FNR), under the PolLux/FNR-CORE project STV (POLLUX-VII/1/2019 - C18/IS/12685695/IS/STV/Ryan).
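To give a flavour of the strategy-synthesis step evaluated above, the following Python sketch performs a brute-force search for a memoryless imperfect-information strategy enforcing a safety objective \(\mathsf{G}\,\neg\mathsf{bad}\) on a tiny hand-made model. It is only an illustration of the kind of search that _DFS_ performs; the model, the state and action names, and the encoding are ours and are not taken from the voting benchmark:

```python
from itertools import product

# Toy single-agent model (invented for illustration, not the paper's benchmark).
# Imperfect information: q1 and q2 yield the same observation 'o1', so a
# memoryless strategy must pick the same action in both.
OBS = {'q0': 'o0', 'q1': 'o1', 'q2': 'o1', 'q3': 'o2'}
ACTS = {'o0': ['a', 'b'], 'o1': ['a', 'b'], 'o2': ['a']}
TRANS = {('q0', 'a'): {'q1'}, ('q0', 'b'): {'q2'},
         ('q1', 'a'): {'q0'}, ('q1', 'b'): {'q3'},
         ('q2', 'a'): {'q3'}, ('q2', 'b'): {'q0'},
         ('q3', 'a'): {'q3'}}
INIT, UNSAFE = 'q0', {'q3'}

def reachable(strategy):
    """States reachable from INIT when the agent follows `strategy`
    (a map observation -> action); environment nondeterminism is universal."""
    seen, frontier = set(), [INIT]
    while frontier:
        q = frontier.pop()
        if q in seen:
            continue
        seen.add(q)
        frontier.extend(TRANS[(q, strategy[OBS[q]])])
    return seen

def synthesize_safety():
    """Brute-force search over memoryless imperfect-information strategies for
    the objective G(not bad): every reachable state must stay outside UNSAFE."""
    observations = sorted(ACTS)
    for choice in product(*(ACTS[o] for o in observations)):
        strategy = dict(zip(observations, choice))
        if reachable(strategy).isdisjoint(UNSAFE):
            return strategy
    return None

if __name__ == "__main__":
    print(synthesize_safety())   # {'o0': 'a', 'o1': 'a', 'o2': 'a'}
```

States q1 and q2 share the observation o1, so the strategy must pick the same action in both; the search returns the uniform strategy shown in the final comment, which confines the run to the safe states q0 and q1.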
2310.12048
Two-Dimensional Noble Metal Chalcogenides in the Frustrated Snub-Square Lattice
We study two-dimensional noble metal chalcogenides, with composition {Cu, Ag, Au}2{S, Se, Te}, crystallizing in a snub-square lattice. This is a semi-regular two-dimensional tesselation formed by triangles and squares that exhibits geometrical frustration. We use for comparison a square lattice, from which the snub-square tiling can be derived by a simple rotation of the squares. The mono-layer snub-square chalcogenides are very close to thermodynamic stability, with the most stable system (Ag2Se) a mere 7 meV/atom above the convex hull of stability. All compounds studied in the square and snub-square lattice are semiconductors, with band gaps ranging from 0.1 to more than 2.5 eV. Excitonic effects are strong, with an exciton binding energy of around 0.3 eV. We propose the Cu (001) surface as a possible substrate to synthesize Cu2Se, although many other metal and semiconducting surfaces can be found with very good lattice matching.
Hai-Chen Wang, Ahmad W. Huran, Miguel A. L. Marques, Muralidhar Nalabothula, Ludger Wirtz, Zachary Romestan, Aldo H. Romero
2023-10-18T15:34:28Z
http://arxiv.org/abs/2310.12048v1
# Two-Dimensional Noble Metal Chalcogenides in the Frustrated Snub-Square Lattice ###### Abstract We study two-dimensional noble metal chalcogenides, with composition \(\{\mathrm{Cu},\mathrm{Ag},\mathrm{Au}\}_{2}\{\mathrm{S},\mathrm{Se},\mathrm{Te}\}\), crystallizing in a snub-square lattice. This is a semi-regular two-dimensional tesselation formed by triangles and squares that exhibits geometrical frustration. We use for comparison a square lattice, from which the snub-square tiling can be derived by a simple rotation of the squares. The mono-layer snub-square chalcogenides are very close to thermodynamic stability, with the most stable system (Ag\({}_{2}\)Se) a mere 7 meV/atom above the convex hull of stability. All compounds studied in the square and snub-square lattice are semiconductors, with band gaps ranging from 0.1 to more than 2.5 eV. Excitonic effects are strong, with an exciton binding energy of around 0.3 eV. We propose the Cu (001) surface as a possible substrate to synthesize Cu\({}_{2}\)Se, although many other metal and semiconducting surfaces can be found with very good lattice matching. pacs: ## I Introduction In the two-dimensional (2D) world, the plane has eleven different Euclidean tesselations using convex regular polygons. These symmetrical motifs have fascinated mankind for centuries and have been used as decorative elements since Roman and Islamic times or, more recently, in the beautiful work of M. C. Escher. Of these eleven, three are regular and are characterized by the number of edges meeting at each vertex, which can be either six (in the triangular lattice), three (in the hexagonal lattice), or four (in the square lattice). Some of the most notable materials in the atomic 2D world, such as graphene, the transition metal dichalcogenides, black phosphorus, etc., belong to this family. The remaining 8 lattices are semi-regular and are constructed from more than one regular polygon. The trihexagonal tiling is perhaps the most studied semi-regular tesselation, often called the Kagome lattice, due to its use in traditional Japanese basketry. This motif can be found in the layers of some naturally occurring minerals, and the presence of the equilateral triangles leads to a geometrical frustration responsible for an exotic behavior of the electronic and magnetic properties. For example, kagome compounds, such as Fe\({}_{3}\)Sn\({}_{2}\),[1; 2] FeSn,[3] YMn\({}_{6}\)Sn\({}_{6}\),[4] or CoSn[5] can exhibit Dirac cones and flat bands. Recently, a kagome material, KV\({}_{3}\)Sb\({}_{5}\), was found to have an unconventional chiral charge order, with a topological band structure and a superconducting ground state.[6] Here we are concerned by another, much less studied, semi-regular lattice, specifically the snub-square tiling. This tesselation consists of regular squares and triangles of matching edges, arranged so that exactly five edges meet at every vertex, and no edge is shared among two squares. Examples of this lattice can be found at larger length scales in two-dimensional metal-organic networks. For example, in Ref. [7] the snub-square tiling could be fabricated by performing the cerium-directed assembly of linear polyphenyl molecular linkers with terminal carbonitrile groups on an Ag(111) surface and by tuning the concentration and the stoichiometric ratio of rare-earth metal centers to ligands. 
This tesselation is also created by connecting a neutral rod-shaped secondary building unit with a cationic dicarboxylate ligand[8] or by the linking of trans-LnI\({}_{2}\)\({}^{+}\) nodes (Ln = Gd, Dy) by both closed-shell and anion radicals of 4,4'-bipyridine.[9] Furthermore, in this latter case, the occurrence of sizable magnetic exchange interactions and slow relaxation of magnetization behavior was observed.[9] We emphasize that triangles in the snub-square lattice lead to a geometrical frustration of the lattice (as in the Kagome lattice), so we can expect unique electronic and magnetic properties. The formation of these systems has also been investigated by computer simulations. Antlanger _et al._ succeeded, using a bottom-up strategy, to decorate patchy particles so that they self-assemble in most Archimedean tilings.[10] Furthermore, they found that the snub square was stable at intermediate or even elevated pressure values due to its compact structure, involving only triangles and squares as building polygonal units. Reference [11] has shown that the self-assembly of Archimedean networks requires a combination of the geometry of the particles and chemical selectivity. Finally, the Archimedean tiling can be formed in mixtures of a pentavalent molecule and a linear linker, the driving force being the mobility of the linker.[12] Recently, it was discovered that the snub-square Archimedean lattice (as well as the undistorted square lattice form) can also exist in some group-IB chalcogenides [13] with composition M\({}_{2}\)Ch (where M is Cu, Ag, or Au and Ch is a chalcogen). The crystalline structure of these 2D systems is illustrated in Fig. 1. The metal atoms are arranged in a flat snub-square lattice, while the chalcogen atoms are located at the center of the squares alternating above and below the plane. These systems are very close to thermodynamic stability, only slightly higher in energy than their bulk crystal phases. We also note that other 2D structures of Group-IB chalcogenides with similar stability have been predicted in Ref. [13]. The focus of our current study is the systematic comparison of the snub-square phases with the undistorted square-lattice phases. Related snub-square lattices were recently proposed for BaO\({}_{3}\) and TiO\({}_{2}\). [14] In the former system, the Ba atoms form a planar snub-square lattice with O\({}_{2}\) units filling both the triangular tiles (perpendicular to the plane) and the square tiles (in the plane). In the latter, the Ti sublattice is highly buckled, with the triangles decorated by one O atom and the squares decorated by two O atoms. However, in contrast with these two systems, 2D metal chalcogenides do not involve unusual metallic oxidation numbers (like in BaO\({}_{3}\)) and are very close to the convex hull of thermodynamic stability (for comparison, the TiO\({}_{2}\) system is 138 meV from the hull). As such, one can expect they should be much simpler to synthesize. Here we will discuss in detail a series of group IB snub-square chalcogenide properties. Specifically, we investigate the underlying bonding mechanism that in some cases stabilizes the snub-square phase as compared to the square one. We compare the electronic properties, optical absorption, vibrational properties, and Raman spectra between the two phases. We also propose suitable substrates with minimal mismatch to grow 2D snub-square chalcogenides. Finally, we consider the possibility of making quasi-crystalline lattices based on these systems. 
## II Methods The density-functional theory (DFT) calculations of optimized geometries and electronic band structures are performed via the Vienna _ab initio_ simulation package VASP [15; 16] with the projector augmented wave method (PAW). [17] The plane-wave cutoff is set to 520 eV. A vacuum region of at least 15 Å is applied to the 2D slabs and the geometries are optimized until the forces are smaller than 0.005 eV/Å. The Brillouin zones are sampled by Monkhorst-Pack \(k\)-grids centered at \(\Gamma\), while the density of the 2D \(k\)-mesh for structural optimization is 1200 \(k\)-points/Å\({}^{-2}\). For electron band structure and carrier effective mass, we use a higher-density \(k\)-mesh (3000 \(k\)-points/Å\({}^{-2}\)), and an interpolation of the eigenvalues is performed using BoltzTraP2. [18; 19] The interpolated bands are further used to calculate the carrier effective masses. The phonon dispersions have been calculated using density-functional perturbation theory (DFPT) as implemented in Quantum Espresso. [20; 21] A \(\Gamma\) centered k-point grid of \(12\times 12\times 1\) and a cut-off of 90 Ry were used to converge the ground state charge density. We have used a vacuum distance of 15 Å and a 2D Coulomb cutoff (otherwise, a weak longitudinal optical/transverse optical splitting would occur at the \(\Gamma\) point for some of the phonon modes). We computed dynamical matrices on a uniform \(4\times 4\)\(\Gamma\)-centered coarse grid and performed a Fourier transform to obtain the interatomic force constants. The interatomic force constants were then used to obtain the phonon dispersions. All calculations with the Quantum Espresso code were performed using the norm-conserving pseudopotentials from the PseudoDojo project. [22; 23] Total energy differences and optimized structures are almost identical in calculations with the VASP and Quantum Espresso codes. Figure 1: Crystal structures of 2D–Ag\({}_{2}\)Se with top view and side view. The silver and yellow spheres denote Ag and Se atoms, respectively. The metallic framework forms either (a) a square (\(P4/nmm\)) or (b) a snub-square lattice (\(P42_{1}2\)), with the chalcogen atoms alternating above and below the squares. For geometry optimization and phonon calculations, we use the Perdew-Burke-Ernzerhof[24] (PBE) exchange-correlation functional. We note that by using different functionals for the geometry optimization (in particular the local density approximation, LDA, that tends to slightly overbind), the energetic ordering of simple square and snub-square phases (and also of the additional phases calculated in Ref. [13]) may change. The electronic band structures are calculated with the Heyd-Scuseria-Ernzerhof HSE06[25] hybrid functional. To obtain the optical absorption spectra, we start with the energy eigenvalues and Kohn-Sham wave functions obtained via DFT-PBE with the Quantum Espresso code. We first perform \(G_{0}W_{0}\) calculations to correct the energy eigenvalues. We use \(9\times 9\times 1\) uniform \(\Gamma\) centered \(k\)-point grids and include 600 Kohn-Sham states to converge the band structure for the materials. A cut-off of 8 Ry is used to construct the dielectric tensor, and a plasmon-pole scheme[26] is employed to model the frequency dependence of dielectric screening. Later, we perform Bethe-Salpeter equation (BSE)[27] calculations to obtain the absorption spectrum, including electron-hole interactions. 
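For readers who want to set up grids of comparable density, the short Python sketch below converts a target 2D \(k\)-point density into a \(\Gamma\)-centered grid size. It is only an illustration, and the convention is an assumption on our part: we read the quoted densities (1200 and 3000 \(k\)-points/Å\({}^{-2}\)) as \(k\)-points per reciprocal-cell area in crystallographic units (no \(2\pi\) factor), i.e. \(N_{k}\simeq\rho_{k}/A_{\rm cell}\); the lattice constant in the example is the relaxed \(P42_{1}2\) Ag\({}_{2}\)Se value from Table 1.

```python
import math

def gamma_centered_grid(a_lattice, density):
    """Estimate an N x N x 1 Gamma-centered grid for a square 2D cell.

    Assumption (ours, not stated in the paper): `density` counts k-points per
    reciprocal-cell area in crystallographic units (no 2*pi factor), so the
    total number of k-points is approximately density / A_cell.
    """
    area = a_lattice ** 2                     # in-plane cell area in Angstrom^2
    n_kpts = density / area                   # total k-points implied by the density
    n = max(1, math.ceil(math.sqrt(n_kpts)))  # points per in-plane direction
    return n, n, 1

if __name__ == "__main__":
    a = 5.784  # Angstrom, relaxed P42_12 Ag2Se lattice constant from Table 1
    print(gamma_centered_grid(a, 1200))  # density used for structural optimization
    print(gamma_centered_grid(a, 3000))  # denser mesh used for bands / effective masses
```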
We use a \(24\times 24\times 1\) uniform \(\Gamma\) centered \(k\)-point grid to converge the absorption spectra. A total of 400 Kohn-Sham states and a cutoff of 8 Ry (109 eV) is used to build the static dielectric tensor. We include the top four conduction and bottom five valence bands to construct the BSE Hamiltonian and employ the Tamm-Dancoff approximation to decrease the computational cost.[28] In both \(G_{0}W_{0}\) and BSE calculation, a Coulomb cutoff of 32 Bohr is set along the non-periodic direction to remove the interactions with periodic images.[29] Both BSE and \(G_{0}W_{0}\) calculations are performed using the YAMBO code.[30; 31] The calculation of 2D films on a substrate is calculated with a non-local van der Waals corrected functional (optB86b-vdW),[32] and a six-layer slab of Cu-(001) surface is used as substrate. In this case, the three bottom layers of Cu-atoms are held fixed for the geometry optimizations while the remaining atoms can relax. ## III Results and Discussion ### Structure and Bonding As a 3D crystal, Ag\({}_{2}\)Se is naturally found in the form of naumannite, an orthorhombic system with \(P2_{1}2_{1}2_{1}\) space group symmetry.[34] Two inequivalent metallic sites are found in naumannite, namely a 3- and a 4-fold coordination centers.[33] Ag\({}_{2}\)S crystalizes in monoclinic anti-PbCl\({}_{2}\)-like structure and transform to Ag\({}_{2}\)Se-like structure at high-pressure.[35] The compound Ag\({}_{2}\)Te[36] forms in a distorted ZrSi\({}_{2}\)-like structure with the monoclinic space group \(P2_{1}/c\). The structure consists of two inequivalent Ag sites acting as 10-fold and 8-fold coordination centers, respectively. The 3D Cu\({}_{2}\)S material crystallizes in the tetragonal \(P4_{3}2_{1}2\) space group,[37] where the copper atoms are coordinated with three sulfur atoms in a trigonal planar configuration. While Cu\({}_{2}\)Se is predicted from theory to form in the same structure as Cu\({}_{2}\)S, experimental reports observe crystalization in cubic phases.[38; 39] In contrast, Cu\({}_{2}\)Te can be found uniquely in 2D hexagonal sheets with the space group \(P6/mmm\).[40] It turns out that Au\({}_{2}\)S is the only reported Au\({}_{2}\)Ch compound, exhibiting a cuprite-like structure with the cubic spacegroup \(P\bar{n}3m\).[41] The S ligands form 4-fold coordination centers with the Au cations in the cuprite-like phase. From this short overview, it is clear that, in the three-dimensional world, metal chalcogenides crystallize in many structures with different coordination and bonding patterns. The 2D snub-square lattice belongs to the \(P42_{1}2\) crystal space group (here, we will use the three-dimensional space group for convenience). It can be seen as the result of a rotation of the tetragonal pyramids of the square lattice (Fig. 1a) belonging to the space group \(P4/nmm\). The rotation causes a distortion of the perfect squared metallic network of the \(P4/nmm\) phase. Due to this close relation between the \(P42_{1}2\) and the \(P4/nmm\) phases, we will often compare them in the following. In Table 1, we compare the structural parameters among the M\({}_{2}\)Ch systems. For \(P42_{1}2\) structures, the degrees of distortion are quantified by the relative difference for Ch-M-Ch bond angles and M-M distances compared to the \(P4/nmm\) counterparts. We also computed atom bond orders[42; 43] between the nearest noble metal atoms in these systems. 
Of note is that for Ag\({}_{2}\)S and Au\({}_{2}\)S, the \(P42_{1}2\) structures symmetrize to the \(P4/nmm\) lattice during structural relaxation, indicating that for these two systems, the \(P42_{1}2\) structure is dynamically unstable. From this table, we can conclude several things. First, in the \(P4/nmm\) structures the Ch-M-Ch angles are always \(180^{o}\) (no distortion), M-M distances are above 3.5 A and BO\({}_{\text{MM}}\) shows negligible metal-metal bonding interactions, which can also be verified by the electron localization function (ELF) and charge density difference depicted in Fig. 2(a) and (c), respectively. On the contrary, in the \(P42_{1}2\) geometry, the M-M bonding is noticeable, as shown by the ELF and charge difference plots in Fig. 2. More importantly, there is a clear correlation between the degree of distortion and the increase of M-M bonding. The distortions are more significant for the gold compound and smaller for silver ones with fixed chalcogen and increase for heavier chalcogens for a given metal. The BO\({}_{\text{MM}}\) reaches a maximum of 0.41 in the case of Au\({}_{2}\)Te. The ELF in Fig. 2(a) and (b) also shows clearly that there is a strong delocalization of the charge going from the \(P4/nmm\) to the \(P42_{1}2\) phase. We also list the thermodynamic stability of the 2D-M\({}_{2}\)Ch in both \(P4/nmm\) and \(P42_{1}2\) symmetries (see Fig. 2). The system that is furthest from the convex hull is Cu\({}_{2}\)Te at 118 meV/atom in the \(P4/nmm\) geometry and at 65 meV/atom in the \(P42_{1}2\) counterpart. The most stable system is Ag\({}_{2}\)Se at a mere 7 meV/atom in \(P42_{1}2\) and 3 meV/atom in \(P4/nmm\). As mentioned above, only Cu\({}_{2}\)Ch systems stabilize in the \(P42_{1}2\) for all three chalcogenides. For both geometries (not applicable for Ag\({}_{2}\)S and Au\({}_{2}\)S), \(E_{\rm hull}\) increases for copper and silver chalcogenides as the anions get heavier while this trend is reversed among gold compounds. Interestingly, the relative stability between the \(P42_{1}2\) and \(P4/nmm\) analogs correlates with the degree of distortion and bond order for copper compounds, showing that in copper analogs the M-M bonding interaction caused by distortion crucially stabilizes the \(P42_{1}2\) geometry. However, for silver and gold compounds, the selenides are more stable in the \(P4/nmm\) geometry. ### Electronic properties In Fig. 3, we show the electronic band structure and the projected density of states for the three selenide \(P42_{1}2\) systems computed using the screened hybrid density-functional HSE06.[25] The systems exhibit a direct gap at the \(\Gamma\) point. The highest valence states are doubly degenerate with a stark difference in the dispersion behavior near the \(\Gamma\) point. One of the states is characterized by a nearly flat dispersion curve, while the other shows a very pronounced curvature. Consequently, these systems could accommodate light holes as well as heavy ones. The lowest conducting states show somewhat curved dispersion curves around \(\Gamma\) compatible with particles of effective masses similar to those of the aforementioned light holes. Table 2 lists the band gap and the particle/hole effective masses for the \(P42_{1}2\) systems and their \(P4/nmm\) analogs. It is well known that the PBE functional underestimates considerably band gaps.[44] In fact, three of the \(P42_{1}2\) systems were misidentified as metals at the PBE level. 
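As a concrete illustration of how carrier effective masses such as those listed in Table 2 can be extracted from interpolated bands, the Python sketch below fits a parabola to band-edge eigenvalues and converts the curvature to \(m^{*}/m_{e}=\hbar^{2}/(2m_{e}c)\). The input data here are synthetic (generated to mimic masses of the order of those in Table 2), and the routine is a simplified stand-in, not the BoltzTraP2-based workflow used in the paper:

```python
import numpy as np

HBAR2_OVER_2ME = 3.80998  # eV * Angstrom^2, hbar^2 / (2 m_e)

def effective_mass(k, E):
    """Fit E(k) ~ E0 + c*k^2 near a band edge and return m*/m_e = hbar^2/(2 m_e c).
    k in 1/Angstrom, E in eV; the sign of c distinguishes electrons (c > 0)
    from holes (c < 0)."""
    c = np.polyfit(k, E, 2)[0]   # curvature of the parabolic fit
    return HBAR2_OVER_2ME / c

if __name__ == "__main__":
    # Synthetic band edges with curvatures chosen to mimic m* ~ 0.18 m_e and
    # ~ 1.10 m_e (of the order of the values in Table 2); not real data.
    k = np.linspace(-0.05, 0.05, 21)               # 1/Angstrom, around Gamma
    E_c = 1.0 + (HBAR2_OVER_2ME / 0.18) * k**2     # conduction-band-like edge
    E_v = -(HBAR2_OVER_2ME / 1.10) * k**2          # heavy-hole-like edge
    print("electron m*/m_e ~", round(effective_mass(k, E_c), 3))
    print("hole     m*/m_e ~", round(abs(effective_mass(k, E_v)), 3))
```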
The electronic band gaps at the HSE06 level show that \(P4/nmm\) systems are moderate- to wide-band-gap semiconductors with band gap values from 1.59 up to 2.59 eV. All the \(P42_{1}2\) systems have smaller band gaps than their previously mentioned counterparts, with values ranging from 90 meV to 2.12 eV, consistent with the \begin{table} \begin{tabular}{l|c c c c|c c c|c c c c|c c} & \multicolumn{6}{c|}{\(P4/nmm\)} & \multicolumn{6}{c}{\(P42_{1}2\)} & \multicolumn{6}{c}{3D Exp.} \\ Formula & \(E_{\rm hull}\) & Gap\({}^{\rm PBE}\) & Gap\({}^{\rm HSE}\) & \(m_{\rm a}^{*}\) & \(m_{\rm h,\ L}^{*}\) & \(m_{\rm h,\ H}^{*}\) & \(E_{\rm hull}\) & Gap\({}^{\rm PBE}\) & Gap\({}^{\rm HSE}\) & \(m_{\rm a}^{*}\) & \(m_{\rm h,\ L}^{*}\) & \(m_{\rm h,\ H}^{*}\) & Spg. & Gap\({}^{\rm PBE}\) \\ \hline Cu\({}_{2}\)S & 24 & 0.60 & 1.64 & 0.14 & 0.16 & 1.15 & 13 & 0.16 & 1.07 & 0.12 & 0.14 & 1.11 & \(P43_{2}1_{2}\) & 0.13 \\ Cu\({}_{2}\)Se & 54 & 0.62 & 1.63 & 0.15 & 0.15 & 1.05 & 27 & 0.12 & 1.00 & 0.12 & 0.14 & 0.98 & \(Fm3m\) & 0.09 \\ Cu\({}_{2}\)Te & 118 & 0.50 & 1.38 & 0.14 & 0.14 & 0.64 & 65 & 0.00 & 0.67 & 0.11 & 0.11 & 0.81 & \(P6/mmm\) & 0.00 \\ Ag\({}_{2}\)S & 7 & 1.79 & 2.59 & 0.19 & 0.20 & 1.13 & - & - & - & - & - & \(P2_{1}/n\) & 0.93 \\ Ag\({}_{2}\)Se & 3 & 1.82 & 2.58 & 0.19 & 0.22 & 1.22 & 7 & 1.34 & 2.12 & 0.18 & 0.21 & 1.10 & \(P2_{1}2_{1}2_{1}\) & 0.00 \\ Ag\({}_{2}\)Te & 28 & 1.69 & 2.35 & 0.18 & 0.18 & 1.38 & 21 & 0.95 & 1.56 & 0.15 & 0.19 & 1.08 & \(P2_{1}/c\) & 0.00 \\ Au\({}_{2}\)S & 85 & 1.00 & 1.59 & 0.10 & 0.10 & 0.70 & - & - & - & - & - & \(P\bar{n}3m\) & 1.91 \\ Au\({}_{2}\)Se & 42 & 1.02 & 1.61 & 0.12 & 0.10 & 0.71 & 57 & 0.00 & 0.42 & 0.07 & 0.07 & 0.74 & – & – \\ Au\({}_{2}\)Te & 31 & 0.93 & 1.44 & 0.12 & 0.10 & 0.67 & 29 & 0.00 & 0.09 & 0.04 & 0.06 & 0.13 & – & – \\ \end{tabular} \end{table} Table 2: Summary of 2D-M2Ch structures, distance to the convex hull (\(E_{\rm hull}\) in meV/atom), band gap calculated with the PBE functional (gap\({}^{\rm PBE}\) in eV) and HSE06 hybrid functional (gap\({}^{\rm HSE}\) in eV), effective electron mass (\(m_{\rm e}^{*}\) in \(m_{\rm e}\)), and light/heavy hole masses (\(m_{\rm h,\ L}^{*}/m_{\rm h,\ H}^{*}\) in \(m_{\rm e}\)) at the band edges. For comparison, we showed the space group (Spg.) and PBE band gap (gap\({}^{\rm PBE}\), taken from the Materials Project database[33]) for the experimental 3D crystal structures. 
\begin{table} \begin{tabular}{l|c c c c|c c c c} & \multicolumn{4}{c|}{\(P4/nmm\)} & \multicolumn{4}{c}{\(P42_{1}2\)} \\ Formula & BO\({}_{\rm MM}\) & \(a\) & \(D_{\rm M-M}\) & \(\theta\) & BO\({}_{\rm MM}\) & \(a\) & \(D_{\rm M-M}\) & \(\theta\) \\ \hline Cu\({}_{2}\)S & 0.02 & 5.179 & 3.63 & 180.0 & 0.21 & 5.008 & 2.74 (-24.5\%) & 159.2 (-11.6\%) \\ Cu\({}_{2}\)Se & 0.02 & 5.042 & 3.56 & 180.0 & 0.29 & 4.933 & 2.60 (-27.0\%) & 158.2 (-12.1\%) \\ Cu\({}_{2}\)Te & 0.02 & 5.042 & 3.56 & 180.0 & 0.35 & 4.901 & 2.50 (-29.8\%) & 158.2 (-12.1\%) \\ Ag\({}_{2}\)S & 0.01 & 5.888 & 4.16 & 180.0 & – & – & – & – \\ Ag\({}_{2}\)Se & 0.01 & 5.904 & 4.17 & 180.0 & 0.10 & 5.784 & 3.43 (-17.7\%) & 165.2 (-8.2\%) \\ Ag\({}_{2}\)Te & 0.01 & 5.947 & 4.20 & 180.0 & 0.27 & 5.719 & 3.06 (-27.1\%) & 159.3 (-11.5\%) \\ Au\({}_{2}\)S & 0.02 & 5.818 & 4.11 & 180.0 & – & – & – & – \\ Au\({}_{2}\)Se & 0.02 & 5.788 & 4.09 & 180.0 & 0.34 & 5.579 & 2.95 (-27.9\%) & 157.6 (-12.4\%) \\ Au\({}_{2}\)Te & 0.02 & 5.820 & 4.11 & 180.0 & 0.41 & 5.598 & 2.88 (-30.0\%) & 157.1 (-12.7\%) \\ \end{tabular} \end{table} Table 1: Summary of the structural and bond order of the calculated 2D-M2Ch structures. We present the in-plane cell parameters (\(a\), in both phases \(a=b\), in Å), the distances between the metallic atoms (\(D_{\rm M-M}\) in Å), the Ch larger degree of delocalization of the electronic states in these systems. As expected, moving down the chalcogen group for a specific metal reduces the gap in both phases. We also note that the heavier the chalcogen, the more significant the HSE06 band gap difference (\(\Delta\)Gap) between \(P4/nmm\) and \(P42_{1}2\) phases, with \(\Delta\)Gap largest in the case of Au\({}_{2}\)Te. Clearly, there is also a strong positive correlation between \(\Delta\)Gap and the distortion/M-M bonding order. The correlation is consistent with the ELF and charge transfer shown above, as forming the M-M bond effectively reduces the charge transfer from metal to chalcogen, weakening the covalent M-Ch bonding and consequently reducing the gap. We can see that light and heavy holes appear in the studied systems in both geometric configurations. The transformation from one space group to the other has limited impact on the particle/hole effective masses, except in the case of Au\({}_{2}\)Te, where we find the heavy hole effective mass to be reduced to less than a fifth of its original value. The band masses of these systems show considerable improvement in the \(m_{e}^{*}\) over commercialized n-type TCOs [45] to below 0.2 \(m_{0}\) across all compositions and in both polymorphs. The light holes show similar improvement, reducing \(m_{h}^{*}\) to below 0.2 \(m_{0}\). However, the heavy holes remain a potentially limiting factor for functional p-type mobility. Unfortunately, the maximum band gap we find in our chalcogenides (2.59 eV for Ag\({}_{2}\)S) is well within the visible spectrum, limiting the usability of these materials as \(n\)- or \(p\)- type transparent conductors for transparent electronic applications. [46; 47; 48] ### Optical absorption In this section, we examine the excitonic effects in the optical absorption of \(P4/nmm\) and \(P42_{1}2\) phases of Ag\({}_{2}\)Se. The optical response of a typical two-dimensional semiconductor is dominated by excitons due to reduced environment screening. 
[49] To describe the optical absorption spectrum, we calculate the imaginary part of the dielectric tensor, which is given by [50] \[\varepsilon_{2}(\omega)=\frac{8\pi^{2}e^{2}}{\omega^{2}}\sum_{S}\Big{|}\sum_{ kcv}A_{kcv}^{S}\mathbf{e}\cdot\langle vk|\mathbf{v}|ck\rangle\Big{|}^{2}\delta( \omega-E_{S}) \tag{1}\] where \(\mathbf{e}\) is the light polarization direction, \(\mathbf{v}\) is the velocity operator, \(A_{kcv}^{S}\) are the expansion coefficients of the exciton eigenstates, calculated in the electron-hole basis with the help of the Bethe-Salpeter-Equation, and \(E_{S}\) are exciton energies. In Fig. 4, we show the absorption spectra computed along the in-plane direction for the \(P4/nmm\) and \(P42_{1}2\) phases of Ag\({}_{2}\)Se. The vertical solid and dashed lines denote the first exciton energy (E\({}_{\text{exc}}\)) and the direct band gap (E\({}_{\text{direct}}\)) calculated with \(G_{0}W_{0}\), respectively. The absorption on-sets shift by \(\sim\)0.3 eV when the electron-hole interaction is included, indicating relatively strong excitonic effects in both phases of Ag\({}_{2}\)Se. The first exciton in both phases is optically bright. It is doubly degenerate with different effective masses (meaning that in the exciton dispersion, for finite wave vector \(\mathbf{k}\), there are two different branches). It mainly consists of transitions from the top valence bands to the lowest conduction band at the Brillouin zone center. In Fig. 5, we plot the total probability density of the first exciton for both phases of Ag\({}_{2}\)Se. We observe that both excitonic wave functions are rather extended in real space, showing that the exciton is of the Wannier-Mott type. The extension is considerably higher for the \(P42_{1}2\) phase, which is compatible with the lower band gap, the increased screening, and, consequently, with the lower excitonic binding energy. ### Phonons In Fig. 6 we present the phonon dispersion for Cu\({}_{2}\)Se in the \(P4/nmm\) and in the \(P42_{1}2\) phases. The main difference between the two dispersions is the soft mode at Figure 2: Electron localization function (ELF) of (a) the \(P4/nmm\) and (b) the \(P42_{1}2\) structures for Cu\({}_{2}\)S (Cu atoms are denoted as bronze color). Iso-surface plot at a value of \(\pm 0.003\) electron/Å\({}^{-3}\) of charge density difference of (c) the \(P4/nmm\) and (d) the \(P42_{1}2\) structures for Cu\({}_{2}\)S, where depletion and accumulation of charges compared to atomic density are represented as naval blue and teal colors, respectively. \(\Gamma\) for the \(P4/nmm\) phase. The phonon eigenvector corresponding to this soft mode is displayed in panel (b) of Fig. 7. This "snub-square rotation mode" drives the symmetry reduction from the \(P4/nmm\) to the \(P42_{1}2\) phase. The formation of Cu-Cu bonds in the snub-square geometry (as demonstrated by the bond-order calculations in Table 1) is the reason why this mode has imaginary frequency and thus describes the relaxation to the lower-symmetry phase. In the \(P42_{1}2\) phase, the same mode exists, but it has a finite (positive) frequency, describing the snub-square rotation around the new equilibrium position. We note that the out-of-plane acoustic mode displays a small negative overshoot around \(\Gamma\) (for both phases). This is not a real instability but related to numerical inaccuracies in the determination of the equilibrium lattice constant and phonon calculations. 
Stretching the lattice constant would render this branch entirely positive and Figure 4: Optical absorption spectra of (left) \(P4/nmm\) Ag\({}_{2}\)Se and (right) \(P42_{1}2\) Ag\({}_{2}\)Se with (solid lines) and without (dashed lines) electron-hole interaction. Vertical solid and dashed lines denote the first exciton energy (E\({}_{\rm{exe}}\)) and the direct band gap (E\({}_{\rm{direct}}\)) respectively. Figure 5: The total probability density of the first bright exciton of (a) \(P4/nmm\) and (b) \(P42_{1}2\) phases of Ag\({}_{2}\)Se, the hole is fixed on the Se atom denoted by a black circle. Green and grey circles denote Se and Ag atoms, respectively. Figure 3: Electronic band structures of the \(P42_{1}2\) phases of (a) Cu\({}_{2}\)Se, (b) Ag\({}_{2}\)Se, and (c) Au\({}_{2}\)Se, calculated with the HSE06 hybrid functional. Figure 6: Calculated phonon dispersion of Cu\({}_{2}\)Se (a) in the \(P4/nmm\) phase and (b) in the \(P42_{1}2\) phase. The soft mode of the \(P4/nmm\) phase at \(\Gamma\) (marked by red circle) is responsible for the transition to the \(P42_{1}2\) phase and acquires there a finite frequency of 114.5 cm\({}^{-1}\). The blue triangles mark the Raman active A\({}_{1g}\) (A\({}_{1}\)) mode in the \(P4/nmm\) (\(P42_{1}2\)) phase, respectively. (The mode eigenvectors are displayed in panels (b) and (c) of Fig. 7). give it a linear slope around \(\Gamma\). Squeezing the lattice constant increases the negative (imaginary) overshoot and corresponds to long-wavelength wrinkles of the 2D layer. In Table 3, we list all modes of the two phases of Cu\({}_{2}\)Se along with their infrared (IR) or Raman (R) activity according to group theory. In Fig. 7, we show the calculated non-resonant Raman spectra [51; 52] of the two phases of Cu\({}_{2}\)Se. In the spectrum of the undeformed square lattice (red-dashed line), the \(A_{1g}\) mode at 196 cm\({}^{-1}\) dominates the spectrum. The mode consists of vertical (out-of-plane) vibrations of the sulfur atoms (panel (c)) while the Cu atoms are not moving. This mode is similar to the Raman active A\({}_{1}\) mode in monolayer MoS\({}_{2}\)[53] where the sulfur atoms are also vibrating in the direction normal to the plane while the Mo atoms are not moving. Contrary to MoS\({}_{2}\), however, the spectrum of \(P4/nmm\) Cu\({}_{2}\)Se is a quasi-one peak spectrum where the doubly degenerate \(E_{g}\) mode at 160 cm\({}^{-1}\) has vanishing intensity and is not visible in the spectrum. The spectrum changes to a quasi-two peak spectrum in the \(P42_{1}2\) phase (blue line): The "snub-square mode" (panel (b)) which is responsible for the instability of the \(P4/nmm\) phase acquires a finite frequency of 114.5 cm\({}^{-1}\). It becomes Raman active and dominates the spectrum besides the high-frequency \(A_{1}\) mode that slightly up-shifts in position. The other Raman active modes, listed in Table 3 have comparatively low intensity. The Raman spectrum thus gives a clear and easy way to distinguish between the two phases of Cu\({}_{2}\)Se. For iso-structural M\({}_{2}\)X monolayers that are stable in the \(P4/nmm\) phase, the snub-square rotation mode has finite frequency, but is not Raman active due to its A\({}_{1u}\) symmetry. We thus conclude that for the other elemental combinations discussed in this manuscript, the same two-peak structure serves as a clear signal for the presence of the snub-square deformation. 
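A minimal mock-up of this one-peak versus two-peak contrast can be generated directly from the \(\Gamma\)-point frequencies in Table 3. In the sketch below the peak positions are a subset of the Raman-active modes of the two phases, but the relative intensities are placeholder values chosen only to mimic the qualitative behaviour described above; the actual intensities come from the non-resonant Raman calculation [51; 52].

```python
import numpy as np

def raman_spectrum(peaks, x, width=6.0):
    """Sum of Lorentzians at the given (frequency in cm^-1, relative intensity) pairs."""
    y = np.zeros_like(x)
    for freq, intensity in peaks:
        y += intensity * (0.5 * width) ** 2 / ((x - freq) ** 2 + (0.5 * width) ** 2)
    return y

x = np.linspace(50, 350, 2000)

# Raman-active modes taken from Table 3; intensities are illustrative placeholders.
p4nmm_peaks = [(196, 1.0), (160, 0.02)]                    # A1g dominates, Eg nearly silent
p4212_peaks = [(115, 0.8), (201, 1.0),                     # snub-square mode and up-shifted A1
               (77, 0.05), (141, 0.05), (191, 0.05), (197, 0.05)]

square_phase = raman_spectrum(p4nmm_peaks, x)
snub_phase = raman_spectrum(p4212_peaks, x)
print("P4/nmm spectrum peaks near:", round(float(x[np.argmax(square_phase)]), 1), "cm^-1")
print("P42_1 2 shows strong peaks near 115 and 201 cm^-1:",
      snub_phase[np.abs(x - 115).argmin()] > 0.5, snub_phase[np.abs(x - 201).argmin()] > 0.5)
```

With realistic intensities in place of the placeholders, the same construction reproduces the quasi-one-peak spectrum of the square phase and the quasi-two-peak fingerprint of the snub-square phase.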
### Substrates To investigate possible substrates suitable for synthesizing the snub-square lattice, we first searched through all simple elementary crystals and binary oxides for potential substrates, and we found 76 substrates with a lattice mismatch below 5%. We then looked at these, searching for substrates that matched the symmetry of the 2D layer. This led us to choose Cu (001), Ge (001), Pt (001) for Ag\({}_{2}\)S, and Cu (001), Ge (100), Pd (001) for Cu\({}_{2}\)Se. After geometry optimization only the Cu (001) substrate preserved, to a large extent, the symmetry of the snub-square lattice, as shown in Fig. 8. In the other cases, the 2D layer deformed significantly due to the strong interaction with the substrate. Furthermore, for Cu\({}_{2}\)Se on Cu(001) substrate, the average Cu-Se bond length in the 2D film is stretched by 1.8% to 2.40 A, and the average Cu-Se-Cu bond angle is changed slightly to 157.6\({}^{\circ}\). The bottom Se layer is separated 2.25 A from the substrate, and the distance from Se to the substrate Cu atoms is 2.89 A, much longer than Cu-Se bond length in the 2D film, indicating very weak bonding between the film and the substrate. We then calculated the adhesion energies for Cu\({}_{2}\)S, Ag\({}_{2}\)S, Au\({}_{2}\)S, and Cu\({}_{2}\)Se on Cu (001). The results are 55, 71, 90, and 19 meV/A\({}^{2}\), respectively. The adhesion energy for Cu\({}_{2}\)Se is within the range of physical adhesion, and is comparable to the adhesion energy of graphite (about 26 meV/A\({}^{2}\)). [54] Therefore, it might be possible to obtain a free-standing Cu\({}_{2}\)Se layer via mechanical exfoliation of deposited layers on the Cu-substrate. However, for the sulfides, Cu(001) exhibits a stronger bond with the films and is less ideal for applying mechanical exfoliation of the Snub-square lattice. Figure 7: (a) Calculated Raman spectrum of \(P42_{1}2\) Cu\({}_{2}\)Se (blue solid line) and of \(P4/nmm\) Cu\({}_{2}\)Se (red dashed line). (b) Sketch of the vibrational mode responsible for the Raman peak at 120 cm1 (and representing the soft mode in the \(P4/nmm\) phase). (c) Sketch of the A\({}_{1g}\) mode. \begin{table} \begin{tabular}{c c c c c c c c c c c c} Label & A\({}_{1u}\) & E\({}_{u}\) & B\({}_{1u}\) & E\({}_{u}\) & A\({}_{2u}\) & B\({}_{2u}\) & E\({}_{g}\) & A\({}_{1g}\) & B\({}_{1u}\) & E\({}_{u}\) & A\({}_{2u}\) \\ \hline Activity & - & I & - & I & I & - & R & R & - & I & I \\ Frequency & -31 & 67 & 80 & 137 & 155 & 158 & 160 & 196 & 205 & 275 & 318 \\ \end{tabular} \begin{tabular}{c c c c c c c c c c c} & \multicolumn{8}{c}{\(P42_{1}2\)} \\ Label & E & B\({}_{1}\) & A\({}_{1}\) & E & E & A\({}_{2}\) & B\({}_{1}\) & B\({}_{2}\) & A\({}_{1}\) & E & A\({}_{2}\) \\ \hline Activity & I/R & R & R & I/R & I/R & I & R & R & R & I/R & I \\ Frequency & 73 & 77 & 115 & 141 & 154 & 156 & 191 & 197 & 201 & 262 & 307 \\ \end{tabular} \end{table} Table 3: Irreducible representation labels, Infrared (I) or Raman (R) activity, and frequencies (in cm\({}^{-1}\)) of optical phonon modes for Cu\({}_{2}\)Se (\(P4/nmm\)) and Cu\({}_{2}\)Se (\(P42_{1}2\)) at \(\Gamma\) point. Note that the E modes are doubly degenerate at \(\Gamma\). ation to obtain free-standing layers. In Fig. 
9 we explore further the interaction between film and substrate by plotting the plane averaged density of states as a function of the plane distance using DensityTool.[55] Clearly, a small part of the 3d states from the Cu substrate is located in the middle of the gap of the film and there is only small mixing between the states of the substrate and film, consistent with the small adhesion energy.

Figure 8: Structures of 2D snub-square Cu\({}_{2}\)Se on Cu (001) substrate. The silver, yellow, and brown spheres denote Cu in Cu\({}_{2}\)Se, Se, and Cu of substrate atoms, respectively. In side view only two out of the six layers of the substrate are shown.

Figure 9: The average local density of states (LDOS) for each (00\(z\)) plane for 2D snub-square Cu\({}_{2}\)Se on Cu (001) substrate. The Fermi level is shifted to 0 eV, and the 2D-Cu\({}_{2}\)Se layer is located at \(z=0\) Å.

### Quasicrystals

Two-dimensional quasicrystals were discovered experimentally in BaTiO\({}_{3}\) on top of Pt and a few other related systems.[56; 57] Changing the synthesis conditions, it was also possible to create simpler approximant structures, periodic 2D crystals that can be inflated by a recursive approach to generate the quasi-crystalline system. The stability of the M\({}_{2}\)Ch snub-square lattice, which can be seen as a small approximant structure, therefore raises the interesting question of whether noble-metal chalcogenide quasicrystals are possible. It is straightforward to generate larger and larger approximants for our system. However, we are immediately faced with two difficulties: (i) the ratio of squares and triangles changes during the inflation process, and tends to an irrational number in the quasicrystalline limit. This poses the problem of charge neutrality, as the balance of the positive metal charges is no longer compensated by an equal number of negative chalcogenide charges. A possible solution is electron transfer from a metallic substrate to make up for the unbalanced charge, or the formation of defects (e.g., vacancies) in the 2D structure. (ii) The chalcogenide atoms in our snub-square structure are out-of-plane and show an alternation that can be seen from the lower panel of Fig. 8. Unfortunately, the inflation disrupts this alternation, resulting in a frustrated system with two neighboring chalcogen atoms placed either above or below the plane. We can expect this to raise the energy of the system by an amount that clearly depends on the specific chemistry. This situation can also be alleviated by creating chalcogen vacancies in the structure. To test this hypothesis, we performed a DFT calculation on the first inflation of the snub-square structure (of composition Ag\({}_{15}\)Se\({}_{6}\)). As expected, the steric hindrance of the neighboring Se leads to a structural instability that completely destroys the snub-square lattice. We tried to remedy this by removing the Se atom from the central square, but the structure was again highly unstable. As such, it seems very unlikely that a quasicrystal can ever be achieved in this system.

## IV Conclusion

In this paper, we discussed the snub-square tiling, and its parent square lattice, for a series of noble-metal chalcogenides. We showed that the snub-square tiling is closely related to the regular square tiling, with a rotation of squares forming the extra metal-metal bond in the former. The metal-metal bonding, leading to a substantial delocalization of the charge, is the key to understanding the structural distortion and the thermodynamic stabilization of the snub-square phase relative to its square counterpart. It is also responsible for reducing the band gap of the snub-square systems. The valence band edge at \(\Gamma\) is doubly degenerate, and the curvature is different for these two bands, leading to heavy and light holes. The holes have an effective mass comparable to that of CuI, the most promising \(p\)-type transparent conductor. Combined with the relatively large band gap of some of the chalcogenide systems, the low electron effective mass could be helpful for developing \(n\)- and \(p\)-type transparent semiconductors. Due to the 2D geometry, the excitonic interaction plays a crucial role in the optical absorption spectra, as expected. The first exciton is bright, and the absorption on-set is red-shifted by around 0.3 eV due to the strong exciton binding. The exciton is highly localized around \(\Gamma\) in reciprocal space. The square geometry and the snub-square geometry are related by a phonon mode in which the squares formed by the metal atoms get tilted. This mode is soft for the materials where the snub-square geometry is lower in energy than the square one. Upon the snub-square deformation, it acquires a finite frequency, and its prominent Raman peak is a clear fingerprint of the snub-square geometry. Finally, we explored possible substrates that could be used for the experimental synthesis of the snub-square lattice. We find that Cu (001), with a 3% mismatch with the Cu\({}_{2}\)Se 2D layer, is a good candidate, with an adhesion energy low enough for mechanical exfoliation. We also tried to construct quasicrystals derived from the snub-square tiling through inflation. However, the quasicrystal approximants turned out to be highly unstable due to deviations from charge neutrality and to steric frustration. All these results suggest that noble metal chalcogenide snub-square lattices are very good candidates for experimental synthesis, being very close to thermodynamical stability and compatible with simple surfaces of common metals. Moreover, they exhibit interesting properties and can open a new playground for studying frustration in two-dimensional systems.

## V Data availability

The relevant data are available at Materials Cloud (https://doi.org/10.24435/materialscloud:sb-cy). The structures, distances to the hull, and other basic properties can be accessed at https://tddft.org/bmg/physics/2D/ through a simple web-based interface.

## VI Supporting information

Electronic band structures and phonon band structures for all studied systems.

## VII Acknowledgements

This research was funded in part by the Luxembourg National Research Fund (FNR), Inter Mobility 2DOPMA, grant reference 15627293. We also acknowledge the computational resources awarded by XSEDE, a project supported by National Science Foundation grant number ACI-1053575. The authors also acknowledge the support from the Texas Advanced Computing Center (with the Stampede2 and Bridges supercomputers). We also acknowledge the Super Computing System (Thorny Flat) at WVU, which is funded in part by the National Science Foundation (NSF) Major Research Instrumentation Program (MRI) Award #1726534, and West Virginia University. AHR also recognizes the support of the West Virginia Higher Education Policy Commission under the Research Challenge Grant (RCG) program.
MALM gratefully acknowledges the computing time provided on the high-performance computer Noctua 2 at the NHR Center PC2. This center is funded by the Federal Ministry of Education and Research and the state governments participating on the basis of the resolutions of the GWK for national high-performance computing at universities (www.nhr-verein.de/unesre-partner). For the purpose of open access, the authors have applied a Creative Commons Attribution 4.0 International (CC BY 4.0) license to any Author Accepted Manuscript version arising from this submission.

## VIII Competing interests

The authors declare no competing financial or non-financial interests.

## IX Author contributions

AHR generated the crystal structures; MALM, HCW and AWH performed the DFT calculations for energies, band structures, and interaction with substrates; MN and LW performed optical property and phonon calculations; AHR, LW, and MALM directed the research; all authors contributed to the analysis of the results and to the writing of the manuscript.
2305.16098
Inhomogeneous approximation for systems of linear forms with primitivity constraints
We study (inhomogeneous) approximation for systems of linear forms using integer points which satisfy additional primitivity constraints. The first family of primitivity constraints we consider were introduced in 2015 by Dani, Laurent, and Nogueira, and are associated to partitions of the coordinate directions. Our results in this setting strengthen a theorem of Dani, Laurent, and Nogueira, and address problems posed by those same authors. The second primitivity constraints we consider are analogues of the coprimality required in the higher-dimensional Duffin--Schaeffer conjecture, posed by Sprind\v{z}uk in the 1970's and proved by Pollington and Vaughan in 1990. Here, with attention restricted to systems of linear forms in at least three variables, we prove a univariate inhomogeneous version of the Duffin--Schaeffer conjecture for systems of linear forms, the multivariate homogeneous version of which was stated by Beresnevich, Bernik, Dodson, and Velani in 2009 and recently proved by the second author.
Demi Allen, Felipe A. Ramirez
2023-05-25T14:29:44Z
http://arxiv.org/abs/2305.16098v1
# Inhomogeneous approximation for systems of linear forms with primitivity constraints ###### Abstract. We study (inhomogeneous) approximation for systems of linear forms using integer points which satisfy additional primitivity constraints. The first family of primitivity constraints we consider were introduced in 2015 by Dani, Laurent, and Nogueira, and are associated to partitions of the coordinate directions. Our results in this setting strengthen a theorem of Dani, Laurent, and Nogueira, and address problems posed by those same authors. The second primitivity constraints we consider are analogues of the coprimality required in the higher-dimensional Duffin-Schaeffer conjecture, posed by Sprindzuk in the 1970's and proved by Pollington and Vaughan in 1990. Here, with attention restricted to systems of linear forms in at least three variables, we prove a univariate inhomogeneous version of the Duffin-Schaeffer conjecture for systems of linear forms, the multivariate homogeneous version of which was stated by Beresnevich, Bernik, Dodson, and Velani in 2009 and recently proved by the second author. Key words and phrases: Diophantine approximation, metric number theory, primitive points 2020 Mathematics Subject Classification: Primary: 11J83, 11J20, 11J13, 11K60 ###### Contents * 1 Introduction * 2 Results * 3 Partition reduction * 4 Arithmetic lemmas * 5 Counting * 6 Uniformity * 7 Proofs of Theorems 1, 2, and 3 ## 1. Introduction We are concerned here with the problem of determining whether for a given sequence \((B_{q})_{q=1}^{\infty}\) of balls in \(\mathbb{R}^{m}\) and a typical \(\mathbf{x}\in\operatorname{Mat}_{n\times m}(\mathbb{R})\) there are infinitely many \((\mathbf{p},\mathbf{q})\in\mathbb{Z}^{m}\times\mathbb{Z}^{n}\) such that \[\mathbf{q}\mathbf{x}-\mathbf{p}\in B_{|\mathbf{q}|}, \tag{1}\] **Question 3** ([16, Problem 2]).: Can the monotonicity condition imposed on the approximating function \(\psi\) be removed or relaxed? Monotonicity of \(\psi\) is needed in the inhomogeneous Khintchine-Groshev theorem in the case \((m,n)=(1,1)\)[8], and it is not needed when \(nm>2\)[2]. The cases when \((m,n)=(2,1)\) or \((1,2)\) are open. ## 2. Results ### Main results The following theorem addresses Questions 1 and 2. In particular, a singly metric version of the theorem above due to Dani, Laurent and Nogueira holds without any assumptions on the partition. **Theorem 1**.: _Let \(m,n\in\mathbb{N}\) and fix \(\mathbf{y}\in\mathbb{R}^{m}\). Suppose \(\pi=\{\pi_{1},\ldots,\pi_{k}\}\) is a partition of \(\{1,\ldots,m+n\}\) with \(|\pi_{j}|\geq 2\) for each \(j=1,\ldots,k\). 
If \(\psi:\mathbb{N}\to\mathbb{R}_{\geq 0}\) is non-increasing and \(\sum q^{n-1}\psi(q)^{m}\) diverges, then for almost every \(\mathbf{x}\in\operatorname{Mat}_{n\times m}(\mathbb{R})\) there exist infinitely many points \((\mathbf{p},\mathbf{q})\in P(\pi)\) such that (3) holds._ _Conversely, if \(\sum q^{n-1}\psi(q)^{m}\) converges, then for almost every \(\mathbf{x}\in\operatorname{Mat}_{n\times m}(\mathbb{R})\) there are only finitely many \((\mathbf{p},\mathbf{q})\in P(\pi)\) such that (3) holds._ _Remark_.: In fact, we prove a stronger theorem (Theorem 7) where \(\mathbf{y}\) can depend on \(|\mathbf{q}|\), that is, the target balls \(B_{|\mathbf{q}|}\) do not have to be concentric. The following theorem shows that, further to Theorem 1, we can also answer Question 3 affirmatively in the cases where \(nm>2\) (mirroring the current knowledge in the classical setting) if we are willing to impose a mild assumption on the partition. **Theorem 2**.: _Let \(m,n\in\mathbb{N}\) be such that \(nm>2\) and fix \(\mathbf{y}\in\mathbb{R}^{m}\). Suppose \(\pi=\{\pi_{1},\ldots,\pi_{k}\}\) is a partition of \(\{1,\ldots,m+n\}\) such that \(|\pi_{j}|\geq 2\) for each \(j=1,\ldots,k\). Furthermore, suppose that there exists some \(\ell\in\{1,\ldots,k\}\) for which \(|\pi_{\ell}|\geq 3\) and \(\pi_{\ell}\cap\{m+1,\ldots,m+n\}\neq\emptyset\). If \(\psi:\mathbb{N}\to\mathbb{R}_{\geq 0}\) is any function such that \(\sum q^{n-1}\psi(q)^{m}\) diverges, then for almost every \(\mathbf{x}\in\operatorname{Mat}_{n\times m}(\mathbb{R})\) there exist infinitely many points \((\mathbf{p},\mathbf{q})\in P(\pi)\) such that (3) holds._ _Conversely, if \(\sum q^{n-1}\psi(q)^{m}\) converges, then for almost every \(\mathbf{x}\in\operatorname{Mat}_{n\times m}(\mathbb{R})\) there are only finitely many \((\mathbf{p},\mathbf{q})\in P(\pi)\) such that (3) holds._ _Remark_.: As with the previous result, this one also follows from a stronger statement (Theorem 8) where the target ball's center can move. Next, we turn our attention to the following univariate inhomogeneous analogue of the Duffin-Schaeffer conjecture for systems of linear forms. A _homogenenous multivariate_ Duffin-Schaeffer conjecture for systems of linear forms, i.e. where \(\psi\) is a multivariate function depending on \(\mathbf{q}\) rather than \(|\mathbf{q}|\), and \(\mathbf{y}=0\), was posed in [3] and has recently been proved by the second author in [19]. We complement this recent work with the following _inhomogeneous univariate_ statement. Of course, the univariate case is a special case of the multivariate case. The novelty of the following statement therefore is the inhomogeneity which is allowed. **Theorem 3** (Univariate inhomogeneous Duffin-Schaeffer conjecture for systems of linear forms).: _Let \(m,n\in\mathbb{N}\) with \(n>2\) and fix \(\mathbf{y}\in\mathbb{R}^{m}\). If \(\psi:\mathbb{N}\to\mathbb{R}_{\geq 0}\) is a function such that_ \[\sum_{\mathbf{q}\in\mathbb{Z}^{n}}\left(\frac{\varphi(\gcd(\mathbf{q}))\psi(| \mathbf{q}|)}{\gcd(\mathbf{q})}\right)^{m}=\infty, \tag{4}\] _then for almost every \(\mathbf{x}\in\operatorname{Mat}_{n\times m}(\mathbb{R})\) there exist infinitely many points \((\mathbf{p},\mathbf{q})\in\mathbb{Z}^{m}\times\mathbb{Z}^{n}\) with \(\gcd(p_{i},\mathbf{q})=1\) for every \(i=1,\ldots,m\) and such that (3) holds._ _Remark_.: Again, this follows from a more general statement (Theorem 9) where \(\mathbf{y}\) may vary. We conjecture that the result holds without the condition on \(n\). 
**Conjecture 1**.: _Theorem 3 also holds when \(n\leq 2\)._ In the case when \(m=n=1\) and \(\mathbf{y}=0\), Conjecture 1 is exactly the Duffin-Schaeffer conjecture [8], which was proved in 2020 by Koukoulopoulos and Maynard [15]. In 1990, Pollington and Vaughan [18] proved Conjecture 1 in the cases \((m,1)\) with \(m\geq 2\) and \(\mathbf{y}=0\), thus verifying a higher-dimensional simultaneous version of the classical Duffin-Schaeffer Conjecture as postulated by Sprindzuk [21]. ### Hausdorff measure statements In Diophantine Approximation, in addition to considering Lebesgue measure, one is often interested in studying the Hausdorff measure and dimension of sets. This is particularly pertinent for sets which have zero Lebesgue measure as Hausdorff measures and dimensions can often provide a means for distinguishing such sets. For example, to observe this phenomenon, one can compare the classical works of Khintchine [14], Jarnik [12, 13], and Besicovitch [6]. For definitions of Hausdorff measures and dimension, we refer the reader to [9]. Below we record Hausdorff measure analogues of Theorems 1, 2, and 3. For the Hausdorff measure analogues of Theorems 1 and 2, the statements we give follow immediately from [1, Theorem 7], which itself is deduced from the mass transference principle for systems of linear forms proved in [1, Theorem 1]. Given a function \(\psi:\mathbb{N}\to\mathbb{R}_{\geq 0}\), a partition \(\pi=\{\pi_{1},\pi_{2},\ldots,\pi_{k}\}\) of \([m+n]\) with \(|\pi_{j}|\geq 2\) for each \(j=1,\ldots,k\), fixed \(\Phi\in\mathfrak{l}^{mm}\), and \(\mathbf{y}\in\mathfrak{l}^{m}\), define \(\mathcal{M}_{n,m}^{\pi,\mathbf{y},\Phi}(\psi)\) to be the set of \(\mathbf{x}\in\mathfrak{l}^{nm}\) such that \[|\mathbf{q}\mathbf{x}-\mathbf{p}\Phi-\mathbf{y}|<\psi(|\mathbf{q}|)\] holds for \((\mathbf{p},\mathbf{q})\in P(\pi)\) with arbitrarily large \(|\mathbf{q}|\). When \(\Phi\) is the \(m\times m\) identity matrix, we will omit the superscript \(\Phi\) and simply write \(\mathcal{M}_{n,m}^{\pi,\mathbf{y}}(\psi)\). Throughout, we define a _dimension function_ to be a continuous function \(f:\mathbb{R}_{\geq 0}\to\mathbb{R}_{\geq 0}\) with \(f(r)\to 0\) as \(r\to 0\). For a subset \(X\subset\mathfrak{l}^{nm}\), we denote by \(|X|\) its Lebesgue measure and by \(\mathcal{H}^{f}(X)\) its Hausdorff \(f\)-measure. **Theorem** ([1]).: _Let \(\psi:\mathbb{N}\to\mathbb{R}_{\geq 0}\) be such that \(\frac{\psi(q)}{q}\to 0\) as \(q\to\infty\). Let \(\pi=\{\pi_{1},\pi_{2},\ldots,\pi_{k}\}\) be a partition of \([m+n]\) with \(|\pi_{j}|\geq 2\) for each \(j=1,\ldots,k\) and let \(\Phi\in\mathfrak{l}^{mm}\) and \(\mathbf{y}\in\mathfrak{l}^{m}\) be fixed. Let \(f:\mathbb{R}_{\geq 0}\to\mathbb{R}_{\geq 0}\) be a dimension function such that \(r^{-nm}f(r)\) is monotonic and \(g:r\mapsto g(r)=r^{-m(n-1)}f(r)\) is also a dimension function. Define \(\theta:\mathbb{N}\to\mathbb{R}_{\geq 0}\) by_ \[\theta(q)=qg\left(\frac{\psi(q)}{q}\right)^{\frac{1}{m}}.\] _Then_ \[\left|\mathcal{M}_{n,m}^{\pi,\mathbf{y},\Phi}(\theta)\right|=1\qquad\text{ implies}\qquad\mathcal{H}^{f}\left(\mathcal{M}_{n,m}^{\pi,\mathbf{y},\Phi}(\psi)\right)= \mathcal{H}^{f}(\mathfrak{l}^{nm}).\] Applying the above theorem gives rise to the following Hausdorff measure versions of Theorems 1 and 2. We note that we may assume without loss of generality that \(\frac{\psi(q)}{q}\to 0\) as \(q\to\infty\) in Theorems 1, 2, and 3. Hence the appearance of this condition in the statements below is not restrictive. 
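As a purely numerical illustration of the divergence condition (4) in Theorem 3, the following sketch compares truncated versions of the sum for two test functions: \(\psi(q)=q^{-n/m}\), for which the truncated sums keep growing (like a harmonic series), and \(\psi(q)=q^{-(n+1)/m}\), for which they level off. The truncation range and the test functions are arbitrary illustrative choices and play no role in the arguments below.

```python
from itertools import product
from functools import reduce
from math import gcd

def phi(n):
    """Euler's totient via trial-division factorization (adequate for small n)."""
    result, p = n, 2
    while p * p <= n:
        if n % p == 0:
            while n % p == 0:
                n //= p
            result -= result // p
        p += 1
    if n > 1:
        result -= result // n
    return result

def partial_sum(psi, n, m, Q):
    """Truncation of the series in (4) to the vectors q in Z^n with 0 < |q| <= Q (sup norm)."""
    total = 0.0
    for q in product(range(-Q, Q + 1), repeat=n):
        if all(c == 0 for c in q):
            continue
        g = reduce(gcd, (abs(c) for c in q))       # gcd(q)
        norm = max(abs(c) for c in q)              # |q|
        total += (phi(g) * psi(norm) / g) ** m
    return total

n, m = 3, 1
for Q in (10, 20, 30):
    div = partial_sum(lambda q: q ** (-n / m), n, m, Q)         # expected to keep growing
    conv = partial_sum(lambda q: q ** (-(n + 1) / m), n, m, Q)  # expected to level off
    print(f"Q={Q:>3}  divergent-type sum={div:8.3f}  convergent-type sum={conv:8.3f}")
```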
**Theorem 4** (Hausdorff measure version of Theorem 1).: _Let \(m,n\in\mathbb{N}\) and fix \(\mathbf{y}\in\mathbb{R}^{m}\). Let \(\pi=\{\pi_{1},\pi_{2},\ldots,\pi_{k}\}\) be a partition of \([m+n]\) with \(|\pi_{j}|\geq 2\) for each \(j=1,\ldots,k\) and suppose that \(\psi:\mathbb{N}\to\mathbb{R}_{\geq 0}\) is non-increasing (in particular, note that this means that \(\frac{\psi(q)}{q}\to 0\) as \(q\to\infty\)). Let \(f:\mathbb{R}_{\geq 0}\to\mathbb{R}_{\geq 0}\) be a dimension function such that \(r^{-nm}f(r)\) is monotonic and \(g:r\mapsto g(r)=r^{-m(n-1)}f(r)\) is also a dimension function. If_ \[\sum_{q=1}^{\infty}q^{n+m-1}g\left(\frac{\psi(q)}{q}\right)=\infty,\] _then_ \[\mathcal{H}^{f}\left(\mathcal{M}_{n,m}^{\mathbf{y}}(\psi)\right)=\mathcal{H} ^{f}(\mathfrak{l}^{nm}).\] **Theorem 5** (Hausdorff measure version of Theorem 2).: _Let \(m,n\in\mathbb{N}\) and fix \(\mathbf{y}\in\mathbb{R}^{m}\). Let \(\pi=\{\pi_{1},\pi_{2},\ldots,\pi_{k}\}\) be a partition of \([m+n]\) with \(|\pi_{j}|\geq 2\) for each \(j=1,\ldots,k\) and further suppose that there exists some \(\ell\in\{1,\ldots,k\}\) such that \(|\pi_{\ell}|\geq 3\) and \(\pi_{\ell}\cap\{m,m+1,\ldots,m+n\}\neq\emptyset\). Suppose that \(\psi:\mathbb{N}\to\mathbb{R}_{\geq 0}\) is such that \(\frac{\psi(q)}{q}\to 0\) as \(q\to\infty\) and suppose \(f:\mathbb{R}_{\geq 0}\to\mathbb{R}_{\geq 0}\) is a dimension function such that \(r^{-nm}f(r)\) is monotonic and \(g:r\mapsto g(r)=r^{-m(n-1)}f(r)\) is also a dimension function. If_ \[\sum_{q=1}^{\infty}q^{n+m-1}g\left(\frac{\psi(q)}{q}\right)=\infty,\] _then_ \[\mathcal{H}^{f}\left(\mathcal{M}_{n,m}^{\mathbf{y}}(\psi)\right)=\mathcal{H} ^{f}(\mathfrak{l}^{nm}).\] _Remark_.: Theorems 1 and 2 further refine some similar statements proved in [1], see Theorems 8-10 therein. Given a function \(\psi:\mathbb{N}\to\mathbb{R}_{\geq 0}\) and a fixed \(\mathbf{y}\in\mathbb{R}^{m}\), let us now denote by \(\mathcal{A}_{n,m}^{\mathbf{y}}(\psi)\) the set of points \(\mathbf{x}\in\mathfrak{l}^{nm}\) for which \[|\mathbf{q}\mathbf{x}-\mathbf{p}-\mathbf{y}|<\psi(|\mathbf{q}|)\] for infinitely many pairs of vectors \((\mathbf{p},\mathbf{q})\in\mathbb{Z}^{m}\times\mathbb{Z}^{n}\setminus\{ \mathbf{0}\}\) with \(\gcd(p_{i},\mathbf{q})=1\) for every \(1\leq i\leq m\). By modifying arguments contained in [1, Section 2], combining [1, Theorem 1] with Theorem 3, it is possible to obtain the following inhomogeneous univariate Hausdorff measure statement. **Theorem 6** (Hausdorff measure version of Theorem 3).: _Suppose \(\psi:\mathbb{N}\to\mathbb{R}_{\geq 0}\) is any function such that \(\frac{\psi(q)}{q}\to 0\) as \(q\to\infty\) and suppose \(\mathbf{y}\in\mathbb{R}^{m}\) is fixed. Let \(f:\mathbb{R}_{\geq 0}\to\mathbb{R}_{\geq 0}\) be a dimension function such that \(r^{-nm}f(r)\) is monotonic and \(g:r\mapsto g(r)=r^{-m(n-1)}f(r)\) is also a dimension function. If_ \[\sum_{\mathbf{q}\in\mathbb{Z}^{n}\setminus\{0\}}\left(\frac{\varphi(\gcd( \mathbf{q}))}{\gcd(\mathbf{q})}|\mathbf{q}|\right)^{m}g\left(\frac{\psi(| \mathbf{q}|)}{|\mathbf{q}|}\right)=\infty, \tag{5}\] _then_ \[\mathcal{H}^{f}(\mathcal{A}_{n,m}^{\mathbf{y}}(\psi))=\mathcal{H}^{f}( \mathfrak{l}^{nm}).\] _Remark_.: The complementary convergence statements corresponding to Theorems 4, 5, and 6 can all be proved via standard covering arguments. Moreover, in the convergence cases, no monotonicity assumptions are required. ## 3. Partition reduction Fix \(m,n\in\mathbb{N}\). We are concerned with partitions of \([m+n]:=\{1,2,\ldots,m,m+1,\ldots,m+n\}\). 
For example, \([m+n]\) itself can be regarded as the trivial partition, more correctly written as \(\{[m+n]\}\). Another important partition is \(\{[m],m+[n]\}\), where \(m+[n]=m+\{1,\ldots,n\}=\{m+1,\ldots,m+n\}\). Let \(\pi=(\pi_{1},\ldots,\pi_{k})\) be a partition of \([m+n]\) such that \(|\pi_{j}|\geq 2\) for each \(j=1,\ldots,k\). By reordering the partition components if necessary, we may suppose that there exist \(0\leq a\leq b\leq k\) such that

* For each \(j\in[1,a]\cap\mathbb{Z}\) we have \(\pi_{j}\cap[m]\neq\emptyset\) and \(\pi_{j}\cap(m+[n])\neq\emptyset\).
* For each \(j\in(a,b]\cap\mathbb{Z}\) we have \(\pi_{j}\subset[m]\).
* For each \(j\in(b,k]\cap\mathbb{Z}\) we have \(\pi_{j}\subset m+[n]\).

For each \(j=1,\ldots,k\) it is convenient to abuse notation and let \(\pi_{j}\) also denote the projection of \(\mathbb{Z}^{m+n}\) onto the coordinates corresponding to \(\pi_{j}\), regarded as a vector in \(\mathbb{Z}^{|\pi_{j}|}\). (The context will disambiguate.) For example, if we use \(\pi_{1}=[m]\) and \(\pi_{2}=m+[n]\), and elements of \(\mathbb{Z}^{m+n}\) are written \((\mathbf{p},\mathbf{q})\) with \(\mathbf{p}\in\mathbb{Z}^{m}\) and \(\mathbf{q}\in\mathbb{Z}^{n}\), then \(\pi_{1}(\mathbf{p},\mathbf{q})=\mathbf{p}\) and \(\pi_{2}(\mathbf{p},\mathbf{q})=\mathbf{q}\). Indeed, let us keep the convention of writing \((\mathbf{p},\mathbf{q})\) for elements of \(\mathbb{Z}^{m+n}=\mathbb{Z}^{m}\times\mathbb{Z}^{n}\). Then \[P(\pi)=\{(\mathbf{p},\mathbf{q})\in\mathbb{Z}^{m+n}:\text{ for each }j=1,\ldots,k,\quad\gcd(\pi_{j}(\mathbf{p},\mathbf{q}))=1\}.\] Note that \(P(\{[m+n]\})\) is the set of primitive points in \(\mathbb{Z}^{m+n}\). For \(\mathbf{q}\in\mathbb{Z}^{n}\), let \[P(\pi,\mathbf{q})=\{\mathbf{p}\in\mathbb{Z}^{m}:(\mathbf{p},\mathbf{q})\in P(\pi)\}.\] For example, \(P(\{[m+n]\},\mathbf{q})=\mathbb{Z}^{m}\) if \(\gcd(\mathbf{q})=1\), but in general \(P(\{[m+n]\},\mathbf{q})\) is a proper subset of \(\mathbb{Z}^{m}\). Finally, let \[Q(\pi)=\{\mathbf{q}\in\mathbb{Z}^{n}:P(\pi,\mathbf{q})\neq\emptyset\}.\] These are the vectors \(\mathbf{q}\) such that for all \(j\in(b,k]\cap\mathbb{Z}\), \(\gcd(\pi_{j}(\mathbf{0},\mathbf{q}))=1\), with \(\mathbf{0}\) denoting the \(m\)-dimensional \(0\)-vector.

## 4. Arithmetic lemmas

In this section we collect some useful arithmetic sums, many of which are well-known. The main purpose of this section is to prove Lemma 4, an estimate which is crucial later in the proof of Lemma 7. We will often use the Vinogradov symbols. Suppose that \(f:\mathbb{R}_{\geq 0}\to\mathbb{R}_{>0}\) and \(g:\mathbb{R}_{\geq 0}\to\mathbb{R}_{>0}\) are functions. We say that \(f\ll g\) if there exists a constant \(C>0\) such that \(f(x)\leq Cg(x)\) for all \(x\in\mathbb{R}_{\geq 0}\). If \(f\ll g\) and \(g\ll f\), we write \(f\asymp g\) and say that \(f\) and \(g\) are _comparable_. We write \(f\sim g\) if \(\frac{f(x)}{g(x)}\to 1\) as \(x\to\infty\) and we write \(f=o(g)\) if \(\frac{f(x)}{g(x)}\to 0\) as \(x\to\infty\). Throughout, we use \(\varphi\) to denote the _Euler totient function_ and \(\mu\) to denote the _Möbius function_. For definitions and properties, the reader is referred to [10]. 
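Before turning to the lemmas, the following short computational sketch makes the notation of Section 3 concrete: it tests membership in \(Q(\pi)\) and enumerates the portion of \(P(\pi,\mathbf{q})\) lying in a finite box (the full set is of course infinite). The example partition \(\pi=\{\{1,3\},\{2,4\}\}\) with \(m=n=2\), the vector \(\mathbf{q}=(2,3)\), and the box size are arbitrary illustrative choices.

```python
from itertools import product
from functools import reduce
from math import gcd

def vec_gcd(values):
    """gcd of an integer tuple; gcd(0, x) = |x|, so zero entries are harmless."""
    return reduce(gcd, (abs(v) for v in values), 0)

def in_P(pi, p, q):
    """Is (p, q) in P(pi)?  `pi` is a list of 1-based index blocks of [m + n]."""
    point = tuple(p) + tuple(q)
    return all(vec_gcd(point[i - 1] for i in block) == 1 for block in pi)

def P_pi_q(pi, q, m, box):
    """The finite piece of P(pi, q) with every |p_i| <= box."""
    return [p for p in product(range(-box, box + 1), repeat=m) if in_P(pi, p, q)]

def in_Q(pi, q, m):
    """q lies in Q(pi) iff every block contained in m + [n] is already primitive."""
    q_only_blocks = [b for b in pi if all(i > m for i in b)]
    return all(vec_gcd(q[i - m - 1] for i in b) == 1 for b in q_only_blocks)

# Example: m = n = 2 and pi = {{1, 3}, {2, 4}}, a partition whose blocks pair
# each p_i with one q_i (chosen purely for illustration).
m, n = 2, 2
pi = [[1, 3], [2, 4]]
q = (2, 3)
print(in_Q(pi, q, m))                 # True: no block is contained in m + [n]
print(len(P_pi_q(pi, q, m, box=3)))   # 16: p_1 must be odd, p_2 coprime to 3
```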
**Lemma 1**.: _For each integer \(D\geq 2\) there exists a constant \(C>0\) such that_ \[\sum_{\begin{subarray}{c}\mathbf{q}\in\mathbb{Z}^{D}\\ |\mathbf{q}|\leq q\\ \gcd(\mathbf{q})=1\end{subarray}}1\geq Cq^{D}\] _for all \(q\geq 1\)._ Proof.: It is well-known that for all \(D\geq 2\) we have the asymptotic equality \[\sum_{\begin{subarray}{c}\mathbf{q}\in\mathbb{Z}^{D}\\ |\mathbf{q}|\leq q\\ \gcd(\mathbf{q})=1\end{subarray}}1\simeq 2^{D}\zeta(D)^{-1}q^{D}\] as \(q\to\infty\), where \(\zeta\) denotes the Riemann zeta function. (See, for example, [7, Lemma 4.2].) In particular, there is some \(Q_{D}\) such that \[\sum_{\begin{subarray}{c}\mathbf{q}\in\mathbb{Z}^{D}\\ |\mathbf{q}|\leq q\\ \gcd(\mathbf{q})=1\end{subarray}}1\geq 2^{D-1}\zeta(D)^{-1}q^{D}\] holds for all \(q\geq Q_{D}\). We may take \(C\) as the minimum of the finite set \[\left\{2^{D-1}\zeta(D)^{-1}\right\}\cup\left\{q^{-D}\sum_{\begin{subarray}{c} \mathbf{q}\in\mathbb{Z}^{D}\\ |\mathbf{q}|\leq q\\ \gcd(\mathbf{q})=1\end{subarray}}1:q\leq Q_{D}\right\}.\] This proves the lemma. **Lemma 2**.: _For each integer \(D\geq 1\) there exists a constant \(C>0\) such that_ \[\sum_{\begin{subarray}{c}\mathbf{q}\in\mathbb{Z}^{D}\\ |\mathbf{q}|\leq q\\ \gcd(\mathbf{q},q)=1\end{subarray}}1\geq\sum_{\begin{subarray}{c}\mathbf{q} \in\mathbb{Z}^{D}\\ |\mathbf{q}|\leq q\\ \gcd(\mathbf{q})=1\end{subarray}}1\geq Cq^{D},\] _for all \(q\geq 1\)._ Proof.: The case \(D=1\) is exactly the definition of \(2\varphi(D)\), so let us assume \(D\geq 2\). Then \[\sum_{\begin{subarray}{c}\mathbf{q}\in\mathbb{Z}^{D}\\ |\mathbf{q}|\leq q\\ \gcd(\mathbf{q},q)=1\end{subarray}}1\geq\sum_{\begin{subarray}{c}\mathbf{q} \in\mathbb{Z}^{D}\\ |\mathbf{q}|\leq q\\ \gcd(\mathbf{q})=1\end{subarray}}1\geq Cq^{D},\] for all \(q\geq 1\), by Lemma 1. **Lemma 3**.: _For each integer \(D\geq 1\) there is a constant \(C>0\) such that_ \[\sum_{\begin{subarray}{c}\mathbf{q}\in\mathbb{Z}^{D}\\ |\mathbf{q}|\leq q\\ \end{subarray}}\frac{\varphi(\gcd(\mathbf{q}))}{\gcd(\mathbf{q})}\geq Cq^{D} \qquad\text{and}\qquad\sum_{\begin{subarray}{c}\mathbf{q}\in\mathbb{Z}^{D}\\ |\mathbf{q}|\leq q\\ \gcd(\mathbf{q})=1\end{subarray}}\frac{\varphi(\gcd(\mathbf{q},q))}{\gcd( \mathbf{q},q)}\geq Cq^{D}\] _both hold for all \(q\geq 1\)._ Proof.: For the first expression, if \(D\geq 2\), we have \[\sum_{\begin{subarray}{c}\mathbf{q}\in\mathbb{Z}^{D}\\ |\mathbf{q}|\leq q\\ \gcd(\mathbf{q})=1\end{subarray}}\frac{\varphi(\gcd(\mathbf{q}))}{\gcd( \mathbf{q})}=\sum_{\begin{subarray}{c}\mathbf{q}\in\mathbb{Z}^{D}\\ |\mathbf{q}|\leq q\\ \gcd(\mathbf{q})=1\end{subarray}}1\gg q^{D},\] where the last inequality follows from Lemma 1. On the other hand, if \(D=1\), we may write \[\sum_{\begin{subarray}{c}\mathbf{q}\in\mathbb{Z}\\ |\mathbf{q}|\leq q\end{subarray}}\frac{\varphi(\gcd(\mathbf{q}))}{\gcd( \mathbf{q})}=\sum_{\begin{subarray}{c}q^{\prime}\in\mathbb{Z}\\ |q^{\prime}|\leq q\end{subarray}}\frac{\varphi(q^{\prime})}{q^{\prime}}\gg q.\] The last estimate is well-known. It follows from the fact that the average order of \(\varphi(q)\) is \(6q/\pi^{2}\) (see, for example, [10, Theorem 330]). 
For the second expression in the lemma, if \(D\geq 2\), we have \[\sum_{\begin{subarray}{c}\mathbf{q}\in\mathbb{Z}^{D}\\ |\mathbf{q}|\leq q\end{subarray}}\frac{\varphi(\gcd(\mathbf{q},q))}{\gcd( \mathbf{q},q)}\geq\sum_{\begin{subarray}{c}\mathbf{q}\in\mathbb{Z}^{D}\\ |\mathbf{q}|\leq q\\ \gcd(\mathbf{q})=1\end{subarray}}\frac{\varphi(\gcd(\mathbf{q},q))}{\gcd( \mathbf{q},q)}=\sum_{\begin{subarray}{c}\mathbf{q}\in\mathbb{Z}^{D}\\ |\mathbf{q}|\leq q\\ \gcd(\mathbf{q})=1\end{subarray}}1\gg q^{D},\] where the final inequality again follows by Lemma 1. If \(D=1\), we write \[\sum_{\begin{subarray}{c}\mathbf{q}\in\mathbb{Z}\\ |\mathbf{q}|\leq q\end{subarray}}\frac{\varphi(\gcd(\mathbf{q},q))}{\gcd( \mathbf{q},q)}= \sum_{d|q}\frac{\varphi(d)}{d}\sum_{\begin{subarray}{c}q^{\prime}\in \mathbb{Z}\\ |q^{\prime}|\leq q\\ \gcd(q^{\prime},q)=d\end{subarray}}1\] \[= \sum_{d|q}\frac{\varphi(d)}{d}\sum_{\begin{subarray}{c}q^{ \prime}\in\mathbb{Z}\\ |q^{\prime}|\leq q/d\\ \gcd(q^{\prime},\frac{q}{d})=1\end{subarray}}1\] \[= 2\sum_{d|q}\frac{\varphi(d)}{d}\varphi(q/d).\] Meanwhile, we have \[\sum_{d|q}\frac{\varphi(d)\varphi(q/d)}{d} =\sum_{d|q}\varphi(q/d)\sum_{j|d}\frac{\mu(j)}{j}\] \[=\sum_{j|q}\frac{\mu(j)}{j}\sum_{i(q/j)}\varphi(q/ij)\] \[=\sum_{j|q}\frac{\mu(j)}{j}\left(\frac{q}{j}\right)\] \[=q\sum_{j|q}\frac{\mu(j)}{j^{2}}\] \[=q\prod_{p|q}\bigl{(}1-p^{-2}\bigr{)}.\] Noting that (see [10, Theorem 280]) for every \(q\geq 1\) we have \[\zeta(2)^{-1}\leq\prod_{p|q}\bigl{(}1-p^{-2}\bigr{)}\leq 1,\] we see that \[\sum_{d|q}\frac{\varphi(d)\varphi(q/d)}{d}=q\prod_{p|q}\bigl{(}1-p^{-2}\bigr{)} \asymp q.\] This finishes the proof of the lemma. **Lemma 4**.: _Suppose \(\pi=(\pi_{1},\ldots,\pi_{k})\) is a partition of \([m+n]\) such that for every \(j=1,\ldots,k\) we have \(|\pi_{j}|\geq 2\), as in Section 3. Suppose there is some \(\ell\in\{1,2,\ldots,k\}\) such that \(\pi_{\ell}\cap(m+[n])\neq\emptyset\) and \(|\pi_{\ell}|\geq 3\). Then there exists a constant \(C>0\) such that_ \[\sum_{\begin{subarray}{c}|\mathbf{q}|=q\\ \mathbf{q}\in Q(\pi)\mid\pi_{j}\cap[m]=1\end{subarray}}\prod_{\begin{subarray} {c}1\leq j\leq k\\ \mathbf{q}\in Q(\pi)\mid\pi_{j}\cap[m]=1\end{subarray}}\frac{\varphi(\gcd(\pi_ {j}(\mathbf{q})))}{\gcd(\pi_{j}(\mathbf{q}))}\geq Cq^{n-1}\] _for all \(q\geq 1\). If no such \(\ell\) exists, then the above sum can be bounded below by \(\geq Cq^{n-2}\varphi(q)\)._ Proof.: Suppose first that \(\ell\in(b,k]\), so \(\pi_{\ell}\subset m+[n]\). By restricting the sum to those \(\mathbf{q}\) whose norm is achieved in a coordinate corresponding to \(\pi_{\ell}\), i.e. 
\(|\pi_{\ell}(\mathbf{q})|=|\mathbf{q}|\), we may bound \[\sum_{\begin{subarray}{c}|\mathbf{q}|=q\\ \mathbf{q}\in Q(\pi)\mid\pi_{j}\cap[m]=1\end{subarray}}\prod_{\begin{subarray} {c}1\leq j\leq k\\ \mathbf{q}\in Q(\pi)\mid\pi_{j}\cap[m]=1\end{subarray}}\frac{\varphi(\gcd(\pi_ {j}(\mathbf{q})))}{\gcd(\pi_{j}(\mathbf{q}))}\geq\sum_{\begin{subarray}{c}| \pi_{\ell}(\mathbf{q})|=q\\ \mathbf{q}\in Q(\pi)\mid\pi_{j}\cap[m]=1\end{subarray}}\prod_{\begin{subarray} {c}1\leq j\leq k\\ |\pi_{j}\cap[m]|\neq 1\end{subarray}}\frac{\varphi(\gcd(\pi_{j}(\mathbf{q})))}{ \gcd(\pi_{j}(\mathbf{q}))}\prod_{\begin{subarray}{c}1\leq j\leq k\\ |\pi_{j}\cap[m]|\neq 1\end{subarray}}\mathbf{1}.\] We can further split \[\sum_{\begin{subarray}{c}|\pi_{\ell}(\mathbf{q})|=q\\ \mathbf{q}\in Q(\pi)\mid\pi_{j}\cap[m]=1\end{subarray}}\frac{\varphi(\gcd( \pi_{j}(\mathbf{q})))}{\gcd(\pi_{j}(\mathbf{q}))}\prod_{\begin{subarray}{c}1 \leq j\leq k\\ |\pi_{j}\cap[m]|\neq 1\end{subarray}}\mathbf{1}\\ =\left[\prod_{\begin{subarray}{c}|\pi_{j}\cap[m]=1\end{subarray}} \sum_{\begin{subarray}{c}\mathbf{q}\in\mathbb{Z}^{|\pi_{j}\cap[m]|}\\ |\mathbf{q}|\leq q\end{subarray}}\frac{\varphi(\gcd(\mathbf{q}))}{\gcd( \mathbf{q})}\right]\left[\prod_{\begin{subarray}{c}|\pi_{j}\cap[m]|>1\end{subarray}} \sum_{\begin{subarray}{c}\mathbf{q}\in\mathbb{Z}^{|\pi_{j}\cap[m+[n]|]}\\ |\mathbf{q}|\leq q\end{subarray}}\mathbf{1}\right]\\ \times\left[\prod_{\begin{subarray}{c}j\in\{b,k\}\\ j\neq\ell\end{subarray}}\sum_{\begin{subarray}{c}\mathbf{q}\in\mathbb{Z}^{|\pi_{ j}|}\\ |\mathbf{q}|\leq q,\gcd(\mathbf{q})=1\end{subarray}}\mathbf{1}\right]\left[\sum_{ \begin{subarray}{c}\mathbf{q}\in\mathbb{Z}^{|\pi_{\ell}|-1}\\ |\mathbf{q}|\leq q,\gcd(\mathbf{q},q)=1\end{subarray}}\mathbf{1}\right].\] Using Lemmas 1, 2, and 3, and the fact that \(|\pi_{\ell}|\geq 3\), we see that there is some constant \(C>0\) such that we can bound the above expression below by \[\geq C\left[\prod_{|\pi_{j}\cap[m]|=1}q^{|\pi_{j}\cap(m+[n]|)|} \right]\left[\prod_{\begin{subarray}{c}|\pi_{j}\cap[m]|>1\end{subarray}}q^{| \pi_{j}\cap(m+[n]|)|}\right]\left[\prod_{\begin{subarray}{c}j\in\{b,k\}\\ j\neq\ell\end{subarray}}q^{|\pi_{j}|}\right]\left[q^{|\pi_{\ell}|-1}\right]\] \[=Cq^{n-1},\] as needed. Now suppose \(\ell\in[1,a]\). 
If \(|\pi_{\ell}\cap[m]|>1\) then by again restricting the sum to those \(\mathbf{q}\) having \(|\mathbf{q}|=|\pi_{\ell}(\mathbf{q})|\), we have \[\sum_{\begin{subarray}{c}|\mathbf{q}|=q\\ \mathbf{q}\in Q(\pi)\end{subarray}}\prod_{|\pi_{j}\cap[m]|=1}\frac{\varphi( \gcd(\pi_{j}(\mathbf{q})))}{\gcd(\pi_{j}(\mathbf{q}))}\\ \geq\left[\prod_{|\pi_{j}\cap[m]|=1}\sum_{\begin{subarray}{c} \mathbf{q}\in\mathbb{Z}^{|\pi_{j}\cap(m+[n])|}\\ |\mathbf{q}|\leq q\end{subarray}}\frac{\varphi(\gcd(\mathbf{q}))}{\gcd( \mathbf{q})}\right]\left[\prod_{|\pi_{j}\cap[m]|>1}\sum_{\begin{subarray}{c} \mathbf{q}\in\mathbb{Z}^{|\pi_{j}\cap(m+[n])|}\\ |\mathbf{q}|\leq q\end{subarray}}\sum_{\begin{subarray}{c}|\mathbf{q}|\leq q \end{subarray}}1\right]\\ \times\left[\sum_{\begin{subarray}{c}\mathbf{q}\in\mathbb{Z}^{| \pi_{\ell}\cap(m+[n])|-1}\\ |\mathbf{q}|\leq q\end{subarray}}\mathbf{1}\right]\left[\prod_{j\in(b,k]}\sum_ {\begin{subarray}{c}\mathbf{q}\in\mathbb{Z}^{|\pi_{j}|}\\ |\mathbf{q}|\leq q,\gcd(\mathbf{q})=1\end{subarray}}\mathbf{1}\right],\] and by Lemmas 1 and 3 we bound below by \[\sum_{\begin{subarray}{c}|\mathbf{q}|=q\\ \mathbf{q}\in Q(\pi)\end{subarray}}\prod_{|\pi_{j}\cap[m]|=1}\frac{\varphi( \gcd(\pi_{j}(\mathbf{q})))}{\gcd(\pi_{j}(\mathbf{q}))}\] \[\geq C\left[\prod_{|\pi_{j}\cap[m]|=1}q^{|\pi_{j}\cap(m+[n])|} \right]\left[\prod_{|\pi_{j}\cap[m]|>1}q^{|\pi_{j}\cap(m+[n])|}\right]\left[ q^{|\pi_{\ell}\cap(m+[n])|-1}\right]\left[\prod_{j\in(b,k]}q^{|\pi_{j}|}\right]\] \[=Cq^{n-1}.\] On the other hand, if \(|\pi_{\ell}\cap[m]|=1\), then again by restricting the sum to those \(\mathbf{q}\) such that \(|\pi_{\ell}(\mathbf{q})|=|\mathbf{q}|\), we have \[\sum_{\begin{subarray}{c}|\mathbf{q}|=q\\ \mathbf{q}\in Q(\pi)\end{subarray}}\prod_{|\pi_{j}\cap[m]|=1}\frac{\varphi( \gcd(\pi_{j}(\mathbf{q})))}{\gcd(\pi_{j}(\mathbf{q}))}\\ \geq\left[\prod_{\begin{subarray}{c}|\pi_{j}\cap[m]|=1\\ j\neq\ell\end{subarray}}\sum_{\begin{subarray}{c}\mathbf{q}\in\mathbb{Z}^{|\pi_{ j}\cap(m+[n])|}\\ |\mathbf{q}|\leq q\end{subarray}}\frac{\varphi(\gcd(\mathbf{q}))}{\gcd( \mathbf{q})}\right]\left[\sum_{\begin{subarray}{c}\mathbf{q}\in\mathbb{Z}^{| \pi_{\ell}\cap(m+[n])|-1}\\ |\mathbf{q}|\leq q\end{subarray}}\frac{\varphi(\gcd(\mathbf{q},q))}{\gcd( \mathbf{q},q)}\right]\\ \times\left[\prod_{|\pi_{j}\cap[m]|>1}\sum_{\begin{subarray}{c} \mathbf{q}\in\mathbb{Z}^{|\pi_{j}\cap(m+[n])|}\\ |\mathbf{q}|\leq q\end{subarray}}\mathbf{1}\right]\left[\prod_{j\in(b,k]}\sum_ {\begin{subarray}{c}\mathbf{q}\in\mathbb{Z}^{|\pi_{j}|}\\ |\mathbf{q}|\leq q\end{subarray}}\mathbf{1}\right]\] \[\geq C\left[\prod_{|\pi_{j}\cap[m]|=1}q^{|\pi_{j}\cap(m+[n])|} \right]\left[q^{|\pi_{\ell}\cap(m+[n])|-1}\right]\left[\prod_{|\pi_{j}\cap[m] |>1}q^{|\pi_{j}\cap(m+[n])|}\right]\left[\prod_{j\in(b,k]}q^{|\pi_{j}|}\right]\] \[= Cq^{n-1}\] for all \(q\geq 1\). The penultimate line in this case again follows from Lemmas 1 and 3. This proves the first part of the lemma. Suppose now that there is no \(\ell\) such that \(\pi_{\ell}\cap(m+[n])\neq\emptyset\) and \(|\pi_{\ell}|\geq 3\). 
If instead there is some \(\ell\in(b,k]\) with \(|\pi_{\ell}|=2\), then restricting the sum to the \(\mathbf{q}\) such that \(|\mathbf{q}|=|\pi_{\ell}(\mathbf{q})|\) and using Lemmas 1, 2, and 3, we find \[\sum_{\begin{subarray}{c}|\mathbf{q}|=q\\ \mathbf{q}\in\mathbf{Q}(\pi)\end{subarray}}\prod_{|\pi_{j}\cap[m]|=1}\frac{ \varphi(\gcd(\pi_{j}(\mathbf{q})))}{\gcd(\pi_{j}(\mathbf{q}))}\] \[\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\times\left[ \prod_{\begin{subarray}{c}j\in(b,k]\\ j\neq\ell\end{subarray}}\sum_{\begin{subarray}{c}\mathbf{q}\in\mathbb{Z}^{| \pi_{j}|}\\ |\mathbf{q}|\leq q,\gcd(\mathbf{q})=1\end{subarray}}\mathbf{1}\right]\left[ \sum_{\begin{subarray}{c}\mathbf{q}\in\mathbb{Z}^{|\pi_{\ell}|-1}\\ |\mathbf{q}|\leq q,\gcd(\mathbf{q},q)=1\end{subarray}}\mathbf{1}\right]\] \[\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \times\left[\prod_{\begin{subarray}{c}j\in(b,k]\\ j\neq\ell\end{subarray}}\sum_{\begin{subarray}{c}\mathbf{q}\in\mathbb{Z}^{| \pi_{j}|}\\ |\mathbf{q}|\leq q,\gcd(\mathbf{q})=1\end{subarray}}\mathbf{1}\right]\left[ \sum_{\begin{subarray}{c}\mathbf{q}\in\mathbb{Z}^{|\pi_{\ell}|-1}\\ |\mathbf{q}|\leq q,\gcd(\mathbf{q},q)=1\end{subarray}}\mathbf{1}\right]\] \[\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\gg q^{n-2} \varphi(q).\] If there is no \(\ell\in(b,k]\), then we must have \(n=1\) and there must necessarily exist some \(\ell\in[1,a]\) such that \(|\pi_{\ell}\cap[m]|=1\). Restricting the sum again to \(\mathbf{q}\) such that \(|\pi_{\ell}(\mathbf{q})|=|\mathbf{q}|\), we have \[\sum_{\begin{subarray}{c}|\mathbf{q}|=q\\ \mathbf{q}\in\mathbf{Q}(\pi)\end{subarray}}\prod_{|\pi_{j}\cap[m]|=1}\frac{ \varphi(\gcd(\pi_{j}(\mathbf{q})))}{\gcd(\pi_{j}(\mathbf{q}))}\] \[\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\times\left[\prod_{\begin{subarray}{c}|\pi_{j}\cap[m]|>1\\ |\mathbf{q}|\leq q\end{subarray}}\sum_{\begin{subarray}{c}\mathbf{q}\in\mathbb{Z} ^{|\pi_{j}|\cap[m+[n]]|}\\ |\mathbf{q}|\leq q\end{subarray}}\mathbf{1}\right]\left[\prod_{j\in(b,k]}\sum_{ \begin{subarray}{c}\mathbf{q}\in\mathbb{Z}^{|\pi_{j}|}\\ |\mathbf{q}|\leq q,\gcd(\mathbf{q})=1\end{subarray}}\mathbf{1}\right]\] \[\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\gg\left[\prod_{\begin{subarray}{c}|\pi_{j}\cap[m] |=1\\ j\neq\ell\end{subarray}}q^{|\pi_{j}\cap(m+[n])|}\left[\frac{\varphi(q)}{q}\right] \left[\prod_{\begin{subarray}{c}|\pi_{j}\cap[m]|>1\\ |\mathbf{q}|\leq q\end{subarray}}q^{|\pi_{j}\cap(m+[n])|}\right]\left[\prod_{j \in(b,k]}q^{|\pi_{j}|}\right]\] \[\qquad\qquad\qquad\qquad\gg q^{n-2}\varphi(q).\] The penultimate line in this case follows from Lemmas 1 and 3. This completes the proof of the lemma. ## 5. Counting The main purpose of this section is to prove Lemma 7 which states that, in a sense, the integer points \((\mathbf{p},\mathbf{q})\in P(\pi)\) with \(|\mathbf{q}|=q\) are uniformly distributed for large \(q\), where \(\pi\) is a partition of \([m+n]\) as in Section 3. The lemma is important later in our proof of Lemma 8, where we show that the sets \(A^{\pi}(\mathbf{q})\) (defined in the next section) also enjoy a kind of uniform distribution in \(\mathbb{I}^{nm}\) where \(\mathbb{I}=[0,1]\). The following lemma can be deduced from [17, Lemma 1]. We include its proof for completeness. 
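The simplest instance of this uniformity is the count of reduced fractions \(p/q\) lying in a fixed subinterval, which is the content of Lemma 5 below: for large \(q\) it is comparable to \(\varphi(q)\gamma\), where \(\gamma\) is the length of the interval. The following rough numerical check illustrates this; the interval \((0.25,0.65)\) and the sample denominators are arbitrary choices made only for illustration.

```python
from math import gcd

def phi(q):
    """Euler's totient by direct counting (adequate for small q)."""
    return sum(1 for a in range(1, q + 1) if gcd(a, q) == 1)

def coprime_count(q, alpha, beta):
    """#{p : gcd(p, q) = 1, alpha*q <= p <= beta*q}, as in Lemma 5."""
    return sum(1 for p in range(int(alpha * q), int(beta * q) + 1)
               if alpha * q <= p <= beta * q and gcd(p, q) == 1)

alpha, beta = 0.25, 0.65          # test interval, gamma = 0.4
gamma = beta - alpha
for q in (30, 210, 1009, 5040):   # sample denominators
    count, expected = coprime_count(q, alpha, beta), phi(q) * gamma
    print(f"q={q:>5}  count={count:>5}  phi(q)*gamma={expected:8.1f}  ratio={count / expected:.3f}")
```

The ratios hover around 1 for large \(q\), in line with the bounds \(\tfrac{1}{2}\varphi(q)\gamma\) and \(\tfrac{3}{2}\varphi(q)\gamma\) appearing in Lemma 5; small \(q\) (such as \(q=30\)) can of course fall outside this range, which is why the lemma only applies for \(q\geq Q_{\gamma}\).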
**Lemma 5**.: _For any \(0\leq\alpha<\beta\leq 1\) with \(\beta-\alpha=\gamma\), there exists some integer \(Q_{\gamma}>0\) such that_ \[\frac{1}{2}\varphi(q)\gamma\leq\#\{p\in\mathbb{N}:\gcd(p,q)=1,\quad\alpha q \leq p\leq\beta q\}\leq\frac{3}{2}\varphi(q)\gamma \tag{6}\] _whenever \(q\geq Q_{\gamma}\)._ Proof.: Let \(\theta(q)=\#\{p/q\in(\alpha,\beta)\}\), and notice that \(|\gamma q\rfloor\leq\theta(q)\leq|\gamma q\rfloor+1\). We have \[\theta(q)=\sum_{d|q}\#(P_{d}\cap(\alpha,\beta)),\] where \(P_{d}\) denotes the reduced fractions in \([0,1]\) with denominator \(d\). The Mobius inversion formula (see [10, Theorem 266]) gives \[\#(P_{q}\cap(\alpha,\beta))=\sum_{d|q}\mu\Big{(}\frac{q}{d}\Big{)}\theta(d)\] For a lower bound, we have \[\#(P_{q}\cap(\alpha,\beta)) =\sum_{d|q}\mu\Big{(}\frac{q}{d}\Big{)}\theta(d)\] \[\geq\sum_{d|q}\mu\Big{(}\frac{q}{d}\Big{)}\gamma d-\sum_{d|q}\{ \gamma d\}\mu\Big{(}\frac{q}{d}\Big{)}\] \[=\gamma\varphi(q)-\sum_{d|q}1,\] and for an upper bound, we have \[\#(P_{q}\cap(\alpha,\beta)) =\sum_{d|q}\mu\Big{(}\frac{q}{d}\Big{)}\theta(d)\] \[\leq\sum_{d|q}\mu\Big{(}\frac{q}{d}\Big{)}\gamma d+\sum_{d|q}1\] \[=\gamma\varphi(q)+\sum_{d|q}1.\] This last term is the number of divisors of \(q\), which is \(o(q^{\varepsilon})\) for any \(\varepsilon>0\)[10, Theorem 315], and in particular \(o(\varphi(q))\) (since \(\varphi(q)\gg q/\log\log q\) by [10, Theorem 328]). Therefore, there exists \(Q_{\gamma}>0\) such that for all \(q\geq Q_{\gamma}\) we have \[\sum_{d|q}1\leq\frac{1}{2}\gamma\varphi(q).\] Combining this with the previous two bounds gives (6) for all \(q\geq Q_{\gamma}\), proving the lemma. **Lemma 6**.: _Suppose \(d\in\mathbb{N}\) and, for \(1\leq i\leq d\), suppose that \(\alpha_{i},\beta_{i}\in[0,1]\) are such that_ \[0\leq\alpha_{i}<\beta_{i}\leq 1\quad\text{and}\quad\gamma=\beta_{i}-\alpha_{i} \quad\text{ for all }\,1\leq i\leq d.\] _Then there exist \(C,Q_{\gamma}>0\) such that_ \[\#\left\{(p_{1},\ldots,p_{d})\in\mathbb{Z}^{d}:\gcd(p_{1},\ldots,p_{d})=1\text { and }\,\alpha_{i}q\leq p_{i}\leq\beta_{i}q\quad\text{for }\,i=1,\ldots,d\right\}\geq C\gamma^{d}q^{d} \tag{7}\] _holds for all \(q\geq Q_{\gamma}\)._ Proof.: We proceed by induction, first establishing (7) in the case when \(d=2\). 
Notice that \[\#\left\{(p_{1},p_{2})\in\mathbb{Z}^{2}:\gcd(p_{1},p_{2})=1\text{ and }\alpha_{i}q\leq p_{i}\leq\beta_{i}q\quad\text{for }\,i=1,2\right\}=\sum_{p_{1}=[\alpha_{1}q]}^{[\beta_{1}q]} \sum_{\begin{subarray}{c}p_{2}=[\alpha_{2}q]\\ \gcd(p_{1},p_{2})=1\end{subarray}}^{[\beta_{2}q]}1.\] For a fixed \(p_{1}\), by Lemma 5, the inner sum is \[\sum_{\begin{subarray}{c}p_{2}=[\alpha_{2}q]\\ \gcd(p_{1},p_{2})=1\end{subarray}}^{[\beta_{2}q]} 1=\#\left\{p_{2}\in\mathbb{N}:\gcd(p_{1},p_{2})=1\text{ and }\alpha_{2}q\leq p_{2}\leq\beta_{2}q\right\}\] \[=\#\left\{p_{2}\in\mathbb{N}:\gcd(p_{1},p_{2})=1\text{ and }\left(\frac{\alpha_{2}q}{p_{1}}\right)p_{1}\leq p_{2}\leq\left(\frac{\beta_{2 }q}{p_{1}}\right)p_{1}\right\}\] \[\geq\frac{1}{2}\varphi(p_{1})\left(\frac{\beta_{2}q}{p_{1}}-\frac {\alpha_{2}q}{p_{1}}\right)\] \[=\frac{1}{2}\frac{\varphi(p_{1})}{p_{1}}q\gamma.\] Thus, recalling that \(\sum_{n=1}^{N}\frac{\varphi(n)}{n}\sim\frac{6}{\pi^{2}}N\) (see [10]), we have that there exists some \(C>0,Q>0\) such that if \(q\geq Q\) we have \[\sum_{p_{1}=[\alpha_{1}q]}^{[\beta_{1}q]}\sum_{\begin{subarray}{c}p_{2}=[ \alpha_{2}q]\\ \gcd(p_{1},p_{2})=1\end{subarray}}^{[\beta_{2}q]}1\geq\frac{q\gamma}{2} \left(\sum_{p_{1}=[\alpha_{1}q]}^{[\beta_{1}q]}\frac{\varphi(p_{1})}{p_{1}} \right)\geq Cq\gamma(\beta_{1}q-\alpha_{1}q)=Cq^{2}\gamma^{2}.\] This completes the proof of (7) in the case that \(d=2\). Next suppose that (7) has been established for \(d=k\). We will now show that it also holds when \(d=k+1\), and so the proof is then completed by induction. In the case when \(d=k+1\), we are interested in \[\mathcal{P}(q):=\left\{(p_{1},\ldots,p_{d},p_{d+1})\in\mathbb{Z}^{d+1}:\gcd(p_ {1},\ldots,p_{d},p_{d+1})=1\text{ and }\alpha_{i}q\leq p_{i}\leq\beta_{i}q\quad\text{for }\,i=1, \ldots,d+1\right\}.\] However, notice that \[\left\{(p_{1},\ldots,p_{d},p_{d+1})\in\mathbb{Z}^{d+1}:\gcd(p_{1},\ldots,p_{d} )=1\text{ and }\alpha_{i}q\leq p_{i}\leq\beta_{i}q\quad\text{for }\,i=1, \ldots,d+1\right\}\subset\mathcal{P}(q).\] Now, by our inductive hypothesis, \[\#\left\{(p_{1},\ldots,p_{d},p_{d+1})\in\mathbb{Z}^{d+1}: \gcd(p_{1},\ldots,p_{d})=1\text{ and }\alpha_{i}q\leq p_{i}\leq\beta_{i}q\quad\text{for }i=1,\ldots,d+1\right\}\] \[=\sum_{p_{d+1}=\lceil\alpha_{d+1}q\rceil}^{\lfloor\beta_{d+1}q \rfloor}\left(\sum_{p_{1}=\lceil\alpha_{1}q\rceil}^{\lfloor\beta_{1}q\rfloor} \sum_{p_{2}=\lceil\alpha_{2}q\rceil}^{\lfloor\beta_{2}q\rfloor}\ldots\sum_{ \begin{subarray}{c}p_{d}=\lceil\alpha_{d}q\rceil\\ \gcd(p_{1},\ldots,p_{d})=1\end{subarray}}^{\lfloor\beta_{d}q\rfloor}1\right)\] \[\geq C\sum_{p_{d+1}=\lceil\alpha_{d+1}q\rceil}^{\lfloor\beta_{d+1} q\rfloor}q^{d}\gamma^{d}\] \[\geq Cq^{d}\gamma^{d}(\beta_{d+1}q-\alpha_{d+1}q)\] \[\geq Cq^{d+1}\gamma^{d+1}.\] This completes the proof of the lemma. **Lemma 7**.: _Suppose \(\pi=(\pi_{1},\ldots,\pi_{k})\) is a partition of \([m+n]\) such that for every \(j=1,\ldots,k\) we have \(\left\lvert\pi_{j}\right\rvert\geq 2\), as in Section 3. Suppose there is some \(\ell\in\{1,\ldots,k\}\) such that \(\pi_{\ell}\cap(m+[n])\neq\emptyset\) and \(\left\lvert\pi_{\ell}\right\rvert\geq 3\). Then there exists a constant \(C>0\) such that the following holds. 
For every \(0<\gamma\leq 1\) there exists \(Q_{\gamma}>0\), such that for every choice of \(0\leq\alpha_{i}<\beta_{i}\leq 1\) (\(i=1,\ldots,m\)) with \(\beta_{i}-\alpha_{i}=\gamma\), we have_ \[\sum_{\left\lvert\mathbf{q}\right\rvert=q}\#\left\{\mathbf{p}\in P(\pi, \mathbf{q}):\forall i\in[m],\quad\alpha_{i}q\leq p_{i}\leq\beta_{i}q\right\} \geq C\gamma^{m}q^{m+n-1}\] _as long as \(q\geq Q_{\gamma}\). If no such \(\ell\) exists, then the sum above is bounded below by \(C\gamma^{m}q^{m+n-2}\varphi(q)\)._ Proof.: Suppose \(\mathbf{q}\in Q(\pi)\) with \(\left\lvert\mathbf{q}\right\rvert=q\). Let us estimate \[\#\left\{\mathbf{p}\in P(\pi,\mathbf{q}):\forall i\in[m],\quad\alpha_{i}q\leq p _{i}\leq\beta_{i}q\right\}.\] We will do this by analysing our freedom of choice of \(\mathbf{p}\) in the different components defined by \(\pi\). For \(j\in[1,b]\), let \[N_{j}(\mathbf{q})=\#\left\{\mathbf{p}\in\mathbb{Z}^{\left\lvert\pi_{j}\cap[m] \right\rvert}:\gcd(\mathbf{p},\pi_{j}(\mathbf{q}))=1,\quad\alpha_{i}q\leq p_{ i}\leq\beta_{i}q\quad(i=1,\ldots,\left\lvert\pi_{j}\cap[m]\right\rvert)\right\},\] and note that for \(j\in(\alpha,b]\) this is equivalent to \[N_{j}(\mathbf{q})=\#\left\{\mathbf{p}\in\mathbb{Z}^{\left\lvert\pi_{j}\right\rvert }:\gcd(\mathbf{p})=1,\quad\alpha_{i}q\leq p_{i}\leq\beta_{i}q\quad(i=1,\ldots, \left\lvert\pi_{j}\right\rvert)\right\}.\] Then we have \[\#\left\{\mathbf{p}\in P(\pi,\mathbf{q}):\forall i\in[m],\quad\alpha_{i}q\leq p _{i}\leq\beta_{i}q\right\}=\prod_{j=1}^{b}N_{j}(\mathbf{q}).\] For \(j\in(\alpha,b]\), Lemma 6 tells us that there exists \(C_{j},Q_{j,\gamma}>0\) such that \[N_{j}(\mathbf{q})\geq C_{j}q^{\left\lvert\pi_{j}\right\rvert}\gamma^{\left\lvert \pi_{j}\right\rvert}\] whenever \(q\geq Q_{j,\gamma}\). For \(j\in[1,a]\), we consider two sub-cases. First, if \(\left\lvert\pi_{j}\cap[m]\right\rvert\geq 2\), then \[N_{j}(\mathbf{q})\geq\#\left\{\mathbf{p}\in\mathbb{Z}^{\left\lvert\pi_{j} \cap[m]\right\rvert}:\gcd(\mathbf{p})=1,\quad\alpha_{i}q\leq p_{i}\leq\beta_{ i}q\quad(i=1,\ldots,\left\lvert\pi_{j}\cap[m]\right\rvert)\right\},\] so, as above, Lemma 6 tells us that there exists \(C_{j},Q_{j,\gamma}>0\) such that \[N_{j}(\mathbf{q})\geq C_{j}\gamma^{\left\lvert\pi_{j}\cap[m]\right\rvert}q^{ \left\lvert\pi_{j}\cap[m]\right\rvert}\] whenever \(q\geq Q_{j,\gamma}\). Otherwise, we have \(|\pi_{j}\cap[m]|=1\), in which case, \[N_{j}(\mathbf{q}) =\#\{p\in\mathbb{Z}:\gcd(p,\pi_{j}(\mathbf{q}))=1,\quad\alpha_{i}q \leq p_{i}\leq\beta_{i}(q)\quad(i=1,\ldots,|\pi_{j}\cap[m]|)\}\] \[=\#\{p\in\mathbb{Z}:\gcd(p,\gcd(\pi_{j}(\mathbf{q})))=1,\quad \alpha_{i}q\leq p_{i}\leq\beta_{i}(q)\quad(i=1,\ldots,|\pi_{j}\cap[m]|)\}\] \[=\#\left\{p\in\mathbb{Z}:\gcd(p,\gcd(\pi_{j}(\mathbf{q})))=1,\right.\] \[\left.\left(\frac{\alpha_{i}q}{\gcd(\pi_{j}(\mathbf{q}))}\right) \gcd(\pi_{j}(\mathbf{q}))\leq p_{i}\leq\left(\frac{\beta_{i}q}{\gcd(\pi_{j}( \mathbf{q}))}\right)\gcd(\pi_{j}(\mathbf{q}))\quad(i=1,\ldots,|\pi_{j}\cap[m ]|)\right\}.\] Thus, by Lemma 5, there exists \(C_{j},Q_{j,\gamma}>0\) such that \[N_{j}(\mathbf{q})\geq C_{j}\gamma q\frac{\varphi(\gcd(\pi_{j}(\mathbf{q})))} {\gcd(\pi_{j}(\mathbf{q}))}\] for all \(q\geq Q_{j,\gamma}\). We conclude that there exists \(C>0\) and \(Q_{\gamma}>0\) such that \[\prod_{j=1}^{b}N_{j}(\mathbf{q})\geq C\gamma^{m}q^{m}\prod_{|\pi_{j}\cap[m]|=1 }\frac{\varphi(\gcd(\pi_{j}(\mathbf{q})))}{\gcd(\pi_{j}(\mathbf{q}))}\] holds for all \(q\geq Q_{\gamma}\). 
Therefore, by Lemma 4, \[\sum_{\begin{subarray}{c}|\mathbf{q}|=q\\ \mathbf{q}\in Q(\pi)\end{subarray}}\prod_{j=1}^{b}N_{j}(\mathbf{q}) \geq C\gamma^{m}q^{m}\sum_{\begin{subarray}{c}|\mathbf{q}|=q\\ \mathbf{q}\in Q(\pi)\end{subarray}}\prod_{|\pi_{j}\cap[m]|=1}\frac{\varphi( \gcd(\pi_{j}(\mathbf{q})))}{\gcd(\pi_{j}(\mathbf{q}))}\] \[\geq\begin{cases}C\gamma^{m}q^{m+n-1}&\text{if $\ell$ exists,}\\ C\gamma^{m}q^{m+n-2}\varphi(q)&\text{if not,}\end{cases}\] and the lemma is proved. ## 6. Uniformity Suppose \(n,m\in\mathbb{N}\). For \(\mathbf{q}\in\mathbb{Z}^{n}\) and a ball \(B\subset\mathbb{R}^{m}\), let \[A_{n,m}(\mathbf{q},B)=A(\mathbf{q},B)=\{\mathbf{x}\in\mathbb{I}^{nm}:\exists \mathbf{p}\in\mathbb{Z}^{m},\quad\mathbf{q}\mathbf{x}+\mathbf{p}\in B\}\] and \[A_{n,m}^{\pi}(\mathbf{q},B)=A^{\pi}(\mathbf{q},B)=\big{\{}\mathbf{x}\in \mathbb{I}^{nm}:\exists\mathbf{p}\in\mathbb{Z}^{m},\quad(\mathbf{p},\mathbf{q })\in P(\pi),\quad\mathbf{q}\mathbf{x}+\mathbf{p}\in B\big{\}}\] and \[A_{n,m}^{\prime}(\mathbf{q},B)=A^{\prime}(\mathbf{q},B)=\{\mathbf{x}\in \mathbb{I}^{nm}:\exists\mathbf{p}\in\mathbb{Z}^{m},\forall i\in[m],\quad\gcd( p_{i},\mathbf{q})=1,\quad\mathbf{q}\mathbf{x}+\mathbf{p}\in B\}.\] Subscripts will be dropped when the context is clear. Notice that \[|A(\mathbf{q},B)|=\min\{|B|,1\}. \tag{8}\] Suppose \((B_{q})_{q\in\mathbb{N}}\) is a sequence of balls in \(\mathbb{R}^{m}\). For \(\mathbf{q}\in\mathbb{Z}^{n}\), we will write \[A(\mathbf{q})=A(\mathbf{q},B_{|\mathbf{q}|}),\quad A^{\pi}(\mathbf{q})=A^{\pi} (\mathbf{q},B_{|\mathbf{q}|}),\quad\text{and}\quad A^{\prime}(\mathbf{q})=A^{ \prime}(\mathbf{q},B_{|\mathbf{q}|}).\] Theorems 1, 2, and 3 can be phrased in terms of sets of the type we have just defined. It is therefore important for us to establish some lemmas regarding the measures of these sets. The following lemmas show that for partitions \(\pi\) as in our main theorem statements, \(A^{\pi}\) and \(A\) have comparable measures, even when intersected with arbitrary open sets, and that the same goes for \(A^{\prime}\) and \(A\). **Lemma 8**.: _Suppose \(\pi=(\pi_{1},\ldots,\pi_{k})\) is a partition of \([m+n]\) such that for every \(j=1,\ldots,k\) we have \(|\pi_{j}|\geq 2\), as in Section 3._ 1. _Suppose there is some_ \(\ell\in\{1,\ldots,k\}\) _such that_ \(\pi_{\ell}\cap(m+[n])\neq\emptyset\) _and_ \(|\pi_{\ell}|\geq 3\)_. Then there exists a constant_ \(C:=C_{\pi}>0\) _such that for every open set_ \(U\subset\mathfrak{l}^{nm}\) _there is some_ \(Q_{U}>0\) _such that for all_ \(q\geq Q_{U}\) _we have_ \[\sum_{|\mathbf{q}|=q}\left|A^{\pi}(\mathbf{q},B)\cap U\right|\geq C\sum_{| \mathbf{q}|=q}|A(\mathbf{q},B)||U|\] _for every ball_ \(B\subset\mathbb{R}^{m}\)_._ 2. _If, on the other hand, no such_ \(\ell\) _exists, but the measures_ \(|B_{q}|\) _are non-increasing, then_ \[\sum_{|\mathbf{q}|\leq Q}\left|A^{\pi}(\mathbf{q},B_{|\mathbf{q}|})\cap U\right| \geq C\sum_{|\mathbf{q}|\leq Q}\left|A(\mathbf{q},B_{|\mathbf{q}|})\right||U|\] _for all_ \(Q\) _sufficiently large._ **Lemma 9**.: _Let \(n>2\). There exists a constant \(C>0\) such that for every open set \(U\subset\mathfrak{l}^{nm}\) there is some \(Q_{U}>0\) such that for all \(q\geq Q_{U}\) we have_ \[\sum_{|\mathbf{q}|=q}\left|A^{\prime}(\mathbf{q},B)\cap U\right|\geq C\sum_{| \mathbf{q}|=q}|A(\mathbf{q},B)||U|\] _for every ball \(B\subset\mathbb{R}^{m}\)._ Proof of Lemma 8.: First, find a finite union \(V\) of disjoint balls contained in \(U\) such that \(|V|\geq|U|/2\). 
(In principle, we could get as close to the measure of \(U\) as we want.) Without loss of generality, we assume all the balls in \(V\) have the same radius, \(\gamma>0\). Now let \(W\subset\mathfrak{l}^{nm}\) be any ball of radius \(\gamma\). It will be enough to show that \[\sum_{|\mathbf{q}|=q}\left|A^{\pi}(\mathbf{q},B)\cap W\right|\geq C^{\prime} \sum_{|\mathbf{q}|=q}|A(\mathbf{q},B)||W| \tag{9}\] for all \(q\geq Q_{\gamma}\), where \(C^{\prime}>0\) is some absolute constant which may depend on \(n,m\), and \(\pi\), and \(Q_{\gamma}\) only depends on \(\gamma\). Importantly for us, \(Q_{\gamma}\) does not depend on \(B\) and so, given (9), one can deduce the lemma with \(C=C^{\prime}/2\). Let us write \(\mathbf{x}\in\mathfrak{l}^{nm}\) as \(\mathbf{x}=(\mathbf{x}_{1},\mathbf{x}_{2},\cdots,\mathbf{x}_{m})\) where \(\mathbf{x}_{j}\) are column vectors. Then for any \(\mathbf{q}\in Q(\pi)\subseteq\mathbb{Z}^{n}\) we have that \[\mathbf{q}\mathbf{x}=(\mathbf{q}\cdot\mathbf{X}_{1},\ldots,\mathbf{q}\cdot \mathbf{X}_{a},\mathbf{q}\cdot\mathbf{X}_{a+1},\ldots,\mathbf{q}\cdot \mathbf{X}_{b})\in\mathfrak{l}^{m},\] where \(\mathbf{X}_{1},\ldots,\mathbf{X}_{a}\) are the projections to the components corresponding to \(\pi_{1}\cap[m],\ldots,\pi_{a}\cap[m]\), respectively, and \(\mathbf{X}_{a+1},\ldots,\mathbf{X}_{b}\) are the projections to the components corresponding to \(\pi_{a+1},\ldots,\pi_{b}\), respectively. For \(j\in[1,a]\), \(\mathbf{X}_{j}\) is an \(n\times|\pi_{j}\cap[m]|\)-matrix, and for \(j\in(a,b]\), \(\mathbf{X}_{j}\) is an \(n\times|\pi_{j}|\)-matrix. The condition that \[\mathbf{q}\mathbf{x}+\mathbf{p}\in B\qquad\text{for}\qquad\mathbf{p}=(p_{1}, \ldots,p_{m})\in P(\pi,\mathbf{q})\] is equivalent to \(\mathbf{q}\cdot\mathbf{X}_{j}+\pi_{j}(\mathbf{p})\in\pi_{j}(B)\) for each \(j\). Therefore, we have \[A_{n,m}^{\pi}(\mathbf{q},B)\cap W=\left(\prod_{j=1}^{b}A_{n,|\pi_{j}\cap[m]|}^{ \pi}(\mathbf{q},\pi_{j}(B))\right)\cap W=\prod_{j=1}^{b}A_{n,|\pi_{j}\cap[m]|}^{ \pi}(\mathbf{q},\pi_{j}(B))\cap W_{j}, \tag{10}\] noting that when \(j\in(a,b]\) we have \(\pi_{j}\cap[m]=\pi_{j}\), and where \(W_{j}\) is the projection of \(W\) to the copy of \(\mathfrak{l}^{n(|\pi_{j}\cap[m]|)}\) corresponding to the \(j\)th component of the above product. Let us now find the \((n(|\pi_{j}\cap[m]|)\)-dimensional Lebesgue) measure of \[A_{n,|\pi_{j}\cap[m]|}^{\pi}(\mathbf{q},\pi_{j}(B))\cap W_{j}.\] We will accomplish the task in one fell swoop, but it is worth bearing in mind that there are two interpretations of what follows, depending on whether \(j\in[1,a]\) or \(j\in(a,b]\). In the latter case, we have \(\pi_{j}\cap[m]=\pi_{j}\) and the condition \(\gcd(\mathbf{p},\pi_{j}(\mathbf{q}))\) is the same as \(\gcd(\mathbf{p})=1\). Proceeding, we have \[A_{n,|\pi_{j}\cap[m]|}^{\pi}(\mathbf{q},\pi_{j}(B))\cap W_{j}=\left\{\mathbf{x }\in\mathfrak{l}^{n|\pi_{j}\cap[m]|}:\exists\mathbf{p}\in\mathbb{Z}^{|\pi_{j} \cap[m]|},\quad\gcd(\mathbf{p},\pi_{j}(\mathbf{q}))=1,\quad\mathbf{q}\mathbf{ x}-\mathbf{p}\in\pi_{j}(B)\right\},\] having noted that in this case the condition \(\mathbf{p}\in P(\pi_{j},\mathbf{q})\) is \(\gcd(\mathbf{p},\pi_{j}(\mathbf{q}))=1\). 
Now we express points \(\mathbf{x}\in\mathfrak{l}^{n|\pi_{j}\cap[m]|}\) as matrices with columns \(\mathbf{x}=(\mathbf{x}_{1},\mathbf{x}_{2},\ldots,\mathbf{x}_{|\pi_{j}\cap[m]|})\) (having relabelled coordinates), so that the condition of interest is that \(\mathbf{q}\cdot\mathbf{x}_{i}-p_{i}\) lies in an interval of side-length \(2r\), the projection of \(W_{j}\) to the \(i\)th coordinate. Suppose that \(|\mathbf{q}|\) is achieved in the first coordinate, \(q_{1}=q\). For any \(\mathbf{z}\in\mathfrak{l}^{(n-1)|\pi_{j}\cap[m]|}\) (representing the coordinates in the rows \(2,\ldots,n\) of \(\mathfrak{l}^{n|\pi_{j}\cap[m]|}\)), let \[S_{\mathbf{z}}=\left(A_{n,|\pi_{j}\cap[m]|}^{\pi}(\mathbf{q},\pi_{j}(B))\cap W _{j}\right)_{\mathbf{z}}=A_{n,|\pi_{j}\cap[m]|}^{\pi}(\mathbf{q},\pi_{j}(B))_ {\mathbf{z}}\cap(W_{j})_{\mathbf{z}}\] be the cross-section through \(\mathbf{z}\) parallel to the \(|\pi_{j}\cap[m]|\)-dimensional space spanned by the coordinates in the first row. Then \[\left|A_{n,|\pi_{j}\cap[m]|}^{\pi}(\mathbf{q},\pi_{j}(B))\cap W_{ j}\right| =\int_{\mathfrak{l}^{(n-1)|\pi_{j}\cap[m]|}}|S_{\mathbf{z}}|\,d \mathbf{z}\] \[=\int_{Y_{j}}|S_{\mathbf{z}}|\,d\mathbf{z}\] where \(Y_{j}\) is the projection of \(W_{j}\) to the last \(n-1\) rows' coordinates. Meanwhile, \((W_{j})_{\mathbf{z}}\) is a \(|\pi_{j}\cap[m]|\)-dimensional ball and \[A_{n,|\pi_{j}\cap[m]|}^{\pi}(\mathbf{q},\pi_{j}(B))_{\mathbf{z}} =\left\{\mathbf{x}\in\mathfrak{l}^{|\pi_{j}\cap[m]|}:\exists \mathbf{p}\in\mathbb{Z}^{|\pi_{j}\cap[m]|},\quad\gcd(\mathbf{p},\pi_{j}( \mathbf{q}))=1,\quad\mathbf{q}\begin{pmatrix}\mathbf{x}\\ \mathbf{z}\end{pmatrix}-\mathbf{p}\in\pi_{j}(B)\right\}\] \[=\left\{\mathbf{x}\in\mathfrak{l}^{|\pi_{j}\cap[m]|}:\exists \mathbf{p}\in\mathbb{Z}^{|\pi_{j}\cap[m]|},\quad\gcd(\mathbf{p},\pi_{j}( \mathbf{q}))=1,\quad q\mathbf{x}+\mathbf{q}\begin{pmatrix}\mathbf{0}\\ \mathbf{z}\end{pmatrix}-\mathbf{p}\in\pi_{j}(B)\right\}.\] It is a union of disjoint balls of diameter \(\frac{|B|^{1/m}}{q}\) with centers at the points \(\frac{\mathbf{p}}{q}\) such that \(\gcd(\mathbf{p},\pi_{j}(\mathbf{q}))=1\) in \((\mathfrak{l}^{nm})_{\mathbf{z}}\cong\mathfrak{l}^{|\pi_{j}\cap[m]|}\), translated by \(\mathbf{q}\begin{pmatrix}\mathbf{0}\\ \mathbf{z}\end{pmatrix}+\text{center}(\pi_{j}(B))\). Let \(N_{j}(\mathbf{q},W)\) be the number of such center points which are also contained in the ball \(\frac{1}{2}W_{j}\) in \(\mathfrak{l}^{|\pi_{j}\cap[m]|}\), that is, \[N_{j}(\mathbf{q},W)=\#\left\{\mathbf{p}\in\mathbb{Z}^{|\pi_{j}\cap[m]|}:\gcd( \mathbf{p},\pi_{j}(\mathbf{q}))=1,\quad\mathbf{p}/q\in\frac{1}{2}W_{j}+t \right\},\] where \(t\) is the translation vector as above; its precise value does not matter. The reason for bringing our attention to the shrunken ball \(\frac{1}{2}W_{j}\) is that each relevant \(\mathbf{p}/q\) is the center of a diameter-\(|B|^{1/m}/q\) sub-ball of \(A^{\pi}_{n,|\pi_{j}\cap[m]|}(\mathbf{q},\pi_{j}(B))_{\mathbf{z}}\) which is fully contained in \(W_{j}\). 
We can therefore bound \[|S_{\mathbf{z}}|\geq N_{j}(\mathbf{q},W)\frac{|B|^{|\pi_{j}\cap[m]|/m}}{q^{|\pi _{j}\cap[m]|}}\] and \[\left|A^{\pi}_{n,|\pi_{j}\cap[m]|}(\mathbf{q},\pi_{j}(B))\cap W_{j}\right| =\int_{Y_{j}}|S_{\mathbf{z}}|\,d\mathbf{z}\] \[\geq N_{j}(\mathbf{q},W)\frac{|B|^{|\pi_{j}\cap[m]|/m}}{q^{|\pi _{j}\cap[m]|}}\int_{Y_{j}}d\mathbf{z}\] \[=N_{j}(\mathbf{q},W)\frac{|B|^{|\pi_{j}\cap[m]|/m}}{q^{|\pi_{j} \cap[m]|}}|Y_{j}|\] \[=N_{j}(\mathbf{q},W)\frac{|B|^{|\pi_{j}\cap[m]|/m}}{q^{|\pi_{j} \cap[m]|}}(2\gamma)^{(n-1)|\pi_{j}\cap[m]|}\] Note that the argument in this paragraph did not depend on the supposition that \(|\mathbf{q}|\) was achieved in the first of the \(n\) coordinates, so the measure calculation would have come out the same regardless. Now, combining (10) with the calculation above, we have \[\left|A^{\pi}(\mathbf{q},B)\cap W\right| =\prod_{j=1}^{b}\left|A^{\pi}_{n,|\pi_{j}\cap[m]|}(\mathbf{q},\pi _{j}(B))\cap W_{j}\right|\] \[\geq\frac{|B|}{q^{m}}(2\gamma)^{m(n-1)}\prod_{j=1}^{b}N_{j}( \mathbf{q},W).\] Finally, Lemma 7 tells us that \[\sum_{|\mathbf{q}|=q}\prod_{j=1}^{b}N_{j}(\mathbf{q},W)\geq\begin{cases}C \gamma^{m}q^{m+n-1}\\ C\gamma^{m}q^{m+n-2}\varphi(q),\end{cases}\] as long as \(q\geq Q_{\gamma}\), depending on whether there exists \(\ell\) as in the lemma's statement. If there does exist such an \(\ell\), then for \(q\geq Q_{\gamma}\) we will have \[\sum_{|\mathbf{q}|=q}\left|A^{\pi}(\mathbf{q},B)\cap W\right|\geq\bar{C}|B| \gamma^{mn}q^{n-1}\overset{\eqref{eq:A(q,B)}}{\gg}\sum_{|\mathbf{q}|=q}|A( \mathbf{q},B)||W|,\] where \(\bar{C}\) is a constant absorbing \(C\) and all of the powers of \(2\) appearing above. If there does not exist such an \(\ell\), but \(|B_{q}|\) is non-increasing, then for large \(Q\) we will have \[\sum_{|\mathbf{q}|\leq Q}\left|A^{\pi}(\mathbf{q},B_{|\mathbf{q}| })\cap W\right| \geq\sum_{q\leq Q}\bar{C}|B_{q}|\gamma^{mn}q^{n-2}\varphi(q)\] \[\gg\sum_{q\leq Q}|B_{q}|\gamma^{mn}q^{n-1}.\] The last estimate comes from the fact that the average order of \(q^{n-2}\varphi(q)\) is \(\gg q^{n-1}\) and that \(|B_{q}|\) is monotonic. Finally, it follows from (8) that \[\sum_{|\mathbf{q}|\leq Q}\left|A^{\pi}(\mathbf{q},B_{|\mathbf{q}|})\cap W \right|\gg\sum_{|\mathbf{q}|\leq Q}\left|A(\mathbf{q},B_{|\mathbf{q}|})||W|,\] which proves the lemma. The following proof takes the same steps as the previous one, and only has been modified to accommodate the definition of \(A^{\prime}(\mathbf{q})\), which is different from the definition of \(A^{\pi}(\mathbf{q})\). Proof of Lemma 9.: As in the previous proof, we find a finite union \(V\) of disjoint balls contained in \(U\) such that \(|V|\geq|U|/2\), and we assume all the balls in \(V\) have the same radius, \(\gamma>0\). Let \(W\subset\mathbb{I}^{nm}\) be any ball of radius \(\gamma\). It is enough to show that \[\sum_{|\mathbf{q}|=q}\left|A^{\prime}(\mathbf{q},B)\cap W\right|\geq C^{\prime }\sum_{|\mathbf{q}|=q}\left|A(\mathbf{q},B)\right|\left|W\right| \tag{11}\] for all \(q\geq Q_{\gamma}\), where \(C^{\prime}>0\) is some absolute constant which may depend on \(n,m\), and \(Q_{\gamma}\) only depends on \(\gamma\). Importantly, \(Q_{\gamma}\) does not depend on \(B\) and so, given (11), one can deduce the lemma with \(C=C^{\prime}/2\). Let us write \(\mathbf{x}\in\mathbb{I}^{nm}\) as \(\mathbf{x}=(\mathbf{x}_{1},\mathbf{x}_{2},\cdots,\mathbf{x}_{m})\) where \(\mathbf{x}_{j}\) are column vectors. 
Then for any \(\mathbf{q}\subseteq\mathbb{Z}^{n}\) we have that \[\mathbf{q}\mathbf{x}=(\mathbf{q}\cdot\mathbf{x}_{1},\ldots,\mathbf{q}\cdot \mathbf{x}_{m})\in\mathbb{I}^{m}.\] The condition that \[\mathbf{q}\mathbf{x}+\mathbf{p}\in B\qquad\text{for}\qquad\mathbf{p}\in \mathbb{Z}^{m},\forall i\in[m],\quad\gcd(p_{i},\mathbf{q})=1\] is equivalent to \(\mathbf{q}\cdot\mathbf{x}_{i}+p_{i}\in B_{i}\) for each \(i\), where \(B_{i}\) is the projection of \(B\) to the \(i\)th coordinate. Therefore, we have \[A^{\prime}(\mathbf{q},B)\cap W=\left(\prod_{i=1}^{m}A^{\prime}_{n,1}(\mathbf{ q},B_{i})\right)\cap W=\prod_{i=1}^{m}A^{\prime}_{n,1}(\mathbf{q},B_{i})\cap W _{i}, \tag{12}\] where \(W_{i}\) is the projection of \(W\) to the copy of \(\mathbb{I}^{n}\) corresponding to the \(i\)th component of the above product. Let us now find the (\(n\)-dimensional Lebesgue) measure of \(A^{\prime}_{n,1}(\mathbf{q},B_{i})\cap W_{i}\). We have \[A^{\prime}_{n,1}(\mathbf{q},B_{i})=\{\mathbf{x}\in\mathbb{I}^{n}:\exists p \in\mathbb{Z},\quad\gcd(p,\mathbf{q})=1,\quad\mathbf{q}\mathbf{x}-p\in B_{i}\}.\] Suppose for now that \(|\mathbf{q}|\) is achieved in the first coordinate, \(q_{1}=q\). For any \(\mathbf{z}\in\mathbb{I}^{n-1}\) (representing the coordinates in the rows \(2,\ldots,n\) of \(\mathbb{I}^{n}\)), let \[S_{\mathbf{z}}=\left(A^{\prime}_{n,1}(\mathbf{q},B_{i})\cap W_{i}\right)_{ \mathbf{z}}=A^{\prime}_{n,1}(\mathbf{q},B_{i})_{\mathbf{z}}\cap(W_{i})_{ \mathbf{z}}\] be the cross-section through \(\mathbf{z}\) parallel to the first coordinate axis. Then \[\left|A^{\prime}_{n,1}(\mathbf{q},B_{i})\cap W_{i}\right|= \int_{\mathbb{I}^{n-1}}\left|S_{\mathbf{z}}\right|d\mathbf{z}\] \[= \int_{Y_{i}}\left|S_{\mathbf{z}}\right|d\mathbf{z}\] where \(Y_{i}\) is the projection of \(W_{i}\) to the last \(n-1\) coordinates. Meanwhile, \((W_{i})_{\mathbf{z}}\) is an interval and \[A^{\prime}_{n,1}(\mathbf{q},B_{i})_{\mathbf{z}} =\left\{x\in\mathbb{I}:\exists p\in\mathbb{Z},\quad\gcd(p,\mathbf{ q})=1,\quad\mathbf{q}\binom{x}{\mathbf{z}}-p\in B_{i}\right\}\] \[=\left\{x\in\mathbb{I}:\exists p\in\mathbb{Z},\quad\gcd(p, \mathbf{q})=1,\quad qx+\mathbf{q}\binom{0}{\mathbf{z}}-p\in B_{i}\right\}\] It is a union of disjoint intervals of diameter \(|B|^{1/m}/q\) with centers at the points \(p/q\) such that \(\gcd(p,\mathbf{q})=1\) in \((\mathbb{I}^{n})_{\mathbf{z}}\cong\mathbb{I}\), translated by \(\mathbf{q}\binom{0}{\mathbf{z}}+\text{center}(B_{i})\). Let \(M_{i}(\mathbf{q},W)\) be the number of such center points which are also contained in the interval \(\frac{1}{2}W_{i}\) in \(\mathbb{I}\), that is, \[M_{i}(\mathbf{q},W)=\#\left\{p\in\mathbb{Z}:\gcd(p,\mathbf{q})=1,\quad p/q \in\frac{1}{2}W_{i}+t\right\},\] where \(t\) is the translation vector as above. As before, the reason considering the contracted interval \(\frac{1}{2}W_{i}\) is that each relevant \(p/q\) is the center of a diameter-\(|B|^{1/m}/q\) sub-interval of \(A^{\prime}_{n,1}(\mathbf{q},B_{i})_{\mathbf{z}}\) which is fully contained in \(W_{i}\). 
We can therefore bound \[|S_{\mathbf{z}}|\geq M_{i}(\mathbf{q},W)\frac{|B|^{1/m}}{q}\] and \[\left|A^{\prime}_{n,1}(\mathbf{q},B_{i})\cap W_{i}\right| =\int_{Y_{i}}|S_{\mathbf{z}}|\,d\mathbf{z}\] \[\geq M_{i}(\mathbf{q},W)\frac{|B|^{1/m}}{q}\int_{Y_{i}}d\mathbf{z}\] \[=M_{i}(\mathbf{q},W)\frac{|B|^{1/m}}{q}|Y_{i}|\] \[=M_{i}(\mathbf{q},W)\frac{|B|^{1/m}}{q}(2\gamma)^{n-1}\] Since the argument in this paragraph did not depend on the assumption that \(|\mathbf{q}|\) was achieved in the first of the \(n\) coordinates, the measure calculation would have come out the same regardless of that assumption. Now, by (12), we have \[\left|A^{\prime}(\mathbf{q},B)\cap W\right|\geq\frac{|B|}{q^{m}}(2\gamma)^{m( n-1)}\prod_{i=1}^{m}M_{i}(\mathbf{q},W).\] Finally, Lemma 5 tells us that \[\prod_{i=1}^{m}M_{i}(\mathbf{q},W)\geq\frac{1}{2^{m}}\,q^{m}\left(\frac{\varphi (\gcd(\mathbf{q}))}{\gcd(\mathbf{q})}\right)^{m}\gamma^{m}\] as long as \(q\geq Q_{\gamma}\). Then for \(q\geq Q_{\gamma}\) we have \[\sum_{|\mathbf{q}|=q}\left|A^{\prime}(\mathbf{q},B)\cap W\right|\geq\tilde{C} |B|\gamma^{mn}\sum_{|\mathbf{q}|=q}\left(\frac{\varphi(\gcd(\mathbf{q}))}{ \gcd(\mathbf{q})}\right)^{m} \tag{13}\] Meanwhile, \[\sum_{|\mathbf{q}|=q}\left(\frac{\varphi(\gcd(\mathbf{q}))}{\gcd( \mathbf{q})}\right)^{m} \asymp\sum_{\begin{subarray}{c}\mathbf{q}\in Z^{n-1}\\ |\mathbf{q}|\leq q\end{subarray}}\left(\frac{\varphi(\gcd(\mathbf{q},q))}{ \gcd(\mathbf{q},q)}\right)^{m}\] \[\geq\sum_{\begin{subarray}{c}\mathbf{q}\in Z^{n-1}\\ \gcd(\mathbf{q})=1\end{subarray}}\left(\frac{\varphi(\gcd(\mathbf{q},q))}{ \gcd(\mathbf{q},q)}\right)^{m}\] \[=\sum_{\begin{subarray}{c}\mathbf{q}\in Z^{n-1}\\ \gcd(\mathbf{q})=1\end{subarray}}1 \tag{14}\] \[\gg q^{n-1},\] by Lemma 1. Putting this back into (13) gives \[\sum_{|\mathbf{q}|=q}\left|A^{\prime}(\mathbf{q},B)\cap W\right|\gg|B|\gamma^ {mn}q^{n-1}\] with an absolute implicit constant, for all sufficiently large \(q\). This establishes (11) and proves the lemma. ## 7. Proofs of Theorems 1, 2, and 3 We require two lemmas from measure theory: **Lemma 10** (Divergence Borel-Cantelli Lemma, [11, Lemma 2.3]).: _Suppose \((X,\mu)\) is a finite measure space and \((A_{q})_{q\in\mathbb{N}}\subset X\) is a sequence of measurable subsets such that \(\sum\mu(A_{q})=\infty\). Then_ \[\mu\left(\limsup_{q\to\infty}A_{q}\right)\geq\limsup_{Q\to\infty}\frac{\left( \sum_{q=1}^{Q}\mu(A_{q})\right)^{2}}{\sum_{q,r=1}^{Q}\mu(A_{q}\cap A_{r})}. \tag{15}\] If the expression on the right-hand side of (15) is strictly positive, then we say that the sets \((A_{q})_{q\in\mathbb{N}}\) are _quasi-independent on average_. **Lemma 11** ([4, Lemma 6]).: _Let \((X,d)\) be a metric space with a finite measure \(\mu\) such that every open set is \(\mu\)-measurable. Let \(A\) be a Borel subset of \(X\) and let \(f:\mathbb{R}_{\geq 0}\to\mathbb{R}_{\geq 0}\) be an increasing function with \(f(x)\to 0\) as \(x\to 0\). If for every open set \(U\subset X\) we have_ \[\mu(A\cap U)\geq f(\mu(U)),\] _then \(\mu(A)=\mu(X)\)._ Theorem 1 is a consequence of the following theorem. **Theorem 7**.: _Let \(m,n\in\mathbb{N}\) be such that \(nm>2\). Suppose \(\pi=\{\pi_{1},\ldots,\pi_{k}\}\) is a partition of \([m+n]\) with \(|\pi_{j}|\geq 2\) for \(j=1,\ldots,k\). 
If \(\Psi:=(B_{q})_{q=1}^{\infty}\subset\mathbb{R}^{m}\) is a sequence of balls such that \(|B_{q}|\) is non-increasing and such that \(\sum q^{n-1}|B_{q}|\) diverges, then for almost every \(\mathbf{x}\in\operatorname{Mat}_{n\times m}(\mathbb{R})\) there exist infinitely many points \((\mathbf{p},\mathbf{q})\in P(\pi)\) such that_ \[\mathbf{q}\mathbf{x}-\mathbf{p}\in B_{|\mathbf{q}|}. \tag{16}\] _Conversely, if \(\sum q^{n-1}|B_{q}|\) converges, then for almost every \(\mathbf{x}\in\operatorname{Mat}_{n\times m}(\mathbb{R})\) there are only finitely many \((\mathbf{p},\mathbf{q})\in P(\pi)\) such that (16) holds._ Proof of Theorem 1.: If \(nm>2\), then this is the case of Theorem 7 where the balls \(B_{q}\) are concentric around \(\mathbf{y}\). In the cases \((m,n)=(2,1)\) and \((m,n)=(1,2)\), the only possible partition is the trivial partition. In the case \((m,n)=(1,2)\), the corollary is already contained in [21, Chapter 1, Theorem 14], and in the case \((m,n)=(2,1)\) it is in the inhomogeneous version of Khintchine's theorem due to Schmidt [20]. In the case \((m,n)=(1,1)\) the result is implied by the inhomogeneous Khintchine theorem due to Szusz [22]. Theorem 2 is a consequence of the following. **Theorem 8**.: _Let \(m,n\in\mathbb{N}\) be such that \(nm>2\). Suppose \(\pi=\{\pi_{1},\ldots,\pi_{k}\}\) is a partition of \([m+n]\) such that \(|\pi_{j}|\geq 2\) for \(j=1,\ldots,k\) and for some \(\ell\in\{1,\ldots,k\}\) we have \(|\pi_{\ell}|\geq 3\) and \(\pi_{\ell}\cap(m+[n])\neq\emptyset\). If \(\Psi:=(B_{q})_{q=1}^{\infty}\subset\mathbb{R}^{m}\) is a sequence of balls such that \(\sum q^{n-1}|B_{q}|\) diverges, then for almost every \(\mathbf{x}\in\operatorname{Mat}_{n\times m}(\mathbb{R})\) there exist infinitely many points \((\mathbf{p},\mathbf{q})\in P(\pi)\) such that (16) holds._ _Conversely, if \(\sum q^{n-1}|B_{q}|\) converges, then for almost every \(\mathbf{x}\in\operatorname{Mat}_{n\times m}(\mathbb{R})\) there are only finitely many \((\mathbf{p},\mathbf{q})\in P(\pi)\) such that (16) holds._ Proof of Theorem 2.: This is the concentric version of Theorem 8. Theorem 3 is a consequence of the following. **Theorem 9** (Inhomogeneous Duffin-Schaeffer conjecture for systems of linear forms).: _Let \(m,n\in\mathbb{N}\) with \(n>2\). If \(\Psi:=(B_{q})_{q=1}^{\infty}\subset\mathbb{R}^{m}\) is a sequence of balls such that_ \[\sum_{\mathbf{q}\in\mathbb{Z}^{n}}\biggl{(}\frac{\varphi(\gcd(\mathbf{q}))}{ \gcd(\mathbf{q})}\biggr{)}^{m}|B_{|\mathbf{q}|}|=\infty, \tag{17}\] _then for almost every \(\mathbf{x}\in\operatorname{Mat}_{n\times m}(\mathbb{R})\) there exist infinitely many points \((\mathbf{p},\mathbf{q})\in\mathbb{Z}^{m}\times\mathbb{Z}^{n}\) with \(\gcd(p_{i},\mathbf{q})=1\) for every \(i=1,\ldots,m\) and such that (16) holds._ _Conversely, if the sum in (17) converges, then for almost every \(\mathbf{x}\in\operatorname{Mat}_{n\times m}(\mathbb{R})\) there are only finitely many such \((\mathbf{p},\mathbf{q})\in\mathbb{Z}^{m}\times\mathbb{Z}^{n}\)._ Proof of Theorem 3.: This is the concentric version of the divergence part of Theorem 9. The following proofs of Theorems 7, 8, and 9 are nearly identical. ### Proof of Theorem 8 Proof of Theorem 8.: By [2, Proposition 2, Theorem 8] the sets \((A(\mathbf{q}))_{q\in\mathbb{N}}\) are quasi-independent on average, meaning that the right-hand side of the inequality in Lemma 10 is positive. 
From this and the fact that \(A^{\pi}\subset A\), we get that there is some \(C>0\) such that \[\sum_{1\leq\mathbf{r}\leq\mathbf{q}\leq Q}\bigl{|}A^{\pi}(\mathbf{r})\cap A^{ \pi}(\mathbf{q})\cap U\bigr{|}\leq C\biggl{(}\sum_{|\mathbf{q}|=1}^{Q}|A( \mathbf{q})|\biggr{)}^{2} \tag{18}\] for infinitely many \(Q\). Meanwhile, we have \[\sum_{|\mathbf{q}|=q}\bigl{|}A^{\pi}(\mathbf{q})\cap U\bigr{|}\geq C\sum_{| \mathbf{q}|=q}|A(\mathbf{q})||U| \tag{19}\] for all \(q\geq Q_{U}\) by Part (a) of Lemma 8. In particular, combining this with (8) and the fact that \(\sum q^{n-1}|B_{q}|\) diverges, we find that \(\sum_{\mathbf{q}\in\mathbb{Z}^{n}}|A^{\pi}(\mathbf{q})\cap U|\) diverges. Combining (18) and (19) and collecting constants, we see that there is some constant \(C>0\) such that \[\sum_{1\leq r\leq q\leq Q}\bigl{|}A^{\pi}(\mathbf{r})\cap A^{\pi}(\mathbf{q}) \cap U\bigr{|}\leq\frac{C}{|U|^{2}}\Biggl{(}\sum_{|\mathbf{q}|=1}^{Q}\bigl{|}A ^{\pi}(\mathbf{q})\cap U\bigr{|}\Biggr{)}^{2}\] holds for infinitely many \(Q>0\). Therefore, by Lemma 10, \[\Biggl{|}\limsup_{|\mathbf{q}|\to\infty}A^{\pi}(\mathbf{q})\cap U\Biggr{|} \geq\frac{|U|^{2}}{C}.\] Finally, by Lemma 11, \(\limsup_{|\mathbf{q}|\to\infty}A^{\pi}(\mathbf{q})\) must have full measure in \(\mathbb{N}^{nm}\) and the theorem is proved. For the convergence part of Theorem 8, notice that convergence of \(\sum_{q\in\mathbb{N}}q^{n-1}|B_{q}|\) and (8) imply the convergence of \(\sum_{\mathbf{q}\in\mathbb{Z}^{n}}|A(\mathbf{q})|\). The Borel-Cantelli lemma (see, e.g. [11, Lemma 1.2]) immediately tells us that \(\limsup_{|\mathbf{q}|\to\infty}A(\mathbf{q})\), and hence also \(\limsup_{|\mathbf{q}|\to\infty}A^{\pi}(\mathbf{q})\), has zero measure. ### Proof of Theorem 7 Proof of Theorem 7.: Theorem 8 subsumes Theorem 7 when there is some \(\ell\) such that \(|\pi_{\ell}|\geq 3\) and \(\pi_{\ell}\cap(m+[n])\neq\phi\), so let us assume there is no such \(\ell\), but that the measures \(|B_{q}|\) are decreasing. Note that the standing assumption on \(\pi\) that all components have at least two elements implies that \(mn\neq 2\). And the case \((m,n)=(1,1)\) is already known (Khintchine's theorem [14]). So let us assume \(nm>2\). The proof begins exactly like the proof of Theorem 8, to give that there is some \(C>0\) such that \[\sum_{1\leq r\leq q\leq Q}\bigl{|}A^{\pi}(\mathbf{r})\cap A^{\pi}(\mathbf{q}) \cap U\bigr{|}\leq C\Biggl{(}\sum_{|\mathbf{q}|=1}^{Q}|A(\mathbf{q})|\Biggr{)} ^{2}\] for infinitely many \(Q\). Part (b) of Lemma 8 tells us that if \(Q\) is large enough we have \[\sum_{|\mathbf{q}|\leq Q}\bigl{|}A^{\pi}(\mathbf{q})\cap U\bigr{|}\geq C\sum _{|\mathbf{q}|\leq Q}|A(\mathbf{q})||U|,\] which in combination with (8) and the divergence of \(\sum q^{n-1}|B_{q}|\) gives that \(\sum_{\mathbf{q}}|A^{\pi}(\mathbf{q})\cap U|\) diverges. We are led as before to \[\sum_{1\leq r\leq q\leq Q}\bigl{|}A^{\pi}(\mathbf{r})\cap A^{\pi}(\mathbf{q}) \cap U\bigr{|}\leq\frac{C}{|U|^{2}}\Biggl{(}\sum_{|\mathbf{q}|=1}^{Q}\bigl{|} A^{\pi}(\mathbf{q})\cap U\bigr{|}\Biggr{)}^{2}\] holding for infinitely many \(Q\in\mathbb{N}\). Lemma 10 gives \[\left|\limsup_{|\mathbf{q}|\to\infty}A^{\pi}(\mathbf{q})\cap U\right|\geq\frac{| U|^{2}}{C},\] and Lemma 11 tells us that \(\limsup_{|\mathbf{q}|\to\infty}A^{\pi}(\mathbf{q})\) must have full measure in \(\mathbb{I}^{nm}\), thus proving the theorem. The convergence part of Theorem 7 is proved exactly as in Theorem 8. ### Proof of Theorem 9 Proof of Theorem 9.: We have \(n>2\). 
In this case, it is found in [2, Proposition 2, Theorem 8] and also [21, Theorem 14] that the sets \((A(\mathbf{q}))_{q\in\mathbb{N}}\) are quasi-independent on average. Since \(A^{\prime}\subset A\), there is some \(C>0\) such that \[\sum_{1\leq\mathbf{r}\leq\mathbf{q}\leq Q}\left|A^{\prime}(\mathbf{r})\cap A^ {\prime}(\mathbf{q})\cap U\right|\leq C\left(\sum_{|\mathbf{q}|=1}^{Q}\left|A( \mathbf{q})\right|\right)^{2} \tag{20}\] for infinitely many \(Q\). Lemma 9 tells us that \[\sum_{|\mathbf{q}|=q}\left|A^{\prime}(\mathbf{q})\cap U\right|\geq C\sum_{| \mathbf{q}|=q}\left|A(\mathbf{q})\right|\left|U\right| \tag{21}\] for all \(q\geq Q_{U}\). The divergence condition (17) and (13) imply that \(\sum_{\mathbf{q}}\left|A^{\prime}(\mathbf{q})\cap U\right|\) diverges. Combining (20) and (21), there is some constant \(C>0\) such that \[\sum_{1\leq\mathbf{r}\leq\mathbf{q}\leq Q}\left|A^{\prime}(\mathbf{r})\cap A ^{\prime}(\mathbf{q})\cap U\right|\leq\frac{C}{|U|^{2}}\left(\sum_{|\mathbf{q }|=1}^{Q}\left|A^{\prime}(\mathbf{q})\cap U\right|\right)^{2}\] holds for infinitely many \(Q\in\mathbb{N}\). Therefore, by Lemma 10, \[\left|\limsup_{|\mathbf{q}|\to\infty}A^{\prime}(\mathbf{q})\cap U\right|\geq \frac{|U|^{2}}{C}.\] Lemma 11 now guarantees that \(\limsup_{|\mathbf{q}|\to\infty}A^{\prime}(\mathbf{q})\) has full measure in \(\mathbb{I}^{nm}\) and the theorem is proved. For the convergence part, notice that if the sum in (17) converges, then by (14) so does the sum \(\sum q^{n-1}|B_{q}|\), and we are back in the situation from the proofs of Theorems 7 and 8.
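Although the statements above are purely measure-theoretic, the divergence criterion is easy to probe numerically. The following sketch is a numerical illustration only and plays no role in the proofs; the choices \(n=3\), \(m=1\), the shift \(y\) and the radii (giving \(|B_q|\asymp 1/q\)) are arbitrary. For a randomly drawn \(\mathbf{x}\) it counts the vectors \(\mathbf{q}\) with \(|\mathbf{q}|\leq Q\) admitting an integer \(p\) with \(\gcd(p,\mathbf{q})=1\) and \(\mathbf{q}\mathbf{x}-p\in B_{|\mathbf{q}|}\); under the divergence condition of Theorem 9 this count should keep growing with \(Q\) for almost every \(\mathbf{x}\).

```python
import math
import random
import itertools

n = 3                                      # number of variables of the single linear form (n > 2)
x = [random.random() for _ in range(n)]    # a randomly drawn point of I^(n x 1)
y = 0.3                                    # common centre of the balls B_q
radius = lambda q: 0.5 / q                 # |B_q| = 1/q, so the relevant series diverges

def solutions_up_to(Q):
    """Count q in Z^n with 0 < |q| <= Q admitting p with gcd(p, q) = 1 and |q.x - p - y| < radius(|q|)."""
    count = 0
    for q in itertools.product(range(-Q, Q + 1), repeat=n):
        q_norm = max(abs(c) for c in q)
        if q_norm == 0:
            continue
        t = sum(c * xi for c, xi in zip(q, x))
        p = round(t - y)                   # the only integer that can satisfy |q.x - p - y| < 1/2
        g = 0
        for c in q:
            g = math.gcd(g, abs(c))
        if math.gcd(abs(p), g) == 1 and abs(t - p - y) < radius(q_norm):
            count += 1
    return count

for Q in (5, 10, 20, 40):
    print(Q, solutions_up_to(Q))           # the count should keep increasing with Q
```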
2302.05725
Parameterizable Acoustical Modeling and Auralization of Cultural Heritage Sites based on Photogrammetry
The photogrammetric and reconstructive modeling of cultural heritage sites is mostly focused on visually perceivable aspects, but if their intended purpose is the performance of cultural acts with a sonic emphasis, it is important to consider the preservation of their acoustical behaviour to make them audible in an authentic way. This applies in particular to sacral and concert environments as popular objects for photogrammetric models, which contain geometrical and textural information that can be used to locate and classify acoustically relevant surface properties. With the advancing conversion or destruction of historical acoustical spaces, it becomes even more important to preserve their unique sonic characters, while three-dimensional auralizations become widely applicable. The proposed study presents the current state of a new methodological approach to acoustical modeling using photogrammetric data and introduces a parameterizable pipeline that will be accessible as an open-source software with a graphical user interface.
Dominik Ukolov
2023-02-11T15:44:54Z
http://arxiv.org/abs/2302.05725v1
# Parameterizable Acoustical Modeling and Auralization of Cultural Heritage Sites based on Photogrammetry ###### Abstract The photogrammetric and reconstructive modeling of cultural heritage sites is mostly focused on visually perceivable aspects, but if their intended purpose is the performance of cultural acts with a sonic emphasis, it is important to consider the preservation of their acoustical behaviour to make them audible in an authentic way. This applies in particular to sacral and concert environments as popular objects for photogrammetric models, which contain geometrical and textural information that can be used to locate and classify acoustically relevant surface properties. With the advancing conversion or destruction of historical acoustical spaces, it becomes even more important to preserve their unique sonic characters, while three-dimensional auralizations become widely applicable. The proposed study presents the current state of a new methodological approach to acoustical modeling using photogrammetric data and introduces a parameterizable pipeline that will be accessible as open-source software with a graphical user interface. _Keywords:_ acoustics, photography, cultural heritage, auralization, virtualization ## 1 Introduction The virtualization of musical instruments is an emerging field in digital organology, which has mostly been limited to capturing and synthesizing their sounds through sampling techniques or physical modeling. However, the MODAVIS project aims at researching a methodology for the multimodal and scientifically consistent digitization and virtualization of musical instruments as three-dimensional acoustical objects and the development of a new standard: the Virtual Acoustic Object [Ukolov, 2023]. This research is aimed in particular at the pipe organ, the most complex and largest of all instruments, but also the most endangered one due to the consequences of climate change, military conflicts and economic crises. Beyond that, it is a challenge to fully understand the sound of this instrument without considering its surroundings, due to its acoustical coupling to historical and representative buildings, which must be thought of as resonant bodies and thus accurately modeled. Since the sound propagates three-dimensionally while being driven by the dimensions, materials and shapes inside the building, photogrammetric models seem very promising for the generation of interactive virtual acoustic objects and acoustical room models for the purpose of auralization. The proposed methodology to approach this problem is based on the principle of point projections between classified masks and their three-dimensional correlates with annotations of acoustical properties, performed by a parameterizable pipeline (see Figure 1). ## 2 Related Work Related studies of the use of photogrammetric models for interior acoustical simulations were conducted by Llorca-Boff et al. (2022), who applied geometric reductions based on Siltanen et al. (2008) and evaluated several modeling and material assignments using the RAVEN plugin (Schroder and Vorlander, 2011), while the fundamentals of auralization were researched by Vorlander (2020). With SoundSpaces 2.0 by Chen et al. (2022), continuous renderings of impulse responses were performed using bidirectional path-tracing on mesh datasets and evaluated after acoustically measuring an object of the Replica dataset (Straub et al., 2019). 
However, the developed methodology described below differs from these approaches, as it is based on remodeling using point reprojections and multiple segmentations by neural networks; to the best of our current knowledge, this is the first approach to generate acoustical models using these methods. ## 3 Photogrammetric-Acoustical Modeling Toolkit In order to parameterize the pipeline with real-time visual previews and to enable an easy operability of the acoustical modeling process with different levels of expertise, the Photogrammetric-Acoustical Modeling Toolkit (PAMT) was developed (see Figure 2), which allows manual and semi-automated processing steps. ### Preprocessing The PAMT uses COLMAP (Schonberger and Frahm, 2016) to calculate the point cloud from a set of photos and to extract the camera parameters, which are used to generate a point reprojection database in the first step after creating a new project file, where all pipeline settings and individual modeling operations are saved. In the next step, the extracted data can be scaled based on markers, measurements, or model alignments, followed by filtering the photo set based on their overlaps to reduce their quantity for the following processes. ### Object Segmentation The reduced photo set is available in the GUI for manual or automated segmentation of objects, where manual segmentation is performed by polygonal masking with adjustable edge detections using the Canny algorithm (Xu et al., 2017). The automated segmentation uses Mask R-CNN (He et al., 2017), while its output above an adjustable threshold will be used for generating the masks. Once a mask has been set, an object class, an object, and a material can be assigned either manually or by classification suggestions; afterwards, several masks can be merged or divided, as all masks remain editable at any time. Once a photo has been processed, the masked points will be extrapolated to the other photos by projecting those 3D point identities that are near the reprojected 2D pixels inside the masks. This results in pre-generated local masks for global surfaces, significantly reducing the time for the manual tasks. Figure 1: This pipeline contains multiple processing steps that can be modified by various parameters, while the procedurally generated output data can be further analyzed, edited, or used in other instances. The modular structure allows the optimization and addition of further steps, which will be evaluated and implemented during the MODAVIS project. The pre-trained model for the automated detection and segmentation can be selected, downloaded, and updated in the settings, for which MMLab (Chen et al., 2019) is integrated into the toolkit to ensure the implementation of state-of-the-art models for this task. ### Material Classification and Absorption Coefficients The material can be defined after setting a mask, either manually or by following classifications from neural networks in two stages, currently based on the ResNet-18 (He et al., 2016) architecture that has been trained on the MINC dataset (Bell et al., 2015) for materials and the DTD dataset (Cimpoi et al., 2014) for textural properties like fibrous, marbled, or porous. 
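As a rough indication of how such classification suggestions can be produced for a masked region, the following sketch runs a ResNet-18 classifier on the crop of one mask. It is an illustration only: the label list, the checkpoint name "minc_resnet18.pt" and the crop file are hypothetical placeholders, not the models or files shipped with the toolkit.

```python
import torch
from torchvision import models, transforms
from PIL import Image

# Placeholder label set; the toolkit uses the MINC material classes and, in a
# second stage, the DTD texture attributes (fibrous, marbled, porous, ...).
MATERIALS = ["brick", "carpet", "fabric", "glass", "stone", "wood"]

preprocess = transforms.Compose([
    transforms.Resize(256), transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

# ResNet-18 backbone with a head sized to the material classes;
# "minc_resnet18.pt" stands for a fine-tuned checkpoint and is not a published file.
model = models.resnet18(num_classes=len(MATERIALS))
model.load_state_dict(torch.load("minc_resnet18.pt", map_location="cpu"))
model.eval()

crop = Image.open("mask_crop.png").convert("RGB")   # crop of one masked region
with torch.no_grad():
    probs = torch.softmax(model(preprocess(crop).unsqueeze(0)), dim=1)[0]
for p, label in sorted(zip(probs.tolist(), MATERIALS), reverse=True)[:3]:
    print(f"{label}: {p:.2f}")                      # top-3 material suggestions
```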
In order to assign frequency-dependent absorption coefficients to a textured material, a specialized database with 2573 entries (Physikalisch-Technische Bundesanstalt) was processed to be used for suggesting probably suitable measurements; this can be achieved through natural language processing (NLP) by calculating the cosine similarities of the input string and the acoustically measured material names based on their vector representations. ### Point Reprojection The basic method behind the reprojection is the assignment of local 2D coordinates to global 3D point identities, which are used to map local points within a photo. The polygonal masking defines a set of coordinates, from which the identities and their assigned 2D coordinates can be derived. As a result, the local coordinates of a photo set are always related to global coordinates in the PAMT and vice versa, for which database access structures have been developed which allow to return and process the target data by providing any geometrical input data. Figure 2: The main interface of the PAMT consists of the reduced set of photos that was used in the photogrammetric calculations (upper left), the masking window and its controls (middle left) and the list with the masked areas and classified objects, materials, and absorption coefficients (middle right). After a reprojection has been applied, the related images and the correlating local and global points can be investigated in the bottom left and in the upper right, which will be marked on the image and on the 3D viewer below. This viewer is interactive and shows the object for the selected and reprojected mask, while the ‘Geometrical Editor’ opens an external window for the object modification, plane segmentation and other geometrical operations. In this screenshot, the carpet was selected in the masks list and the corresponding reprojection was applied, resulting in rendering the associated point coordinates as a preview of the current masking progression. ### Object Modification Since objects such as rows of chairs often appear repetitively and not every of its appearances can be captured in detail, it is possible to substitute inaccurate object appearances in the point cloud by more detailed ones through pointing to the object or importing another point cloud or mesh. The correct placement is ensured by either estimating the point-relative positioning in the room or placing it manually, while object classes or individual objects can be filtered or added to the space, which also enables acoustical simulations of different room situations. ### Point Cloud and Surface Reconstruction After defining the objects coordinates by masking them, an annotated point can be generated using the Blender API and rendered in the GUI with interactive controls, where object classes or individual masked areas can be shown. The denoising of the point cloud is performed using a score-based approach [10], while the rendering and parameterized processes are executed using Open3D [21], consisting of downsampling and triangulation, currently using the ball-pivoting [1] or Poisson [11] algorithm. The plane segmentation is fundamental for the model building (see Figure 3) and is performed using the RANSAC algorithm [13], but as the point sets have been changed during these processes, the nearest points will finally be assigned to the corresponding identities. 
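The reconstruction steps just described can be sketched with Open3D as follows. This is a minimal illustration rather than the PAMT implementation: the input file name, voxel size and thresholds are placeholder values, and the score-based denoising step is omitted.

```python
import open3d as o3d

# Load the (already denoised) photogrammetric point cloud -- placeholder file name.
pcd = o3d.io.read_point_cloud("church_interior.ply")

# Parameterizable downsampling and normal estimation prior to triangulation.
pcd = pcd.voxel_down_sample(voxel_size=0.02)
pcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.1, max_nn=30))

# Poisson surface reconstruction; the ball-pivoting variant would use
# TriangleMesh.create_from_point_cloud_ball_pivoting instead.
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=8)

# RANSAC plane segmentation of the dominant planar surface (e.g. a wall or the floor).
plane_model, inlier_idx = pcd.segment_plane(
    distance_threshold=0.03, ransac_n=3, num_iterations=1000)
wall = pcd.select_by_index(inlier_idx)
print("plane (a, b, c, d):", plane_model, "| inlier points:", len(inlier_idx))
```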
In the final step, the model will be exported to the STL format and validated to prevent inconsistencies like holes or inadequate volumes during auralizations. In addition, a referential database will be generated to point to the acoustical surface properties for every element in the STL file. ### Acoustical Simulation and Auralization Interfaces The implementation of the annotated model for the simulation of acoustical room situations from a specific listener position can be realized using an interface to pyroomacoustics [12], which converts the encoded geometric and acoustical data into compatible structures. After setting an arbitrary source and receiver position by defining their coordinates and emitting properties, the image source method [1] and acoustical ray tracing [14] can be applied to simulate impulse responses and convolve them with an input audio signal. By using the photogrammetric calculations, it is also possible to perform auralizations from the exact perspective that is captured in a photo, as the corresponding camera positions and orientations are known. ## 4 Future Work It is planned to evaluate the PAMT performance using valid acoustical test datasets and to add important psychoacoustical and acoustical parameters such as scattering coefficients and directivity patterns to the auralization engine, which is generally considered as secondary at the current state of development. Figure 3: With the proposed pipeline, the photogrammetric data can be used to remodel its resulting point cloud into a geometrically reduced one (right), which is suitable for acoustical simulations after consistency corrections. The photogrammetric model of the interior of a church (left), viewed from the outside, was captured under poor lighting conditions with a low-quality camera and therefore contains inaccuracies in certain areas. It is particularly noticeable that the plane segmentation as well as the boundary calculations lead to significant changes in shape (see left and bottom right side on both models), which might lead to wanted results as in this case, but which must be further evaluated on variable and low- as well as high-quality datasets. The materials that are visible on the right - glass (light blue), wood (brown), fabric (dark blue) and the painting (yellow) - were assigned to surfaces that have been defined during the pipeline processing, while the exterior of the building was not captured and is not relevant in the current state of development. The configurations and methods of the geometrical processes shall generally be optimized to achieve the lowest possible error rate. Within the MODAVIS project, it is planned to create an object and material segmentation dataset for sacral environments and organ-specific parts; in addition, it shall be examined to what extent directivity patterns of different organ pipe types can be simulated and generalized via physical modeling. Furthermore, it shall be possible to use RGB-D data and non-photogrammetrically generated 3D models in the toolkit using virtual image renderings as well as to indicate the perspective on an orthographic map to prevent confusions with repetitive structures. It will be considered to publish a Blender module and an interface to Unity for annotation and auralization purposes at a later phase. After the evaluations and significant improvements are completed, the PAMT will be released on GitHub [20].
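As a rough indication of how the auralization interface of section 3.7 can drive a simulation, the following sketch calls pyroomacoustics directly. It is a simplified stand-in: instead of the exported STL geometry with per-surface absorption coefficients, a shoebox room with a single absorption value is used, and the file names, room dimensions and source/receiver positions are placeholders.

```python
import numpy as np
import pyroomacoustics as pra
from scipy.io import wavfile

fs, dry = wavfile.read("organ_dry.wav")       # placeholder anechoic (dry) recording
if dry.ndim > 1:                              # keep a single channel for simplicity
    dry = dry[:, 0]

# Simplified geometry: a shoebox with one absorption value stands in for the
# exported STL model with per-surface coefficients.
room = pra.ShoeBox(
    [20.0, 12.0, 15.0], fs=fs,
    materials=pra.Material(energy_absorption=0.12),
    max_order=3, ray_tracing=True, air_absorption=True)

room.add_source([5.0, 6.0, 8.0], signal=dry)  # e.g. the organ position
room.add_microphone([15.0, 6.0, 1.7])         # listener (receiver) position

# Hybrid image-source / ray-tracing simulation, then convolution with the input.
room.compute_rir()
room.simulate()
out = room.mic_array.signals[0]
wavfile.write("organ_auralized.wav", fs, (out / np.max(np.abs(out))).astype(np.float32))
```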
2310.05534
Thech. Report: Genuinization of Speech waveform PMF for speaker detection spoofing and countermeasures
In the context of spoofing attacks in speaker recognition systems, we observed that the waveform probability mass function (PMF) of genuine speech differs significantly from the PMF of speech resulting from the attacks. This is true for synthesized or converted speech as well as replayed speech. We also noticed that this observation seems to have a significant impact on spoofing detection performance. In this article, we propose an algorithm, denoted genuinization, capable of reducing the waveform distribution gap between authentic speech and spoofing speech. Our genuinization algorithm is evaluated on ASVspoof 2019 challenge datasets, using the baseline system provided by the challenge organization. We first assess the influence of genuinization on spoofing performance. Using genuinization for the spoofing attacks degrades spoofing detection performance by up to a factor of 10. Next, we integrate the genuinization algorithm in the spoofing countermeasures and we observe a huge spoofing detection improvement in different cases. The results of our experiments show clearly that waveform distribution plays an important role and must be taken into account by anti-spoofing systems.
Itshak Lapidot, Jean-Francois Bonastre
2023-10-09T08:56:31Z
http://arxiv.org/abs/2310.05534v1
Thech. Report: Genuinization of Speech waveform PMF for speaker detection spoofing and countermeasures ###### Abstract In the context of spoofing attacks in speaker recognition systems, we observed that the waveform _probability mass function_ (PMF) of genuine speech differs significantly from the PMF of speech resulting from the attacks. This is true for synthesized or converted speech as well as replayed speech. We also noticed that this observation seems to have a significant impact on spoofing detection performance. In this article, we propose an algorithm, denoted _genuinization_, capable of reducing the waveform distribution gap between authentic speech and spoofing speech. Our _genuinization_ algorithm is evaluated on ASVspoof 2019 challenge datasets, using the baseline system provided by the challenge organization. We first assess the influence of _genuinization_ on spoofing performance. Using _genuinization_ for the spoofing attacks degrades spoofing detection performance by up to a factor of \(10\). Next, we integrate the _genuinization_ algorithm in the spoofing countermeasures and we observe a huge spoofing detection improvement in different cases. The results of our experiments show clearly that the waveform distribution plays an important role and must be taken into account by anti-spoofing systems. _Index Terms_: speaker detection, spoofing, spoofing countermeasure, waveform, probability mass function (PMF), CQCC, LFCC, GMM. ## I Introduction In recent years the sensitivity of speaker recognition to spoofing attacks and the development of spoofing countermeasures have raised increasing interest [1, 2, 3, 4, 5, 6, 7]. The most common threats in voice authentication are replaying recorded utterances, voice synthesis, and voice conversion. The associated countermeasures generally consist of a specific additional system capable of separating actual examples of speech and examples of impersonation, regardless of the type of impersonation attacks. Different approaches have been applied [8, 9, 10, 11]. One of the main differences between these approaches (as well as between speaker recognition and spoofing detection) is related to the feature extraction. Different features were proposed for anti-spoofing systems [10]. Sometimes, the feature used for spoofing detection is linked to the feature used for the speaker recognition task or is optimized together with it [12, 13]. As a result of the ASVspoof challenge suite, the most promising appear to be the _constant Q cepstral coefficients_ (CQCC) [8], which are a non-linear extension of the _linear frequency cepstral coefficients_ (LFCC). Most of the features proposed are based on short-term spectral conversion (e.g., _mel-frequency cepstral coefficients_ (MFCC) and CQCC) and ignore the time domain. Moreover, even the rare exceptions that take the time domain into account usually only use it as a pre-processing step followed by short-term spectral analysis: [14] filters the voice excitation source in order to estimate the residual signal and uses it together with the frequency domain information inside a _Gaussian mixture model_ (GMM)-based classifier; [15] applies cochlear filtering and nerve spike density before performing a short-term spectral analysis. Spectral features are commonly used not only for countermeasures but also in many speech conversion and synthesis algorithms [16, 17]. 
This apparent lack of interest in time domain information is surprising, because time domain information is well known for its richness and is frequently used, for example, to estimate voice quality parameters and assess voice quality in clinical phonetics [18, 19, 20, 21, 22, 23]. It seems obvious that at least the voice quality parameters are important for separating genuine and spoofing speech. If the time domain is mostly ignored in spoofing countermeasures, this is certainly more related to the intrinsic difficulty of time-based approaches than to a lack of information at this level. In order to exploit temporal information without being drowned in the associated complexity, we have proposed in recent years to use a simple approach, the entropy of waveform coefficients. In [24] and [25] it has been shown that it allows the detection of overlapping speech between two speakers, and in [26] we successfully applied a similar approach to database assessment. In [27, 28], we looked at an even simpler solution, the _probability mass functions_ (PMFs) of waveform amplitude coefficients. We compared the PMFs of genuine voice recordings versus spoofing voice recordings (synthesized, converted or replayed voice) and were surprised by the large differences observed. We then exploited the waveform PMFs for the spoofing speech detection task. We also proposed to correct the observed imbalance between authentic speech and spoofing speech PMFs, using a process inspired by the _Gaussianization_ of MFCC features [29], applied at the waveform coefficient level and denoted by analogy _genuinization_. In this article, we investigate further the effect of our _genuinization_ process when applied to different types of spoofing and genuine speech. We also question the behavior of _genuinization_ on the high- and low-energy parts of the speech signal and on the spoofing detection system training set. Using this new knowledge on _genuinization_, we propose a solution to make spoofing detection as insensitive as possible to the use of such a waveform manipulation. All the experiments are done using the ASVspoof \(2019\) challenge [30] train and development sets as well as the baseline system provided by the challenge. ## II Waveform PMF _Genuinization_ process As noted in [27, 28], the waveform coefficient PMFs can differ significantly between spoofing speech and genuine speech. This section is dedicated to a transformation to be applied to the spoofing speech waveform coefficients to remove this difference. Our transformation is directly inspired by the data distribution conversion algorithm proposed in [29] for MFCC features and is denoted _Genuinization_ as it targets the waveform PMF of genuine speech. While in [29] the targeted distribution is a Gaussian distribution (and the process is denoted _Gaussianization_), we present here a generalized version of the algorithm for any continuous CDF and then its adaptation to our _Genuinization_ case. ### _Continuous CDF conversion_ The main principle of the data distribution conversion algorithm is illustrated in Fig. 1. Both source and target are continuous _random variables_ (RV) denoted, respectively, \(x\) and \(y\). Their distributions are represented by the corresponding _cumulative distribution functions_ (CDF) denoted, respectively, \(F_{x}\) and \(F_{y}\). 
For \(x=\alpha_{0}\), a data point of the source, the algorithm: * Finds \(F_{x}\left(\alpha_{0}\right)\), the value of the source CDF for \(\alpha_{0}\); * Then, finds \(\beta_{0}\), the value of \(F_{y}\), the target CDF, such that \(F_{y}\left(\beta_{0}\right)=F_{x}\left(\alpha_{0}\right)\); * \(y=\beta_{0}\) is the converted value of \(x=\alpha_{0}\). ### _Basic Genuinization algorithm_ The _Genuinization_ process aims to transform the samples of a source speech file in order to obtain a transformed speech file as close as possible to the original one but with a sample distribution following that of the genuine speech. The algorithm that we have just described is a simple and pleasant solution for this task, except that it is designed for the case of continuous random variables (in the _Gaussianization_, the source is assumed to be continuous and the target is a Gaussian distribution with zero mean and variance equal to 1). It is not valid for the _Genuinization_ case, where the source and target variables are discrete\({}^{1}\): the data points are speech signal samples following a given quantization, on \(16\) bits here. It does not allow the one-to-one mapping used in the _Gaussianization_ case, even when the two distributions are similar. Footnote 1: When it comes to discrete RVs, the use of the term CDF is not entirely appropriate and _cumulative mass function_ (CMF) is better suited. However, since CDF is more common to the reader, we will continue to use it for clarity in the rest of this article. Knowing that, we will assume that the CDF is no longer continuous from the left. To overcome this limitation, we propose a quantile normalization [31] of the spoofing-speech sample distribution, using as target a genuine-speech sample distribution. Fig. 2 and Alg. 1 illustrate the process. First, the genuine CDF, \(F_{x}^{g}\left(k\right)\), is computed following \(F_{x}^{g}\left(k\right)=\sum_{q=1}^{k}p_{x}^{g}\left(q\right)\), where \(p_{x}^{g}\left(k\right)\) is the waveform PMF of the genuine speech. Our _Genuinization_ of a spoofing file \(s(n)\) is described below: * Given \(x\), a discrete random variable \(x\in\left\{1,\ldots,2^{16}\right\}\), and \(g\), the genuine speech, \(p_{x}^{g}\left(k\right)\) is its waveform amplitude PMF. * \(k\) is the value assigned to \(x\) (the actual signal amplitude is \(s\left(n\right)=-1+k\cdot 2^{-15}\)). Next, the CDF of genuine speech, \(F_{x}^{g}\left(k\right)=\sum_{q=1}^{k}p_{x}^{g}\left(q\right)\), is calculated over the whole training set. For each spoofing speech signal \(s\left(n\right)\), the CDF \(F_{x}^{s}(k)\) (\(s\) for the spoofed signal) is calculated in the same way as previously. The _genuinization_ algorithm is then applied, as described in Algorithm 1, and illustrated in Fig. 2. For each data point of the spoofed signal \(s(n)\), the value \(k\) is found and \(F_{x}^{s}\left(k\right)\) is assigned. Then, the genuine CDF value which is the closest from below to \(F_{x}^{s}\left(k\right)\) is defined as an optimal match, \(F_{x}^{g}\left(q^{*}\right)\). \(q^{*}\) is assigned to be the genuinized value which will define \(\hat{s}\left(n\right)\). Figure 1: Speech conversion illustration. 
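A compact way to realize this quantile mapping is sketched below; it is an illustration with numpy, not the implementation used for the experiments, and it assumes the waveforms are already loaded as 16-bit integer arrays (so the level index is simply shifted by \(2^{15}\) instead of using the 1-based \(k\) above). Algorithm 1, which follows, states the same procedure formally.

```python
import numpy as np

N_LEVELS = 2 ** 16   # 16-bit quantization levels

def waveform_cdf(samples_int16):
    """Empirical CDF of the quantization levels of a 16-bit waveform."""
    k = samples_int16.astype(np.int64) + 2 ** 15            # map [-2^15, 2^15 - 1] -> [0, 2^16 - 1]
    pmf = np.bincount(k, minlength=N_LEVELS) / k.size
    return np.cumsum(pmf)

def genuinize(spoof_int16, genuine_cdf):
    """Map each spoofed sample to the highest genuine level whose CDF does not exceed F_x^s(k)."""
    spoof_cdf = waveform_cdf(spoof_int16)
    k = spoof_int16.astype(np.int64) + 2 ** 15
    target = spoof_cdf[k]
    # q* = argmax_q { F^g(q) <= F^s(k) }, found by binary search on the genuine CDF
    q_star = np.searchsorted(genuine_cdf, target, side="right") - 1
    q_star = np.clip(q_star, 0, N_LEVELS - 1)
    return (q_star - 2 ** 15).astype(np.int16)
```

For the random genuinization of subsection II-E, `genuine_cdf` would simply be computed from a single randomly selected genuine file instead of the whole training set.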
```
Given:   a spoofing file s(n), n = 1, ..., N
         the genuine CDF F_x^g(k), k in {1, ..., 2^16}
         the spoofing file CDF F_x^s(k)
Output:  the genuinized file s_hat(n)
for n := 1 to N step 1 do
    Set k = [s(n) + 1] * 2^15
    Find q* = argmax_q { F_x^g(q) <= F_x^s(k) }
    Set s_hat(n) = -1 + 2^(-15) * q*
end for
Return: s_hat(n)
```
**Algorithm 1** Genuinization algorithm We applied this approach to the genuinization of two databases of the ASVspoof2019 challenge [30]. At this challenge there were two separate sub-challenges. The first is _logical access_ (LA) and it includes synthesized speech and converted speech; the second was _physical access_ (PA) and it includes replayed speech (this database is a simulated replayed database in order to have controlled conditions, both for recorded and replayed signals). In each challenge there was a train set and a development set to design systems which classify speech as genuine or spoofed. The genuinization was applied on the test sets of both datasets. Figure 2: Speech conversion illustration for CDF of discrete RVs. The genuine PMF and CDF were estimated from all genuine files of each dataset, while the genuinization was performed file by file and the overall PMFs of the spoofed files before and after genuinization were calculated. First, the genuinization was applied on the LA dataset and the results are presented in Fig. 3. The horizontal axis is the amplitude of the speech signal and not \(k\). The upper subplot is the PMF of the spoofed data. In the middle it is the spoofed data after genuinization, while at the bottom is the PMF of the genuine speech which was used as a target distribution. It is clear that while the spoofed PMF significantly differs from the genuine PMF, the genuinized PMF is similar to the target, genuine PMF. Figure 3: Waveform amplitude PMFs for logical condition, train set: Spoofed (upper); Genuinized (middle); Genuine (bottom). Next, the same procedure is applied to the PA dataset and the results are presented in Fig. 4. As the peak of the spoofed PMF in the PA case is much higher than in the other two plots, the vertical axes have different scales (\(0.12\) for the spoofed PMF and \(0.04\) for the others). In this case the genuinization algorithm does not work, and a notch is observed at amplitude equal to zero after genuinization. Figure 4: Waveform amplitude PMFs for PA condition, train set: Spoofed (upper); Genuinized (middle); Genuine (bottom). The main difference from the LA case is that the spoofed PMF has a very high peak at one amplitude. Such a case is illustrated in Fig. 5. In the illustration the peak appears at \(k=4\). Due to this peak, \(q=3\) and \(q=4\) are never present in the genuinized distribution despite their presence in the target distribution. Figure 5: Speech conversion illustration for CDF of discrete RVs with a high peak in the source PMF (spoofed signal PMF). Such phenomena can appear only in discrete distributions, as their CMF is not continuous from the left at the discrete values which carry the probability mass. For this reason, the algorithm has to be adapted to perform also in such cases. ### _Perturbated Genuinization algorithm_ Before explaining the algorithm, it is important to recall the quantization function used during the sampling process. In most databases the quantization is \(16\) bits per sample in the range \([-1.0,1.0]\), where the lowest level is \(-1+2^{-15}\) and the highest level is \(1.0\). In the general case, for \(D\) bits, the quantization levels are between \(-1+2^{-\left(D-1\right)}\) and \(1.0\). 
In Fig. 6 a 3-bit quantizer is presented. It is a mapping of many to one, where a segment \(S_{k}=\left(-1+\left(k-1\right)\cdot 2^{-\left(D-1\right)},-1+k\cdot 2^{-\left(D-1\right)}\right]\) is mapped to its maximum value \(-1+k\cdot 2^{-\left(D-1\right)}\). Figure 6: 3-bit quantizer. This means that the probability of the quantization level at \(k=K\) is \(p_{K}=\int_{S_{K}}f_{A}\left(\alpha\right)d\alpha\), where \(f_{A}\left(\alpha\right)\) is the _probability density function_ (_pdf_) of the continuous amplitude \(A\). As the true \(f_{A}\left(\alpha\right)\) is not known, and assuming that \(S_{k}\) is sufficiently small, it is possible to approximate this _pdf_ as piecewise constant, \(\hat{f}_{A}\left(\alpha\right)=\frac{p_{k}}{\Delta}\), where \(\Delta=\frac{2}{2^{D}}=2^{-\left(D-1\right)}\) (the segment length) and \(\alpha\in S_{k}\) (the \(k\)-th segment). According to these assumptions it is possible to find a new PMF and CDF for a quantizer with \(\left(D+d\right)\) bits. Adding \(d\) bits means adding \(2^{d}-1\) quantization levels to each segment (in total \(2^{d}\) quantization levels per segment). The extended CDF is calculated only for the spoofed signal, i.e., from \(F_{x}^{s}\left(k\right)\) we obtain an extended spoofed CDF \(G_{x}^{s}\left(k\right)\). Algorithm 1 can now be applied but with the new CDF, as shown in Fig. 7. Figure 7: Speech conversion illustration for extended CDF of discrete RVs with a high peak in the source PMF (spoofed signal PMF). The only remaining problem is how to assign a new \(k\) for \(G_{x}^{s}\left(k\right)\). As \(k\) represents a value of the signal in the \(k\)-th segment, and the assumption is of a uniform distribution within a segment, the new \(k\) is assigned by a uniformly distributed discrete random value among the \(2^{d}\) new levels: \(\left(k\cdot 2^{d}-n\right)\to k\), where \(n\) is a discrete uniform random variable, \(n\sim U\left(0,2^{d}-1\right)\). As was mentioned above, in all the databases \(D=16\), and we augmented it with \(d=5\), extending the quantizer by a factor of \(32\). The results after such manipulation are presented in Fig. 8. Figure 8: Waveform amplitude PMFs for PA condition, train set, using the randomized genuinization: Spoofed (upper); Genuinized (middle); Genuine (bottom). Comparing this result with the one in Fig. 4, a significant improvement can be observed. The results for LA remained the same. This algorithm is much more robust than the original genuinization algorithm, but the hyper-parameter \(d\), the additional number of bits, has to be chosen in advance. Too small a \(d\) and the results will be insufficient, while too large a \(d\) will lead to an overload in memory and analysis time without any loss in the end performance. ### _The importance of the non-speech parts_ Another issue is the question of what is learned by the anti-spoofing systems. Current spoofing countermeasure systems typically use the entire signal, including the non-voice parts [32]. When a voice activity detector (VAD) is used to remove these unvoiced portions of the signal, performance is degraded. This result suggests that countermeasure systems pay great attention to parts of the signal where there is no speech-related information. If this is true, countermeasure systems appear sensitive to a change in PMF in the low amplitude region, that is, at the low energy portions of the signal and primarily at the non-speech portions. In order to assess this sensitivity of spoofing detection systems to the low energy signal parts, we are conducting several experiments using VAD. 
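Returning to the perturbed genuinization of subsection II-C, the \(d\) additional dither bits can be realized as in the following sketch, again a numpy illustration under the piecewise-uniform assumption rather than the implementation used for the experiments (`genuine_cdf` is the genuine CDF over the original \(2^{16}\) levels); the VAD-based experiments announced above are described immediately after it.

```python
import numpy as np

D_EXTRA = 5                                   # d = 5 extra bits -> 2^d = 32 sub-levels per level
N_FINE = 2 ** 16 * 2 ** D_EXTRA               # extended number of quantization levels

def genuinize_perturbed(spoof_int16, genuine_cdf, rng=np.random.default_rng(0)):
    """Perturbed genuinization: dither every spoofed sample uniformly over the 2^d
    sub-levels of its quantization bin, build the extended spoofed CDF from the
    dithered indices, then match against the genuine CDF as in Algorithm 1."""
    k = spoof_int16.astype(np.int64) + 2 ** 15                 # original level index
    n = rng.integers(0, 2 ** D_EXTRA, size=k.size)             # n ~ U(0, 2^d - 1)
    k_fine = np.clip(k * 2 ** D_EXTRA - n, 0, N_FINE - 1)      # (k * 2^d - n) -> extended index
    spoof_cdf_fine = np.cumsum(np.bincount(k_fine, minlength=N_FINE) / k.size)
    target = spoof_cdf_fine[k_fine]
    q_star = np.clip(np.searchsorted(genuine_cdf, target, side="right") - 1, 0, 2 ** 16 - 1)
    return (q_star - 2 ** 15).astype(np.int16)
```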
### _The importance of the non-speech parts_

Another issue is the question of what is actually learned by the anti-spoofing systems. Current spoofing countermeasure systems typically use the entire signal, including the non-voice parts [32]. When a voice activity detector (VAD) is used to remove these unvoiced portions of the signal, performance is degraded. This result suggests that countermeasure systems pay great attention to parts of the signal where there is no speech-related information. If this is true, countermeasure systems appear sensitive to a change in the PMF in the low amplitude region, that is, at the low energy portions of the signal and primarily at the non-speech portions. In order to assess this sensitivity of spoofing detection systems to the low energy signal parts, we conduct several experiments using a VAD.

We apply here the simple energy-based VAD used in [26, 33] (using \(\alpha=0.03\)). The PMFs of the genuine speech and of the spoofed speech after removing the non-speech parts with the VAD are shown in Figure 9. The PMFs look very similar to each other. In order to better understand whether our _genuinization_ process is sensitive to the speech/non-speech question, we propose an experiment where the _genuinization_ parameters are learned using only the non-speech parts. Other than that, there is no difference with respect to the procedure described in Algorithm 1 (in other words, in the genuinization process, we change only the target distribution, which is now computed only on the non-speech segments detected by the VAD).

Figure 9: Waveform amplitude PMFs for logical condition after VAD, train set: genuine speech (upper) and spoofed speech (bottom).

Figure 10 presents the PMFs of the genuine speech, of the original spoofing speech, and of the spoofing speech after this "non speech-only" _genuinization_. The PMF after _genuinization_ is far from being identical to the genuine speech PMF. This is very clear when Figure 10 is compared with Figure 3, where the PMF after _genuinization_ is closer to the genuine speech PMF. This is explained by the heavier tails of the spoofing speech, particularly the left tail, as shown in Figure 11.

Figure 10: Waveform amplitude PMFs for logical condition, train set: original spoofing speech (upper), spoofing speech after non-speech-based genuinization (middle) and genuine speech (bottom).

Figure 11: CDFs of the logical conditions train set waveform amplitudes for the genuine and spoofing speech.
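For reference, a minimal frame-energy VAD of the kind used above can be sketched as follows; the non-overlapping 25 ms frames and the use of \(\alpha\) as a threshold relative to the peak frame energy are our illustrative assumptions, not necessarily the exact settings of [26, 33].

```python
import numpy as np

def energy_vad(signal, frame_len=400, alpha=0.03):
    """Boolean speech/non-speech decision per non-overlapping frame."""
    n_frames = len(signal) // frame_len
    frames = signal[: n_frames * frame_len].reshape(n_frames, frame_len)
    energy = np.sum(frames ** 2, axis=1)
    threshold = alpha * energy.max()   # assumed: threshold relative to peak frame energy
    return energy > threshold

def keep_speech_only(signal, frame_len=400, alpha=0.03):
    """Drop the frames classified as non-speech and concatenate the rest."""
    n_frames = len(signal) // frame_len
    frames = signal[: n_frames * frame_len].reshape(n_frames, frame_len)
    mask = energy_vad(signal, frame_len, alpha)
    return frames[mask].reshape(-1)
```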
### _Random Genuinization_

In random genuinization, the procedure is exactly the same as presented in Section II-C, with one exception: the reference PMF is not estimated from an entire training set; instead, a random file is selected each time. In such a manner, not all the genuinized recordings will have almost the same PMF. This is a more realistic distribution, where each recording has its own PMF instead of using a predefined collective PMF.

## III Impact of genuinization on the spoofing countermeasure

In this section we present the effects of the genuinization and random genuinization on the countermeasure. We apply the genuinization on the development set of ASVspoof 2019 [30]. In such an application there are always two sides, the attacker and the countermeasure. On the attacker side, the PMF model for genuinization was estimated from the genuine speech of the evaluation set of ASVspoof 2019. It is important to emphasize that on the attacker side the genuinization was performed only on spoofed speech, while genuine speech stayed untouched. On the countermeasure side, there are two actions that can be taken:

1. Training the GMMs for the genuine and spoofed models using genuinized speech. In this case, the genuine speech PMF estimation is done from the genuine speech of the ASVspoof 2019 train set. All the spoofed speech recordings of the train set are genuinized and serve for training the spoofed GMM. When genuine speech genuinization is performed to train the genuine GMM, the same PMF is used as for the spoofed speech, i.e., the same data is used both for PMF estimation and for genuinization.

2. On the countermeasure side it is possible to genuinize all the files under examination (the development set), but it must be done for all the recordings, as the countermeasure has neither the information whether a recording is genuine or spoofed, nor the information whether the spoofed recordings were genuinized or not. The genuinization is performed using the PMF estimated on the genuine speech data of the train set, exactly the same one used for GMM training.

When we describe the actions on both the attacker and the countermeasure sides, the genuinization can be either as described in subsection II-C or the random genuinization of subsection II-E. The following experiments test the preferred action that the countermeasure can take in the situation of lack of knowledge whether a genuinization was performed by the attacker or not, and which genuinization, random or not. The condition notations for the attack and the countermeasure are defined in Table I. Each recording can belong to human genuine speech or spoofed speech and belongs to a train set or to a test set; the GMM of the spoofed speech can be trained on spoofed data without genuinization, after genuinization, or after random genuinization; the attacker action is applied only to spoofed recordings and can be no action, genuinization or random genuinization, while the countermeasure action is applied to all the files under examination. It means that **AC** may have \(9\) different options: NN, NG, NR, GN, GG, GR, RN, RG, and RR, for both train and test. For the train conditions, the training of the genuine GMM, **H_Tr**, and of the spoofed GMM, **S_Tr**, can be either **O**, **G**, or **R**. This yields \(9\) different possibilities. However, \(4\) possibilities are illogical and were not examined: **H_Tr** = **G** or **R** while **S_Tr** = **O**, since there is no reason to use a genuine speech model after genuinization if the spoofed model is the original one; and **H_Tr** = **G** (resp. **R**) while **S_Tr** = **R** (resp. **G**), i.e., different genuinization methods. In the end, there are only \(5\) possibilities. On the side of the test (**H_Te** and **S_Te**), the action that is taken by the attacker is always on the spoofed data, while the genuine speech is untouched, i.e., **N**. This leads to \(3\) possibilities. On the countermeasure side, the action must be applied to all the data, as it is not known whether it is genuine or spoofed. This also leads to \(3\) possibilities, ending with \(3\times 3=9\) possibilities on the countermeasure side. In total, there are \(5\times 9=45\) possible scenarios which have to be evaluated for the LA conditions and \(45\) for the PA conditions. Each experiment is performed for LFCC and CQCC features. As far as we know, this is pioneering research in the time domain. As such, the importance of this work is not to show that it makes the detection of spoofed speech harder or easier, but to show that it changes the behavior of the systems. In Table II the results of both LA and PA conditions are presented for the original GMMs for both genuine and spoofed speech.
From Table II it can be learned that when the countermeasure takes the action of genuinization (either regular or random) the results are practically independent of the action of the attacker (experiments \(4-6\) and \(7-9\)), both for LA and PA conditions. These observations are valid for all other combinations of GMMs: the values of the EER change, but not the tendency. Another observation is that the countermeasure action of genuinization is at least as good as random genuinization in all the scenarios, probably due to the larger statistics for the PMF estimation, which makes it more robust to a larger variety of manipulations.

When applying the original data GMMs and taking no action on the countermeasure side, the results are usually the best if the attacker does not take any action either (first experiment for both LA and PA). However, there is a significant degradation in EER if the attacker performs any kind of genuinization. The only exception is for LA conditions with LFCC features (experiments \(2\) and \(3\)). In general, experiments \(4-9\) gave relatively poor results. This leads to the conclusion that it is best not to make any manipulation of the data on the countermeasure side.

The next experiment repeats experiments \(1-3\) from Table II, but with differently trained GMMs for both genuine and spoofed speech. The results are presented in Table III. From examining experiments \(4-12\) and comparing them with experiments \(1-3\), which are the same as in Table II, it can be observed that testing with the original GMMs when no action is taken by either side is significantly better than using at least one GMM trained on genuinized data. However, if the attacker applies any genuinization, using a GMM trained on the genuinized spoofed data leads to almost perfect (and sometimes even perfect) classification. This emphasizes the need for a detector of whether the data was genuinized or not. Other observations about these experiments are: for the no-action case, using GMMs trained on genuinized data harms more when using CQCC features for LA conditions, and LFCC features for PA conditions; for the other cases, CQCC features always perform better than LFCC features. The results with both GMMs trained on randomly genuinized data (experiments \(13-15\)) do not match all the other experiments. We do not have a good explanation for this phenomenon. Only LA conditions with CQCC features behaved as expected.

## IV Conclusion

In this work, we presented a comparative analysis of time-domain related information of authentic and artificially modified speech in the context of speaker identification spoofing systems and associated countermeasures. In doing so, we aim to fill part of the gap that exists in the study of time domain information for the speaker identification spoofing domain. To cope with the inherent complexity of time domain approaches, we used a straightforward approach, which is perhaps the simplest possible: examine the _probability mass function_ (PMF) of the waveform coefficients. Our work was carried out within the framework of the ASVspoof 2019 [30] challenge and uses the datasets and core systems officially published for this occasion.
The first important result of this article is the confirmation of our initial hypothesis regarding the importance of time domain information for the speaker spoofing domain, thanks to the significant difference that we observed between the PMF of genuine speech and the PMF of the spoofed speech, especially at low amplitudes. At these amplitudes, the non-speech portions of the audio signal contribute significantly to the waveform PMF. This observation may explain the counter-intuitive but common practice of not using a VAD, and therefore of using the non-speech parts, in countermeasure systems. This finding prompts us to suggest paying more attention to the low-energy parts of the audio signal when doing text-to-speech conversion or dedicated speaker spoofing conversion, such as gaps between words or unvoiced segments.

In a previous work, we proposed a _genuinization_ algorithm in order to convert the samples of the spoofed speech, so that their new distribution becomes similar to the genuine speech PMF. The genuinization algorithm is based on continuous distributions, Algorithm 1, and works well for LA conditions. Applying this algorithm to data for PA conditions did not yield good results and revealed a gap in the PMF, as shown in Figure 4. The reason seems to be the very high probabilities around the zero value of the PA conditions PMF (illustrated in Figure 5). To solve this problem, a \(5\)-bit perturbation was added (illustrated in Figure 7). The new algorithm works well for both PA conditions, Figure 8, and LA conditions.

After the qualitative inspection of the time domain PMF and the successful genuinization of the spoofed speech, the next step was to test the effect of genuinization on the countermeasure system as provided by the ASVspoof 2019 challenge organizers. It was shown that both the attacker and the countermeasure have to choose a strategy. The attacker has to decide whether to genuinize the spoofed speech or not, while the countermeasure has to decide two issues: first, how to train the GMMs, with or without genuinization of the data; second, whether to genuinize the received speech or not. From Table II it can be seen that it is better that the countermeasure does not genuinize the received speech, regardless of the decision of the attacker, when the GMMs were trained on the original data. This fact stays valid in all other cases as well. Another interesting observation is presented in experiments \(4-9\): although the performances are not good, it is important to see that if the countermeasure action is to genuinize the received speech then, practically regardless of the attacker action, the EER is almost not affected. This observation is also valid for all the other GMM training possibilities. In Table III the results presented were for different GMM trainings without taking any action on the received speech. It is clear that if the attacker does not genuinize the spoofed speech, the best countermeasure strategy is to stay with the original models (experiment \(1\)). If the attacker chooses to implement either genuinization or random genuinization, the original models are far from being the best choice. Excluding the case of LA conditions with LFCC features, the increase in the EER is significant (experiments \(2\) and \(3\)). However, even for LA conditions with LFCC features, the results are far from the best that can be achieved. The best solution is to train the GMM of the spoofed speech on genuinized or randomly genuinized data.
It is important not to train the genuine speech GMM with randomly genuinized data (experiments \(13-15\)), while all the other options are practically equivalent (experiments \(4-12\)). In all these experiments the EER for genuinized speech is very low, especially for the CQCC features, while the EER is very high in case the attacker does not genuinize the speech (experiments \(4\), \(7\), and \(10\)). This emphasizes the necessity of an anti-spoofing system that can generalize to both cases.

To conclude, much work should be done in several directions: first, the spoofed data must take into account not only the spectral information, but also the waveform information; second, the fact that non-speech is so important for the countermeasure, and also so different in the waveform, indicates that it is poorly represented in the spoofed speech and has to be better modeled; third, even a very simple time domain manipulation can dramatically harm the countermeasure performance, and it is important to develop systems that are immune to such manipulations; fourth, we presented a very simple time domain manipulation, but much work is still to be done, such as taking better care of non-speech events and taking time dependencies into account, and not only first-order statistics. This is only a first work in this direction, and there is much to do by combining the time and frequency domains into a common framework for both spoofing and anti-spoofing.

## Acknowledgment

This work was supported by the ANR-JST CREST VoicePersonae project
2305.17386
HyperFormer: Learning Expressive Sparse Feature Representations via Hypergraph Transformer
Learning expressive representations for high-dimensional yet sparse features has been a longstanding problem in information retrieval. Though recent deep learning methods can partially solve the problem, they often fail to handle the numerous sparse features, particularly those tail feature values with infrequent occurrences in the training data. Worse still, existing methods cannot explicitly leverage the correlations among different instances to help further improve the representation learning on sparse features since such relational prior knowledge is not provided. To address these challenges, in this paper, we tackle the problem of representation learning on feature-sparse data from a graph learning perspective. Specifically, we propose to model the sparse features of different instances using hypergraphs where each node represents a data instance and each hyperedge denotes a distinct feature value. By passing messages on the constructed hypergraphs based on our Hypergraph Transformer (HyperFormer), the learned feature representations capture not only the correlations among different instances but also the correlations among features. Our experiments demonstrate that the proposed approach can effectively improve feature representation learning on sparse features.
Kaize Ding, Albert Jiongqian Liang, Bryan Perrozi, Ting Chen, Ruoxi Wang, Lichan Hong, Ed H. Chi, Huan Liu, Derek Zhiyuan Cheng
2023-05-27T06:35:23Z
http://arxiv.org/abs/2305.17386v1
# HyperFormer: Learning Expressive Sparse Feature Representations via Hypergraph Transformer

###### Abstract.

Learning expressive representations for high-dimensional yet sparse features has been a longstanding problem in information retrieval. Though recent deep learning methods can partially solve the problem, they often fail to handle the numerous sparse features, particularly those tail feature values with infrequent occurrences in the training data. Worse still, existing methods cannot explicitly leverage the correlations among different instances to help further improve the representation learning on sparse features since such relational prior knowledge is not provided. To address these challenges, in this paper, we tackle the problem of representation learning on feature-sparse data from a graph learning perspective. Specifically, we propose to model the sparse features of different instances using hypergraphs where each node represents a data instance and each hyperedge denotes a distinct feature value. By passing messages on the constructed hypergraphs based on our Hypergraph Transformer (HyperFormer), the learned feature representations capture not only the correlations among different instances but also the correlations among features. Our experiments demonstrate that the proposed approach can effectively improve feature representation learning on sparse features.

Sparse Features; Hypergraph; Graph Neural Networks
The proposed HyperFormer captures the instance correlations as well as the feature correlations simultaneously. The resulting feature representations can improve the predictive power of different models on feature-sparse data. Our experiments on (i) CTR prediction and (ii) top-K item recommendation tasks demonstrate that HyperFormer is generalizable across different tasks, and further enhances state-of-the-art approaches of representation learning for sparse features.

## 2. Related Work

**Learning with Sparse Features** has been a classic yet challenging problem in information retrieval and recommender systems. A prevailing line of research tries to model the cross features at either the raw feature level or the embedding level. Compared to conventional approaches (Han et al., 2015; Wang et al., 2017; Wang et al., 2018), deep learning models have shown their superiority for handling high-dimensional sparse features (Han et al., 2015; Wang et al., 2018; Wang et al., 2018; Wang et al., 2018). Methods such as Wide&Deep (Han et al., 2015), Deep Crossing (Wang et al., 2018), PNN (Wang et al., 2018), DCN (Wang et al., 2018), AutoInt (Wang et al., 2018), Fi-GNN (Li et al., 2018), DCN-v2 (Wang et al., 2018) have been proposed to automatically model the cross features. However, existing methods are not able to explicitly capture the correlations between instances and are also ineffective in handling the tail features that appear rarely in the data.
**Graph Neural Networks (GNNs)** generally follow the neighborhood aggregation scheme (Han et al., 2015; Wang et al., 2018; Wang et al., 2018; Wang et al., 2018), learning the latent node representation via message passing among local or high-order neighbors in the graph (Han et al., 2015; Wang et al., 2018; Wang et al., 2018). More recently, GNNs have been actively explored to improve the performance of CTR prediction (Han et al., 2015; Wang et al., 2018; Wang et al., 2018; Wang et al., 2018) by capturing the interactions between features. Our approach leverages the idea of hypergraphs (Han et al., 2015; Wang et al., 2018; Wang et al., 2018; Wang et al., 2018; Wang et al., 2018) and we build **HyperFormer** to perform representation learning on sparse features. In addition, our work focuses on developing a new embedding module rather than learning the cross-features, and our plug-and-play model is compatible with any feature interaction learning method for making the final predictions.

## 3. Methodology

**Problem Definition.** For the sake of simplicity, we only consider sparse features and use a multi-hot representation for them, where each sparse feature is also called a field. The input of our problem is a high-dimensional sparse vector \(\mathbf{x}\in\mathbb{R}^{N}\) in multi-hot representation, where \(N\) is the total number of sparse feature values. Here \(x_{i}=0\) means the \(i\)-th feature value does not exist in the instance and \(x_{i}=1\) means otherwise. The objective is to learn a low-dimensional embedding vector \(\mathbf{e}\in\mathbb{R}^{d}\) that represents the raw input features in the latent space. Existing works apply an embedding layer to project the input features into a low-dimensional feature vector, which is commonly implemented by looking up from an embedding table \(\mathbf{F}=[\mathbf{f}_{1},\mathbf{f}_{2},...,\mathbf{f}_{N}]\in\mathbb{R}^{N\times d}\) and concatenating the retrieved embeddings into a dense real-valued vector. Correspondingly, \(\mathbf{f}_{k}\) can be regarded as the dense representation of feature \(x_{k}\). In this paper, we argue that existing methods cannot explicitly consider the correlations between instances and the correlations between features, making the feature representations less expressive.

### Feature Hypergraph

In this paper, we propose to alleviate the feature sparsity issue through relational representation learning. Since each specific feature can appear in multiple data instances, it can be naturally utilized as a bridge to capture instance correlations as well as feature correlations; for example, a group of users may share the same feature values for "location" or "age". Such correlations among instances are inherently high-order rather than pair-wise, thus we propose to build a _feature hypergraph_ to model the input data and enable message-passing on it to capture the desired relational information. Specifically, we define the feature hypergraph as follows:

**Definition 3.1**.: **Feature Hypergraph**: A feature hypergraph is defined as a graph \(G=(\mathcal{V},\mathcal{E})\), where \(\mathcal{V}=\{v_{1},\ldots,v_{n}\}\) represents the set of nodes in the graph, and \(\mathcal{E}=\{e_{1},\ldots,e_{m}\}\) represents the set of hyperedges. Specifically, each node represents a data instance and each hyperedge represents a unique feature value. Correspondingly, any hyperedge \(e\) can connect an arbitrary number of nodes/instances (i.e., \(\sigma(e)\geq 1\)).
**Scalability Extension.** Considering the fact that the scale of training data can be extremely large in practice, it is almost impossible to build a single feature hypergraph to handle all the data instances. To counter this issue, we propose to construct an in-batch hypergraph based on the data instances in the batch, to further support mini-batch training. In Figure 1, we illustrate the steps for constructing the in-batch hypergraphs. For each batch, we randomly sample a batch of instances and update the hypergraph structure based on the data samples in the batch. In our experiments, the in-batch feature hypergraph is also effective for capturing the desired data dependencies and achieves satisfactory performance improvements.

Figure 1. Illustration of the proposed HyperFormer.
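As a rough sketch of how such an in-batch feature hypergraph can be assembled (the function name, the dictionary representation, and the toy multi-hot input are our illustrative choices, not the paper's code), one can collect, for every feature value that occurs in the batch, the list of in-batch instances containing it:

```python
import numpy as np

def build_in_batch_hypergraph(x_batch):
    """x_batch: (B, N) multi-hot matrix for one mini-batch.

    Returns a dict mapping each active feature value (hyperedge) to the
    indices of the in-batch instances (nodes) it connects.
    """
    hyperedges = {}
    rows, cols = np.nonzero(x_batch)          # (instance index, feature index) pairs
    for node, feat in zip(rows, cols):
        hyperedges.setdefault(int(feat), []).append(int(node))
    return hyperedges

# toy batch: 3 instances, 6 possible feature values
x_batch = np.array([[1, 0, 1, 0, 0, 0],
                    [1, 0, 0, 1, 0, 0],
                    [0, 1, 0, 1, 0, 0]])
print(build_in_batch_hypergraph(x_batch))
# {0: [0, 1], 2: [0], 3: [1, 2], 1: [2]}
```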
### Hypergraph Transformer

To support representation learning on the constructed feature hypergraphs, we further propose a new model, the Hypergraph Transformer (HyperFormer), which adopts a Transformer-like architecture (Wang et al., 2018) to exploit the hypergraph structure and encode both the instance correlations and the feature correlations. Different from conventional GNN models, each layer in HyperFormer learns the representations with two hypergraph-guided message-passing functions, capturing high-order instance correlations and feature correlations simultaneously. Formally, a Transformer layer can be defined as:

\[\begin{split}\mathbf{H}^{l}&=\text{TF}_{edge}\Big{(}\mathbf{Q}^{l}_{edge}=\mathbf{H}^{l-1},\mathbf{K}^{l}_{edge}=\mathbf{F}^{l-1},\mathbf{V}^{l}_{edge}=\mathbf{F}^{l-1}\Big{)},\\ \mathbf{F}^{l}&=\text{TF}_{node}\Big{(}\mathbf{Q}^{l}_{node}=\mathbf{F}^{l-1},\mathbf{K}^{l}_{node}=\mathbf{H}^{l},\mathbf{V}^{l}_{node}=\mathbf{H}^{l}\Big{)},\end{split} \tag{1}\]

where \(\text{TF}\Big{(}\mathbf{Q},\mathbf{K},\mathbf{V}\Big{)}=\text{FN}\Big{[}\text{softmax}(\frac{\mathbf{Q}\mathbf{K}^{T}}{\sqrt{d}})\mathbf{V}\Big{]}\) denotes the Transformer-like attention mechanism. In essence, \(\text{TF}_{edge}\) is a message-passing function that aggregates information from hyperedges to nodes, and \(\text{TF}_{node}\) is another message-passing function that aggregates information from nodes to hyperedges. Specifically, we first look up from the feature embedding table \(\mathbf{F}\) to initialize the hyperedge representations, and the initial node representation of each instance is computed by concatenating all its feature representations. Without loss of generality, we describe the two message-passing functions in a single HyperFormer layer \(l\) as follows:

**Feature-to-Instance Message-Passing.** With all the hyperedge representations \(\{\mathbf{f}_{j}^{l-1}\,|\,e_{j}\in\mathcal{E}_{i}\}\), we first apply a feature-to-instance (edge-to-node) message-passing to learn the next-layer representation \(\mathbf{h}_{i}^{l}\) of node \(v_{i}\). Specifically, we set the node representation from the last HyperFormer layer \(l-1\) as the query, and the representations of the connected hyperedges are projected into keys and values. Formally, the similarity between the query and a key can be calculated as:

\[\alpha_{ij}=\frac{\exp((\mathbf{h}_{i}^{l-1}\mathbf{W}_{edge}^{Q})^{\text{T}}\mathbf{k}_{j})}{\sum_{e_{p}\in\mathcal{E}_{i}}\exp((\mathbf{h}_{i}^{l-1}\mathbf{W}_{edge}^{Q})^{\text{T}}\mathbf{k}_{p})},\quad\mathbf{k}_{p}=\mathbf{f}_{p}^{l-1}\mathbf{W}_{edge}^{K}, \tag{2}\]

in which \(\mathbf{W}_{edge}^{Q}\) and \(\mathbf{W}_{edge}^{K}\) are the projection matrices for the query and the key of the feature-to-instance transformer. Then the next-layer node representation can be computed as:

\[\mathbf{h}_{i}^{l}=\sigma\bigg{(}\sum_{e_{j}\in\mathcal{E}_{i}}\alpha_{ij}\mathbf{f}_{j}^{l-1}\mathbf{W}_{edge}^{V}\bigg{)}, \tag{3}\]

where \(\sigma\) is a non-linearity such as ReLU and \(\mathbf{W}_{edge}^{V}\) is a trainable projection matrix for the value.

**Instance-to-Feature Message-Passing.** With all the updated node representations, we again apply an instance-to-feature (node-to-edge) message-passing based on the Transformer layer to learn the next-layer representation of hyperedge \(e_{j}\). Similarly, this process can be formally expressed as:

\[\mathbf{f}_{j}^{l}=\sigma\bigg{(}\sum_{v_{k}\in\mathcal{V}_{j}}\beta_{jk}\mathbf{h}_{k}^{l}\mathbf{W}_{node}^{V}\bigg{)}, \tag{4}\]

where \(\mathbf{f}_{j}^{l}\) is the output representation of hyperedge \(e_{j}\) and \(\mathbf{W}_{node}^{V}\) is the projection matrix for the value. Here \(\beta_{jk}\) denotes the attention score of hyperedge \(e_{j}\) on node \(v_{k}\), which can be computed by:

\[\beta_{jk}=\frac{\exp((\mathbf{f}_{j}^{l-1}\mathbf{W}_{node}^{Q})^{\text{T}}\mathbf{k}_{k})}{\sum_{v_{p}\in\mathcal{V}_{j}}\exp((\mathbf{f}_{j}^{l-1}\mathbf{W}_{node}^{Q})^{\text{T}}\mathbf{k}_{p})},\quad\mathbf{k}_{p}=\mathbf{h}_{p}^{l}\mathbf{W}_{node}^{K}, \tag{5}\]

where \(\mathbf{W}_{node}^{Q}\) and \(\mathbf{W}_{node}^{K}\) are the projection matrices for the query and the key of the instance-to-feature message-passing. By stacking multiple HyperFormer layers, we are able to capture high-order instance correlations and feature correlations. The feature representations learned from the last HyperFormer layer, \(\mathbf{F}^{L}\), can be directly plugged into any model architecture as the feature embedding layer to improve the prediction performance on the downstream tasks.
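For concreteness, a simplified single-head sketch of one HyperFormer layer is given below in PyTorch style; the dense incidence mask, the class name, and the omission of the feed-forward sub-layer are our illustrative choices and do not reproduce the released implementation.

```python
import torch
import torch.nn as nn

class HyperFormerLayer(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.wq_edge = nn.Linear(dim, dim, bias=False)  # node queries
        self.wk_edge = nn.Linear(dim, dim, bias=False)  # hyperedge keys
        self.wv_edge = nn.Linear(dim, dim, bias=False)  # hyperedge values
        self.wq_node = nn.Linear(dim, dim, bias=False)  # hyperedge queries
        self.wk_node = nn.Linear(dim, dim, bias=False)  # node keys
        self.wv_node = nn.Linear(dim, dim, bias=False)  # node values
        self.scale = dim ** 0.5

    def forward(self, h, f, incidence):
        # h: (B, dim) node states, f: (M, dim) hyperedge states,
        # incidence: (B, M) binary node-hyperedge membership mask
        neg_inf = torch.finfo(h.dtype).min

        # hyperedge -> node (feature-to-instance) attention
        scores = self.wq_edge(h) @ self.wk_edge(f).T / self.scale      # (B, M)
        scores = scores.masked_fill(incidence == 0, neg_inf)
        h_new = torch.relu(torch.softmax(scores, dim=-1) @ self.wv_edge(f))

        # node -> hyperedge (instance-to-feature) attention
        scores = self.wq_node(f) @ self.wk_node(h_new).T / self.scale  # (M, B)
        scores = scores.masked_fill(incidence.T == 0, neg_inf)
        f_new = torch.relu(torch.softmax(scores, dim=-1) @ self.wv_node(h_new))
        return h_new, f_new
```

Stacking several such layers and feeding the final hyperedge states back as the feature embedding table corresponds to the plug-and-play usage described above.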
## 4. Experiments

To evaluate the effectiveness of the proposed approach, we conduct our experiments on two real-world tasks that often suffer from the feature sparsity issue: (i) click-through-rate (CTR) prediction and (ii) top-k item recommendation.

### Task1: CTR Prediction

Click-through-rate (CTR) prediction is a task that predicts how likely a user is to click an advertisement. Typically, an instance sample is represented by high-dimensional and sparse features, such as the user profile, ad attributes, and contextual features such as time, platform, and geographic location. We first evaluate the effectiveness of HyperFormer for CTR prediction.

**Datasets.** For the task of CTR prediction, we adopt two public real-world benchmark datasets in which the features are extremely sparse; the statistics of those datasets can be found in Table 1. To adopt the **MovieLens-1M** dataset for CTR prediction, we follow (Zhu et al., 2017; Liu et al., 2018) to transform the original user ratings into binary values. The dataset is divided into 8:1:1 for training, validation, and testing, respectively. The **Criteo** dataset is widely adopted for CTR prediction and includes 45 million users' ad clicks on display ads over a 7-day period. As in (Zhu et al., 2017; Liu et al., 2018), we use the data from the first 6 days for training and randomly split the data from the last day into validation and test sets.

**Baselines.** We include the following baseline methods for CTR prediction: Logistic Regression (**LR**) and Factorization Machine (**FM**) (Zhu et al., 2017); different neural extensions of FM, including Neural Factorization Machine (**NFM**) (Krizhevsky et al., 2014), **xDeepFM** (Krizhevsky et al., 2014), and **HoFM** (He et al., 2015); **AutoInt** (Zhu et al., 2017), which is designed to automatically learn the feature interactions with self-attention; and **DCN-v2** (Liu et al., 2018), an improved deep & cross network that models explicit and bounded-degree feature interactions. To show the flexibility and effectiveness of HyperFormer, we integrate it into two representative baselines, AutoInt and DCN-v2, and then report their performance in the experiments.

**General Comparison.** We evaluate the performance of the different methods based on two widely-used metrics for CTR prediction, AUC and LogLoss, reported in Table 2. FM is able to model second-order feature interactions and thus outperforms LR, which can only learn from the raw feature input. With the power of deep neural networks, xDeepFM and NFM improve the performance of FM on both datasets by incorporating non-linear transformations and interactions among features. AutoInt further improves on NFM by adaptively modeling feature interactions using an attention mechanism. DCN-v2 is also shown to be an effective approach for CTR. More importantly, our experimental results demonstrate the effectiveness of HyperFormer, as it improves the performance of the two representative CTR models in both AUC and LogLoss.

**Further Analysis on Tail Features.** Feature values usually follow a power-law distribution, and the tail features only appear a few times among all the data examples. Without enough learning signals, it is hard for the low-frequency features to obtain informative embeddings, resulting in low CTR prediction accuracy for data samples that contain those low-frequency features. Our HyperFormer
To further demonstrate the generalizability of HyperFormer, we evaluate its performance on the relational representation learning for recommendation systems. **Datasets.** In the experiment, we adopt two benchmark datasets for evaluation. **Amazon-Movie** consists of product reviews and metadata for the "Movie" category spanning from May 1996 to July 2014 (Mikolov et al., 2017). **Bookcrossing**(Mikolov et al., 2017) collects the user-item ratings within the community, including both user demographic and age as well as item features such as Title, Author, Year, and Publisher. After removing the inactive users and items, we obtain the final datasets as summarized in Table 1. We randomly sample 70% data for model training, 10% for validation and 20% for testing. **Baselines.** Due to its high efficiency and flexibility, **Two-tower** model with separate user and item towers is widely adopted as the fundamental learning architecture for large-scale top-k item retrieval (Zhou et al., 2017; Wang et al., 2018; Wang et al., 2019; Wang et al., 2019). Specifically, the high-dimensional user and item features are input to the corresponding towers, and the preference scores are typically computed by a dot-product between user and item embeddings encoded by the corresponding towers. To solve the feature imbalance issue, **DAT**(Wang et al., 2018) was proposed to extend each tower with an extra learnable vector to memorize the cross-tower information. Recently, Yao et.al proposed **SSL**(Wang et al., 2018) to leverage latent feature correlations in a two-tower model by augmenting the data and incorporating an auxiliary self-supervised learning task. **General Comparison.** To evaluate HyperFormer on top-K item recommendation, we plug it into a two-tower recommendation model and compare it with the baseline methods above that were proposed to address the feature sparsity issue. We use NDCG@10 and Recall@10 as evaluation metrics and summarize the results for both Amazon-Movie and Bookcrossing in Figure 3. By introducing the category alignment loss and an extended vector capturing the cross-tower information, DAT can significantly improve two-tower in top-k recommendation for both datasets. We find that SSL significantly outperforms DAT in Bookcrossing but falls behind DAT in Amazon-Movie. However, the proposed HyperFormer consistently outperforms all other methods in both datasets, showcasing its effectiveness in feature representation learning. The advantage across both CTR prediction and top-k recommendation highlights the generalizability of HyperFormer and its potential to address feature sparsity in various real-world tasks. ## 5. Conclusion In this paper, we focus on the problem of representation learning on high-dimensional sparse features. We propose to build feature hypergraphs to model the instance correlations and feature correlations explicitly. The proposed Hypergraph Transformer further enables message-passing on the constructed feature hypergraphs, resulting in more informative feature representations that encode instance correlations and feature correlations within the data. The evaluation of different methods on click-through-rate prediction and item recommendation demonstrate the effectiveness of our approach in capturing the relational information within data for learning informative feature representations. 
\begin{table} \begin{tabular}{l c c c c} \hline \hline **Method** & \multicolumn{2}{c}{Movielens-1M} & \multicolumn{2}{c}{Criteo} \\ & AUC & LogLoss & AUC & LogLoss \\ \hline LR & 0.7716 & 0.4424 & 0.7820 & 0.4695 \\ FM & 0.8252 & 0.3998 & 0.7836 & 0.4700 \\ NFM & 0.8357 & 0.3883 & 0.7957 & 0.4562 \\ XDeepFM & 0.8286 & 0.4108 & 0.8009 & 0.4517 \\ HoFM & 0.8304 & 0.4013 & 0.8004 & 0.4508 \\ AutoInt & 0.8456 & 0.3797 & 0.8061 & 0.4455 \\ DCN-v2 & 0.8402 & 0.3811 & 0.8045 & 0.4462 \\ \hline **AutoInt+HyperFormer** & **0.8462** & **0.3770** & **0.8072** & **0.4444** \\ **DCN-v2+HyperFormer** & **0.8471** & **0.3755** & **0.8061** & **0.4453** \\ \hline \hline \end{tabular} \end{table} Table 2. AUC and Logloss on CTR prediction. Figure 3. Top-k Recommendation performance comparison. Figure 2. CTR performance across different instance groups. ## Acknowledgments This material is based upon work supported by, or in part by, the NSF grant 2229461.
2308.09144
Non-equilibrium fluctuations for SEP($α$) with open boundary
We analyze the non-equilibrium fluctuations of the partial symmetric simple exclusion process, SEP($\alpha$), which allows at most $\alpha \in \mathbb{N}$ particles per site, and we put it in contact with stochastic reservoirs whose strength is regulated by a parameter $\theta \in \mathbb{R}$. Setting $\alpha = 1$, we find the results of [22, 16, 17] and extend the known results to cover all range of $\theta$.
C. Franceschini, P. Gonçalves, M. Jara, B. Salvador
2023-08-17T18:24:00Z
http://arxiv.org/abs/2308.09144v1
# Non-equilibrium fluctuations for ###### Abstract We analyze the non-equilibrium fluctuations of the partial symmetric simple exclusion process, SEP(\(\alpha\)), which allows at most \(\alpha\in\mathbb{N}\) particles per site, and we put it in contact with stochastic reservoirs whose strength is regulated by a parameter \(\theta\in\mathbb{R}\). Setting \(\alpha=1\), we find the results of [22, 16, 17] and extend the known results to cover all range of \(\theta\). **Keywords:** Partial Exclusion Process; Boundary driven; Non-equilibrium Fluctuations; Non-stationary two-point correlations; Ornstein-Uhlenbeck Process. ## 1 Introduction Interacting particle systems are stochastic systems on which individual units (the so-called _particles_) perform Markovian evolutions influenced by the presence of other particles. The objective is to study the emergence of collective behavior out of simple interaction rules for the individual units of the system. Among the most studied interacting particle systems [23] is the so-called _exclusion process_, on which the interaction between particles is reduced to a simple _exclusion rule_, under which particles evolving on a graph can never share the same position. The exclusion model has been used as a landmark for a myriad of collective behavior, among which mass transport, interface growth and motion by mean curvature. The success of the exclusion process as an interacting particle system comes from one side from its striking combinatorial and algebraic properties, which makes the analysis of the collective behavior of particles a mathematically tractable problem, and from the other side from the fact that it is rich enough to allow modelling a great variety of collective behaviors. A generalization of the exclusion process that shares many of its algebraic properties is the so-called _partial exclusion process_: in this model, the exclusion rule is relaxed to allow at most \(\alpha\) particles per site, where \(\alpha\in\mathbb{N}\) is a fixed parameter. The partial exclusion process that we investigate here, the SEP(\(\alpha\)), was first introduced in Section B of [26]. We restrict ourselves to the choice of a simple symmetric dynamics on a one-dimensional lattice, i.e. nearest-neighbor jumps with \(p(1)=p(-1)=1/2\). For \(N\in\mathbb{N}\), we consider the finite lattice \(\Lambda_{N}=\{1,\ldots,N-1\}\) which we call bulk. For a site \(x\in\Lambda_{N}\), we fix the rate at which a particle jumps from \(x\) to \(x+1\) (resp. from \(x+1\) to \(x\)) to be equal to \(\eta(x)(\alpha-\eta(x+1))\) (resp. \(\eta(x+1)(\alpha-\eta(x))\)), where \(\eta(x)\) denotes the quantity of particles at site \(x\) on the configuration \(\eta\). If \(\alpha=1\), the model coincides with the so-called symmetric simple exclusion process (SSEP). This specific choice of the rates was introduced in [26], see equation (2.30) in that article. The SEP(\(\alpha\)) has been further studied in other settings, such as in [6] and [13] where the system is put in contact with stochastic reservoirs, in [12] under a random enviroment and also in [7], [8], always from a duality point of view. We note that for the choice of rates given above this model is what is called a _gradient model_, since the instantaneous current of the system at the bond \(\{x,x+1\}\), i.e. the difference between the jump rate from \(x\) to \(x+1\) and the jump rate from \(x+1\) to \(x\) can be written as the gradient of a local function. More precisely, that current is equal to \(a(\eta(x)-\eta(x+1))\). 
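For the reader's convenience, the gradient property can be verified directly from the rates above. Writing \(j_{x,x+1}(\eta)\) for the instantaneous current through the bond \(\{x,x+1\}\) (a notation introduced here only for this remark),

\[j_{x,x+1}(\eta)=\eta(x)\big(\alpha-\eta(x+1)\big)-\eta(x+1)\big(\alpha-\eta(x)\big)=\alpha\big(\eta(x)-\eta(x+1)\big),\]

so the current is indeed the discrete gradient of the local function \(\eta\mapsto\alpha\,\eta(x)\).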
We also observe that the number of particles is conserved by the dynamics of the SEP(\(\alpha\)) and that the symmetry of the jump rates of the individual particles makes the system reversible with respect to Binomial measures of product form. Non-equilibrium phenomena have become increasingly relevant in recent years, and the study of how collective behavior is modified by breaking reversibility is an active research subject. A natural way to modify the SEP(\(\alpha\)) in order to make it non-reversible, is to attach to the lattice _density reservoirs_ with at least two different densities. This creates currents through the system, which _drive_ the system out of equilibrium. In this article, this will be the setting we will be working on, i.e. we will attach a stochastic reservoir to each boundary point of \(\Lambda_{N}\). These reservoirs will break the conservation of the total number of particles, since they can inject and remove particles, even-though the individual units of the system will still be conserved _locally_. With the aim of exploring various possible answers to the question whether the limiting collective behavior of particles retains the non-reversible behavior, we will choose the particles injection and removal rates to scale with the size \(N\) of the system, through a parameter \(\theta\in\mathbb{R}\), and to be such that the system is no longer in equilibrium. When \(\theta<0\), the reservoirs are fast and when \(\theta\geq 0\), the reservoirs are slow. The main question here is whether this non-reversible behavior is observed at the level of the scaling limits of the model. The hydrodynamic limit of the SEP(\(\alpha\)) turns out to be a non-reversible PDE, which answers this question at the level of the law of large numbers. The next question is whether the non-reversible behavior has a stochastic component, which motivates the analysis of the fluctuations of the density around its hydrodynamic limit. The question can thus be restated as whether a non-reversible behavior is observed in the limiting SPDE. The Macroscopic Fluctuation Theory (MFT), as formulated in [4, 5] can be used to predict the behavior of large scale limits of driven-diffusive systems. This description depends on two macroscopic quantities, the _diffusivity_ and the _mobility_ of the system. One assumes that these quantities are local functions of the thermodynamic variables. In the case of the SEP(\(\alpha\)), the density of particles \(\rho\in[0,\alpha]\) is the only thermodynamic variable. The diffusivity is constant and equal to \(\alpha\), while the mobility is quadratic and equal to \(\rho(\alpha-\rho)\). Our main result confirms the predictions of MFT for the Central Limit Theorem (CLT) fluctuations of the density of particles. In this article, we will be interested on the analysis of the fluctuations of the density of particles around its hydrodynamic limit. This corresponds to the derivation of the CLT associated to the hydrodynamic limit of the system. The limiting equation is no longer a PDE, but a linear SPDE on which the time evolution is given by the hydrodynamic equation, plus a stochastic conservative noise with a covariance structure given in terms of solutions of the hydrodynamic equation. More precisely, in this paper we will analyse the non-equilibrium time dependent fluctuations for SEP(\(\alpha\)) for all \(\theta\in\mathbb{R}\) and \(\alpha\in\mathbb{N}\). 
We remark that the equilibrium case can also be easily proved by the same type of arguments as in the case \(\alpha=1\), obtained in [15]. For that reason, we omit the proof of this case here and we refer the reader to that article for a proof. Now we recall the state-of-the-art of some of the scaling limits for this model. For the case of the exclusion process with open boundary and \(\alpha=1\), the hydrodynamic limit was derived in [1] for slow reservoirs and in [3] for fast reservoirs. In [14], the derivation of the hydrodynamic limit was extended to \(\alpha\in\mathbb{N}\) in both the slow and fast regimes, with a proof that relies on the entropy method introduced in [19]. An extension of these hydrodynamic limits to general domains based on duality can be found in [25]. The hydrodynamic equation of the SEP(\(\alpha\)) is the heat equation given by \(\partial_{t}\rho_{t}(u)=\alpha\Delta\rho_{t}(u)\), that needs to be complemented with suitable boundary conditions. Depending on the choice of the parameter \(\theta\), the boundary conditions are of Dirichlet type (for \(\theta<1\)), Robin type (for \(\theta=1\)) or Neumann type (for \(\theta>1\)). The non-equilibrium fluctuations for the case \(\alpha=1\) were analysed in several works, namely in: [22] when \(\theta=0\), where the non-equilibrium stationary fluctuations were derived as a consequence of its non-equilibrium fluctuations; [16] when \(\theta=1\) and [17] when \(\theta\in[0,\infty)\). The equilibrium fluctuations, also for the case \(\alpha=1\), were analysed in [15] for \(\theta\geq 0\). Nevertheless, the case \(\theta<0\) was an open problem up to now, apart in the equilibrium setting, which was derived in [2]. The main difficulty on the rigorous mathematical derivation of the non-equilibrium fluctuations relies on the fact that the systems typically exhibit long-range space-time correlations. For that reason, one has to face the problem of obtaining good estimates of the two-point centered correlation function, that we denote by \(\varphi_{i}^{N}\). This is one of the main topics discussed in this article and we consider that it is here that relies the major contribution of our work. For the case \(\alpha=1\), by writing down the Chapman-Kolmogorov equations directly for \(\varphi_{t}^{N}\), one gets \[\partial_{t}\varphi_{i}^{N}(x,y)=N^{2}\Delta_{N}^{i}\varphi_{t}^{N}(x,y)+g_{t }^{N}(x,y)\mathbb{1}((x,y)\in\mathcal{G}_{N}^{\pm}), \tag{1.1}\] where \(\Delta_{N}^{i}\) is the infinitesimal generator of a certain bi-dimensional random walk, \(\mathcal{G}_{N}^{\pm}\) is a certain finite set that we will define later and \(g_{t}^{N}\) is a non-positive function that only has support on \(\mathcal{G}_{N}^{\pm}\). From last identity, one can use Duhamel's formula to obtain an expression for such function. From that, we reduce the problem to estimating three simple quantities: the initial correlations \(\varphi_{0}^{N}\), the term \(g_{t}^{N}\) and the occupation time on \(\mathcal{G}_{N}^{\pm}\) of the bi-dimensional random walk with infinitesimal generator \(\Delta_{N}^{i}\). Unfortunately, for \(\alpha\geq 2\), if one tries to write down the Chapman-Kolmogorov equations directly for \(\varphi_{t}^{N}\) defined as in the case \(\alpha=1\), an additional interaction term appears at the diagonal \(\{x=y\}\), which breaks down the previous approach. 
To overcome such issue, we construct an extension of \(\varphi_{t}^{N}\) to the diagonal \(\{x=y\}\) to which a similar approach as the one previously described can be applied to obtain the decay in \(N\) of \(\varphi_{t}^{N}\). By analyzing this extension function, we are able to obtain a generalization of the results for \(\alpha=1\) that were derived in [22, 17, 16]. The novelty of our approach to obtain the decay in \(N\) of \(\varphi_{t}^{N}\) is the construction and use of such a well chosen extension function that can be compared with \(\varphi_{t}^{N}\) and also the use of some discrete versions of the maximum principle (see Appendix A) to, after applying Duhamer's formula, compare occupation times for different values of \(\theta\). After some trial and error, we discovered that the right choice of the extension function is related to the _duality function_ of the SEP(\(\alpha\)), see [6] and Remark C.1. Nevertheless, we observe that there are other ways on which one can arrive to the right extension function for the correlation function \(\varphi_{t}^{N}\). In order to follow a fully analytical method, for example, one can introduce a boundary layer at the diagonal to discover the best approximation of the heat equation with sources at the diagonal. To determine the non-equilibrium fluctuations of the system we follow the same strategy outlined in [22, 16, 17] (with similar ideas to the ones described in Chapter 11 of [20]), and, for that reason, some details in the proofs are omitted here. The idea of the argument is the classical probabilistic approach to functional convergence of stochastic processes, namely, to prove tightness of the sequence of density fluctuation fields and then characterize all limit points. If on top of the conditions that we will need to ask in order to prove tightness, we also ask that, at the initial time, the sequence of density fields converges to a mean-zero Gaussian process, then the convergence takes place for any time \(t\) and the unique limiting process is a generalized Ornstein-Uhlenbeck process which is a solution of (2.26). Now we comment on the main tools and difficulties of our approach. We first observe that depending on the range of \(\theta\), the density fluctuation fields have to be defined on proper spaces of test functions, which typically are quite regular and satisfy the boundary conditions of the hydrodynamic equation but with an appropriate choice of parameters. Second, in order to prove tightness, we use both Aldous and Kolmogorov-Centsov criteria (as in [17]), where this last one is mainly applied to the boundary integral terms of the Dynkin's martingales. Recall that on the proof of tightness at the level of the hydrodynamic limit, i.e. of the sequence of empirical measures associated with the density profile, the quadratic variation of the Dynkin's martingale \(\{M_{t}^{N}(\phi)\}_{N\in\mathbb{N}}\) converges to zero. Now, in the case of fluctuations, the corresponding Dynkin's martingale converges, as \(N\) goes to infinity, in the \(J_{1}\)-Skorohod space \(\mathcal{B}_{N}([0,T];\mathbb{R})\) of cadlag functions from \([0,T]\) to \(\mathbb{R}\), to a mean-zero Gaussian process which is a martingale with continuous trajectories and with a deterministic, non-degenerated quadratic variation. We also note that from our results we can obtain the non-equilibrium fluctuations starting the process from a product measure with slowly varying parameter or even a constant one. 
In particular, if we fix a profile \(\rho:[0,1]\to[0,1]\) and consider \(\mu^{N}\) as the product measure whose marginals are given by the Binomial(\(\alpha\), \(\rho(\frac{N}{N})\)) distribution, the result also holds, leading to an Ornstein-Uhlenbeck process in the limit. In our work, we also consider the case \(\theta<0\) for \(\alpha\in\mathbb{N}\) in the non-equilibrium scenario, extending therefore the results of [2]. This case is more demanding than the others since the boundary terms are of order \(O(N^{-\theta})\) and therefore, they blow up when taking \(N\to+\infty\). To overcome this difficulty, we take a space of test functions that have all derivatives equal to zero at the boundary. Since this space of test functions is too little we supplement the characterization of limit points by showing that the limit field when integrated in time satisfies the Dirichlet conditions as in the case \(\theta\in[0,1)\). This is reminiscent of item 2 (ii) of Theorem 2.13 of [2], where it was proved that when the system is in its equilibrium state, this extra condition gives in fact the uniqueness of the limit. Here we extended that result to the non-equilibrium setting, though we lack a proof of uniqueness in that general case. Here is a summary of our contributions in this article. First we provide a natural extension of the two-point correlation function to the diagonal in such a way that it satisfies a consistent set of equations that allows estimating the non-stationary two-point correlations of the SEP(\(\alpha\)) for any value of \(\alpha\in\mathbb{N}\) and \(\theta\in\mathbb{R}\). As a consequence, we characterize the non-equilibrium fluctuations of SEP(\(\alpha\)) for any value of \(\alpha\geq 2\) and \(\theta\in\mathbb{R}\). Moreover, our approach also allows characterizing the non-equilibrium fluctuations of SEP(1), for \(\theta<0\). To conclude we comment on the fluctuations starting from non-equilibrium stationary state (NESS). Observe that the Ornstein-Uhlenbeck equation (2.26) has a unique invariant measure, which is given by a Gaussian spatial process on the interval \([0,1]\). Observe as well that the SEP(\(\alpha\)) as defined here is irreducible, and in particular has a unique invariant measure. A relevant question is the derivation of a fluctuation result for the empirical density of particles of the SEP(\(\alpha\)) starting from its NESS. This question has been solved for the SSEP in [22, 17], and more recently in [18] for reaction-diffusion models. Unfortunately, our estimates are not sharp enough to allow for the limit exchange which needed to derive such a result. Recall that, for \(\alpha=1\), the matrix ansatz method (MPA) developed by [10] provides detailed information about the NESS of SEP(1) and recently [11] found a characterization of such measure. For SEP(1), the MPA enables one to obtain explicitly the n-point correlation function of the system for any value of \(\theta\in\mathbb{R}\), see, for example, Section 2.2 of [17] and references therein. Knowing the decay in \(N\) of such objects is one of the main ingredients to analyze both its stationary fluctuations as its hydrostatic limit. We observe that, when \(\alpha\neq 1\), the model we consider has no matrix ansatz formulation available. As a consequence, there is not much information about its non-equilibrium stationary measure. 
Even though it is known that the two-point stationary correlations of SEP(\(\alpha\)) are negative (see Theorem 3.4 of [13]), their decay with \(N\) is still an open problem. In this paper, we will not treat the case of the fluctuations from the NESS, since our method depends on having such bounds on correlations. From our results, we cannot simply take \(t\to\infty\) to obtain the stationary fluctuations of SEP(\(\alpha\)), because some of the estimates we use here depend on time and would blow up as \(t\) goes to infinity. This is left as future work. Nonetheless, for the case \(\theta=0\) and any \(\alpha\in\mathbb{N}\), since we can find explicit expressions for the two-point correlations for certain choices of the boundary rates (see, for example, equation (6.8) in [6]), one can follow the same strategy developed here and easily obtain the non-equilibrium stationary fluctuations of the system when \(\theta=0\); we leave this to the reader. Now we provide an outline of this article. In Section 2 we introduce the SEP(\(\alpha\)); we recall some known facts regarding its equilibrium measure (see Section 2.1) and its hydrodynamic behavior (see Section 2.3); and we introduce the setting for the analysis of the non-equilibrium fluctuations (see Section 2.4) and state our main results, namely, Proposition 4.2 and Theorems 2.3 and 2.4. In Section 3 we provide the proof of Theorem 2.3, which relies on showing tightness and characterizing the limit points; and we also prove Theorem 2.4 by spotting the main differences with respect to the results known in the literature - in particular Proposition 2.5 and Theorem 2.13 of [2]. In Sections 4 and 5 we obtain a collection of auxiliary results, mainly related to estimating the two-point correlation function, that we use in our proofs. In Appendix A we state and provide the proofs of various versions of the maximum principle. In Appendix B we provide some details on the Chapman-Kolmogorov equation for \(\varphi_{t}^{N}\), when \(\alpha\geq 2\), with the aim of facilitating the reading of the article. In Appendix C we show two different arguments for the construction of the extension function that we use to bound \(\varphi_{t}^{N}\): the first one via stochastic duality and the second one by analytic methods. In Appendix D we give the proof of Lemma 4.1. Finally, Appendix E is devoted to the proof of a replacement lemma.

## 2 The model and statement of results

### The model: the SEP(\(\alpha\))

Fix \(\alpha\in\mathbb{N}\) and for each \(N\in\mathbb{N}\) let \(\Lambda_{N}:=\{1,\ldots,N-1\}\) be the one-dimensional, discrete interval and let \(\overline{\Lambda}_{N}:=\Lambda_{N}\cup\{0,N\}\). We will call \(\Lambda_{N}\) the _bulk_. We say that \(x,y\in\Lambda_{N}\) are _nearest neighbors_ if \(|y-x|=1\), and we denote it by \(x\sim y\). We consider a Markov chain with state space \(\Omega_{N}:=\{0,\ldots,\alpha\}^{\Lambda_{N}}\). We call the elements of \(\Omega_{N}\) _configurations_ and we denote them by \(\eta=\{\eta(x);x\in\Lambda_{N}\}\). We interpret \(\eta(x)\) as the number of particles at site \(x\in\Lambda_{N}\) and we call the functions \((\eta(x);x\in\Lambda_{N})\) the _occupation variables_.
For each \(x\in\Lambda_{N}\), let us denote by \(\delta_{x}\) the configuration in \(\Omega_{N}\) with exactly one particle, located at \(x\), that is, \[\delta_{x}(y):=\left\{\begin{array}{l}1;\,y=x,\\ 0\;;\,y\neq x.\end{array}\right.\] For each \(f:\Omega_{N}\to\mathbb{R}\), let \(\mathcal{C}_{\text{bulk}}f=\mathcal{C}_{\text{bulk},N}f:\Omega_{N}\to\mathbb{R}\) be given by \[\mathcal{C}_{\text{bulk}}f(\eta):= \sum_{x=1}^{N-2}\eta(x)(\alpha-\eta(x+1))\big{\{}f(\eta+\delta_{x+1}-\delta_{x})-f(\eta)\big{\}}\] \[+ \sum_{x=1}^{N-2}\eta(x+1)(\alpha-\eta(x))\big{\{}f(\eta+\delta_{x}-\delta_{x+1})-f(\eta)\big{\}}\] for every \(\eta\in\Omega_{N}\). In this expression, we adopt the convention that \(0\cdot f(\eta+\delta_{y}-\delta_{x})=0\) whenever \(f(\eta+\delta_{y}-\delta_{x})\) is not well defined. The linear operator \(\mathcal{C}_{\text{bulk}}\) defined in this way is a Markov generator, which describes the _bulk_ dynamics. For every \(j\in\{\ell,r\}\), let \(0<\lambda^{j}\leq 1\) and \(\rho^{j}\in(0,\alpha)\) be fixed, and let \(\theta\in\mathbb{R}\) be fixed. Define \(x^{\ell}=1\) and \(x^{r}=N-1\). For \(f:\Omega_{N}\to\mathbb{R}\), let \(\mathcal{C}_{j}f=\mathcal{C}_{j,N}f:\Omega_{N}\to\mathbb{R}\) be given by \[\mathcal{C}_{j}f(\eta):=\lambda^{j}\rho^{j}(\alpha-\eta(x^{j}))\big{\{}f(\eta+\delta_{x^{j}})-f(\eta)\big{\}}+\lambda^{j}(\alpha-\rho^{j})\eta(x^{j})\big{\{}f(\eta-\delta_{x^{j}})-f(\eta)\big{\}}\] for every \(\eta\in\Omega_{N}\). The SEP(\(\alpha\)) with _slow/fast reservoirs_ at \(0\) and \(N\) is the Markov chain \((\eta_{t};t\geq 0)\) in \(\Omega_{N}\) generated by the operator \[\mathcal{C}_{N}:=\mathcal{C}_{\text{bulk}}+\frac{1}{N^{\theta}}\big{(}\mathcal{C}_{\ell}+\mathcal{C}_{r}\big{)}.\] Observe that the operator \(\mathcal{C}_{N}\) depends on the parameters \(\alpha,\lambda^{\ell},\lambda^{r},\rho^{\ell},\rho^{r},\theta\). Sometimes it will be useful to state this dependence explicitly in the notation. Whenever we need to do this, we will use the generic index \(i\) to denote the vector of parameters \((\alpha,\lambda^{\ell},\lambda^{r},\rho^{\ell},\rho^{r},\theta)\). The dynamics of the SEP\((\alpha)\) with parameters \((\lambda^{\ell},\lambda^{r},\rho^{\ell},\rho^{r},\theta)\) is described in the figure below. The choice of such parametrization allows us to interpret the reservoirs' dynamics in a similar way to the bulk dynamics. More precisely, let us define \[\epsilon=\lambda^{\ell}\rho^{\ell},\qquad\delta=\lambda^{r}\rho^{r},\qquad\gamma=\lambda^{\ell}(\alpha-\rho^{\ell}),\qquad\beta=\lambda^{r}(\alpha-\rho^{r}). \tag{2.1}\] Interpreting \(\lambda^{j}\rho^{j}\) for \(j=\ell,r\) as the corresponding particle densities at the two reservoirs, the jump rates of the reservoirs' dynamics correspond to the jump rates of the bulk dynamics in which the occupation variables of sites outside the interval \(\Lambda_{N}\) are replaced by their corresponding densities. Hereafter we fix \(T>0\) and we consider a finite time horizon \([0,T]\). For each \(N\geq 1\), we denote by \(\mathcal{D}([0,T],\Omega_{N})\) the space of cadlag trajectories endowed with the \(J_{1}\)-Skorohod topology. We fix a sequence of probability measures \((\mu^{N})_{N\geq 1}\) on \(\Omega_{N}\). In order to see a non-trivial evolution of macroscopic quantities we need to speed up the process in the diffusive time scale \(tN^{2}\), and in that case \(\eta_{tN^{2}}\) has generator \(N^{2}\mathcal{C}_{N}\).
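To make the dynamics above concrete, the following minimal Gillespie-type simulation sketch realizes the rates of \(\mathcal{C}_{\text{bulk}}\), \(\mathcal{C}_{\ell}\) and \(\mathcal{C}_{r}\), already speeded up by \(N^{2}\). It is only an illustration on our part: the function names, the data layout and the parameter values in the example are ours and do not come from the references.

```python
import random

def sep_alpha_transitions(eta, alpha, lam_l, lam_r, rho_l, rho_r, theta, N):
    """All possible jumps of the SEP(alpha) with slow/fast reservoirs.

    eta is a list indexed by the sites 1, ..., N-1 (index 0 is unused).
    Each returned entry is (rate, site_from, site_to); site_from = None means
    an injection from a reservoir and site_to = None means a removal.
    """
    moves = []
    for x in range(1, N - 1):                          # bulk bonds (x, x+1)
        moves.append((eta[x] * (alpha - eta[x + 1]), x, x + 1))
        moves.append((eta[x + 1] * (alpha - eta[x]), x + 1, x))
    for x, lam, rho in ((1, lam_l, rho_l), (N - 1, lam_r, rho_r)):
        moves.append((lam * rho * (alpha - eta[x]) / N**theta, None, x))   # injection
        moves.append((lam * (alpha - rho) * eta[x] / N**theta, x, None))   # removal
    return moves

def gillespie_step(eta, alpha, lam_l, lam_r, rho_l, rho_r, theta, N):
    """Perform one jump in place and return the waiting time in the scale tN^2."""
    moves = sep_alpha_transitions(eta, alpha, lam_l, lam_r, rho_l, rho_r, theta, N)
    total = sum(m[0] for m in moves)
    wait = random.expovariate(total * N**2)            # speeded-up generator: N^2 times the rates
    u, acc = random.uniform(0.0, total), 0.0
    for rate, src, dst in moves:
        acc += rate
        if u <= acc:
            if src is not None:
                eta[src] -= 1
            if dst is not None:
                eta[dst] += 1
            break
    return wait

# example: N = 20, alpha = 2, theta = 0.5, reservoir parameters chosen arbitrarily
N, alpha = 20, 2
eta = [0] + [random.randint(0, alpha) for _ in range(N - 1)]
t = 0.0
while t < 0.1:
    t += gillespie_step(eta, alpha, 1.0, 1.0, 0.5, 1.5, 0.5, N)
```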
Let \(\mathbb{P}_{\mu^{N}}\) be the probability measure on \(\mathcal{D}([0,T],\Omega_{N})\) induced by the Markov process \((\eta_{tN^{2}};t\geq 0)\) and by the initial measure \(\mu^{N}\). We denote the expectation with respect to \(\mathbb{P}_{\mu^{N}}\) by \(\mathbb{E}_{\mu^{N}}\).

### Stationary measures

Since the SEP\((\alpha)\) is an irreducible continuous time Markov chain with a finite state space, it admits a unique stationary measure. In fact this stationary measure can be identified for a certain choice of the parameters of the model.

**Proposition 2.1**.: _If \(\rho^{\ell}=\rho^{r}=:\rho\), then the stationary (equilibrium) measure is given by a homogeneous product measure with Binomial marginal distributions with parameters \(\alpha\in\mathbb{N}\) and \(\frac{\rho}{\alpha}\in(0,1)\):_ \[\gamma(\eta)=\prod_{x\in\Lambda_{N}}\binom{\alpha}{\eta(x)}\Big{(}\frac{\rho}{\alpha}\Big{)}^{\eta(x)}\Big{(}1-\frac{\rho}{\alpha}\Big{)}^{\alpha-\eta(x)}. \tag{2.2}\]

See [6] for a proof when \(\theta=0\); for \(\theta\neq 0\) the proof is identical. We note that for \(\rho^{\ell}\neq\rho^{r}\) we do not have any information about this measure.

### Hydrodynamic limit

Here we recall the hydrodynamic limit for the SEP\((\alpha)\), which was obtained in [14]. For \(\eta\in\Omega_{N}\), we define the empirical measure \(\pi^{N}(\eta,du)\) by \[\pi^{N}(\eta,du):=\frac{1}{N}\sum_{x\in\Lambda_{N}}\eta(x)\delta_{\frac{x}{N}}\left(du\right),\] where \(\delta_{b}(du)\) is a Dirac measure at \(b\in[0,1]\). For every \(G:[0,1]\to\mathbb{R}\) continuous, we denote the integral of \(G\) with respect to \(\pi^{N}\) by \(\langle\pi^{N},G\rangle\) and we observe that \[\langle\pi^{N},G\rangle=\frac{1}{N}\sum_{x\in\Lambda_{N}}\eta(x)G\left(\frac{x}{N}\right).\] We denote by \(\mathcal{M}\) the space of non-negative Radon measures on \([0,1]\) with total mass bounded by \(\alpha\) and equipped with the weak topology. Also, we denote by \(\mathcal{D}([0,T],\mathcal{M})\) the space of cadlag trajectories in \(\mathcal{M}\) endowed with the Skorohod topology. We define \(\pi_{t}^{N}(\eta,du):=\pi^{N}(\eta_{tN^{2}},du)\).

**Definition 2.1**.: _Let \(\gamma:[0,1]\to[0,\alpha]\) be a measurable function. We say that a sequence of probability measures \((\gamma^{N})_{N\geq 1}\) on \(\Omega_{N}\) is associated to the profile \(\gamma\) if for every continuous function \(G:[0,1]\to\mathbb{R}\) and for every \(\delta>0\), it holds_ \[\lim_{N\to\infty}\gamma^{N}\Big{(}\eta\in\Omega_{N}:\big{|}\langle\pi^{N},G\rangle-\int_{0}^{1}G(u)\gamma(u)du\big{|}>\delta\Big{)}=0. \tag{2.3}\]

From now on we make the following assumption on the sequence of probability measures: \[(\mu^{N})_{N\geq 1}\text{ is associated to a measurable function }\gamma:[0,1]\to[0,\alpha].\] (H1) In order to properly state the hydrodynamic limit, i.e. Theorem 2.2, we need to recall the notion of weak solutions stated in [14]. To this end, we need to consider a proper space of test functions. We denote by \(C^{1,\infty}([0,T]\times[0,1])\) the space of continuous functions defined on \([0,T]\times[0,1]\) that are continuously differentiable in the first variable and infinitely differentiable in the second variable. We also denote by \(C^{1,\infty}_{c}([0,T]\times[0,1])\) the space of functions \(G\in C^{1,\infty}([0,T]\times[0,1])\) such that, for each time \(t\), the support of \(G_{t}\) is contained in \((0,1)\).
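Before introducing the remaining function spaces, here is a small numerical illustration (ours, with arbitrary parameter choices) of Proposition 2.1 and of the pairing \(\langle\pi^{N},G\rangle\): sampling the product Binomial measure with constant density \(\rho\), the empirical integral concentrates, for large \(N\), around \(\rho\int_{0}^{1}G(u)du\), in agreement with Definition 2.1.

```python
import math
import random

def sample_equilibrium(N, alpha, rho):
    """Sample eta from the product measure with Binomial(alpha, rho/alpha) marginals, cf. (2.2)."""
    p = rho / alpha
    return [0] + [sum(random.random() < p for _ in range(alpha)) for _ in range(N - 1)]

def empirical_pairing(eta, G, N):
    """<pi^N, G> = N^{-1} * sum over x in Lambda_N of eta(x) * G(x/N)."""
    return sum(eta[x] * G(x / N) for x in range(1, N)) / N

N, alpha, rho = 1000, 2, 1.3
eta = sample_equilibrium(N, alpha, rho)
print(empirical_pairing(eta, math.sin, N))   # close to rho * (1 - cos(1)) ~ 0.60
```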
We denote by \(C^{\infty}([0,1])\) the space of infinitely differentiable functions defined in \([0,1]\) and we denote by \(C^{\infty}_{c}([0,1])\) (resp. \(C^{\infty}_{c}([0,1])\)) the space of \(m\)-continuously differentiable (resp. infinitely differentiable) real-valued functions defined on \([0,1]\) with support contained in \((0,1)\). We denote by \((\cdot,\cdot)\) the inner product in \(L^{2}([0,1])\) and we denote by \(\|\cdot\|_{L^{2}}\) the corresponding \(L^{2}\)-norm. Now we define the Sobolev space \(\mathcal{R}^{1}\) on \([0,1]\). For that purpose, we define the semi inner-product \(\langle\cdot,\cdot\rangle_{1}\) on the set \(C^{\infty}([0,1])\) by \((G,H)_{1}:=\langle\partial_{u}G,\partial_{u}H\rangle\) for \(G,H\in C^{\infty}([0,1])\) and we denote the corresponding semi-norm by \(\|\cdot\|_{1}\). **Definition 2.2**.: _The Sobolev space \(\mathcal{R}^{1}\) on \([0,1]\) is the Hilbert space defined as the completion of \(C^{\infty}([0,1])\) with respect to the norm \(\|\cdot\|_{\mathcal{R}^{1}}^{2}:=\|\cdot\|_{L^{2}}^{2}+\|\cdot\|_{1}^{2}\) and its elements coincide a.e. with continuous functions. The space \(L^{2}(0,T;\mathcal{R}^{1})\) is the set of measurable functions \(f:[0,T]\to\mathcal{R}^{1}\) such that \(\int_{0}^{T}\|f_{t}\|_{\mathcal{R}^{1}}^{2}\mathcal{M}t<\infty\)._ We remark that in \(\mathcal{R}^{1}\) we can define the trace operator, and so it makes sense to talk about boundary values of functions in this space when interpreted in the trace sense. **Definition 2.3**.: _Let \(\gamma_{0}:[0,1]\to[0,\alpha]\) be a measurable function. We say that \(\rho:[0,T]\times[0,1]\to[0,\alpha]\) is a weak solution of the heat equation_ \[\begin{cases}\partial_{t}\rho_{t}(u)=a\Delta\,\rho_{t}(u),\quad(t,u)\in(0,T] \times(0,1)\\ \rho_{0}(u)=\gamma_{0}(u),\quad u\in[0,1].\end{cases} \tag{2.4}\] _with initial condition \(\gamma_{0}(\cdot)\) and:_ 1. _Dirichlet boundary conditions given by_ \[\rho_{t}(0)=\rho^{t}\quad\text{and}\quad\rho_{t}(1)=\rho^{r},\quad t\in(0,T],\] (2.5) _if_ \(\rho\in L^{2}(0,T;\mathcal{R}^{1})\)__\(\rho_{t}(0)=\rho^{t}\) _and_ \(\rho_{t}(1)=\rho^{r}\) _for a.e._ \(t\in(0,T]\)_, and for all_ \(t\in[0,T]\) _and all_ \(G\in C^{1,\infty}_{c}([0,T]\times[0,1])\) _it holds_ \[\langle\rho_{t},G_{t}\rangle-\langle\gamma_{0},G_{0}\rangle-\int_{0}^{t} \langle\rho_{s},\Big{(}a\Delta+\partial_{s}\Big{)}G_{s}\rangle ds=0.\] 2. _Robin boundary conditions given by_ \[\partial_{u}\rho_{t}(0)=\lambda^{t}\big{(}\rho_{t}(0)-\rho^{t}\big{)},\quad \partial_{u}\rho_{t}(1)=\lambda^{r}\big{(}\rho^{r}-\rho_{t}(1)\big{)},\quad t \in(0,T],\] (2.6) _if_ \(\rho\in L^{2}(0,T;\mathcal{R}^{1})\) _and for all_ \(t\in[0,T]\) _and all_ \(G\in C^{1,\infty}([0,T]\times[0,1])\) _it holds_ \[\langle\rho_{t},G_{t}\rangle-\langle\gamma_{0},G_{0}\rangle-\int_{0}^{t} \langle\rho_{s},\Big{(}a\Delta+\partial_{s}\Big{)}G_{s}\rangle ds+a\int_{0}^{t} \big{[}\rho_{s}(1)\partial_{u}G_{s}(1)-\rho_{s}(0)\partial_{u}G_{s}(0)\big{]}\ ds\] \[-a\int_{0}^{t}\big{[}G_{s}(0)\lambda^{t}\big{(}\rho_{s}(0)-\rho^{ t}\big{)}+G_{s}(1)\lambda^{r}\big{(}\rho^{r}-\rho_{s}(1)\big{)}\big{]}\ ds=0.\] 3. 
_Neumann boundary conditions given by_ \[\partial_{u}\rho_{t}(0)=\partial_{u}\rho_{t}(1)=0,\] (2.7) _if_ \(\rho\in L^{2}(0,T;\mathcal{R}^{1})\) _and for all_ \(t\in[0,T]\) _and any_ \(G\in C^{1,\infty}([0,T]\times[0,1])\) _it holds_ \[\langle\rho_{t},G_{t}\rangle-\langle\gamma_{0},G_{0}\rangle-\int_{0}^{t}\langle\rho_{s},\left(\alpha\Delta+\partial_{s}\right)G_{s}\rangle ds+\alpha\int_{0}^{t}\left[\rho_{s}(1)\partial_{u}G_{s}(1)-\rho_{s}(0)\partial_{u}G_{s}(0)\right]ds=0.\] We observe that there exists one and only one weak solution of the heat equation with any of the previous boundary conditions; see [1]. We are now ready to state the hydrodynamic limit of [14].

**Theorem 2.2**.: _Let \(\gamma:[0,1]\to[0,\alpha]\) be a measurable function and \(\{\mu^{N}\}_{N\geq 1}\) a sequence of probability measures associated to \(\gamma(\cdot)\), i.e. satisfying (H1). For any \(t\in[0,T]\), any continuous function \(G:[0,1]\to\mathbb{R}\) and any \(\delta>0\), it holds_ \[\lim_{N\to\infty}\mathbb{P}_{\mu^{N}}\big{(}\eta:\left|\frac{1}{N}\sum_{x\in\Lambda_{N}}G\left(\frac{x}{N}\right)\eta_{tN^{2}}(x)-\langle G,\rho_{t}\rangle\right|>\delta\big{)}=0,\] _where \(\rho_{t}(\cdot)\) is the unique weak solution of the heat equation with initial condition \(\gamma\) and for:_

_a) \(\theta<1\), Dirichlet boundary conditions (2.5);_

_b) \(\theta=1\), Robin boundary conditions (2.6);_

_c) \(\theta>1\), Neumann boundary conditions (2.7)._

Our focus in this article is to describe the fluctuations of the system around the hydrodynamical profile; this is what we discuss in the next subsection.

### Non-equilibrium fluctuations

#### 2.4.1 The space of test functions

As we did before stating Theorem 2.2, in order to show the non-equilibrium fluctuations of the SEP(\(\alpha\)), we need to introduce a proper space of test functions. Observe that realizations of white noises are not well defined as measures, but only as distributions. Therefore, we need to introduce Schwartz-like spaces of test functions. Recall that a subscript or superscript \(i\) represents dependence on the parameters \(i=(\alpha,\lambda^{\ell},\lambda^{r},\rho^{\ell},\rho^{r},\theta)\) of the model.

**Definition 2.4**.: _We define \(\mathcal{S}_{t}\) as the set of functions \(\phi\) in \(C^{\infty}([0,1])\) that satisfy, for all \(k\in\mathbb{N}\cup\{0\}\),_

1. _if_ \(\theta<0\)_:_ \(\partial_{u}^{k}\phi(0)=\partial_{u}^{k}\phi(1)=0\)_;_
2. _if_ \(0\leq\theta<1\)_:_ \(\partial_{u}^{2k}\phi(0)=\partial_{u}^{2k}\phi(1)=0\)_;_
3. _if_ \(\theta=1\)_:_ \(\partial_{u}^{2k+1}\phi(0)=\lambda^{\ell}\partial_{u}^{2k}\phi(0)\)_,_ \(\partial_{u}^{2k+1}\phi(1)=-\lambda^{r}\partial_{u}^{2k}\phi(1)\)_;_
4. _if_ \(\theta>1\)_:_ \(\partial_{u}^{2k+1}\phi(0)=\partial_{u}^{2k+1}\phi(1)=0\)_._

As in [16, 17], the previous choice is made so that \(\mathcal{S}_{t}\) is invariant under taking second derivatives, which in turn implies that the Markov semigroup associated to the operator \(\alpha\Delta\) with the corresponding boundary conditions, which we denote by \(S_{t}^{i}\), is such that, if \(\phi\in\mathcal{S}_{t}\), then \(S_{t}^{i}\phi\in\mathcal{S}_{t}\). This property will be useful later on. Indeed, as in the proof of Proposition 3.1 of [16], for the case \(\theta=1\), and for the other values of \(\theta\) as in Remark 2.5
of [17], given \(\phi\in\mathcal{S}_{t}\), \(S_{t}^{i}\phi\) is a solution to \[\begin{cases}\partial_{t}S_{t}^{i}\phi(u)=\alpha\Delta S_{t}^{i}\phi(u),\quad(t,u)\in[0,T]\times(0,1)\\ S_{0}^{i}\phi(u)=\phi(u),\quad u\in[0,1].\end{cases}\] with boundary conditions:

1. if \(\theta>1\) \[\partial_{u}S_{t}^{i}\phi(0)=\partial_{u}S_{t}^{i}\phi(1)=0;\] (2.8)
2. if \(\theta=1\) \[\partial_{u}S_{t}^{i}\phi(0)=\lambda^{\ell}S_{t}^{i}\phi(0)\quad\text{and}\quad\partial_{u}S_{t}^{i}\phi(1)=-\lambda^{r}S_{t}^{i}\phi(1);\] (2.9)
3. if \(\theta<1\) \[S_{t}^{i}\phi(0)=S_{t}^{i}\phi(1)=0.\] (2.10)

Let us compute \(S_{t}^{i}\) by the separation of variables method. The aim is to look for solutions of the form \[S_{t}^{i}\phi(u)=g(t)f(u), \tag{2.11}\] with \(g\) a function of \(t\) and \(f\) a function of \(u\) to be computed. This leaves us with \(g(t)=Ce^{\mu\alpha t}\), where \(C,\mu\in\mathbb{R}\) are to be computed, and the Sturm-Liouville problem \(f^{\prime\prime}(u)-\mu f(u)=0\), for \(u\in(0,1)\), with boundary conditions

1. if \(\theta>1\), \(f^{\prime}(0)=f^{\prime}(1)=0\);
2. if \(\theta=1\), \(f^{\prime}(0)=\lambda^{\ell}f(0)\) and \(f^{\prime}(1)=-\lambda^{r}f(1)\);
3. if \(\theta<1\), \(f(0)=f(1)=0\).

The previous problems have solutions of the form \(f(u)=A\sin(\omega u)+B\cos(\omega u)\), where \(A,B,\omega\) have to be computed. A simple but long computation shows that

1. if \(\theta>1\), \(f(u)=B(k)\cos(\pi ku),\text{ for some }k\in\mathbb{Z}\), where \(B(k)\) has to be computed. Thus, \[S_{t}^{i}\phi(u)=\sum_{k\in\mathbb{Z}}e^{-\pi^{2}k^{2}\alpha t}(\phi,2\cos(\pi k\cdot))\cos(\pi ku).\] (2.12)
2. if \(\theta=1\), \(f(u)=B(k)\left[\frac{\lambda^{\ell}}{\beta_{k}}\sin(\beta_{k}u)+\cos(\beta_{k}u)\right],\text{ for some }k\in\mathbb{Z}\), where \(B(k)\) has to be computed and \(\beta_{k}\) are the solutions of \(\frac{(\lambda^{\ell}+\lambda^{r})x}{x^{2}-\lambda^{\ell}\lambda^{r}}=\tan(x)\). Thus, \[S_{t}^{i}\phi(u)=\sum_{k\in\mathbb{Z}}e^{-\beta_{k}^{2}\alpha t}B(k)\left[\frac{\lambda^{\ell}}{\beta_{k}}\sin(\beta_{k}u)+\cos(\beta_{k}u)\right],\] (2.13) with \(B(k)\) such that \(\sum_{k\in\mathbb{Z}}B(k)\left[\frac{\lambda^{\ell}}{\beta_{k}}\sin(\beta_{k}u)+\cos(\beta_{k}u)\right]=\phi(u)\).
3. if \(\theta<1\), \(f(u)=A(k)\sin(\pi ku),\text{ for some }k\in\mathbb{Z}\), where \(A(k)\) has to be computed. Thus, \[S_{t}^{i}\phi(u)=\sum_{k\in\mathbb{Z}}e^{-\pi^{2}k^{2}\alpha t}(\phi,2\sin(\pi k\cdot))\sin(\pi ku).\] (2.14)

For every \(\theta\in\mathbb{R}\), we showed that \(S_{t}^{i}\phi\) can be written in terms of the eigenvalues and eigenfunctions of the Laplace operator with different boundary conditions. From here we easily conclude that, for every \(\phi\in\mathcal{S}_{t}\), \(S_{t}^{i}\phi\in\mathcal{S}_{t}\). We equip \(\mathcal{S}_{t}\) with the topology induced by the family of seminorms \(\{|||\cdot|||_{j}\}_{j\in\mathbb{N}\cup\{0\}}\), where for \(\phi\in\mathcal{S}_{t}\) \[|||\phi|||_{j}:=\sup_{u\in[0,1]}|\phi^{(j)}(u)|. \tag{2.15}\] The space \(\mathcal{S}_{t}\) endowed with this topology turns out to be a nuclear Frechet space, i.e. a complete Hausdorff space whose topology is induced by a countable family of semi-norms and such that all summable sequences in \(\mathcal{S}_{t}\) are absolutely summable. We will denote by \(\mathcal{S}_{t}^{\prime}\) the topological dual of \(\mathcal{S}_{t}\), i.e. the set of bounded linear functionals over \(\mathcal{S}_{t}\), and we equip it with the weak topology.
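For concreteness, the Dirichlet representation (2.14) can be evaluated numerically by truncating the series and computing the coefficients by quadrature. The short sketch below is ours (the truncation level \(K\), the quadrature with \(M\) points and the sanity check are arbitrary illustrative choices); it is written as a standard sine series over \(k\geq 1\).

```python
import math

def semigroup_dirichlet(phi, t, u, alpha, K=50, M=2000):
    """Truncated version of (2.14): S_t phi(u) for the Dirichlet case theta < 1.

    The coefficients (phi, 2 sin(pi k .)) are computed with a plain Riemann sum
    on M points; K is the truncation level of the eigenfunction expansion.
    """
    total = 0.0
    for k in range(1, K + 1):
        coeff = sum(phi(j / M) * 2.0 * math.sin(math.pi * k * j / M) for j in range(M)) / M
        total += math.exp(-math.pi**2 * k**2 * alpha * t) * coeff * math.sin(math.pi * k * u)
    return total

# sanity check: phi(u) = sin(pi u) is an eigenfunction, so S_t phi(u) = exp(-pi^2 alpha t) sin(pi u)
phi = lambda v: math.sin(math.pi * v)
print(semigroup_dirichlet(phi, 0.1, 0.5, alpha=1.0))   # ~ exp(-pi^2 * 0.1) ~ 0.3727
```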
Let \(\mathcal{D}_{N}([0,T],\mathcal{S}_{t}^{\prime})\) denote the set of cadlag time trajectories of linear functionals acting on \(\mathcal{S}_{t}\). #### 2.4.2 The discrete profile and the density fluctuation field Observe that Theorem 2.2 can be understood as a law of large numbers for the random trajectories \(((\pi_{t}^{N},G);t\geq 0)\). Therefore, it is natural to study the corresponding central limit theorem. In order to do that, one needs to specify how to center and how to rescale the random variables \(\langle\pi_{t}^{N},G\rangle\). Whenever possible, the most natural way to do this is to consider the quantity \[\sqrt{N}\big{(}\langle\pi_{t}^{N},G\rangle-\mathbb{E}_{\mu^{\mu}}[\langle\pi_ {t}^{N},G\rangle]\big{)}.\] Thanks to the duality properties of the SEP\((\alpha)\), the expectation \(\mathbb{E}_{\mu^{\mu}}[\langle\pi_{t}^{N},G\rangle]\) can be computed in a fairly explicit way. Let us define the _expected density of particles_\(\rho_{t}^{N}(x)\) for all \(t\geq 0\) and \(x\in\overline{\Lambda}_{N}\) as \[\rho_{t}^{N}(x):=\mathbb{E}_{\mu^{N}}[\eta_{tN^{2}}(x)]\text{ for }x\in\Lambda_{N} \text{ and }\rho_{t}^{N}(0):=\rho^{t},\quad\rho_{t}^{N}(N):=\rho^{r}\.\] This last definition serves as a boundary condition for the expected density of particles. Using that the monomials \(\left(\frac{\eta_{x}}{\alpha};x\in\Lambda_{N}\right)\) are self-duality functions for the SEP(\(\alpha\)), one can show that \((\rho_{t}^{N}(x);t\geq 0,x\in\overline{\Lambda}_{N})\) is the unique solution of the discrete heat equation \[\left\{\begin{array}{c}\partial_{t}\rho_{t}^{N}(x)=N^{2}\Delta_{N}^{i}\rho_{ t}^{N}(x),x\in\Lambda_{N},t\geq 0,\\ \rho_{t}^{N}(0)=\rho^{t},t\geq 0,\\ \rho_{t}^{N}(N)=\rho^{t},t\geq 0,\end{array}\right. \tag{2.16}\] with initial condition \(\rho_{0}^{N}(x):=\mathbb{E}_{n^{N}}[\eta_{0}^{N}(x)]\). Here the operator \(\Delta_{N}^{i}\) is a discrete Laplacian with modified rates at the boundary depending on \(i\). More precisely, let us define the jump rate \[c^{i}:\{(x,y)\in\Lambda_{N}\times\overline{\Lambda}_{N};x\sim y\}\] as \[c^{i}_{x,y}:=\left\{\begin{array}{c}\alpha\ \ ;x,y\in\Lambda_{N}\\ \frac{\alpha\lambda^{t}}{N^{\theta}}\ ;x=1,y=0\\ \frac{\alpha\lambda^{t}}{N^{\theta}}\ ;x=N-1,y=N.\end{array}\right. \tag{2.17}\] Then the operator \(\Delta_{N}^{i}\) acts on functions \(f:\overline{\Lambda}_{N}\to\mathbb{R}\) as \[\Delta_{N}^{i}f(x)=c^{i}_{x,x-1}\big{(}f(x-1)-f(x)\big{)}+c^{i}_{x,x+1}\big{(} f(x+1)-f(x)\big{)}, \tag{2.18}\] for every \(x\in\Lambda_{N}\). The stationary solution of (2.16), that we denote by \(\rho_{is}^{N}(\cdot)\), is given, for every \(x\in\Lambda_{N}\) by \[\rho_{is}^{N}(x):=a_{N}^{i}x+b_{N}^{i}, \tag{2.19}\] where \[a_{N}^{i}=\frac{\lambda^{t}}{N^{\theta}-\lambda^{t}}(b_{N}-\rho^{t})\quad \text{ and }\quad b_{N}^{i}=\frac{\lambda^{t}\rho^{t}(N^{\theta}-\lambda^{t})+\lambda^{t }\rho^{t}(N^{\theta}+(N-1)\lambda^{t})}{\lambda^{t}\lambda^{t}\lambda^{t}(N- 1)+\lambda^{t}N^{\theta}+\lambda^{t}(N^{\theta}-\lambda^{t})}. 
\tag{2.20}\] **Definition 2.5**.: _We define the density fluctuation field \((Y_{t}^{N};t\geq 0)\) associated to the SEP(\(\alpha\)), \((\eta_{tN^{2}};t\geq 0)\), with initial measure \((\mu^{N})_{N\in\mathbb{N}}\) as the time trajectory of linear functionals acting on functions \(\phi\in\mathcal{S}_{i}\) as_ \[Y_{t}^{N}(\phi)=\frac{1}{\sqrt{N}}\sum_{x\in\Lambda_{N}}\phi\left(\frac{x}{N} \right)\tilde{\eta}_{tN^{2}}(x), \tag{2.21}\] _where, for each \(x\in\Lambda_{N}\), we centered \(\eta_{tN^{2}}(x)\) by taking \(\tilde{\eta}_{tN^{2}}(x):=\eta_{tN^{2}}(x)-\rho_{t}^{N}(x)\)._ For each \(N\in\mathbb{N}\), let \(\mathbb{Q}_{N}\) be the probability measure in \(\mathcal{Q}_{N}([0,T],\mathcal{S}_{t}^{\prime})\), induced by the density fluctuation field \((Y_{t}^{N})_{t\geq 0}\). Our goal is to prove, under suitable assumptions, that \((\mathbb{Q}_{N})_{N\in\mathbb{N}}\) weakly converges to \(\mathbb{Q}\), a probability measure on \(\mathcal{Q}_{N}([0,T],\mathcal{S}_{t}^{\prime})\), that can be uniquely characterized. A limit theorem of this form is known in the literature as the derivation of the _non-equilibrium fluctuations_ of the SEP(\(\alpha\)). To achieve our goal, it will be enough to: show that the sequence of measures \((\mathbb{Q}_{N})_{N\in\mathbb{N}}\) is tight, guaranteeing the weak convergence up to a subsequence and then characterize (uniquely) the limit point. Roughly speaking, this is the content of Theorems 2.3 and 2.4. Figure 2.2: Illustration through arrows of the jump rate \(c^{i}\) defined above. #### 2.4.3 Main results To properly state our results, we need to introduce some definitions and notations. A crucial estimate for the non-equilibrium fluctuations is a sharp estimate on the decay of both space and space-time correlation function of the SEP(\(\alpha\)). Define the two-dimensional set \(V_{N}:=\{(x,y)\in(\Lambda_{N})^{2}\mid x\leq y\}\) and its boundary by \[\partial V_{N}:=\{(x,y):x\in\{0,N\}\text{ and }y\in\bar{\Lambda}_{N}\}\cup\{(x,y):y \in\{0,N\}\text{ and }x\in\bar{\Lambda}_{N}\}.\] We denote its closure by \(\overline{V}_{N}:=V_{N}\cup\partial V_{N}\), and we denote its upper diagonal and its diagonal, respectively, by \[\mathcal{G}_{N}^{+}:=\{(x,y)\in V_{N}\mid y=x+1\}\text{ and }\mathcal{G}_{N}:=\{(x,y )\in V_{N}\mid y=x\}. \tag{2.22}\] **Definition 2.6**.: _Let \((\varphi_{i}^{N};t\geq 0)\) be the time-dependent, two-point correlation function, defined on \((x,y)\in V_{N}\) with \(x\neq y\) by_ \[\varphi_{i}^{N}(x,y):=\begin{cases}\mathbb{E}_{\mu^{N}}[\bar{\eta}_{tN^{2}}(x )\bar{\eta}_{tN^{2}}(y)],&\text{if }(x,y)\notin\partial V_{N},\\ 0,&\text{if }(x,y)\in\partial V_{N},\end{cases} \tag{2.23}\] _and extended symmetrically to \((\overline{\Lambda}_{N})^{2}\setminus\mathcal{G}_{N}\)._ Now we make some extra assumptions on the initial measures, besides (H1). 
We assume that there exists a continuous profile \(\gamma:[0,1]\to[0,\alpha]\) such that \[\frac{1}{N}\sum_{x=1}^{N}\left|\rho_{0}^{N}(x)-\gamma\left(\frac{x}{N}\right)\right|\xrightarrow{N\to\infty}0.\] (H2) We also assume that there exists a sequence of profiles \(g_{N}(\cdot)\) of class \(C^{6}\) that satisfy, for each \(N\geq 1\), \[\partial_{u}^{j}g_{N}(u)=\partial_{u}^{j}\big{(}Na_{N}^{i}u+b_{N}^{i}\big{)},\] (H3) for \(u\in[0,1]\) and \(j=0,1,2,3\), where \(a_{N}^{i}\) and \(b_{N}^{i}\) were defined in (2.20), and such that, for every \(N\geq 1\), \[\max_{x\in\Lambda_{N}}\left|\rho_{0}^{N}(x)-g_{N}\left(\frac{x}{N}\right)\right|\lesssim\frac{1}{N}.\] (H4) We also assume that \[\max_{\begin{subarray}{c}(x,y)\in V_{N}\\ x\neq y\end{subarray}}\left|\varphi_{0}^{N}(x,y)\right|\lesssim\frac{1}{N},\quad\max_{x\in\Lambda_{N}\setminus\{1,N-1\}}\left|\mathbb{E}_{\mu^{N}}\big{[}\alpha\eta_{0}(x)(\eta_{0}(x)-1)-(\alpha-1)\rho_{0}^{N}(x)^{2}\big{]}\right|\lesssim\frac{1}{N},\] (H5) and that for \(x=1\) and \(x=N-1\), \[\max_{\begin{subarray}{c}y\in\Lambda_{N}\\ x\neq y\end{subarray}}\left|\varphi_{0}^{N}(x,y)\right|\lesssim\frac{1}{N}\min\{1,N^{\theta-1}\},\quad\max_{x=1,N-1}\left|\mathbb{E}_{\mu^{N}}\big{[}\alpha\eta_{0}(x)(\eta_{0}(x)-1)-(\alpha-1)\rho_{0}^{N}(x)^{2}\big{]}\right|\lesssim\frac{1}{N}\min\{1,N^{\theta-1}\}.\] (H6)

**Notation:** Above and in what follows, we denote by \(\lesssim\) an inequality that is correct up to a multiplicative constant independent of \(N\).

Now we present the main results of this article.

**Theorem 2.3** (Non-Equilibrium Fluctuations).: _Let \(\alpha\geq 1\) and \(\theta\in\mathbb{R}\). Let \(\gamma\in C^{6}([0,1])\) and let \((\mu^{N})_{N\in\mathbb{N}}\) be a sequence of probability measures satisfying (H1)-(H6). Then, the sequence of probability measures \(\{\mathbb{Q}_{N}\}_{N\in\mathbb{N}}\) is tight with respect to the \(J_{1}\)-Skorohod topology of \(\mathcal{D}_{N}([0,T],\mathcal{S}_{t}^{\prime})\) and all limit points \(\mathbb{Q}\) are probability measures concentrated on paths \(Y\) satisfying_ \[Y_{t}(f)=Y_{0}(S_{t}^{i}f)+W_{t}^{i}(f), \tag{2.24}\] _for any \(f\in\mathcal{S}_{i}\) and any \(t\in[0,T]\). Above, \(S_{t}^{i}:\mathcal{S}_{i}\to\mathcal{S}_{i}\) is the semigroup associated to the hydrodynamic equation (2.4) with the respective boundary conditions, and \(W_{t}^{i}(f)\) is a mean-zero Gaussian random variable of variance_ \[\int_{0}^{t}\|\nabla S_{t-s}^{i}f\|_{L^{2}(\rho_{s})}^{2}ds,\] _where, for every \(s\in[0,T]\) and \(g,h\in L^{2}(\rho_{s})\),_ \[\langle h,g\rangle_{L^{2}(\rho_{s})} :=\int_{0}^{1}2\chi_{a}(\rho_{s}(u))h(u)g(u)du\] \[+\mathbb{1}(\theta=1)\left\{\left[\lambda^{\ell}(\alpha-2\rho^{\ell})\rho_{s}(0)+\alpha\lambda^{\ell}\rho^{\ell}\right]h(0)g(0)+\left[\lambda^{r}(\alpha-2\rho^{r})\rho_{s}(1)+\alpha\lambda^{r}\rho^{r}\right]h(1)g(1)\right\}\] _and \(\rho_{s}\) is the unique weak solution of the corresponding hydrodynamic equation (2.4). Above,_ \[\chi_{a}(\rho)=\rho(\alpha-\rho) \tag{2.25}\] _represents the mobility of our model. Moreover, \(Y_{0}\) and \(W_{t}^{i}\) are uncorrelated in the sense that for all \(f,g\in\mathcal{S}_{t}\) it holds \(\mathbb{E}_{\mathbb{Q}}[Y_{0}(f)W_{t}^{i}(g)]=0\)._

In the last theorem, we do not guarantee the convergence of the whole sequence \(\{\mathbb{Q}_{N}\}_{N\in\mathbb{N}}\) but only the convergence up to a subsequence, and we are not able to prove uniqueness of the limit points (i.e. independence with respect to the convergent subsequence) with only the assumptions of the theorem.
Nevertheless, when we also impose the convergence at the initial time \(t=0\) of \(Y_{t}^{N}\) to a Gaussian process, then uniqueness holds and we prove that the whole sequence \(\{\mathbb{Q}_{N}\}_{N\in\mathbb{N}}\) converges to a measure \(\mathbb{Q}\) which is concentrated on the unique solution of the next martingale problem, which is an Ornstein-Uhlenbeck (O.U.) process. With this extra assumption at time \(t=0\), we prove uniqueness of the limit point and convergence of \(\{\mathbb{Q}_{N}\}_{N\in\mathbb{N}}\) follows.

**Definition 2.7** (Ornstein-Uhlenbeck - Definition 2.4 of [2]).: _Fix some time horizon \(T>0\). Let \(C\) be a topological vector space, \(A:C\to C\) an operator leaving \(C\) invariant and \(c:C\to[0,\infty)\) a continuous functional satisfying \(c(\lambda H)=|\lambda|c(H)\), for all \(\lambda\in\mathbb{R}\) and \(H\in C\). Let \(C^{\prime}\) be the topological dual of \(C\) equipped with the weak-\(*\) topology. Denote by \(\mathcal{C}([0,T],C^{\prime})\) the set of continuous trajectories in \([0,T]\) of functionals in \(C^{\prime}\). We say that the process \(\{Y_{t};t\in[0,T]\}\in\mathcal{C}([0,T],C^{\prime})\) is a solution of the O.U. martingale problem \(OU(C,A,c)\) on the time interval \([0,T]\) with initial (random) condition \(y_{0}\in C^{\prime}\) if:_

1. _for any_ \(H\in C\) _the two real-valued processes_ \(M_{t}(H)\) _and_ \(N_{t}(H)\) _defined by_ \[M_{t}(H) =Y_{t}(H)-Y_{0}(H)-\int_{0}^{t}Y_{s}(AH)ds,\] \[N_{t}(H) =(M_{t}(H))^{2}-tc^{2}(H),\] _are martingales with respect to the natural filtration of the process, that is,_ \(\{\mathcal{F}_{t}\ ;\ t\in[0,T]\}=\{\sigma(Y_{s}(H)\ |\ s\leq t,H\in C)\ ;\ t\in[0,T]\}\)_;_
2. \(Y_{0}=y_{0}\) _in law._

**Theorem 2.4** (Convergence to the Ornstein-Uhlenbeck process).: _Let \(\alpha\in\mathbb{N}\) and \(\theta\in\mathbb{R}\). Assume the conditions of Theorem 2.3 and also that the sequence of initial density fluctuation fields \(\{Y_{0}^{N}\}_{N\in\mathbb{N}}\) converges, as \(N\to+\infty\), to a mean-zero Gaussian field \(Y_{0}\) with covariance given, for \(f,g\in\mathcal{S}_{t}\), by_ \[\sigma(f,g):=\mathbb{E}[Y_{0}(f)Y_{0}(g)]=\lim_{N\to+\infty}\mathbb{E}[Y_{0}^{N}(f)Y_{0}^{N}(g)].\] _Then:_

1. _if_ \(\theta\geq 0\)_, the sequence_ \(\{\mathbb{Q}_{N}\}_{N\in\mathbb{N}}\) _converges, as_ \(N\to+\infty\)_, to a measure_ \(\mathbb{Q}\) _which is concentrated on the unique solution_ \(Y_{t}\) _of the O.U. martingale problem_ \(OU(\mathcal{S}_{t},\alpha\Delta,\|\cdot\|_{L^{2}(\rho_{t})})\) _on the time interval_ \([0,T]\) _with the initial (random) condition equal to_ \(Y_{0}\)_. Thus,_ \(Y_{t}\) _is a generalized O.U. process, which is the unique (in law) formal solution of the stochastic partial differential equation:_ \[dY_{t}=\alpha\Delta Y_{t}\,dt+\sqrt{2\chi_{a}(\rho_{t})}\nabla dW_{t},\] (2.26) _where_ \(dW_{t}\) _is a space-time white noise with unit variance and_ \(\alpha\Delta\) _is the same operator as in (_2.4_) with the corresponding boundary conditions depending on the value of_ \(\theta\)_. 
As a consequence, the covariance of the limit field_ \(Y_{t}\) _is given, for_ \(f,g\in\mathcal{S}_{t}\) _and_ \(s\leq t\)_, by_ \[\mathbb{E}[Y_{t}(f)Y_{s}(g)]=\sigma(S_{t}^{i}f,S_{s}^{i}g)+\int_{0}^{s}(\partial_{u}S_{t-r}^{i}f,\partial_{u}S_{s-r}^{i}g)_{L^{2}(\rho_{r})}dr,\] _with_ \(\partial_{u}h(0)\) _(respectively,_ \(\partial_{u}h(1)\)_) identified with_ \(\partial_{u}h(0^{+})=\lim\limits_{x\downarrow 0}\partial_{u}h(x)\) _(respectively,_ \(\partial_{u}h(1^{-})=\lim\limits_{x\uparrow 1}\partial_{u}h(x)\)_), for_ \(h\in\mathcal{S}_{t}\)_._

2. _if_ \(\theta<0\)_, the sequence_ \(\{\mathbb{Q}_{N}\}_{N\in\mathbb{N}}\) _converges, as_ \(N\to+\infty\)_, to a measure_ \(\mathbb{Q}\) _which is concentrated on the unique solution_ \(Y_{t}\) _of the Ornstein-Uhlenbeck martingale problem_ \(OU(\mathcal{S}_{t},\alpha\Delta,\|\cdot\|_{L^{2}(\rho_{t})})\) _on the time interval_ \([0,T]\) _with initial (random) condition equal to_ \(Y_{0}\)_, and uniqueness (in law) of the solution is guaranteed once one remarks that_ \(Y_{t}\) _satisfies the following two extra conditions:_
   1. _regularity condition:_ \(\mathbb{E}[(Y_{t}(H))^{2}]\lesssim\|H\|_{L^{2}}\)_, for any_ \(H\in\mathcal{S}_{t}\)_;_
   2. _boundary condition: for each_ \(j\in\{0,1\}\)_, let_ \(\iota_{\epsilon}^{j}\) _be defined, for_ \(u\in[0,1]\)_, by_ \(\iota_{\epsilon}^{0}(u):=\epsilon^{-1}\mathbb{1}_{[0,\epsilon]}(u)\) _and_ \(\iota_{\epsilon}^{1}(u):=\epsilon^{-1}\mathbb{1}_{[1-\epsilon,1]}(u)\)_. For any_ \(t\in[0,T]\) _and_ \(j\in\{0,1\}\)_, it holds that_ \[\lim\limits_{\epsilon\to 0}\mathbb{E}\Bigg{[}\Bigg{(}\int_{0}^{t}Y_{s}(\iota_{\epsilon}^{j})ds\Bigg{)}^{2}\Bigg{]}=0.\]

As a consequence of the previous result we obtain the non-equilibrium fluctuations starting from a local Gibbs state.

**Corollary 2.4.1**.: _Fix a measurable profile \(\gamma_{0}:[0,1]\to[0,\alpha]\) satisfying (H3) and (H4), and start the process SEP\((\alpha)\) from the Binomial product measure with marginals given by_ \[\gamma_{\gamma_{0}}^{N}\{\eta\mid\eta(x)=k\}=\binom{\alpha}{k}\bigg{[}\frac{\gamma_{0}\big{(}\frac{x}{N}\big{)}}{\alpha}\bigg{]}^{k}\bigg{[}1-\frac{\gamma_{0}\big{(}\frac{x}{N}\big{)}}{\alpha}\bigg{]}^{\alpha-k},\] _for \(k\in\{0,\ldots,\alpha\}\). Let \(f,g\in\mathcal{S}_{t}\). Then Theorem 2.4 holds with_ \[\sigma(S_{t}^{i}f,S_{s}^{i}g)=\int_{0}^{1}\chi_{a}(\gamma_{0}(u))S_{t}^{i}f(u)S_{s}^{i}g(u)du.\]

Observe that the remaining assumptions of Theorems 2.3 and 2.4 are satisfied by the starting measure \(\gamma_{\gamma_{0}}^{N}\), so that, above, we only need to impose (H3) and (H4) on the initial profile. To prove Theorem 2.3 and Theorem 2.4, we will need some auxiliary results, whose proofs and details we postpone until after the proofs of the above theorems.

## 3 Proof of Theorems 2.3 and 2.4

The proof of both theorems follows by first showing the tightness of the sequence of probability measures \(\{\mathbb{Q}_{N}\}_{N\in\mathbb{N}}\) with respect to the Skorohod topology of \(\mathcal{D}_{N}([0,T],\mathcal{S}_{t}^{\prime})\), and then by showing that all limit points \(\mathbb{Q}\) are probability measures concentrated on paths \(Y\) satisfying (2.24). We start now with the former.

### Tightness

Recall that the spaces \(\mathcal{S}_{t}\) are nuclear Frechet spaces when endowed with the seminorms defined in (2.15).
Therefore, in order to prove tightness, we can use Mitoma's criterium (that we recall below) and restrict ourselves to showing tightness of the sequence of real-valued processes \(\{Y_{t}^{N}(\phi)\}_{N\in\mathbb{N}}\), for every \(\phi\in\mathcal{S}_{t}\).

**Theorem 3.1** (Mitoma's criterium - Theorem 4.1 of [24]).: _A sequence of processes \(\{X_{t}^{N};t\in[0,T]\}_{N\in\mathbb{N}}\) in \(\mathcal{D}([0,T],\mathcal{S}_{t}^{\prime})\) is tight with respect to the Skorohod topology if, and only if, for every \(H\in\mathcal{S}_{t}\), the sequence of real-valued processes \(\{X_{t}^{N}(H);t\in[0,T]\}_{N\in\mathbb{N}}\) is tight with respect to the Skorohod topology of \(\mathcal{D}([0,T],\mathbb{R})\)._

Recall that, from Lemma 5.1 of Appendix 1 of [20], \[M_{t}^{N}(\phi):=Y_{t}^{N}(\phi)-Y_{0}^{N}(\phi)-\int_{0}^{t}(N^{2}\mathcal{C}_{N}+\partial_{s})Y_{s}^{N}(\phi)ds, \tag{3.1}\] is a martingale for every \(\phi\in\mathcal{S}_{t}\). Therefore, in order to show that \(\{Y_{t}^{N}(\phi)\}_{N\in\mathbb{N}}\) is tight, it is enough to show that \[\{Y_{0}^{N}(\phi)\}_{N\in\mathbb{N}}\,\ \{[M_{t}^{N}(\phi)]_{t\geq 0}\}_{N\in\mathbb{N}}\ \text{and}\ \left\{\int_{0}^{t}(N^{2}\mathcal{C}_{N}+\partial_{s})Y_{s}^{N}(\phi)ds\right\}_{N\in\mathbb{N}}\] are tight. We start by showing that \(\{Y_{0}^{N}(\phi)\}_{N\in\mathbb{N}}\) is tight.

#### 3.1.1 Initial time

By the Helly-Bray theorem, it is enough to show that \[\lim_{A\to\infty}\limsup_{N\to+\infty}\mathbb{P}_{\mu^{N}}[|Y_{0}^{N}(\phi)|>A]=0.\] By Markov's inequality, for every \(A>0\) and for every \(N\in\mathbb{N}\), \[\mathbb{P}_{\mu^{N}}[|Y_{0}^{N}(\phi)|>A] \leq\frac{1}{A^{2}}\mathbb{E}_{\mu^{N}}[|Y_{0}^{N}(\phi)|^{2}]\] \[=\frac{1}{A^{2}}\frac{1}{N}\Big{(}\sum_{x\in\Lambda_{N}}\big{[}\phi\big{(}\tfrac{x}{N}\big{)}\big{]}^{2}\mathbb{E}_{\mu^{N}}[\tilde{\eta}_{0}(x)^{2}]+\sum_{\begin{subarray}{c}x,y\in\Lambda_{N}\\ y\neq x\end{subarray}}\phi\left(\frac{x}{N}\right)\phi\left(\frac{y}{N}\right)\varphi_{0}^{N}(x,y)\Big{)}.\] Using (H5) and the fact that the occupation variables are bounded by \(\alpha\), we can bound the last display from above by a constant independent of \(A\) and \(N\) times \[\frac{1}{A^{2}N}\left(\alpha^{2}N+N\right)\lesssim\frac{1}{A^{2}}.\] Therefore, by taking \(A\to\infty\), the result follows.

#### 3.1.2 The sequence of martingales

For the martingales \(\{M_{t}^{N}(\phi)\ ;\ t\in[0,T]\}_{N\in\mathbb{N}}\), tightness is just a consequence of the fact that \(\{M_{t}^{N}(\phi)\ ;\ t\in[0,T]\}_{N\in\mathbb{N}}\) converges in law with respect to the Skorohod topology of \(\mathcal{D}([0,T],\mathbb{R})\) (see the next lemma) and therefore it has to be tight.

**Lemma 3.2**.: _For \(\phi\in\mathcal{S}_{t}\), the sequence of martingales \(\{M_{t}^{N}(\phi)\ ;\ t\in[0,T]\}_{N\in\mathbb{N}}\) converges in law with respect to the topology of \(\mathcal{D}([0,T];\mathbb{R})\), as \(N\to+\infty\), towards a mean-zero Gaussian process \(W_{t}^{i}(\phi)\) with quadratic variation given by_ \[\int_{0}^{t}\|\nabla\phi\|_{L^{2}(\rho_{s})}^{2}ds :=\int_{0}^{t}\int_{0}^{1}2\chi_{a}(\rho_{s}(u))\nabla\phi(u)^{2}duds\] \[+\mathbb{1}(\theta=1)\int_{0}^{t}\Big{\{}\big{(}\lambda^{\ell}(\alpha-2\rho^{\ell})\rho_{s}(0)+\alpha\lambda^{\ell}\rho^{\ell}\big{)}\nabla\phi(0)^{2}\] \[+\big{(}\lambda^{r}(\alpha-2\rho^{r})\rho_{s}(1)+\alpha\lambda^{r}\rho^{r}\big{)}\nabla\phi(1)^{2}\big{\}}ds.\]

Proof.: Let us fix \(\phi\in\mathcal{S}_{t}\).
To prove that \(\{M_{t}^{N}(\phi)\ ;\ t\in[0,T]\}_{N\in\mathbb{N}}\) converges in law with respect to the topology of \(\mathcal{D}([0,T];\mathbb{R})\), as \(N\to+\infty\), it is enough to verify conditions (1)-(3) of Theorem 3.2 of [2]. Let us verify condition (1), that is, that \[\text{for any }N>1,\ \text{the quadratic variation of}\ M_{t}^{N}(\phi)\ \text{has continuous trajectories almost surely.} \tag{3.2}\] The quadratic variation of \(M_{t}^{N}(\phi)\) is given by \[\langle M^{N}(\phi)\rangle_{t}:=\int_{0}^{t}\Gamma_{s}^{N}(\phi)ds,\] where \(\Gamma_{s}^{N}(\phi):=N^{2}\mathcal{C}_{N}Y_{s}^{N}(\phi)^{2}-2N^{2}Y_{s}^{N}(\phi)\mathcal{C}_{N}Y_{s}^{N}(\phi)\). A long, but simple computation shows that this quadratic variation is given by \[\begin{split}\langle M^{N}(\phi)\rangle_{t}&=\frac{N}{N^{\theta}}\int_{0}^{t}\Big{(}\phi\big{(}\frac{1}{N}\big{)}^{2}\big{(}\lambda^{\ell}(\alpha-2\rho^{\ell})\eta_{sN^{2}}(1)+\alpha\lambda^{\ell}\rho^{\ell}\big{)}\\ &\qquad\qquad+\phi\big{(}\frac{N-1}{N}\big{)}^{2}\big{(}\lambda^{r}(\alpha-2\rho^{r})\eta_{sN^{2}}(N-1)+\alpha\lambda^{r}\rho^{r}\big{)}\Big{)}ds\\ &\quad+\int_{0}^{t}\frac{1}{N}\sum_{x=1}^{N-2}\nabla_{N}\phi\big{(}\frac{x}{N}\big{)}^{2}\Big{(}\eta_{sN^{2}}(x)\big{(}\alpha-\eta_{sN^{2}}(x+1)\big{)}+\eta_{sN^{2}}(x+1)\big{(}\alpha-\eta_{sN^{2}}(x)\big{)}\Big{)}ds,\end{split} \tag{3.3}\] where \[\nabla_{N}\phi\big{(}\frac{x}{N}\big{)}:=N\Big{(}\phi\big{(}\frac{x+1}{N}\big{)}-\phi\big{(}\frac{x}{N}\big{)}\Big{)} \tag{3.4}\] is the discrete gradient of \(\phi\). Therefore (3.2) follows from the fact that the number of particles is bounded by \(\alpha\) and from the observation that the integral in time of a bounded function is a continuous function of time. Let us verify condition (2) in Theorem 3.2 of [2], that is, that \[\lim_{N\to\infty}\mathbb{E}_{\mu^{N}}\Big{[}\sup_{0\leq s\leq T}|M_{s}^{N}(\phi)-M_{s^{-}}^{N}(\phi)|\Big{]}=0.\] Observe that the integral term in (3.1) is continuous, by exactly the same reason as in (3.2). Therefore, in order to prove the last limit, it is enough to show that \[\lim_{N\to\infty}\mathbb{E}_{\mu^{N}}\Big{[}\sup_{0\leq s\leq T}|Y_{s}^{N}(\phi)-Y_{s^{-}}^{N}(\phi)|\Big{]}=0.\] Since a jump only changes a configuration in (at most) two sites and the occupation variables are bounded, we can bound the last expectation from above by \(\frac{2}{\sqrt{N}}\|\phi\|_{\infty}\), from where the result follows. We are left to verify condition (3) in Theorem 3.2 of [2], that is, that \[\text{for any }t\in[0,T],\,\langle M^{N}(\phi)\rangle_{t}\text{ converges, as }N\to+\infty,\text{ and in probability to }\int_{0}^{t}\|\nabla\phi\|_{L^{2}(\rho_{s})}^{2}ds.\] Recall (3.3). We now argue that \(\int_{0}^{t}\Gamma_{s}^{N}(\phi)ds\) is an additive functional of the empirical measure plus some error that vanishes in the limit. To this end, we split the terms defining \(\Gamma_{s}^{N}(\phi)\) into bulk terms (the third line of (3.3)) and boundary terms (the first two lines of (3.3)). We present the argument for the leftmost term appearing in the bulk term, namely, \[\int_{0}^{t}\frac{1}{N}\sum_{x=1}^{N-2}\nabla_{N}\phi\big{(}\frac{x}{N}\big{)}^{2}\eta_{sN^{2}}(x)\big{(}\alpha-\eta_{sN^{2}}(x+1)\big{)}ds, \tag{3.5}\] but for the remaining one, it is completely analogous. The argument also extends to the boundary terms. We leave all this to the reader.
Let \(0<\epsilon<1/2\) and \[\Lambda_{N}^{\epsilon,\ell}:=\{1,\ldots,\epsilon(N-1)\}\quad\text{and}\quad\Lambda_{N}^{\epsilon,r}:=\{N-1-\epsilon(N-1),\ldots,N-1\} \tag{3.6}\] and we consider the sum divided into \(x\notin\Lambda_{N}^{\epsilon,r}\cup\Lambda_{N}^{\epsilon,\ell}\) and its complement. Note that the terms in the complementary sets are uniformly (in \(N\)) bounded by \(\epsilon\). Now, using twice the replacement lemma (see Lemma 4.3 of [14], which we recall in Lemma E.1) with proper choices of the function \(\varphi\) appearing in the statement of Lemma E.1, we can rewrite the terms in (3.5) for \(x\notin\Lambda_{N}^{\epsilon,r}\cup\Lambda_{N}^{\epsilon,\ell}\) as \[\int_{0}^{t}\frac{1}{N}\sum_{x\notin\Lambda_{N}^{\epsilon,r}\cup\Lambda_{N}^{\epsilon,\ell}}\nabla_{N}\phi\big{(}\frac{x}{N}\big{)}^{2}\overrightarrow{\eta}_{sN^{2}}^{\lfloor\epsilon N\rfloor}(x)\Big{(}\alpha-\overleftarrow{\eta}_{sN^{2}}^{\lfloor\epsilon N\rfloor}(x+1)\Big{)}ds. \tag{3.7}\] Above, for \(L\in\mathbb{N}\), \[\overrightarrow{\eta}^{L}(z):=\frac{1}{L}\sum_{y=z+1}^{z+L}\eta(y)\quad\text{and}\quad\overleftarrow{\eta}^{L}(z):=\frac{1}{L}\sum_{y=z-L}^{z-1}\eta(y). \tag{3.8}\] Now it is enough to note that \(\overrightarrow{\eta}^{\lfloor\epsilon N\rfloor}(x)=\langle\pi^{N},\iota_{\epsilon}^{x/N}\rangle\), where \(\iota_{\epsilon}^{x/N}(u):=\frac{1}{\epsilon}\mathbb{1}_{[\frac{x}{N},\frac{x}{N}+\epsilon]}(u)\), and similarly for the left average. From the fact that \(\phi\in\mathcal{S}_{t}\) and from the hydrodynamic limit, namely Theorem 2.2, the convergence in distribution, as \(N\to+\infty\) and then \(\epsilon\to 0\), to \(\int_{0}^{t}\|\nabla\phi\|_{L^{2}(\rho_{s})}^{2}ds\) follows. Since the limit is deterministic, the convergence in probability also holds.

#### 3.1.3 The integral term

Observe that, for every \(\phi\in\mathcal{S}_{i}\), \[\int_{0}^{t}(N^{2}\mathcal{C}_{N}+\partial_{s})Y_{s}^{N}(\phi)ds= \int_{0}^{t}Y_{s}^{N}(\alpha\Delta_{N}\phi)ds \tag{3.9}\] \[-\int_{0}^{t}\frac{\alpha N^{3/2}}{N^{\theta}}\Big{[}\lambda^{\ell}\phi\left(\frac{1}{N}\right)\tilde{\eta}_{sN^{2}}(1)+\lambda^{r}\phi\left(\frac{N-1}{N}\right)\tilde{\eta}_{sN^{2}}(N-1)\Big{]}ds\] (3.10) \[-\int_{0}^{t}\alpha\sqrt{N}\Big{[}\nabla_{N}\phi\left(\frac{N-1}{N}\right)\tilde{\eta}_{sN^{2}}(N-1)-\nabla_{N}\phi\left(0\right)\tilde{\eta}_{sN^{2}}(1)\Big{]}ds, \tag{3.11}\] where, for every \(x\in\Lambda_{N}\), \[\Delta_{N}\phi\left(\frac{x}{N}\right):=N^{2}\Big{[}\phi\left(\frac{x+1}{N}\right)+\phi\left(\frac{x-1}{N}\right)-2\phi\left(\frac{x}{N}\right)\Big{]}\] is the discrete Laplacian of \(\phi\) evaluated at \(\frac{x}{N}\). We will treat each of the integral terms (3.9), (3.10), and (3.11), separately.
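The discrete operators \(\nabla_{N}\) and \(\Delta_{N}\) just introduced are plain finite differences, and the fact that \(\Delta_{N}\phi(\tfrac{x}{N})\) approximates \(\phi^{\prime\prime}(\tfrac{x}{N})\) up to an error of order \(N^{-2}\) (used again in Section 3.2.1) can be checked numerically; the following tiny sketch is ours and purely illustrative.

```python
import math

def grad_N(phi, x, N):
    """Discrete gradient (3.4): N * (phi((x+1)/N) - phi(x/N))."""
    return N * (phi((x + 1) / N) - phi(x / N))

def lap_N(phi, x, N):
    """Discrete Laplacian: N^2 * (phi((x+1)/N) + phi((x-1)/N) - 2 phi(x/N))."""
    return N**2 * (phi((x + 1) / N) + phi((x - 1) / N) - 2 * phi(x / N))

phi = lambda u: math.sin(math.pi * u)          # a smooth test function
N, x = 200, 57
print(lap_N(phi, x, N))                        # approx -pi^2 sin(pi x / N)
print(-math.pi**2 * math.sin(math.pi * x / N)) # discrepancy of order N^{-2}
```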
We will rely on the _Kolmogorov-Centsov's criterion_: **Proposition 3.3** (Kolmogorov-Centsov criterion - Problem 2.4.11 of [21]).: _A sequence \(\{X_{t}^{N};t\in[0,T]\}_{N\in\mathbb{N}}\) of continuous, real-valued, stochastic processes is tight with respect to the uniform topology of \(\mathcal{C}([0,T];\mathbb{R})\) if the sequence of real-valued random variables \(\{X_{0}^{N}\}_{N\in\mathbb{N}}\) is tight and there are constants \(K,\gamma_{1},\gamma_{2}>0\) such that, for any \(t,s\in[0,T]\) and any \(N\in\mathbb{N}\), it holds that_ \[\mathbb{E}[|X_{t}^{N}-X_{s}^{N}|^{\gamma_{1}}]\leq K|t-s|^{1+\gamma_{2}}.\] We start proving the tightness of (3.9): by the Cauchy-Schwarz inequality and Fubini's theorem, we have for every \(t_{1},t_{2}\in[0,T]\) such that \(t_{1}<t_{2}\), that \[\mathbb{E}_{\mu^{N}}\Big{[}\bigg{(}\int_{t_{1}}^{t_{2}}Y_{s}^{N}( \alpha\Delta_{N}\phi)ds\bigg{)}^{2}\Big{]} \leq\big{(}t_{2}-t_{1}\big{)}\int_{t_{1}}^{t_{2}}\mathbb{E}_{\mu ^{N}}\Big{[}Y_{s}^{N}(\alpha\Delta_{N}\phi)^{2}\Big{]}ds\] \[\lesssim\frac{t_{2}-t_{1}}{N}\int_{t_{1}}^{t_{2}}\sum_{x,y\in \Lambda_{N}}\mathbb{E}_{\mu^{N}}\Big{[}\tilde{\eta}_{sN^{2}}(x)\tilde{\eta}_ {sN^{2}}(y)\big{]}\Delta_{N}\phi\left(\frac{x}{N}\right)\Delta_{N}\phi\left( \frac{y}{N}\right)ds.\] Using the fact that the occupation variables are bounded by \(\alpha\) and from Proposition 4.2, last display is bounded from above by \[C(t_{2}-t_{1})^{2}\Big{[}\alpha^{2}\sup_{x\in\Lambda_{N}}\Delta_{N}\phi\left( \frac{x}{N}\right)^{2}+\sup_{\begin{subarray}{c}x,y\in\Lambda_{N}\\ y\neq x\end{subarray}}\Big{|}\Delta_{N}\phi\left(\frac{x}{N}\right)\Delta_{N} \phi\left(\frac{y}{N}\right)\Big{|}\Big{]}, \tag{3.12}\] for some constant \(C\) independent of \(N\). Now, since \(\phi\in\mathcal{S}_{i}\subseteq C^{\infty}([0,1])\), (3.12) is bounded from above by another constant times \[(\|\phi\|_{\infty}^{2}+\|\phi^{\prime\prime}\|_{\infty}^{2})(t_{2}-t_{1})^{2},\] which, by Proposition 3.3, shows the tightness of (3.9). Let us now prove the tightness of the remaining terms, i.e. (3.10) and (3.11). We present the proof for the terms related to the left boundary of (3.10) and (3.11); for the right boundary it is completely analogous. We start with the case \(\theta=1\). In this case we note that the terms related to the left boundary in (3.10) and (3.11) are equal to \[\int_{0}^{t}\alpha\sqrt{N}\Big{[}\lambda^{t}\phi\left(\frac{1}{N}\right)-\nabla _{N}\phi\left(0\right)\Big{]}\tilde{\eta}_{sN^{2}}(1)ds.\] Doing a Taylor expansion on \(\phi\) at \(x=0\) and noting that \(\phi\in\mathcal{S}_{i}\), since the occupation variables are bounded, we conclude that if \(X_{t}^{N}\) is defined as the integral term above, then \[\mathbb{E}[|X_{t}^{N}-X_{s}^{N}|^{2}]\lesssim|t-s|^{2}, \tag{3.13}\] and tightness follows. Now we analyse the case \(\theta>1\). In this case it is enough to prove that \(X_{t}^{N}\) defined as the next integral term \[\int_{0}^{t}\frac{N^{3/2}}{N^{\theta}}a\lambda^{t}\phi\left(\frac{1}{N}\right) \tilde{\eta}_{sN^{2}}(1)ds,\] satisfies (3.13) with \(\gamma_{1}=2\) and \(\gamma_{2}=\delta_{\theta}\) where \(\delta_{\theta}\) is defined in Lemma 4.3. This result also implies that all the integral terms in (3.10) and (3.11) are tight. But from Lemma 4.3, we have that \[\mathbb{E}\bigg{[}\bigg{|}\int_{s}^{t}\frac{N^{3/2}}{N^{\theta}}a\lambda^{t} \phi\left(\frac{1}{N}\right)\tilde{\eta}_{sN^{2}}(1)ds\bigg{|}^{2}\bigg{]} \lesssim|t-s|^{1+\delta_{\theta}},\] and we finish the proof for \(\theta>1\). Now we go to the case \(0\leq\theta<1\). 
Note that since \(\phi\in\mathcal{S}_{t}\), then \(\phi(0)=0\). Thus \[\int_{0}^{t}\frac{N^{3/2}}{N^{\theta}}a\lambda^{t}\phi\left(\frac{1}{N} \right)\tilde{\eta}_{sN^{2}}(1)ds=\int_{0}^{t}\frac{\sqrt{N}}{N^{\theta}}a \lambda^{t}\nabla_{N}\phi\left(0\right)\tilde{\eta}_{sN^{2}}(1)ds.\] Therefore, tightness in this case will follow if we show that \[\int_{0}^{t}a\sqrt{N}\nabla_{N}\phi\left(0\right)\tilde{\eta}_{sN^{2}}(1)ds= \int_{0}^{t}a\sqrt{N}\phi^{\prime}\left(0\right)\tilde{\eta}_{sN^{2}}(1)ds+O \left(\frac{1}{\sqrt{N}}\right),\] satisfies (3.13) with \(\gamma_{1}=2\) and \(\gamma_{2}=\delta_{\theta}\) where \(\delta_{\theta}\) is again defined as in Lemma 4.3. This is a simple consequence of Lemma 4.3. Finally, we treat the case \(\theta<0\). Note that now we need to prove tightness of \[\int_{0}^{t}\bigg{[}\frac{N^{3/2}}{N^{\theta}}a\lambda^{t}\phi \left(\frac{1}{N}\right)\tilde{\eta}_{sN^{2}}(1)-a\sqrt{N}\nabla_{N}\phi\left( 0\right)\tilde{\eta}_{sN^{2}}(1)\bigg{]}ds.\] From Lemma 4.3 the rightmost term in last display is tight. For the leftmost, we do a Taylor expansion of \(\phi\) of order \(\lfloor-\theta\rfloor+2\) around \(x=0\), and we use that \(\phi\in\mathcal{S}_{t}\), so that the leftmost term in last display writes as \[\int_{0}^{t}\frac{N^{3/2}}{N^{\theta+\lfloor-\theta\rfloor+2}}a \lambda^{t}\phi(t_{N})\tilde{\eta}_{sN^{2}}(1)ds,\] where \(t_{N}\) is a point between \(0\) and \(1/N\). Since \(3/2-\theta-\lfloor-\theta\rfloor-2<1/2\), then Lemma 4.3 shows that the Kolmogorov-Centsov's criteria is satisfied with \(\gamma_{1}=2\) and \(\gamma_{2}=\min\{\delta_{\theta},1\}>0\) and tightness follows. This ends the proof of tightness. ### Characterization of the limit points Having proven tightness, we already know that there exists a subsequence \(\{\mathbb{Q}_{N_{t}}\}_{s\in\mathbb{N}}\) of \(\{\mathbb{Q}_{N}\}_{s\in\mathbb{N}}\) which is convergent. Let us denote by \(\mathbb{Q}\) its limit. We want now to characterize \(\mathbb{Q}\). To do that, we will start by showing that \(\mathbb{Q}\) gives probability one to all the paths of functionals \(\{Y_{t}\mid t\geq 0\}\) with a decomposition of the form (2.24) - see Section 3.2.1. The strategy is to rewrite Dynkin's martingale \(M_{t}^{N}\), see (3.1), applied to a particular test function \(\phi\) defined in (3.14) and to prove that the integral term of \(M_{t}^{N}\) goes to zero as \(N\to+\infty\) in the \(L^{2}(\mathbb{P}_{t^{\mu}})\)-norm. This is what is done in the next subsection. #### 3.2.1 Proof of the decomposition given in (2.24) Let \(S_{t}^{i}\) be the semigroup associated to (2.4). We start by observing that, if \(\lambda^{t}=\lambda^{r}=\alpha\), then \(S_{t}^{i}=T_{at}^{\theta}\), where \(T_{at}^{\theta}\) is the corresponding semigroup when taking in (2.4) \(\lambda^{t}=\lambda^{r}=1\) and that coincides with the semigroup taken in Definition 4 of [17]. In this case, due to the previous relation between semigroups, we can simply repeat the proof presented in case \(\alpha=1\) in [16] taking (for every fixed \(t\in[0,T]\) and restricting the process to the time interval \([0,t]\)) as test function \[\phi(u,s):=S_{t-s^{\theta}}^{i}f(u), \tag{3.14}\] where \(f\in\mathcal{S}_{t}\), to obtain the decomposition of the limit point in the form \[Y_{t}(f)=Y_{0}(S_{t}^{i}f)+W_{t}^{i}(f),\] where \(W_{t}^{i}(f)\) is the mean-zero Gaussian process characterized in Lemma 3.2. For the previous choice of \(\lambda^{t}=\lambda^{r}=\alpha\), this test function coincides with \(T_{\alpha(t-)}^{\theta}f(u)\). 
For completeness, we present here the proof in the general case, which also follows the strategy of [16]. Taking \(\phi_{i}(\cdot)=\phi(\cdot,s)\) defined in (3.14), we have that \[M_{t}^{N}(\phi_{t})=Y_{t}^{N}(\phi_{t})-Y_{0}^{N}(\phi_{0})-\int_{0}^{t}[N^{2} \mathcal{C}_{N}Y_{s}^{N}(\phi_{s})+Y_{s}^{N}(\partial_{s}\phi_{s})]ds\] it is also a martingale. For every \(s\in[0,T]\), if \(f\in\mathcal{S}_{t}\), then \(\phi_{s}\in\mathcal{S}_{t}\). Remarking that the proof of Lemma 3.2 still holds if the test function is time-dependent (and \(C^{1}\) in time), we obtain that \(\{M_{t}^{N}(\phi_{t})\ ;\ t\in[0,T]\}_{N\in\mathbb{N}}\) converges in law with respect to the topology of \(\mathcal{D}([0,T];\mathbb{R})\), as \(N\to+\infty\), towards a mean-zero Gaussian process \(W_{t}^{i}(\phi_{t})=W_{t}^{i}(f)\) with quadratic variation given by \[\int_{0}^{t}||\nabla\phi_{s}||_{L^{2}(\rho_{s})}^{2}ds\] \[:=\int_{0}^{t}\int_{0}^{1}2\chi_{\alpha}(\rho)(u)\left(\nabla\phi _{s}(u)\right)^{2}duds\] \[+\mathbb{1}(\theta=1)\int_{0}^{t}\left\{\lambda^{t}[(\alpha-2 \rho^{t})\rho_{s}(0)+\rho^{t}\alpha]\left(\nabla\phi_{s}(0)\right)^{2}+\lambda ^{r}[(\alpha-2\rho^{r})\rho_{s}(1)+\rho^{r}\alpha]\left(\nabla\phi_{s}(1) \right)^{2}\right\}ds,\] Since, for every \(N\in\mathbb{N}\), \[M_{t}^{N}(\phi_{t})=Y_{t}^{N}(f)-Y_{0}^{N}(S_{t}^{i}f)-\int_{0}^{t}[N^{2} \mathcal{C}_{N}Y_{s}^{N}(\phi_{s})+Y_{s}^{N}(\partial_{s}\phi_{s})]ds, \tag{3.15}\] if we show that the time integral in the last display goes to zero as \(N\to+\infty\), then, using tightness and the previous reasoning about \(\{M_{t}^{N}(\phi_{t})\ ;\ t\in[0,T]\}_{N\in\mathbb{N}}\), taking the limit as \(N\to+\infty\), we have, up to a subsequence, that (3.15) converges in law with respect to the topology of \(\mathcal{D}([0,T];\mathbb{R})\), to \[W_{t}^{i}(f)=Y_{t}(f)-Y_{0}(S_{t}^{i}f),\] as we wanted. By the same computations done to obtain (3.9), (3.10) and (3.11), we have \[N^{2}\mathcal{C}_{N}Y_{s}^{N}(\phi_{s})+Y_{s}^{N}(\partial_{s} \phi_{s}) =\alpha Y_{s}^{N}(\Delta_{N}S_{t-s}^{i}f-\Delta S_{t-s}^{i}f)+Y_{s }^{N}(\alpha\Delta S_{t-s}^{i}f+\partial_{s}S_{t-s}^{i}f)\] \[-\frac{\alpha N^{3/2}}{N^{\theta}}\bigg{[}\lambda^{t}S_{t-s}^{i}f \left(\frac{1}{N}\right)\tilde{\eta}_{sN^{2}}(1)+\lambda^{t}S_{t-s}^{i}f \left(\frac{N-1}{N}\right)\tilde{\eta}_{sN^{2}}(N-1)\bigg{]} \tag{3.16}\] \[-\alpha\sqrt{N}\bigg{[}\nabla_{N}S_{t-s}^{i}f\left(\frac{N-1}{N} \right)\tilde{\eta}_{sN^{2}}(N-1)-\nabla_{N}S_{t-s}^{i}f\left(0\right)\tilde{ \eta}_{sN^{2}}(1)\bigg{]}, \tag{3.17}\] where \(\Delta\) represents the continuous Laplacian operator. Since \(S_{t-s}^{i}f\) is smooth (by the properties of the semigroup \(S_{t-s}^{i}\)), then \(\Delta_{N}S_{t-s}^{i}f-\Delta S_{t-s}^{i}f\) is of order \(O(N^{-2})\) and \(\alpha\Delta S_{t-s}^{i}f+\partial_{t}S_{t-s}^{i}f\) is identically zero because \(S_{t-s}^{i}f\) is solution to the heat equation with diffusion coefficient equal to \(\alpha\) with the corresponding boundary conditions depending on \(\theta\) - recall (2.8) for \(\theta>1\), (2.9) for \(\theta=1\), and (2.10) for \(\theta<1\). It remains now to analyse the terms in (3.16) and (3.17). Here we treat the terms regarding the left boundary, since for the right boundary it is completely analogous. 1. 
If \(\theta=1\), we have that \[-\frac{\alpha N^{3/2}}{N^{\theta}}\lambda^{t}S_{t-s}^{i}f\left( \frac{1}{N}\right)\tilde{\eta}_{sN^{2}}(1)+\alpha\sqrt{N}\nabla_{N}S_{t-s}^{i}f \left(\frac{1}{N}\right)\tilde{\eta}_{sN^{2}}(1)\] \[=\alpha\sqrt{N}\bigg{[}\left(\nabla_{N}S_{t-s}^{i}f\left(0\right)- \lambda^{t}S_{t-s}^{i}f\left(\frac{1}{N}\right)\right)\tilde{\eta}_{sN^{2}}(1),\] \[=\alpha\sqrt{N}\bigg{[}\left(\nabla_{N}S_{t-s}^{i}f\left(\frac{1 }{N}\right)-\partial_{u}S_{t-s}^{i}f\left(0\right)\right)-\lambda^{t}\left(S_ {t-s}^{i}f\left(\frac{1}{N}\right)-S_{t-s}^{i}f\left(0\right)\right)\bigg{]} \tilde{\eta}_{sN^{2}}(1)\] (3.18) \[+\alpha\sqrt{N}\left[\partial_{u}S_{t-s}^{i}f\left(0\right)- \lambda^{t}S_{t-s}^{i}f\left(0\right)\right]\tilde{\eta}_{sN^{2}}(1).\] (3.19) Since \(S_{t-s}^{i}f\) is smooth, both terms in (3.18) are of order \(O(N^{-1/2})\) and (3.19) is identically zero because \(S_{t-s}^{i}f\) satisfies the boundary conditions given in (2.9). This immediately implies that, if \(\theta=1\), then \(\int_{0}^{t}[N^{2}\mathcal{C}_{N}Y_{s}^{N}(\phi_{s})+Y_{s}^{N}(\partial_{s} \phi_{s})]ds\) goes to zero as \(N\to+\infty\). 2. If \(\theta>1\), since \(f\in\mathcal{S}_{i}\) and so \(S_{i}^{i}f\in\mathcal{S}_{i}^{\prime}\), we have that \[-\frac{N^{3/2}}{N^{\theta}}\alpha\lambda^{\ell}S_{i-\!f}^{i}f\left( \frac{1}{N}\right)\tilde{\eta}_{\textit{s}N^{2}}(1)+\alpha\sqrt{N}\nabla_{N}S_ {i-\!f}^{i}\left(\frac{1}{N}\right)\tilde{\eta}_{\textit{s}N^{2}}(1)\] \[=-\frac{\alpha\lambda^{\ell}}{N^{\theta-1/2}}\nabla_{N}S_{i-\!f} ^{i}f\left(0\right)\tilde{\eta}_{\textit{s}N^{2}}(1)+\alpha\sqrt{N}\left( \nabla_{N}S_{i-\!f}^{i}f\left(0\right)-\partial_{\!x}S_{i-\!f}^{i}f\left(0 \right)\right)\tilde{\eta}_{\textit{s}N^{2}}(1)\] (3.20) \[-N^{3/2-\theta}\alpha\lambda^{\ell}S_{i-\!f}^{i}\left(0\right) \tilde{\eta}_{\textit{s}N^{2}}(1).\] (3.21) Since \(S_{i-\!f}^{i}f\) is smooth and the occupation variables are bounded, then the first term of (3.20) is of order \(O(N^{1/2-\theta})\) and the second is of order \(O(N^{-1/2})\). Finally, integrating (3.21) between \(0\) and \(t\), and taking its \(L^{2}(\mathbb{P}_{\mu_{\nu}})\)-norm, by the Lemma 4.3 we conclude that the integral between \(0\) and \(t\) of this term goes to zero as \(N\to+\infty\), and we are done. 3. If \(0\leq\theta<1\), by the invariance of the semigroup \(S_{t}^{i}\) in \(\mathcal{S}_{i}\), we have that \[-\frac{N^{3/2}}{N^{\theta}}\alpha\lambda^{\ell}S_{i-\!f}^{i}\left( \frac{1}{N}\right)\tilde{\eta}_{\textit{s}N^{2}}(1)+\alpha\sqrt{N}\nabla_{N}S _{i-\!f}^{i}\left(\frac{1}{N}\right)\tilde{\eta}_{\textit{s}N^{2}}(1)\] \[=-\frac{\sqrt{N}}{N^{\theta}}\alpha\lambda^{\ell}\nabla_{N}S_{i- \!f}^{i}\left(0\right)\tilde{\eta}_{\textit{s}N^{2}}(1)+\alpha\sqrt{N}\nabla_ {N}S_{i-\!f}^{i}\left(0\right)\tilde{\eta}_{\textit{s}N^{2}}(1).\] (3.22) Integrating both terms in (3.22) between \(0\) and \(t\), and taking the \(L^{2}(\mathbb{P}_{\mu_{\nu}})\)-norm of each term, by Lemma 4.3, the integral between \(0\) and \(t\) of these terms go to zero as \(N\to+\infty\). We can then conclude that, if \(0\leq\theta<1\), then \(\int_{0}^{t}[N^{2}\mathcal{C}_{N}Y_{s}^{N}(\phi_{s})+Y_{s}^{N}(\partial_{\!x} \phi_{s})]ds\) goes to zero as \(N\to+\infty\). 4. 
Finally, if \(\theta<0\), since \(f\in\mathcal{S}_{i}\) implies that \(S_{t-s}^{i}f\in\mathcal{S}_{i}\), then, writing the Taylor expansion of order \([-\theta]+1\) of \(S_{t-s}^{i}f\) around \(0\) and substituting in (3.16) and (3.17), we immediately conclude that \(\int_{0}^{t}[N^{2}\mathcal{C}_{N}Y_{s}^{N}(\phi_{s})+Y_{s}^{N}(\partial_{s}\phi_{s})]ds\) goes to zero as \(N\to+\infty\).

This completes the proof of the decomposition part of Theorem 2.3. Putting all the previous results together, we conclude the proof of Theorem 2.3. What distinguishes the two main theorems is that in the first one the convergence holds only along subsequences, since we are not able to show uniqueness of the solution of the martingale problem. Nevertheless, since in Theorem 2.4 we assume convergence at the initial time, this yields uniqueness of the limit point. In the next subsection we complete the proof of Theorem 2.4 by showing uniqueness of the limit.

### Proof of Theorem 2.4

The uniqueness of the O. U. process for \(\theta\geq 0\) follows from Proposition 2.5 of [2] once we show that \((S_{t}^{i})_{t\geq 0}\), the semigroup associated to (2.4), satisfies \[S_{t+\epsilon}^{i}H-S_{t}^{i}H=\epsilon\alpha\Delta S_{t}^{i}H+o(\epsilon,t), \tag{3.23}\] for every \(\epsilon>0\), \(t\geq 0\) and \(H\in\mathcal{S}_{i}\), where \(o(\epsilon,t)\) goes to \(0\), as \(\epsilon\) goes to \(0\), in \(\mathcal{S}_{i}\) uniformly on compact time intervals. But this is an immediate consequence of the explicit formulas given by (2.14), (2.12) and (2.13), if \(\theta>1\), \(\theta=1\) or \(\theta<1\), respectively. Moreover, for \(\theta<0\), the uniqueness of the solution of the O. U. martingale problem follows by repeating the arguments of Theorem 2.13 and Proposition 2.5 of [2]. Finally, to show that the two extra conditions, i.e. _regularity_ and _boundary conditions_, hold, we only have to observe that the first follows from the boundedness of the occupation variables together with Proposition 4.2, and the second follows from Lemma 4.5. This finishes the proof of Theorem 2.4.

## 4 Auxiliary estimates

This section is devoted to some estimates needed in order to prove our main results. Let us denote by \(\tilde{\nabla}_{N}^{+}\) the operator defined, for every \(f:\Lambda_{N}\to\mathbb{R}\) and \(x\in\Lambda_{N-1}\), by \[\tilde{\nabla}_{N}^{+}f(x):=N[f(x+1)-f(x)]. \tag{4.1}\]

**Lemma 4.1**.: _Assume that \(\gamma\in C^{6}([0,1])\) satisfies (H2), that there exists a sequence \((g_{N})_{N\in\mathbb{N}}\) of functions of class \(C^{6}([0,1])\) that satisfies (H3) and (H4), and that \(\left(\mu^{N}\right)_{N\in\mathbb{N}}\) is a sequence of probability measures satisfying (H1). Then, there exists \(C>0\) such that_ \[\max_{x\in\Lambda_{N-1}}|\tilde{\nabla}_{N}^{+}\rho_{t}^{N}(x)|\leq C,\] _for every \(t\in[0,T]\)._

The proof of the previous lemma can be found in Appendix D. One of the key ingredients to prove fluctuations is to obtain sharp estimates for the decay in \(N\) of the time-dependent two-point correlation function, i.e. of \(\varphi_{t}^{N}\) defined in (2.23), which, we recall, is not defined for \(x=y\).
**Proposition 4.2**.: _Under the assumption (H5), we have that_ \[\sup_{t\in[0,T]}\max_{\begin{subarray}{c}x,y\in\Lambda_{N}\\ x\neq y\end{subarray}}|\varphi_{t}^{N}(x,y)|\lesssim\frac{1}{N}, \tag{4.2}\] _and, under the assumption (H6), for \(x=1\) and for \(x=N-1\),_ \[\sup_{t\in[0,T]}\max_{\begin{subarray}{c}x\in\Lambda_{N}\\ y\neq x\end{subarray}}|\varphi_{t}^{N}(x,y)|\lesssim R_{N}^{\theta}:=\begin{cases} \frac{1}{N},\text{ if }\theta>1,\\ \frac{N^{\theta}}{N},\text{ if }0\leq\theta\leq 1,\\ \frac{N^{\theta}}{N},\text{ if }-1<\theta<0,\\ \frac{1}{N^{2}},\text{ if }\theta\leq-1.\end{cases} \tag{4.3}\] The proof of the previous proposition can be found in Section 4.1. **Lemma 4.3**.: _Recall that, for \(y\in\Lambda_{N}\), we denote by \(\tilde{\eta}(y)\) the centered variable. Then, for every \(\theta\in\mathbb{R}\), for \(x\in\{1,N-1\}\) and \(t,s\in[0,T]\), it holds_ \[\mathbb{E}_{\mu^{N}}\Bigg{[}\left(\int_{s}^{t}d_{N}^{\theta}\tilde{\eta}_{N^{ 2}}(x)dr\right)^{2}\Bigg{]}\lesssim|t-s|^{1+\delta_{\theta}}+|t-s|^{2}(d_{N}^ {\theta})^{2}R_{N}^{\theta} \tag{4.4}\] _and_ \[\mathbb{E}_{\mu^{N}}\Bigg{[}\left(\int_{s}^{t}\tilde{\eta}_{N^{2}}(x)dr\right) ^{2}\Bigg{]}\lesssim\frac{N^{\theta}}{N^{2}}|t-s|+|t-s|^{2}R_{N}^{\theta}, \tag{4.5}\] _where \(d_{N}^{\theta}=\sqrt{N}1(\theta\leq 1)+N^{3/2-\theta}1(\theta>1)\), \(\delta_{\theta}=\frac{1-\theta}{2}1(\theta<3)+1(\theta\geq 3)\) and \(R_{N}^{\theta}\) was introduced in the last proposition. So, in particular, for \(x\in\{1,N-1\}\), for every \(t\in[0,T]\) and \(\theta\in\mathbb{R}\),_ \[\lim_{N\to+\infty}\mathbb{E}_{\mu^{N}}\Bigg{[}\left(\int_{0}^{t}d_{N}^{\theta }\tilde{\eta}_{sN^{2}}(x)dr\right)^{2}\Bigg{]}=0. \tag{4.6}\] The proof of the previous lemma is given in Section 4.3. For \(\theta<0\), for all \(\alpha\in\mathbb{N}\), we will also need the following estimates. **Proposition 4.4**.: _Let \(\theta<1\). Recall (3.6). If (H5) holds, then, for every \(\epsilon>0\) and every \(t\in(0,T]\), we have that_ \[\max_{\begin{subarray}{c}(x,y)\in\Lambda_{N}^{\epsilon}\times\Lambda_{N}\\ y\neq x\end{subarray}}|\varphi_{t}^{N}(x,y)|\lesssim\left(1+\frac{1}{\sqrt{t }}\right)\frac{\epsilon}{N}+o\left(\frac{1}{N}\right), \tag{4.7}\] _and the same results holds for \((x,y)\in\Lambda_{N}\times\Lambda_{N}^{\epsilon,r}\)._ The proof of the previous result can be found in Section 4.2. **Lemma 4.5**.: _Let \(\theta<1\). Then, the following limit holds, for every \(t\in[0,T]\) and \(j\in\{0,1\}\)._ \[\lim_{\epsilon\to 0}\limsup_{N\to+\infty}\mathbb{E}_{\mu_{N}}\Bigg{[}\left( \int_{0}^{t}Y_{s}^{N}(t_{\epsilon}^{j})ds\right)^{2}\Bigg{]}=0, \tag{4.8}\] _where \(\iota_{\epsilon}^{j}\) was defined in item 2. (b) of Theorem 2.4._ The proof of the previous result is given in Section 4.4. ### Proof of Proposition 4.2 In this proof we will use some random walks that, for simplicity of the presentation, we define now: 1. \(\{\mathcal{X}_{t}^{i};t\geq 0\}\) is the random walk evolving on the set of points \(V_{N}^{a}\) where \[V_{N}^{a}:=V_{N}\setminus\mathcal{D}_{N}\quad\text{ for }\alpha=1\quad\text{and}\quad V_{N}^{a}:=V_{N}\quad\text{for}\quad \alpha\geq 2,\] (4.9) that moves to nearest-neighbors at rate \(\alpha\), except at the line \(\mathcal{D}_{N}^{+}\) that moves right/up at rate \(\alpha\) and left/down at rate \(\alpha-1\) and that is reflected at the line \(\mathcal{D}_{N}^{+}\) if \(\alpha=1\), and at the line \(\mathcal{D}_{N}\) if \(\alpha\geq 2\). 
Moreover, it is absorbed at \(\partial V_{N}\): with rate \(a\lambda^{\frac{1}{\alpha}}/N^{\vartheta}\) at the set of points \(\{(0,y):y\in\overline{\Lambda}_{N}\}\) and with rate \(a\lambda^{r}/N^{\vartheta}\) at the set of points \(\{(x,N):x\in\overline{\Lambda}_{N}\}\). This random walk has generator \(\Delta_{N}^{i}\) which is the operator that acts on functions \(f:\overline{V}_{N}\to\mathbb{R}\) such that \(f(x,y)=0\) for every \((x,y)\in\partial V_{N}\) as \[\Delta_{N}^{i}f(u)=\sum_{\begin{subarray}{c}v\in V_{N}\\ v\sim u\end{subarray}}c_{u,v}^{i}[f(v)-f(u)],\] (4.10) for every \(u\in V_{N}\), with \(c_{x,y}^{i}\) defined, for \(\alpha=1\) by \[c^{i}:\left\{((x,y),(x^{\prime},y^{\prime}))\in V_{N}\times\overline{V}_{N}; |x-x^{\prime}|+|y-y^{\prime}|=1\right\}\to[0,\infty)\] as \[\left\{\begin{array}{l}c_{(x,y),(x^{\prime},y^{\prime})}^{i}:=c_{x,x^{\prime }}^{i}\mathbb{1}(x^{\prime}\neq y)\text{ if }|x-x^{\prime}|=1,\\ c_{(x,y),(x,y^{\prime})}^{i}:=c_{y,y^{\prime}}^{i}\mathbb{1}(x\neq y^{\prime}) \text{ if }|y-y^{\prime}|=1,\end{array}\right.\] and, for \(\alpha\geq 2\), \[\left\{\begin{array}{l}c_{(x,y),(x^{\prime},y^{\prime})}^{i}:=c_{x,x^{\prime }}^{i}-\mathbb{1}(x^{\prime}=y)\text{ if }|x-x^{\prime}|=1,\\ c_{(x,y),(x,y^{\prime})}^{i}:=c_{y,y^{\prime}}^{i}-\mathbb{1}(x=y^{\prime}) \text{ if }|y-y^{\prime}|=1,\end{array}\right.\] (4.11) with \(c_{x,y}^{i}\) as defined in equation (2.17). 2. \(\{\widetilde{\mathcal{X}}_{t}^{i};t\geq 0\}\) is the random walk evolving on the set of points \(V_{N}\) that moves to nearest-neighbors at rate \(\alpha\), except at the line \(\mathcal{D}_{N}^{+}\) that moves right/up at rate \(\alpha\) and left/down at rate \(\alpha-1\) and that is reflected at the line \(\mathcal{D}_{N}\) and at \(\partial V_{N}\). We denote by \(\mathfrak{C}_{N}^{i}\) the Markov generator of \(\{\widetilde{\mathcal{X}}_{t}^{i};t\geq 0\}\) which is the operator that acts on functions \(f:\overline{V}_{N}\to\mathbb{R}\) as, for every \(u\in V_{N}\), \[\mathfrak{C}_{N}^{i}f(u)=\sum_{\begin{subarray}{c}v\in V_{N}\\ v\sim u\end{subarray}}c_{u,v}^{i}[f(v)-f(u)],\] (4.12) where \(c_{u,v}^{i}\) are the same as defined in (4.11). For the standard simple symmetric exclusion process, i.e. the case \(\alpha=1\), Proposition 4.2 has been proved in a myriad of articles (see [22, 16, 17] and references therein). Let us review and adapt this proof. It is not difficult to check that for each \(x,y\in\Lambda_{N}\), the action of the generator \(\mathcal{L}_{N}\) on \(\eta(x)\eta(y)\) is given by a linear combination of the functions \((\eta(z)\eta(z^{\prime});z,z^{\prime}\in\Lambda_{N})\) - see equation (B.1) of Appendix B. This means that the correlation function \((\varphi_{t}^{N};t\geq 0)\) satisfies an autonomous, non-homogeneous evolution equation, which involves \((p_{t}^{N};t\geq 0)\) as parameters. For \(\alpha=1\), the correlation function \(\varphi_{t}^{N}\) is solution to \[\partial_{t}\varphi_{t}^{N}(x,y)=N^{2}\Delta_{N}^{i}\varphi_{t}^{N}(x,y)+g_{t} ^{N}(x,y)\mathbb{1}((x,y)\in\mathcal{D}_{N}^{\pm}), \tag{4.13}\] where \(\Delta_{N}^{i}\) is the operator defined in (4.10). Here \[g_{t}^{N}(x,x+1)=g_{t}^{N}(x+1,x)=-\left(\widehat{\varphi}_{N}^{+}\rho_{t}^{N} (x)\right)^{2},\] for every \(x\in\Lambda_{N-1}\) and \(g_{t}^{N}(x,y):=0\) otherwise. 
Observe that \(\Delta_{N}^{i}\) corresponds to the generator of the random walk \(\{\mathfrak{X}_{t}^{i};t\geq 0\}\) that moves to nearest-neighbor sites on \(V_{N}\) with annihilation at the boundary and the jumps to the diagonal \(\mathcal{D}_{N}\) are suppressed. As a consequence, (4.13) does not involve the values of \(\varphi_{t}^{N}\) at \(\mathcal{D}_{N}\). By Duhamel's formula, for every \((x,y)\in V_{N}\setminus\mathcal{D}_{N}\), we can represent \(\varphi_{t}^{N}\) by \[\varphi_{t}^{N}(x,y)=\mathbb{E}_{(x,y)}\Big{[}\varphi_{0}^{N}(\mathfrak{X}_{ tN^{2}}^{i})+\int_{0}^{t}g_{t-s}^{N}(\mathfrak{X}_{tN^{2}}^{i})\mathbb{1}( \mathfrak{X}_{tN^{2}}^{i}\in\mathcal{D}_{N}^{\pm})ds\Big{]}, \tag{4.14}\] where \(\mathbb{E}_{(x,y)}\) denotes the expectation of the law of the walk \(\{\mathfrak{X}_{t}^{i};t\geq 0\}\) starting from the point \((x,y)\). Now, to obtain the order of decay in \(N\) of \(\varphi_{t}^{N}\) we note that by (4.14), \[\max_{\begin{subarray}{c}(x,y)\in V_{N}\\ x\neq y\end{subarray}}|\varphi_{t}^{N}(x,y)|\leq\max_{\begin{subarray}{c}(x, w)\in V_{N}\\ x\neq w\end{subarray}}|\varphi_{0}^{N}(z,w)|+\sup_{i\geq 0}\max_{\begin{subarray}{c} \varepsilon\in\Lambda_{N-1}\\ x\geq y\end{subarray}}|g_{t}^{N}(z,z+1)|\max_{\begin{subarray}{c}(x,y)\in V_{ N}\\ x\neq y\end{subarray}}\mathbb{E}_{(x,y)}\bigg{[}\int_{0}^{\infty}\mathbb{1}( \mathfrak{X}_{tN^{2}}^{i}\in\mathcal{D}_{N}^{+})ds\bigg{]}. \tag{4.15}\] Observe that \[T_{N}^{i}(x,y):=\mathbb{E}_{(x,y)}\bigg{[}\int_{0}^{\infty}\mathbb{1}( \mathfrak{X}_{tN^{2}}^{i}\in\mathcal{D}_{N}^{+})dt\bigg{]} \tag{4.16}\] corresponds to the expected occupation time of the diagonals \(\mathcal{D}_{N}^{+}\) by the random walk \((\mathfrak{X}_{tN^{2}}^{i};t\geq 0)\). By (4.15), in order to estimate \(|\varphi_{t}^{N}(x,y)|\), we only need to estimate the simpler quantities \(|\varphi_{0}^{N}(z,w)|\) for every \((z,w)\in V_{N}\) with \(z\neq w\), \(|g_{t}^{N}(z,z+1)|\) for every \(z\in\Lambda_{N-1}\) and \(T_{N}^{i}(x,y)\) for every \((x,y)\in V_{N}\setminus\mathcal{D}_{N}\). For details on this, see equations (2.19), (2.20), Lemma 6.2. and Sections 6.1. and 6.2 of [17]. For \(\alpha\geq 2\), we would like to follow a similar strategy to the one outlined above. However, in this case, the Chapman-Kolmogorov equation for \(\varphi_{t}^{N}\) is more complicated. In the case \(\alpha=1\), the relation \(\eta(x)=\eta(x)^{2}\) has as a consequence that no diagonal terms appear in the equation satisfied by \(\varphi_{t}^{N}\). For \(\alpha\geq 2\), this relation is no longer satisfied, and therefore the Chapman-Kolmogorov equation has an additional term - see Appendix C. At first glance, it would be natural to extend \(\varphi_{t}^{N}\) to the diagonal \(\mathcal{D}_{N}\) by taking \(\varphi_{t}^{N}(x,x)\) as equal to \[\varphi_{t}^{N}(x,x):=\mathbb{E}_{\mu^{N}}[(\eta_{tN^{2}}(x)-\rho_{t}^{N}(x))^ {2}]. \tag{4.17}\] Figure 4.2: Illustration of the jump rates of the random walk \(\{\mathfrak{X}_{t}^{i};t\geq 0\}\). However, it turns out that a more convenient definition is to extend \(\varphi_{t}^{N}\) as \[\varphi_{t}^{N}(x,x):=\mathbb{E}_{\mu^{N}}\Big{[}\frac{\alpha}{\alpha-1}\eta_{tN ^{2}}(x)(\eta_{tN^{2}}(x)-1)-\rho_{t}^{N}(x)^{2}\Big{]}, \tag{4.18}\] and remark here the importance of \(\alpha\) being greater or equal to \(2\) for this quantity to be well defined. Some motivations and reasons for this choice of defining the function \(\varphi_{t}(x,x)\) are given in the Appendix C. 
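A quick way to see why the extension (4.18) is preferable to (4.17) is to evaluate both quantities on a single site whose occupation variable has a Binomial marginal, the natural product reference measure for SEP(\(\alpha\)). The short Python sketch below is an illustration only: the Binomial(\(\alpha,\rho/\alpha\)) marginal with mean \(\rho\) is an assumption made for the sketch and is not used in the proofs. It checks numerically that the extended diagonal quantity (4.18) vanishes identically in this situation, while the naive choice (4.17) equals \(\rho(1-\rho/\alpha)\) and does not vanish.

```python
import numpy as np

rng = np.random.default_rng(0)

def diagonal_terms(alpha, rho, n_samples=200_000):
    """Compare the two candidate diagonal terms under a Binomial(alpha, rho/alpha)
    single-site marginal with mean rho (an assumption made for this illustration):
      naive    = E[(eta - rho)^2]                              (cf. (4.17))
      extended = E[alpha/(alpha-1) * eta*(eta-1)] - rho^2      (cf. (4.18))"""
    eta = rng.binomial(alpha, rho / alpha, size=n_samples)
    naive = np.mean((eta - rho) ** 2)
    extended = (alpha / (alpha - 1)) * np.mean(eta * (eta - 1.0)) - rho ** 2
    return naive, extended

for alpha in (2, 3, 5):
    for rho in (0.3 * alpha, 0.7 * alpha):
        naive, extended = diagonal_terms(alpha, rho)
        # exact values: naive = rho*(1 - rho/alpha), extended = 0
        print(f"alpha={alpha}, rho={rho:.2f}:  naive={naive:.4f} "
              f"(exact {rho * (1 - rho / alpha):.4f}),  extended={extended:+.4f}")
```

Under this Binomial assumption the extended quantity is exactly zero, which is in line with the motivation for the choice (4.18) referred to in Appendix C.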
Extending \(\varphi_{t}^{N}\) in this way, we can verify that \(\varphi_{t}^{N}\) satisfies the equation \[\partial_{t}\varphi_{t}^{N}(x,y)=N^{2}\Delta_{N}^{i}\varphi_{t}^{N}(x,y)+g_{t}^{N}(x,x+1)\mathbb{1}((x,y)\in\mathcal{D}_{N}^{+}), \tag{4.19}\] where \(\Delta_{N}^{i}\) is the operator defined in (4.10). To simplify, we will use the same notation as in the case \(\alpha=1\) for the occupation time (4.16) also in the case \(\alpha\geq 2\). Observe that (4.19) generalizes (4.13) in a very convenient way, because the right-hand side is structurally the same; the only difference is the definition of the operator \(\Delta_{N}^{i}\), which does not change the strategy we followed to bound \(\varphi_{t}^{N}\) in the case \(\alpha=1\). In particular, we have the analogue of (4.15) for \(\alpha\geq 2\), with the slight difference that now we need to take into account in the right-hand side of (4.15) the points \((z,w)\in V_{N}\) with \(z=w\). From here on, we separate the proof of the bounds in (4.2) and (4.3) into two parts: in Part 1 we treat the case \(\theta<2\); in Part 2 we treat the other case, i.e. \(\theta\geq 2\).

**Part 1: the case \(\theta<2\)**

We already saw that \[\max_{\begin{subarray}{c}(x,y)\in V_{N}\\ x\neq y\end{subarray}}|\varphi_{t}^{N}(x,y)|\leq\max_{(z,w)\in V_{N}}|\varphi_{0}^{N}(z,w)|+\sup_{t\geq 0}\max_{z\in\Lambda_{N-1}}|g_{t}^{N}(z,z+1)|\max_{\begin{subarray}{c}(x,y)\in V_{N}\\ x\neq y\end{subarray}}T_{N}^{i}(x,y). \tag{4.20}\] Using Lemma 5.1, the assumptions (H5) and (H6), and Lemma 4.1, we conclude that \[\sup_{t\geq 0}\max_{(x,y)\in V_{N}}|\varphi_{t}^{N}(x,y)|\lesssim\begin{cases}\frac{1}{N}+\frac{N^{\theta}}{N},\text{ if }\theta\leq 0,\\ \frac{1}{N}+\frac{N^{\theta}}{N^{3}},\text{ if }\theta>0,\end{cases}\] and so, for \(\theta<2\), \[\sup_{t\geq 0}\max_{(x,y)\in V_{N}}|\varphi_{t}^{N}(x,y)|\lesssim\frac{1}{N}. \tag{4.21}\] Moreover, for \(x=1,N-1\), \[\sup_{t\geq 0}\max_{y\in\Lambda_{N}}|\varphi_{t}^{N}(x,y)|\lesssim R_{N}^{\theta},\text{ if }\theta<2\,.\] For the case \(\theta\geq 2\), repeating the previous arguments we only get the bound \(\frac{N^{\theta}}{N^{3}}\), and this is not enough for our results. For this reason we need to consider another random walk.

**Part 2: the case \(\theta\geq 2\)**

Here we follow a different strategy to improve the bound for \(T_{N}^{i}\) found previously, following the ideas presented in [17] for the case \(\alpha=1\), and extending the argument to every \(\alpha\in\mathbb{N}\).
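Before turning to that different strategy, the following minimal numerical sketch illustrates the order of the quantity bounded in Part 1, namely the expected occupation time \(T_{N}^{i}(x,y)\) of the diagonal \(\mathcal{D}_{N}^{+}\) defined in (4.16). The sketch takes \(\alpha=1\), \(\theta=0\) and \(\lambda^{\ell}=\lambda^{r}=1\), and assumes the standard two-particle exclusion rates on the triangle \(\{1\leq x<y\leq N-1\}\) (unit nearest-neighbour jumps, jumps onto the diagonal suppressed, absorption at \(x=0\) and \(y=N\)); these explicit rates are an assumption made for the illustration and are not read off (2.17). Since the walk is run at speed \(N^{2}\), standard potential theory for absorbed chains gives that \(T_{N}^{i}\) solves the Poisson equation \(N^{2}\Delta_{N}^{i}T_{N}^{i}=-\mathbb{1}_{\mathcal{D}_{N}^{+}}\) with \(T_{N}^{i}=0\) at the absorbing states, which is what the code solves. The printed values of \(N\max T_{N}^{i}\) should remain bounded in \(N\), in agreement with the \(1/N\) order used in Part 1.

```python
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.linalg import spsolve

def diagonal_occupation_time(N):
    """Max over starting points of T_N(x,y) = E[ int_0^infty 1(X_{sN^2} in D_N^+) ds ]
    for the two-particle walk on the triangle {1 <= x < y <= N-1} with unit
    nearest-neighbour rates, jumps onto the diagonal suppressed, and absorption
    at x = 0 and y = N (alpha = 1, theta = 0, lambda^l = lambda^r = 1 assumed).
    T_N solves  N^2 * L T = -1_{y = x+1}  with T = 0 at absorbed states."""
    states = [(x, y) for x in range(1, N - 1) for y in range(x + 1, N)]
    idx = {s: i for i, s in enumerate(states)}
    n = len(states)
    L = lil_matrix((n, n))
    b = np.zeros(n)
    for (x, y) in states:
        i = idx[(x, y)]
        for (xp, yp) in ((x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)):
            if xp == yp:              # jump onto the diagonal: suppressed
                continue
            L[i, i] -= 1.0            # unit jump rate for every allowed move
            if (xp, yp) in idx:       # move stays inside the triangle
                L[i, idx[(xp, yp)]] += 1.0
            # otherwise the walk is absorbed at x = 0 or y = N
        if y == x + 1:
            b[i] = 1.0                # indicator of the upper diagonal D_N^+
    T = spsolve(L.tocsc(), -b / N ** 2)
    return T.max()

for N in (16, 32, 64, 128):
    print(f"N = {N:4d}:  N * max T_N = {N * diagonal_occupation_time(N):.3f}")
```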
We rewrite (4.19) as \[\partial_{t}\varphi_{t}^{N}(x,y)=N^{2}\epsilon_{N}^{i}\varphi_{t}^{N}(x,y)+ \mathfrak{V}_{N}^{i}(x,y)\varphi_{t}^{N}(x,y)+g_{t}^{N}(x,x+1)\mathbb{I}(y=x+1),\] where \(\epsilon_{N}^{i}\) is, as defined in (4.12), the generator of the random walk \(\{\widehat{\mathcal{X}}_{t}^{i},t\geq 0\}\) and, \[\mathfrak{V}_{N}^{i}(x,y)=-\frac{\alpha N^{2}}{N^{\theta}}[\lambda^{i} \mathbb{I}(x=1)+\lambda^{r}\mathbb{I}(y=N-1)].\] By Feynman-Kac's formula, we have that \[\varphi_{t}^{N}(x,y)=\bar{\mathbb{E}}_{(x,y)}\Bigg{[}\varphi_{0}^{N}(\widehat {\mathcal{X}}_{tN^{2}}^{\mathbb{I}})^{\int_{0}^{t}\mathfrak{V}_{N}^{i}( \widehat{\mathcal{X}}_{tN^{2}}^{\mathbb{I}})ds}+\int_{0}^{t}g_{t-s}^{N}( \widehat{\mathcal{X}}_{tN^{2}}^{\mathbb{I}})\mathbb{I}(\widehat{\mathcal{X} }_{sN^{2}}^{\mathbb{I}}\in\mathcal{D}^{+})e^{\int_{0}^{t}\mathfrak{V}_{N}^{i}( \epsilon,\widehat{\mathcal{X}}_{tN^{2}}^{\mathbb{I}})dr}ds\Bigg{]},\] where \(\mathbb{E}_{(x,y)}\) denotes the expectation given that \(\widehat{\mathbf{x}}^{1}_{N,N^{2}}\) starts from the point \((x,y)\). Now, since \(\mathfrak{V}^{1}_{N}\) is negative, then \[\max_{\begin{subarray}{c}(x,y)\in\mathbb{V}_{N}\\ x\neq y\end{subarray}}\Big{|}\begin{subarray}{c}\mathbb{E}_{(x,y)}\Big{[} \varphi_{0}^{N}\big{(}\widehat{\mathbf{x}}^{1}_{N^{2}}\big{)}\varphi_{t}^{\int_{0 }^{t}\mathfrak{V}^{1}_{N}(\widehat{\mathbf{x}}^{1}_{N^{2}})ds}\Big{]}\Big{|} \lesssim\max_{(x,y)\in\mathbb{V}_{N}}|\varphi_{0}^{N}(z,w)|. \tag{4.22}\] For the other term, by changing the integrals using Fubini's theorem and using the fact that \(g^{N}_{t}\) and \(\mathfrak{V}^{1}_{N}\) are both negative, we have that \[\Big{|}\mathbb{E}_{(x,y)}\bigg{[}\int_{0}^{t}g^{N}_{t-s}\big{(} \widehat{\mathbf{x}}^{1}_{N^{2}}\big{)}\varphi_{0}^{\int_{0}^{t}\mathfrak{V}^{1}_ {N}(\widehat{\mathbf{x}}^{1}_{N^{2}})dr}ds\bigg{]}\bigg{|}\leq\int_{0}^{t}\mathbb{ E}_{(x,y)}\big{[}-g^{N}_{t-s}\big{(}\widehat{\mathbf{x}}^{1}_{N^{2}}\big{)}\big{]}ds.\] By similar arguments as in the case \(\theta<2\), we obtain that \[\Big{|}\mathbb{E}_{(x,y)}\bigg{[}\int_{0}^{t}g^{N}_{t-s}\big{(} \widehat{\mathbf{x}}^{1}_{N^{2}}\big{)}\varphi_{t}^{\int_{0}^{t}\mathfrak{V}^{1}_ {N}(\widehat{\mathbf{x}}^{1}_{N^{2}})dr}ds\bigg{]}\bigg{|}\leq\sup_{t\geq 0}\max_{s \in\mathbb{A}_{N-1}}|g^{N}_{t}(z,z+1)|\widehat{T}^{N}_{t}(x,y), \tag{4.23}\] where \[\widehat{T}^{N}_{t}(x,y):=\int_{0}^{t}\mathbb{E}_{(x,y)}\big{[} \mathbb{1}\big{(}\widehat{\mathbf{x}}^{1}_{N^{2}}\in\mathcal{G}^{+}_{N}\big{)} \big{]}ds\,. \tag{4.24}\] Observe that we did not bound the last integral (from \(0\) to \(t\)) by the integral over the interval from \(0\) to infinity and the reason is that the bound we will obtain for that time integral depends on \(t\) and blows up when \(t\to+\infty\). From Lemma 5.2 together with (4.22) and (4.23), we obtain \[\sup_{t\in[0,T]}\max_{(x,y)\in\mathbb{V}_{N}}|\varphi_{t}^{N}(x, y)|\lesssim\frac{T+1}{N},\] and, the same bound holds from \((x,y)\in\partial V_{N}\). This concludes the proof. ### Proof of Proposition 4.4 Recall that here we will only consider \(\theta<1\). Since the result of Proposition 4.4 for \(\alpha=1\) and \(\theta<0\) was not considered before, we will present a proof that works for every \(\alpha\in\mathbb{N}\) and every \(\theta<1\). Let \(\varepsilon>0\) and recall from the statement of Proposition 4.4 that we denote the set \(\{1,\dots,\varepsilon(N-1)\}\) by \(\Lambda^{\varepsilon,\varepsilon}_{N}\). 
We want to show that, for every \(\varepsilon>0\) and every \(t\in(0,T]\), \[\max_{\begin{subarray}{c}(x,y)\in\Lambda^{\varepsilon,\varepsilon}_{N}\\ y\neq x\end{subarray}}|\varphi_{t}^{N}(x,y)|\lesssim\left(1+\frac{1}{\sqrt{t }}\right)\frac{\varepsilon}{N}+o\left(\frac{1}{N}\right).\] Since \(\varphi_{t}^{N}\) is the solution to (4.13) then it admits the representation (4.14). As a consequence, for every \(t\in[0,T]\), we have that \[\max_{\begin{subarray}{c}(x,y)\in\Lambda^{\varepsilon,\varepsilon}_{N}\\ y\neq x\end{subarray}}|\varphi_{t}^{N}(x,y)| \leq\max_{\begin{subarray}{c}(x,y)\in\Lambda^{\varepsilon, \varepsilon}_{N}\\ y>x\end{subarray}}\left[|\mathbb{E}_{(x,y)}[\varphi_{0}^{N}(\mathbf{x}^{-i}_{N^{2} })]|+\Big{|}\mathbb{E}_{(x,y)}[\int_{0}^{t}g^{N}_{t-s}(\mathbf{x}^{i}_{sN^{2}}) \mathbb{1}(\mathbf{x}^{i}_{sN^{2}}\in\mathcal{D}^{\pm}_{N})ds\Big{]}\right]\] \[\leq\max_{(x,y)\in\mathbb{V}^{1}_{N}}|\varphi_{0}^{N}(z,w)|\max_ {\begin{subarray}{c}(x,y)\in\Lambda^{\varepsilon,\varepsilon}_{N}\\ y>x\end{subarray}}\mathcal{D}_{(x,y)}\big{[}\mathbf{x}^{i}_{sN^{2}}\notin\partial V _{N}\big{]}\] \[+\sup_{\tau\geq 0}\max_{\begin{subarray}{c}\alpha\in\Lambda^{ \varepsilon,\varepsilon}_{N-1}\\ \alpha\neq y\end{subarray}}|g^{N}_{\tau}(z,z+1)|\max_{\begin{subarray}{c}(x,y) \in\mathbb{V}^{1}_{N}\\ \alpha\neq y\end{subarray}}T^{i}_{(x,y)},\] where \(V^{\alpha}_{N}\) was defined in (4.9), \(\{\mathbf{x}^{\cdot i}_{t}\,;\,t\geq 0\}\) is the bi-dimensional random walk on \(V_{N}\) with Markov generator \(\Delta^{i}_{N}\) and \(\mathcal{P}_{(x,y)}\big{[}\mathbf{x}^{i}_{sN^{2}}\notin\partial V_{N}\big{]}\) represents the probability that, starting from \((x,y)\), at time \(tN^{2}\), the random walk \(\{\mathbf{x}^{\cdot i}_{t}\,;\,t\geq 0\}\) is still not absorbed at the boundary. Recalling the proof of the estimate of \(T^{i}_{N}(x,y)\) (see Lemma 5.1), one can easily see that \[\max_{\begin{subarray}{c}(x,y)\in\Lambda^{\varepsilon,\varepsilon}_{N}\\ y\neq x\end{subarray}}T^{i}_{N}(x,y)\lesssim\frac{\varepsilon}{N}+\frac{N^{ \theta}}{N^{3}}\mathbb{1}(0<\theta<1)+\frac{N^{\theta}}{N}\mathbb{1}(\theta<0).\] Moreover, by Lemma 4.1 and assumption (H5), we have that \[\max_{\begin{subarray}{c}(x,y)\in\Lambda_{N}^{e,f}\times\Lambda_{N}\\ x\neq y\end{subarray}}|\varphi_{t}^{N}(x,y)|\lesssim\frac{1}{N}\max_{ \begin{subarray}{c}(x,y)\in\Lambda_{N}^{e,f}\times\Lambda_{N}\\ y>x\end{subarray}}\mathcal{P}_{(x,y)}\big{[}\mathcal{X}_{NN^{\perp}}^{i}\notin \partial V_{N}\big{]}+\frac{\epsilon}{N}+\frac{N^{\theta}}{N^{3}}\mathbb{1}(0 <\theta<1)+\frac{N^{\theta}}{N}\mathbb{1}(\theta<0). \tag{4.25}\] We are only left with estimating \(\mathcal{P}_{(x,y)}\big{[}\mathcal{X}_{NN^{\perp}}^{i}\notin\partial V_{N}\big{]}\), when \((x,y)\in\Lambda_{N}^{e,f}\times\Lambda_{N}\) and \(y>x\). This is the content of the next result. **Proposition 4.6**.: _Let \(\alpha\in\mathbb{N}\) and \(\Lambda_{N}^{e,f}\) as defined in Proposition 4.4. 
For every \(t\in(0,T]\), there exists \(\varepsilon_{0}>0\) such that, for every \(0<\epsilon<\varepsilon_{0}\),_ \[\max_{\begin{subarray}{c}(x,y)\in\Lambda_{N}^{e,f}\times\Lambda_{N}\\ y>x\end{subarray}}\mathcal{P}_{(x,y)}\big{[}\mathcal{X}_{tN^{2}}^{i}\notin \partial V_{N}\big{]}\lesssim\frac{\epsilon}{\sqrt{t}}, \tag{4.26}\] _where \(\mathcal{P}_{(x,y)}\big{[}\mathcal{X}_{tN^{2}}^{i}\notin\partial V_{N}\big{]}\) represents the probability that, starting from \((x,y)\), at time \(tN^{2}\), the random walk \(\{\mathcal{X}_{t}^{i}\;;\;t\geq 0\}\) is still not absorbed at the boundary._ Using the bound in (4.26) and what we already proved in (4.25), we conclude that \[\max_{\begin{subarray}{c}(x,y)\in\Lambda_{N}^{e,f}\times\Lambda_{N}\\ y\neq x\end{subarray}}|\varphi_{t}^{N}(x,y)|\lesssim\left(1+\frac{1}{\sqrt{t}} \right)\frac{\epsilon}{N}+\omega\left(\frac{1}{N}\right), \tag{4.27}\] as we wanted. Proof of Proposition 4.6.: We divide the proof in two cases: \(\alpha=1\) and \(\alpha\geq 2\). **Part 1: the case \(\alpha=1\)** For \(\alpha=1\) the exclusion rule creates a natural order in the system. Indeed, starting the dynamics from a configuration \(\eta\) and enumerating the particles from left to right, such order lasts for every \(t\geq 0\). This implies that, the leftmost particle of \(\eta\) will remain the leftmost particle of the system until it is absorbed. This is the main idea behind the next argument. Given \((x,y)\in\Lambda_{N}^{e,f}\times\Lambda_{N}\) with \(x<y\), then \(\mathcal{P}_{(x,y)}\big{[}\mathcal{X}_{tN^{2}}^{i}\notin\partial V_{N}\big{]}\) represents the probability that, at time \(tN^{2}\), none of the two particles in the bulk were absorbed, knowing that one started close to the boundary, at the site \(x\in\Lambda_{N}^{e,f}\). Roughly speaking, since \(x<y\), if we track the movements, up to time \(tN^{2}\), of the particle that started at \(x\), i.e. the leftmost particle in the bulk, then, if it is absorbed with high probability, i.e. of the order \(1-\frac{\epsilon}{\sqrt{t}}\), then the event \(\{\mathcal{X}_{tN^{2}}^{i}\notin\partial V_{N}\}\) has to have a probability at least of order \(\frac{\epsilon}{\sqrt{t}}\). The advantage of tracking just the leftmost particle on the bulk relies on the fact that we can compare it with a simple random walk, whose absorption probabilities are known. Let us formalize this argument. Recall the definition of \(V_{N}^{a}\) from (4.9). We also define \(\overline{V}_{N}^{a}=V_{N}^{a}\cup\partial V_{N}^{a}\) the closure of \(V_{N}^{a}\). The proof will follow by a sequence of definitions of other processes that can be related with \(\{\mathcal{X}_{tN^{2}}^{i}\;;\;t\geq 0\}\). We will divide our strategy in three steps. **Step 1: Projecting \(\{\mathcal{X}_{t}^{i}\;;\;t\geq 0\}\) on the line** Recall that \(\mathcal{X}_{t}^{i}\cdot V_{N}^{a}\to\mathcal{G}([0,T];\overline{V}_{N}^{a})\) is a process evolving on the triangle \(\overline{V}_{N}^{a}\). 
We can now project this process in \(\overline{\Lambda}_{N}\), in the following way: let \(\overline{\Omega}_{N}:=\{\eta\in\{0,1\}^{\overline{\Lambda}_{N}}\;|\;\eta(0)= 0,\eta(N)=0,\text{ and }\sum_{x\in\Lambda_{N}}\eta(x)=2\}\) the set of initial configurations of the process on the line and define \(\xi_{2}^{2}:\overline{\Omega}_{N}\to\mathcal{G}([0,T];\{0,1\}^{\overline{ \Lambda}_{N}})\) to be such that, for every \((x,y)\in V_{N}^{a}\) setting \(\eta=\eta_{(x,y)}\in\overline{\Omega}_{N}\) with \(\eta(x)=1\) and \(\eta(y)=1\) (and therefore \(\eta(z)=0\) for every \(z\notin\{x,y\}\)), \[\xi_{t}^{2}(\eta_{(x,y)})(z)=\begin{cases}0,\text{ if }z\neq\Pi_{1}\mathcal{X}_{t}^{i} (x,y)\text{ and }z\neq\Pi_{2}\mathcal{X}_{t}^{i}(x,y),\\ 1,\text{ if }z=\Pi_{1}\mathcal{X}_{t}^{i}(x,y)\text{ or }z=\Pi_{2}\mathcal{X}_{t}^{i}(x,y), \end{cases} \tag{4.28}\] where again \(\Pi_{1}\) and \(\Pi_{2}\) are the projection functions on the first and second coordinates, respectively. Since there exists a bijection between \(V_{N}^{a}\) and \(\overline{\Omega}_{N}\), the previous definition completely defines the process \(\xi^{2}\). **Step 2: Construction of a lazy random walk that follows the movements of the leftmost particle** To \(\xi^{2}\), which can be interpreted as a SEP(1) with only two particles and an absorbing boundary, we will associate another process on the line that will be defined as follows: let \(\tilde{\Omega}_{N}:=\{\eta\in\{0,1\}^{\overline{N}_{N}}\mid\eta(0)=\eta(N)=0\), and \(\sum_{x=2}^{N}\eta(x)=1\}\) the set of initial configurations on the line with only one particle that starts on the bulk and define \(\xi^{1}:\tilde{\Omega}_{N}\to\mathbb{D}([0,T];\{0,1\}^{\overline{N}_{N}})\) as, for every \((x,y)\in V_{N}^{a}\) setting \(\eta=\eta_{(x)}\in\tilde{\Omega}_{N}\) to be such that \(\eta(x)=1\) (and therefore \(\eta(z)=0\) for every \(z\neq x\)), then \[\xi^{1}_{\tau}(\eta_{(x)})(z)=\begin{cases}0,\text{ if }z\neq\Pi_{1}\mathbf{ \mathcal{X}}^{i}_{t}(x,y),\\ 1,\text{ if }z=\Pi_{1}\mathbf{\mathcal{X}}^{i}_{t}(x,y),\end{cases} \tag{4.29}\] where \(\Pi_{1}\) is the projection function on the first coordinate. Thus, \(\xi^{1}_{\cdot}\) is the process that follows the left and right movements of \(\mathbf{\mathcal{X}}^{i}\) in \(V_{N}^{a}\), i.e. it follows the particle in the system that starts at \(x\). To define \(\xi^{1}_{\cdot}\) we are using the fact that, as we remarked above, the two particles on the line cannot exchange the order of their positions. We observe that, because of the exclusion rule, if, eventually, the clock of the leftmost particle rings and the jump is suppressed, \(\xi^{1}_{\cdot}\) remains still until the clock of the leftmost particle rings again for an allowed movement. It is clear that \(\xi^{1}_{\cdot}\leq\xi^{2}_{\cdot}\), in the sense that, for every \(z\in\overline{N}_{N}\) and every \(t\in[0,T]\), \(\xi^{1}_{t}(z)\leq\xi^{2}_{\cdot}(z)\). 
Then, given \((x,y)\in\Lambda_{N}^{\epsilon,t}\times\Lambda_{N}\) with \(x<y\), we see that \[\mathcal{P}_{(x,y)}\big{[}\mathbf{\mathcal{X}}^{i}_{tN^{2}}\notin \partial V_{N}\big{]} \leq\mathcal{P}_{(x,y)}\big{[}\text{ the leftmost particle of }\mathbf{\mathcal{X}}^{i}\text{ was not absorbed until time }tN^{2}\big{]}\] \[=\mathcal{P}_{\eta_{(x,y)}}\big{[}(\xi^{2}_{tN^{2}}(\cdot))(0)=0 \,,\,(\xi^{2}_{tN^{2}}(\cdot))(N)=0\big{]}\] \[\leq\mathcal{P}_{\eta_{(x_{1})}}\big{[}(\xi^{1}_{tN^{2}}(\cdot))(0 )=0\big{]}.\] **Step 3: Comparison with a random walk that ignores the exclusion rule of the initial process** Let \(\tilde{\xi}^{1}_{\cdot}\) be the process that follows \(\xi^{1}_{\cdot}\) up to the first time that a jump is suppressed. Here, the process \(\tilde{\xi}^{1}_{\cdot}\) realizes the jump and starts following not the leftmost particle but the rightmost particle until a new jump for \(\xi^{1}_{\cdot}\) was suppressed. Again, \(\tilde{\xi}^{1}_{\cdot}\) realizes the jump returning to follow the leftmost particle, and so on. This new process \(\tilde{\xi}^{1}_{\cdot}\) also satisfies \(\tilde{\xi}^{1}_{\cdot}\leq\xi^{2}_{\cdot}\) and can be seen as the non-lazy version of \(\xi^{1}_{\cdot}\) and that describes a continuous time simple symmetric random walk. Observe that \[\mathcal{P}_{\eta_{(x_{1})}}\big{[}(\xi^{1}_{tN^{2}}(\cdot))(0)=0\big{]}\leq \mathcal{P}_{\eta_{(x_{1})}}\big{[}(\tilde{\xi}^{1}_{tN^{2}}(\cdot))(0)=0\big{]}.\] This is again a consequence of the fact that the two particles on the initial process can not exchange order and so, if the rightmost particle is absorbed at \(x=0\) then for sure the leftmost was already absorbed. Then, since \(0\) and \(N\) are absorbing states, if \(\xi^{1}_{tN^{2}}(\cdot)\) and \(\tilde{\xi}^{1}_{tN^{2}}(\cdot)\) start with the same configuration, at each time \(t\), the point where \(\xi^{1}_{tN^{2}}(\cdot)\) has a non-zero value is always less or equal to the point where \(\tilde{\xi}^{1}_{tN^{2}}(\cdot)\) has a non-zero value. Therefore \(\{(\xi^{1}_{tN^{2}}(\cdot))(0)=0\}\subset\{(\tilde{\xi}^{1}_{tN^{2}}(\cdot))(0 )=0\}\). This implies that \[\mathcal{P}_{(x,y)}\big{[}\mathbf{\mathcal{X}}^{i}_{tN^{2}}\notin\partial V_{N} \big{]}\leq\mathcal{P}_{\eta_{(x_{1})}}\big{[}(\xi^{1}_{tN^{2}}(\cdot))(0)=0 \big{]}\leq\mathcal{P}_{\eta_{(x_{1})}}\big{[}(\tilde{\xi}^{1}_{tN^{2}}(\cdot ))(0)=0\big{]}\leq\mathcal{P}_{\eta_{(x_{1})}}\big{[}\tau_{1}>tN^{2}\big{]},\] where \(\tau_{1}=\inf\{t\geq 0\mid(\tilde{\xi}^{1}_{tN^{2}}(\cdot))(0)=1\}\) represents the first time that \(\tilde{\xi}^{1}_{\cdot}\) hits \(0\). So, since \(x\in\Lambda_{N}^{\epsilon}\) and \(\tilde{\xi}^{1}_{tN^{2}}(\cdot)\) describes a continuous time simple symmetric random walk, we have that, for fixed \(t\), there exists \(\epsilon_{0}>0\) such that, for every \(0<\epsilon\leq\epsilon_{0}\), \(\mathcal{P}_{\eta_{(x_{1})}}\big{[}\tau_{1}>tN^{2}\big{]}\) is of order \(O(\frac{\epsilon}{\sqrt{t}})\). **Part 2: the case \(\alpha\geq 2\)** Clearly in this case the natural ordering is lost, therefore we implement some changes in the previous argument. Recall that we are working with an absorbing SEP(\(\alpha\)) starting with only two particles, then for every pair \(\{x,x+1\}\), for \(x\in\Lambda_{N-1}\), the jump rates \(c_{x,x+1}\) and \(c_{x+1,x}\) can take values: \(\alpha-1\), \(\alpha\) or \(2\alpha\) and, as \(\alpha\) increases, the jump rates increase. 
This means that the larger the jump rate, i.e. the larger the value of \(\alpha\), the shorter the waiting time until a jump occurs. In particular, if \(\alpha_{1}\geq\alpha_{2}>1\), then the hitting time of the boundary \(\partial V_{N}\) for the SEP(\(\alpha_{2}\)) is greater than or equal to the hitting time for SEP(\(\alpha_{1}\)), given that both processes start with the same skeleton of two particles in \(\Lambda_{N}\). This implies that, to complete the proof of (4.26), it is enough to treat the case \(\alpha=2\).

**The case \(\alpha=2\)**

Let \(\mathcal{Z}:V_{N}\to\mathcal{D}([0,T];\overline{V}_{N})\) be the representation of SEP(2) with only two particles in the system. Fix \((x,y)\in V_{N}\), which will represent the starting point of \(\mathcal{Z}\), and, to simplify notation, let us denote \(\mathcal{Z}(x,y)\) simply by \(\mathcal{Z}\). We remark that \(\mathcal{Z}\in\mathcal{D}([0,T];\overline{V}_{N})\), which is a process taking values in \(\overline{V}_{N}\), can be interpreted as \(\mathcal{Z}=\xi(\Gamma(\cdot))\), where \(\xi\) is the skeleton of \(\mathcal{Z}\) and \(\Gamma(\cdot)\) represents the Poisson point process associated with the marked Poisson point process \(N(\cdot)\) of SEP(2) given the initial configuration \((x,y)\). That is, for every \(s\in[0,T]\), \(\Gamma(sN^{2})\) is the number of jumps of the process up to time \(sN^{2}\), which corresponds to counting how many marks, up to time \(sN^{2}\), the marked Poisson point process \(N(\cdot)\) had. Observe that, for every \(t\in[0,T]\), \[\Gamma_{m}(t):=\int_{0}^{t}dN^{m}\leq\Gamma(t)\leq\Gamma_{M}(t):=\int_{0}^{t}dN^{M},\] where \(N^{m}\) is a Poisson process with parameter \(1\) and \(N^{M}\) is a Poisson process with parameter \(4\). The choice of these parameters is due to the fact that, for every \(x\in\Lambda_{N-1}\), the jump rates \(c^{i}_{x,x+1}\) and \(c^{i}_{x+1,x}\) take only three possible values: \(1\), \(2\) or \(4\). So we choose the parameter of \(N^{m}\) as \(\min\{1,2,4\}\) and that of \(N^{M}\) as \(\max\{1,2,4\}\). Then, denoting \[\tau_{2}=\inf\{t\geq 0\mid\mathcal{Z}_{t}\in\partial V_{N}\},\] \[\tau_{m}=\inf\{t\geq 0\mid\xi(\Gamma_{m}(t))\in\partial V_{N}^{2}\},\] \[\tau_{M}=\inf\{t\geq 0\mid\xi(\Gamma_{M}(t))\in\partial V_{N}^{4}\},\] we get that \[\mathcal{P}_{(x,y)}\big{[}\tau_{m}>tN^{2}\big{]}\leq\mathcal{P}_{(x,y)}\big{[}\tau_{2}>tN^{2}\big{]}\leq\mathcal{P}_{(x,y)}\big{[}\tau_{M}>tN^{2}\big{]}. \tag{4.30}\] Since the processes \(\xi(\Gamma_{m}(\cdot))\) and \(\xi(\Gamma_{M}(\cdot))\) have Poisson clocks with a parameter which is uniform on the triangle \(V_{N}\), they can now be interpreted as continuous-time simple symmetric random walks. Tracking the movements of the particle that started at site \(x\) and, every time the particles meet on top of each other, switching to follow the particle that jumps off the top of the other, we can deduce that, for fixed \(t\), there exists \(\epsilon_{0}>0\) such that, for every \(0<\epsilon\leq\epsilon_{0}\), \(\mathcal{P}_{(x,y)}\big{[}\tau_{m}>tN^{2}\big{]}\) and \(\mathcal{P}_{(x,y)}\big{[}\tau_{M}>tN^{2}\big{]}\) are both of order \(\frac{\epsilon}{\sqrt{t}}\). Remark that, since we are considering a bounded time interval \([0,tN^{2}]\), the number of meetings at which the two particles sit on top of each other is finite, so the number of times we switch the particle being followed is finite, guaranteeing that this tracking procedure is well defined.
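The order \(\epsilon/\sqrt{t}\) invoked in the last two steps is the survival probability of a simple symmetric random walk started at distance of order \(\epsilon N\) from an absorbing site and observed at the diffusive time \(tN^{2}\). The following minimal Monte Carlo sketch checks this scaling, using a discrete-time simple random walk absorbed at \(0\) as a crude stand-in for the continuous-time walks above (the time discretization and the neglect of the far boundary are simplifications of the sketch).

```python
import numpy as np

rng = np.random.default_rng(1)

def survival_probability(eps, t, N, n_samples=4000, batch=500):
    """Monte Carlo estimate of P[tau_0 > t*N^2] for a discrete-time simple
    symmetric random walk started at x0 ~ eps*N and absorbed at 0
    (crude stand-in for the continuous-time walk; far boundary ignored)."""
    n_steps = int(t * N ** 2)
    x0 = max(1, int(eps * N))
    alive = 0
    for start in range(0, n_samples, batch):
        m = min(batch, n_samples - start)
        steps = 2 * rng.integers(0, 2, size=(m, n_steps)) - 1
        paths = x0 + np.cumsum(steps, axis=1)
        alive += int(np.sum(paths.min(axis=1) > 0))
    return alive / n_samples

N, t = 100, 1.0
for eps in (0.05, 0.1, 0.2):
    p = survival_probability(eps, t, N)
    print(f"eps = {eps:.2f}:  P[tau_0 > t N^2] ~ {p:.3f},  "
          f"ratio p/(eps/sqrt(t)) = {p / (eps / np.sqrt(t)):.2f}")
```

For small \(\epsilon\), the Brownian approximation predicts that the printed ratio should be approximately \(\sqrt{2/\pi}\approx 0.8\), confirming the \(\epsilon/\sqrt{t}\) order.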
From (4.30), we conclude that \(\mathcal{P}_{(x,y)}\big{[}\tau_{2}>tN^{2}\big{]}\) is also of order \(\frac{e}{\sqrt{t}}\). ### Proof of Lemma 4.3 Developing the square in the expectation, using the symmetry of the integrating function on the square and applying Fubini's theorem, we get \[\mathbb{E}_{\mu^{N}}\bigg{[}\left(\int_{s}^{t}\tilde{\eta}_{sN^{2}}(x)ds \right)^{2}\bigg{]}=2\int_{s}^{t}\int_{s}^{r}\varphi_{\nu,r}^{N}(x,x)d\nu dr, \tag{4.31}\] where, for \(x,y\in\Lambda_{N}\), \[\varphi_{\nu,r}^{N}(x,y)=\mathbb{E}_{\mu^{N}}[\tilde{\eta}_{sN^{2}}(x)\tilde{ \eta}_{sN^{2}}(y)]. \tag{4.32}\] Let us fix \(\nu\in[s,t]\) and \(x\in\Lambda_{N}\). For every \(r\geq v\) and \(y\in\Lambda_{N}\), a simple computation shows that \(\Psi_{r}^{N}(y):=\varphi_{\nu,r}^{N}(x,y)\) is solution to \[\begin{cases}\partial_{r}\Psi_{r}^{N}(y)=N^{2}\Delta_{N}^{i}\Psi_{r}^{N}(y), \text{ if }y\in\Lambda_{N},\\ \Psi_{r}^{N}(y)=\varphi_{\nu}^{N}(x,y),\text{ if }y\in\Lambda_{N},\\ \Psi_{r}^{N}(0)=\Psi_{r}^{N}(N)=0,\end{cases} \tag{4.33}\] where, for every \(f:\overline{\Lambda}_{N}\to\mathbb{R}\) such that \(f(0)=f(N)=0\) \[N^{2}\Delta_{N}^{i}f(y)=\begin{cases}aN^{2}[f(y+1)+f(y-1)-2f(y)],\text{ if }y\notin\{1,N-1\},\\ \frac{aN^{2}}{N^{N}}N^{2}[f(0)-f(1)]+aN^{2}[f(2)-f(1)],\text{ if }y=1,\\ \frac{aN^{2}}{N^{N}}N^{2}[f(N)-f(N-1)]+aN^{2}[f(N-2)-f(N-1)],\text{ if }y=N-1.\end{cases} \tag{4.34}\] Then the solution of the previous equation can be written in terms of the fundamental solution \(P_{r}^{N,\theta}(x,y)\) of the initial value problem (4.33) as: \[\Psi_{r}^{N}(y)=\sum_{z=1}^{N-1}p_{r-\nu}^{N,\theta}(y,z)\mathbb{E}_{\mu^{N}}[ \bar{\eta}_{\nu N^{2}}(y)\bar{\eta}_{\nu N^{2}}(z)]\,. \tag{4.35}\] Plugging last identity in (4.31) and using (4.3) and the fact that the occupation variables are bounded, we obtain \[\mathbb{E}_{\mu^{N}}\!\left[\left(\int_{s}^{t}\bar{\eta}_{\nu N^{2} }(x)dr\right)^{\!2}\right]\!\lesssim\!\int_{s}^{t}\int_{s}^{r}\left\{P_{r-\nu} ^{N,\theta}(x,x)+\sum_{\begin{subarray}{c}z=1\\ z\neq x\end{subarray}}^{N-1}p_{r-\nu}^{N,\theta}(x,z)R_{N}^{\theta}\right\}d \nu dr\] \[\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \lesssim\int_{s}^{t}\int_{s}^{r}\left\{p_{r-\nu}^{N,\theta}(x,x)+R_{N}^{ \theta}\right\}d\nu dr, \tag{4.36}\] where above we used the fact that \(\sum_{\begin{subarray}{c}z\in\Lambda_{N}\\ z\neq x\end{subarray}}P_{r}^{N,\theta}(x,z)\) is (uniformly in time) bounded by one. To finish the proof we just need to estimate \(\int_{s}^{t}\int_{s}^{r}P_{r-\nu}^{N,\theta}(x,x)d\nu dr\) for \(x\in\{1,N-1\}\). Let us define \(\tilde{P}_{r-\nu}^{N,\theta}(x,y)\) the fundamental solution of (4.33) when \(\lambda^{\ell}=\lambda^{r}=1\). 
Remark that \[f_{r-\nu}^{N,\theta}(x,y):=P_{r-\nu}^{N,\theta}(x,y)-\tilde{P}_{r-\nu}^{N, \theta}(x,y) \tag{4.37}\] is the fundamental solution to \[\begin{cases}\partial_{x}g_{x,y}^{N,\theta}(x,y)=N^{2}\Delta_{N}^{i}g_{x,\nu }^{N,\theta}(x,y)+N^{2}K^{N,\theta}\tilde{P}_{r-\nu}^{N,\theta}(x,y),\text{ if }y \in\Lambda_{N},\\ g_{x,\nu}^{N,\theta}(x,y)=0,\text{ if }y\in\Lambda_{N},\\ g_{x,\nu}^{N,\theta}(x,0)=g_{x,\nu}^{N,\theta}(x,N)=0,\end{cases} \tag{4.38}\] where \[K^{N,\theta}\tilde{P}_{r-\nu}^{N,\theta}(x,y):=-\frac{\alpha(1-\lambda^{\ell })}{N^{\theta}}\tilde{P}_{r-\nu}^{N,\theta}(1,y)\mathbb{I}(x=1)-\frac{\alpha (1-\lambda^{r})}{N^{\theta}}\tilde{P}_{r-\nu}^{N,\theta}(x,N-1)\mathbb{I}(y=N -1).\] Thus, \(\tilde{P}_{r-\nu}^{N,\theta}(x,y)\) is a probability and since \(\lambda^{\ell},\lambda^{r}\leq 1\), then \(K^{N,\theta}\tilde{P}_{r-\nu}^{N,\theta}(x,y)\leq 0\), and so, by the Maximum Principle, Theorem A.3, we obtain \[f_{r-\nu}^{N,\theta}(x,y)\leq 0\quad\Longleftrightarrow\quad P_{r-\nu}^{N, \theta}(x,y)\leq\tilde{P}_{r-\nu}^{N,\theta}(x,y). \tag{4.39}\] Using Proposition 4.7 presented in the next section we have that, for every \(t\in[0,T]\) and \(x\in\Lambda_{N}\) \[P_{t}^{N,\theta}(x,x)\leq\tilde{P}_{t}^{N,0}(x,x)+\left(\frac{N^{\theta}}{ \lambda^{\ell}}-1\right)\tilde{P}_{t}^{N,0}(1,x)+\left(\frac{N^{\theta}}{ \lambda^{r}}-1\right)\tilde{P}_{t}^{N,0}(N-1,x),\quad\text{if }\theta\geq 0\] and \[P_{t}^{N,\theta}(x,x)\leq\tilde{P}_{t}^{N,0}(x,x),\quad\text{if }\theta<0\,.\] Moreover, a simple computation similar to Lemma 4.3 of [1], relying in a comparison to the case \(\theta=0\) and \(\lambda^{l}=\lambda^{r}=\alpha\), shows that for \(x\in\{1,N-1\}\) \[\int_{s}^{t}\int_{s}^{r}\tilde{P}_{r-\nu}^{N,0}(x,1)d\nu dr\lesssim\frac{|t-s| }{N^{2}}\] and by symmetry the same is true for \(\int_{s}^{t}\int_{s}^{r}\tilde{P}_{r-\nu}^{N,0}(x,N-1)d\nu dr\). From this we get that \[\mathbb{E}_{\mu^{N}}\!\left[\left(\int_{s}^{t}\bar{\eta}_{\nu N^{2}}(x)dr \right)^{\!2}\right]\lesssim\frac{N^{\theta}}{N^{2}}|t-s|+(t-s)^{2}R_{N}^{ \theta}\,.\] From the definitions of \(R_{N}^{\theta}\) in (4.3) the proof of (4.5) ends. To conclude (4.6) we only have to observe that, by the definition of \(d_{N}^{\theta}\), (4.5) implies that \[\mathbb{E}_{\mu^{N}}\!\left[\left(\int_{s}^{t}d_{N}^{\theta}\bar{\eta}_{\nu N^{ 2}}(x)dr\right)^{\!2}\right]\lesssim|t-s|\begin{cases}N^{\theta-1}\text{ if }\theta<1\\ N^{1-\theta}\text{ if }\theta>1\end{cases}\quad+\ (t-s)^{2}(d_{N}^{\theta})^{2}R_{N}^{ \theta}\,. \tag{4.40}\] Since \((d_{N}^{\theta})^{2}R_{N}^{\theta}=N^{2(1-\theta)}\mathbb{1}(1<\theta)+\frac{N^{ \theta}}{N}\mathbb{1}(0\leq\theta\leq 1)+N^{\theta}\mathbb{1}(-1<\theta<0)+ \frac{1}{N}\mathbb{1}(\theta\leq-1)\), (4.6) follows. On the other hand, (4.4) follows once we prove that \[\int_{s}^{t}\int_{s}^{r}(d_{N}^{\theta})^{2}N^{\theta}P_{r\to v}^{N,0}(x,1)dvdr \lesssim|t-s|^{1+\delta_{\theta}},\] where \(\mathcal{G}_{\theta}\) is the same as in the statement of the lemma. To obtain this, namely the analogous of equation (5.4) of [17], we can simply repeat the argument used in Section 5.2 of [17]. To this aim we remark that \(\tilde{P}_{r\to v}^{N,0}(x,1)=P_{\alpha(r-)}^{1,N,0}(x,1)\), where \(P_{s}^{1,N,0}(x,y)\) is the unique solution of the initial value problem (5.4) of [17] taking \(\theta=0\), i.e. 
fixed \(x\in\Lambda_{N}\), we have \[\begin{cases}\hat{\alpha}_{t}P_{t}^{1,N,0}(x,y)=N^{2}\Delta_{N}^{1,1}P_{t}^{1,N,0}(x,y),\quad y\in\Lambda_{N},t>0,\\ P_{t}^{1,N,0}(x,0)=P_{t}^{1,N,0}(x,N)=0,\quad t>0,\\ P_{0}^{1,N,0}(x,y)=\delta_{0}(x-y),\quad y\in\Lambda_{N},\end{cases}\] where \(\Delta_{N}^{1,t}\) coincide with the operator \(\Delta_{N}^{i}\) when taking \(\alpha=1=\lambda^{t}=\lambda^{r}\) and \(\delta_{0}(x)=1\) if \(x=0\), otherwise it is equal to zero. The equality follows simply because they solve the same initial value problem, whose solution is unique. ### Proof of Lemma 4.5 Recall that for \(u\in[0,1]\) we defined \(\iota_{\epsilon}^{0}(u):=\epsilon^{-1}\mathbb{1}_{\{0,\epsilon\}}(u)\) and \(\iota_{\epsilon}^{1}(u):=\epsilon^{-1}\mathbb{1}_{\{1-\epsilon,1\}}(u)\). Here we will only give the details for the case \(j=0\) since, for \(j=1\), the proof is analogous. By expanding the square, using Fubini's Theorem and the definition of the density field \(Y_{s}^{N}\), we obtain \[\mathbb{E}_{\mu_{N}}\Bigg{[}\Bigg{(}\int_{0}^{t}Y_{s}^{N}(\iota_{\epsilon}^{0 })ds\Bigg{)}^{2}\Bigg{]}=\frac{2}{e^{2}N}\sum_{x,y\in\Lambda_{N}^{s,t}}\int_ {0}^{t}\int_{0}^{s}\varphi_{\nu,s}^{N}(x,y)dvds,\] where \(\varphi_{\nu,s}^{N}(x,y)\) was defined in (4.32). Using the identity (4.35), last display is equal to \[\frac{2}{\epsilon^{2}N}\sum_{x\in\Lambda_{N}^{s,t}}\int_{0}^{t} \int_{0}^{s}\varphi_{\nu,s}^{N}(x,x)dvds+\frac{2}{\epsilon^{2}N}\sum_{ \begin{subarray}{c}x,y\in\Lambda_{N}^{s,t}\\ y\neq x\end{subarray}}\int_{0}^{t}\int_{0}^{s}\varphi_{\nu,s}^{N}(x,y)dvds\] \[= \frac{2}{\epsilon^{2}N}\sum_{x\in\Lambda_{N}^{s,t}}\int_{0}^{t} \int_{0}^{s}P_{s\to v}^{N,\theta}(x,x)\mathbb{E}_{\mu^{N}}[(\tilde{\eta}_{\nu N ^{2}}(x))^{2}]dvds+\frac{2}{\epsilon^{2}N}\sum_{x\in\Lambda_{N}^{s,t}}\int_{0 }^{t}\int_{0}^{s}\sum_{\begin{subarray}{c}y\in\Lambda_{N}\\ z\neq x\end{subarray}}P_{s\to v}^{N,\theta}(x,z)\varphi_{\nu}^{N}(z,x)dvds \tag{4.41}\] \[+ \frac{2}{\epsilon^{2}N}\sum_{\begin{subarray}{c}x,y\in\Lambda_{N} ^{s,t}\\ y\neq x\end{subarray}}\int_{0}^{t}\int_{0}^{s}P_{s\to v}^{N,\theta}(x,y) \mathbb{E}_{\mu^{N}}[(\tilde{\eta}_{\nu N^{2}}(y))^{2}]dvds+\frac{2}{\epsilon^ {2}N}\sum_{\begin{subarray}{c}x,y\in\Lambda_{N}^{s,t}\\ y\neq x\end{subarray}}\int_{0}^{t}\int_{0}^{t}\sum_{\begin{subarray}{c}y\in \Lambda_{N}\\ z\neq y\end{subarray}}P_{s\to v}^{N,\theta}(x,z)\varphi_{\nu}^{N}(z,y)dvds. \tag{4.42}\] We remark that, for every \(x\in\Lambda_{N}\), \(\sum_{\begin{subarray}{c}z\in\Lambda_{N}\\ z\neq x\end{subarray}}P_{s\to v}^{N,\theta}(x,z)\leq 1\). Using (4.2), we can bound the rightmost term in (4.41) by \[\frac{2}{N\epsilon^{2}}\left|\sum_{x\in\Lambda_{N}^{s,t}}\int_{0}^{t}\int_{0}^ {s}\sum_{\begin{subarray}{c}z\in\Lambda_{N}\\ z\neq x\end{subarray}}P_{s\to v}^{N,\theta}(x,z)\varphi_{\nu}^{N}(z,x)dvds \right|\leq\frac{2t^{2}}{\epsilon}\sup_{v\in[0,T]}\max_{\begin{subarray}{c}x, z\in\Gamma_{N}\\ z\neq x\end{subarray}}|\varphi_{\nu}^{N}(x,z)|\lesssim\frac{t^{2}}{\epsilon N},\] which goes to zero when taking \(N\) to infinity. 
Moreover, using (4.7), we can bound the rightmost term of (4.42) by \[\frac{2}{N\epsilon^{2}}\left|\sum_{\begin{subarray}{c}x,y\in\Lambda _{N}^{\alpha,l}\\ y\neq x\end{subarray}}\int_{0}^{t}\int_{0}^{s}\sum_{\begin{subarray}{c}x\in \Lambda_{N}\\ x\neq y\end{subarray}}P_{x\sim y}^{N,\theta}(x,z)\varphi_{v}^{N}(z,y)dvds\right| \lesssim N\int_{0}^{t}\int_{0}^{s}\max_{\begin{subarray}{c}(x,y) \in\Lambda_{N}\times\Lambda_{N}^{\alpha,l}\\ x\neq y\end{subarray}}|\varphi_{v}^{N}(z,y)|dvds\] \[\lesssim\epsilon\int_{0}^{t}\int_{0}^{s}\left(1+\frac{1}{\sqrt{v }}\right)dvds+o\left(\frac{1}{N}\right) \tag{4.43}\] \[\lesssim C_{t}\epsilon+o\left(\frac{1}{N}\right),\] where \(C_{t}\) is a constant that depends on \(t\). Since, in the last bound, the first term is uniformly bounded in \(N\), this term will only go to zero when taking \(\epsilon\) to zero. For the remaining terms, since the occupation variables are bounded for every \(x\in\Lambda_{N}\), we can bound the first term in (4.41) and (4.42) by \[\frac{2}{N\epsilon^{2}}\left|\int_{0}^{t}\int_{0}^{s}\sum_{x\in \Lambda_{N}^{\alpha,l}}P_{x\sim y}^{N,\theta}(x,x)\mathbb{E}_{\mu^{N}}[(\bar{ \eta}_{\nu N^{2}}(x))^{2}]dvds\right|\lesssim\frac{1}{N\epsilon^{2}}\sum_{ \begin{subarray}{c}x\in\Lambda_{N}^{\alpha,l}\\ y\neq x\end{subarray}}\int_{0}^{t}\int_{0}^{s}P_{x\sim y}^{N,\theta}(x,x)dvds \tag{4.44}\] and \[\frac{2}{N\epsilon^{2}}\left|\int_{0}^{t}\int_{0}^{s}\sum_{ \begin{subarray}{c}x,y\in\Lambda_{N}^{\alpha,l}\\ y\neq x\end{subarray}}P_{x\sim y}^{N,\theta}(x,y)\mathbb{E}_{\mu^{N}}[(\bar{ \eta}_{\nu N^{2}}(y))^{2}]dvds\right|\lesssim\frac{1}{N\epsilon^{2}}\sum_{ \begin{subarray}{c}x,y\in\Lambda_{N}^{\alpha,l}\\ y\neq x\end{subarray}}\int_{0}^{t}\int_{0}^{s}P_{x\sim y}^{N,\theta}(x,y)dvds, \tag{4.45}\] respectively. The idea now is to estimate \(P_{t}^{N,\theta}(x,y)\) using \(\tilde{P}_{t}^{N,0}(x,y)\), where \(\tilde{P}_{t}^{N,0}(x,y)\) represents \(\mathbb{P}[\mathcal{X}_{tN^{2}}^{i}=y|\mathcal{X}_{0}^{i}=x]\), where \(\mathcal{X}_{tN^{2}}^{i}\) is the random walk defined in point \(1\). in the begining of Section 4.1 in the case we choose \(\theta=0\) and \(\lambda^{\ell}=\lambda^{\varepsilon}=\alpha\). To do this, we will use the maximum principles of Appendix A. Inspired by the bound for \(P_{t}^{N,\theta}(x,y)\) proved for \(\theta\geq 0\) in Lemma 4.2 of [17], we will show the following estimates. **Proposition 4.7**.: _Let \(\{\mathcal{X}_{tN^{2}}^{i}\ ;\ t\geq 0\}\) be the random walk on \(\Lambda_{N}\) with infinitesimal generator \(N^{2}\Delta_{N}^{i}\) which was defined in (4.34) and let \(P_{t}^{N,\theta}(x,y)\) be the transition probability for this random walk, i.e. for every \((x,y)\in\overline{V}_{N}\),_ \[P_{t}^{N,\theta}(x,y)=\mathbb{P}_{x}[\mathcal{X}_{tN^{2}}^{i}=y]=\mathbb{P}[ \mathcal{X}_{tN^{2}}^{i}=y|\mathcal{X}_{0}^{i}=x],\] _which coincides with the fundamental solution of (4.33). Denote by \(\tilde{P}_{t}^{N,\theta}\) the transition probability of the random walk \(\{\mathcal{X}_{tN^{2}}^{i}\ |\ t\geq 0\}\) when we take \(\theta=0\) and \(\lambda^{\ell}=\lambda^{\varepsilon}=\alpha\). 
Then, for every \(t\in[0,T]\) and \((x,y)\in V_{N}\), for \(\theta\geq 0\),_ \[P_{t}^{N,\theta}(x,y)\leq\tilde{P}_{t}^{N,0}(x,y)+\left(\frac{N^{\theta}}{ \lambda^{\ell}}-1\right)\tilde{P}_{t}^{N,0}(1,y)+\left(\frac{N^{\theta}}{ \lambda^{\varepsilon}}-1\right)\tilde{P}_{t}^{N,0}(N-1,y)],\] _and, for \(\theta<0\),_ \[P_{t}^{N,\theta}(x,y)\leq\tilde{P}_{t}^{N,0}(x,y).\] Remark that Proposition 4.7 is valid for every \(\alpha\in\mathbb{N}\), extending what was known for the case \(\alpha=1\) and \(\theta\geq 0\) to the case \(\alpha\geq 2\) with \(\theta\geq 0\) as well as the case \(\theta<0\) for all \(\alpha\in\mathbb{N}\). Proof of Proposition 4.7.: Let \(\theta\in\mathbb{R}\) and fix \(t_{0}\in[0,T]\) and \(y_{0}\in\Lambda_{N}\). Define the function \(h_{t_{0},y_{0}}^{N,\theta}:\overline{V}_{N}\to\mathbb{R}\) to be such that, for \(x\in\Lambda_{N}\), \[h_{t_{0},y_{0}}^{N,\theta}(x)=P_{t_{0}}^{N,\theta}(x,y_{0})-\tilde{P}_{t_{0}}^ {N,0}(x,y_{0}),\] and at the boundary we define it as \[\begin{cases}h_{t_{0},y_{0}}^{N,\theta}(0)=\left(\frac{N^{\theta}}{\lambda^{ \varepsilon}}-1\right)\tilde{P}_{t_{0}}^{N,0}(1,y_{0})\text{ and }h_{t_{0},y_{0}}^{N,\theta}(N)=\left(\frac{N^{\theta}}{\lambda^{ \varepsilon}}-1\right)\tilde{P}_{t_{0}}^{N,0}(N-1,y_{0})\text{ if }\theta\geq 0\\ h_{t_{0},y_{0}}^{N,\theta}(0)=h_{t_{0},y_{0}}^{N,\theta}(N)=0\text{ if }\theta<0.\end{cases}\] Using the fact that, for every \(t\in[0,T]\) and \((x,y)\in V_{N}\), \(P_{t}^{N,\theta}(x,y)\) and \(\tilde{P}_{t}^{N,0}(x,y)\) are fundamental solutions of (4.33) for \(\theta\in\mathbb{R}\) and for \(\theta=0\) and \(\lambda^{\ell}=\lambda^{\ell}=\alpha\), respectively, we get \[0=\tilde{\sigma}_{t}h_{t_{0},y_{0}}^{N,\theta}(x) =N^{2}\Delta_{N}^{i}h_{t_{0},y_{0}}^{N,\theta}(x)\] \[+\frac{\alpha N^{2}}{N^{\theta}}\big{[}(N^{\theta}-\lambda^{\ell} )\tilde{P}_{t}^{N,0}(1,y_{0})\mathbb{I}(x=1)+(N^{\theta}-\lambda^{\ell})\tilde {P}_{t}^{N,0}(N-1,y_{0})\mathbb{I}(x=N-1)\big{]}\,\mathbb{I}(\theta<0).\] Since, for \(\theta<0\), \(\frac{N^{2}(N^{\theta}-\lambda)}{N^{\theta}}\leq 0\) where \(j\in\{\ell,r\}\), then, by the maximum principle, Theorem A.2, if \(\theta\geq 0\) and Theorem A.1 if \(\theta<0\), for every \(x\in\overline{V}_{N}\), we have that, for every \(\theta\in\mathbb{R}\), \[h_{t_{0},y_{0}}^{N,\theta}(x)\leq\max\{h_{t_{0},y_{0}}^{N,\theta}(0),h_{t_{0 },y_{0}}^{N,\theta}(N)\}.\] This then implies that, for every \(t\in[0,T]\) and \((x,y)\in V_{N}\), \[P_{t}^{N,\theta}(x,y)\leq\tilde{P}_{t}^{N,0}(x,y)+\left(\frac{N^{\theta}}{ \lambda^{\ell}}-1\right)\tilde{P}_{t}^{N}(1,y_{0})+\left(\frac{N^{\theta}}{ \lambda^{\ell}}-1\right)\tilde{P}_{t}^{N,0}(N-1,y),\quad\text{for $\theta\geq 0$}\] and \[P_{t}^{N,\theta}(x,y)\leq\tilde{P}_{t}^{N,0}(x,y),\quad\text{for $\theta<0$}\,,\] as we wanted to show. We conclude this section with the following auxiliary results. **Lemma 4.8**.: _Let \(\theta<1\) then the following holds:_ 1. _For every_ \(\epsilon>0\) _and_ \(t\in[0,T]\)__ \[\limsup_{N\to+\infty}\sum_{x\in\Lambda_{N}^{\epsilon,d}}\int_{0}^{t}\int_{0}^ {s}\tilde{P}_{s\to v}^{N,0}(x,x)dvds\lesssim te\,.\] (4.46) 2. _For every_ \(\epsilon>0\) _and_ \(t\in[0,T]\)__ \[\limsup_{N\to+\infty}\sum_{x\in\Lambda_{N}^{\epsilon,d}}\int_{0}^{t}\int_{0}^ {s}[\tilde{P}_{s\to v}^{N,0}(x,1)+\tilde{P}_{s\to v}^{N,0}(x,N-1)]dvds\lesssim te.\] (4.47) 3. 
_For every_ \(p\geq 1\) _and_ \(t\in[0,T]\)__ \[\limsup_{\epsilon\to 0}\limsup_{N\to+\infty}\frac{1}{e^{pN}}\sum_{x\in\Lambda_{N}^{ \epsilon,d}}\int_{0}^{t}\int_{0}^{s}P_{s\to v}^{N,\theta}(x,x)dvds=0.\] (4.48) 4. _For any_ \(t\in[0,T]\)__ \[\limsup_{\epsilon\to 0}\limsup_{N\to+\infty}\frac{1}{e^{2N}}\sum_{\begin{subarray}{ c}x,y\in\Lambda_{N}^{\epsilon,d}\\ y\neq x\end{subarray}}\int_{0}^{t}\int_{0}^{s}\tilde{P}_{s\to v}^{N,0}(x,y)dvds =0.\] (4.49) 5. _For any_ \(t\in[0,T]\)__ \[\limsup_{\epsilon\to 0}\limsup_{N\to+\infty}\frac{N^{\theta}}{\epsilon}\sum_{x\in \Lambda_{N}^{\epsilon,d}}\int_{0}^{t}\int_{0}^{t}\int_{0}^{s}\tilde{P}_{s\to v }^{N,0}(x,1)+\tilde{P}_{s\to v}^{N,0}(x,N-1)dvds=0.\] (4.50) _We also note that the same results hold by replacing \(\Lambda_{N}^{\epsilon,d}\) by \(\Lambda_{N}^{\epsilon,r}\)._ Combining Proposition 4.7 and Lemma 4.8, for any \(t\in[0,T]\) we have that \[\limsup_{\epsilon\to 0}\limsup_{N\to+\infty}\frac{1}{e^{2}}\sum_{ \begin{subarray}{c}x,y\in\Lambda_{N}^{\epsilon,d}\\ y\neq x\end{subarray}}\int_{0}^{t}\int_{0}^{s}P_{s\to v}^{N,\theta}(x,y)dvds \lesssim\lim_{\epsilon\to 0}\limsup_{N\to+\infty}\frac{1}{Ne^{2}}\sum_{ \begin{subarray}{c}x,y\in\Lambda_{N}^{\epsilon,d}\\ y\neq x\end{subarray}}\int_{0}^{t}\int_{0}^{s}\tilde{P}_{s\to v}^{N,0}(x,y)dvds\] \[+\frac{N^{\theta}}{\epsilon}\sum_{y\in\Lambda_{N}^{\epsilon,d}} \int_{0}^{t}\int_{0}^{s}[\tilde{P}_{s\to v}^{N,0}(1,y)+\tilde{P}_{s\to v}^{N,0 }(N-1,y)]dvds\mathbb{I}(0\leq\theta<1)=0\] and the same holds for \(\Lambda_{N}^{e,r}\). With this we complete the proof of Lemma 4.5. Indeed, the previous observation together with equation (4.48) imply that the terms on the right-hand side of (4.45) and (4.44), respectively, also go to zero when taking the limit as \(N\to+\infty\) and then as \(\epsilon\to 0\), from which the proof is complete. Proof of Lemma 4.8.: To show all the estimates above recall that for every \(t\in[0,T]\) and \(x,y\in\Lambda_{N}\) we can explicitly write \(\tilde{P}_{t}^{N,0}(x,y)\) via the eigenvalues and eigenfunctions of the operator \(N^{2}\Delta_{N}^{i}\), see also Lemma 4.3. of [17]. Indeed, \[\tilde{P}_{t}^{N,0}(x,y)=\sum_{i\in\Lambda_{N}}e^{-a\Delta_{i}^{N}t}v_{l}^{N}(x )v_{l}^{N}(y), \tag{4.51}\] where for every \(x,l\in\Lambda_{N}\), \(v_{l}^{N}(x)=\sqrt{\frac{\pi}{N}}\sin\left(\frac{\pi lx}{N}\right)\) and \(\lambda_{l}^{N}=4N^{2}\sin^{2}\left(\frac{\pi l}{2N}\right)\) are respectively the eigenfunctions and eigenvalues of \(N^{2}\Delta_{N}^{i,t}\). We start with item \(i\). For \(x=y\) after two times integration of (4.51) we get \[\sum_{x\in\Lambda_{N}^{i,t}}\int_{0}^{t}\int_{0}^{s}\tilde{P}_{s\to y}^{N,0}( x,x)d\nu ds=\sum_{l\in\Lambda_{N}}t^{2}\psi(\alpha\lambda_{l}^{N}t)\sum_{x \in\Lambda_{N}^{i,t}}\frac{2}{N}\sin^{2}\left(\frac{\pi lx}{N}\right),\] where \(\psi(u):=\frac{e^{-u-1+q}}{u^{2}}\). 
We observe that, for every \(u\geq 0\), \(|\psi(u)|\leq\min\{1,\frac{1}{u}\}\), then \[\sum_{l\in\Lambda_{N}}t^{2}\psi(\alpha\lambda_{l}^{N}t)\sum_{x\in\Lambda_{N}^ {i,t}}\frac{2}{N}\sin^{2}\left(\frac{\pi lx}{N}\right)\lesssim\sum_{l\in \Lambda_{N}}\frac{2t\epsilon}{\pi^{2}a^{2}}\frac{\frac{\pi^{2}l^{2}}{4N^{2}} }{\sin^{2}\left(\frac{\pi l}{2N}\right)}\,.\] Noticing that \(\frac{x^{2}}{4sin^{2}(x)}\) is bounded for \(0\leq x\leq 2\) we finally have that \[\limsup_{N\to+\infty}\sum_{x\in\Lambda_{N}^{i,t}}\int_{0}^{t}\int_{0}^{s} \tilde{P}_{s\to y}^{N,0}(x,x)d\nu ds\lesssim\limsup_{N\to+\infty}\sum_{l\in \Lambda_{N}}\frac{2t\epsilon}{\pi^{2}a^{l2}}\lesssim t\epsilon\.\] Now we prove item _ii._ Again we start with the expression (4.51) for \(y=1\) and \(y=N-1\). We observe that, for every \(t\in[0,T]\) and \(x\in\Lambda_{N}\), since \(\sin\left(\frac{\pi l(N-1)}{N}\right)=-\cos(\pi l)\sin\left(\frac{\pi l}{N}\right)\), then \[\tilde{P}_{t}^{N,0}(x,1)+\tilde{P}_{t}^{N,0}(x,N-1)=\sum_{l\in\Lambda_{N}} \frac{2[1-\cos(\pi l)]}{N}e^{-a\lambda_{l}^{N}t}\sin\left(\frac{\pi lx}{N} \right)\sin\left(\frac{\pi l}{N}\right).\] Thus, integrating twice in time both sides above, we get \[\sum_{x\in\Lambda_{N}^{i,t}}\int_{0}^{t}\int_{0}^{s}\tilde{P}_{s\to y}^{N,0}( x,1)+\tilde{P}_{s\to y}^{N,0}(x,N-1)d\nu ds=\sum_{l\in\Lambda_{N}}t^{2}\psi( \alpha\lambda_{l}^{N}t)2[1-\cos(\pi l)]\sin\left(\frac{\pi l}{N}\right)\sum_{ x\in\Lambda_{N}^{i,t}}\frac{1}{N}\sin\left(\frac{\pi lx}{N}\right).\] As before, using the expression of \(\lambda_{l}^{N}\) we can bound the left-hand side of the last display by \[\sum_{l\in\Lambda_{N}}\frac{4t\epsilon}{\pi^{2}l^{2}}\frac{\frac{\pi^{2}l}{4N^ {2}}}{\sin^{2}\left(\frac{\pi l}{2N}\right)}.\] Using again that \(\frac{x^{2}}{4sin^{2}(x)}\) is bounded for \(0\leq x\leq 2\) we conclude that \[\limsup_{N\to+\infty}\sum_{x\in\Lambda_{N}^{i,t}}\int_{0}^{t}\int_{0}^{s}[ \tilde{P}_{s\to y}^{N,0}(x,1)+\tilde{P}_{s\to y}^{N,0}(x,N-1)]d\nu ds\lesssim \lim_{N\to+\infty}\sum_{l\in\Lambda_{N}}\frac{4t\epsilon}{\alpha\pi^{2}l^{2}} \lesssim t\epsilon\.\] Now we prove item _iii._ It simply follows from equations (4.46), (4.47) and Proposition 4.7. For every \(p\geq 1\) we can conclude that \[\lim_{\epsilon\to 0}\lim_{N\to+\infty}\frac{1}{ePN}\sum_{x\in\Lambda_{N}^{i,t}} \int_{0}^{t}\int_{0}^{s}P_{s\to y}^{N,\theta}(x,x)d\nu ds\lesssim\begin{cases} \lim_{\epsilon\to 0}\lim_{N\to+\infty}\frac{t[1+N^{\theta}]}{\epsilon e^{-1}N}&\text{ if }0\leq \theta<1\\ \lim_{\epsilon\to 0}\lim_{N\to+\infty}\frac{t[1+N^{\theta}]}{\epsilon e^{-1}N}& \text{ if }\theta<0\end{cases}=0. \tag{4.52}\] Now we prove item \(iv\). 
A simple computation shows that \[\frac{1}{N\epsilon^{2}}\sum_{\begin{subarray}{c}x,y=1\\ y\neq x\end{subarray}}^{\epsilon(N-1)}\int_{0}^{t}\int_{0}^{s}\tilde{B}_{i-y}^{N,0}(x,y)\,dvds=\sum_{l=1}^{N-1}\alpha^{2}t^{2}\psi(\alpha\lambda_{l}^{N}t)\sum _{\begin{subarray}{c}x,y=1\\ y\neq x\end{subarray}}^{\epsilon(N-1)}\frac{2}{N^{2}\epsilon^{2}}\sin\left( \frac{\pi lx}{N}\right)\sin\left(\frac{\pi ly}{N}\right).\] Trying to recover a Riemann sum from the right-hand side of the last identity, we can write \[\sum_{\begin{subarray}{c}x,y=1\\ y\neq x\end{subarray}}^{\epsilon(N-1)}\frac{2}{N^{2}}\sin\left(\frac{\pi lx}{ N}\right)\sin\left(\frac{\pi ly}{N}\right) =\int_{0}^{\epsilon}\int_{0}^{\epsilon}\sin\left(\pi lz\right)\sin \left(\pi lw\right)dzdw \tag{4.53}\] \[\sum_{\begin{subarray}{c}x,y=1\\ y\neq x\end{subarray}}^{\epsilon(N-1)}\frac{2}{N^{2}}\sin\left(\frac{\pi lx}{ N}\right)\sin\left(\frac{\pi ly}{N}\right)-\int_{0}^{\epsilon}\int_{0}^{ \epsilon}\sin\left(\pi lz\right)\sin\left(\pi lw\right)dzdw, \tag{4.54}\] and we remark that \[\frac{1}{\epsilon^{2}}\int_{0}^{\epsilon}\int_{0}^{\epsilon}\sin\left(\pi lz \right)\sin\left(\pi lw\right)dzdw=\frac{1}{\epsilon^{2}}\left(\int_{0}^{ \epsilon}\sin\left(\pi lz\right)dz\right)^{2}=\left(\frac{1-\cos\left(\pi l \epsilon\right)}{\pi l\epsilon}\right)^{2}.\] Therefore, \[\frac{1}{\epsilon^{2}}\sum_{l=1}^{N-1}t^{2}\psi(\lambda_{l}^{N}t )\int_{0}^{\epsilon}\int_{0}^{\epsilon}\sin\left(\pi lz\right)\sin\left(\pi lw \right)dzdw\] \[=\sum_{l=1}^{\min\left(N-1,\left(\epsilon\pi^{-1}\right)^{-1} \right)}t^{2}\psi(\lambda_{l}^{N}t)\left(\frac{1-\cos\left(\pi l\epsilon \right)}{\pi l\epsilon}\right)^{2}+\sum_{l=\min\left(\begin{subarray}{c}N-1,\left(\epsilon\pi^{-1}\right)^{-1}\right)}^{N-1}t^{2}\psi(\lambda_{l}^{N}t) \left(\frac{1-\cos\left(\pi l\epsilon\right)}{\pi l\epsilon}\right)^{2}. \tag{4.55}\] For the leftmost term of (4.55): by a third order Taylor expansion of \(\cos\left(\pi l\epsilon\right)\) around zero, the fact that \(x^{p}\leq\sqrt{x}\), for every \(p\geq 1\) and \(x\in[0,1]\), also that \(l\leq\left(\epsilon\pi\right)^{-1}\), i.e. 
\(\pi l\epsilon\leq 1\) and that \(\psi(u)\leq 1/u\), then, for each \(l\) in the above conditions, there exists \(\xi_{l}\in(0,\pi l\epsilon)\), such that \[\sum_{l=1}^{\min\left(N-1,\left(\epsilon\pi^{-1}\right)^{-1} \right)}t^{2}\psi(\lambda_{l}^{N}t)\left(\frac{1-\cos\left(\pi l\epsilon\right) }{\pi l\epsilon}\right)^{2} =\sum_{l=1}^{\min\left(N-1,\left(\epsilon\pi^{-1}\right)^{-1} \right)}t^{2}\psi(\lambda_{l}^{N}t)\left(\frac{\pi l\epsilon}{2}-\cos(\xi_{l} )\frac{(\pi l\epsilon)^{2}}{3!}\right)^{2}\] \[\lesssim\sum_{l=1}^{\min\left(N-1,\left(\epsilon\pi^{-1}\right)^ {-1}\right)}\frac{t}{\lambda_{l}^{N}}\sqrt{\pi l\epsilon}\] \[\lesssim\sqrt{\epsilon}\sum_{l=1}^{N-1}\frac{t}{(\pi l)^{3/2}} \frac{\pi^{2}l^{2}}{4N^{2}\sin^{2}\left(\frac{\pi l}{2N}\right)}\lesssim t \sqrt{\epsilon}.\] For the rightmost term of (4.55), for \(\epsilon>0\) and close to zero, for \(N\in\mathbb{N}\) sufficiently large, we have that \(\min\{N-1,\left(\epsilon\pi\right)^{-1}\}=\left(\epsilon\pi\right)^{-1}\) and therefore \[\sum_{\begin{subarray}{c}l=\left(\epsilon\pi\right)^{-1}\\ l\in\mathbb{N}\end{subarray}}^{N-1}t^{2}\psi(\lambda_{l}^{N}t)\left(\frac{1-1 \cos\left(\pi l\epsilon\right)}{\pi l\epsilon}\right)^{2} \lesssim\sum_{\begin{subarray}{c}l=\left(\epsilon\pi\right)^{-1} \\ l\in\mathbb{N}\end{subarray}}^{N-1}\frac{t}{\lambda_{l}^{N}}\left(\frac{1}{\pi l \epsilon}\right)^{2}=\sum_{\begin{subarray}{c}l=\left(\epsilon\pi\right)^{-1} \\ l\in\mathbb{N}\end{subarray}}^{N-1}\frac{t}{\pi^{4}l^{4}\epsilon^{2}}\underbrace{ \frac{\pi^{2}l^{2}}{4N^{2}\sin^{2}\left(\frac{\pi l}{2N}\right)}}_{\leq 5}\] \[\lesssim\sum_{\begin{subarray}{c}l=\left(\epsilon\pi\right)^{-1} \\ l\in\mathbb{N}\end{subarray}}^{N-1}\frac{t}{\pi^{4}l^{4}\epsilon^{2}}\lesssim( \epsilon\pi)^{3-\delta}\sum_{\begin{subarray}{c}l=\left(\epsilon\pi\right)^{-1} \\ l\in\mathbb{N}\end{subarray}}^{N-1}\frac{t}{\pi^{4}l^{1+\delta}\epsilon^{2}} \lesssim t\epsilon^{1-\delta},\] where \(0<\delta<1\). 
Putting these estimates together in (4.55), we finally obtain that \[\frac{2}{\epsilon^{2}}\sum_{l=1}^{N-1}t^{2}\psi(\lambda_{l}^{N}t)\int_{0}^{ \epsilon}\int_{0}^{\epsilon}\sin\left(\pi lz\right)\sin\left(\pi lw\right)dzdw \lesssim t\max\{\sqrt{\epsilon},\epsilon^{1-\delta}\}\longrightarrow 0\text{ as }N\to+\infty\text{ and then } \epsilon\to 0.\] Finally, \[\frac{1}{\epsilon^{2}}\sum_{l=1}^{N-1}t^{2}\psi(\lambda_{l}^{N}t) \left[\sum_{\begin{subarray}{c}x,y=1\\ y\neq x\end{subarray}}^{e(N-1)}\frac{1}{N^{2}}\sin\left(\frac{\pi lx}{N}\right) \sin\left(\frac{\pi ly}{N}\right)-\int_{0}^{\epsilon}\int_{0}^{\epsilon}\sin \left(\pi lz\right)\sin\left(\pi lw\right)dzdw\right]\] \[\leq\frac{1}{\epsilon^{2}}\sum_{l=1}^{N-1}\frac{t}{(\pi l)^{2}} \underbrace{\frac{\pi^{2}l^{2}}{4N^{2}\sin^{2}(\frac{\pi l}{2N})}}_{\leq 5} \left|\sum_{\begin{subarray}{c}x,y=1\\ y\neq x\end{subarray}}^{e(N-1)}\frac{1}{N^{2}}\sin\left(\frac{\pi lx}{N}\right) \sin\left(\frac{\pi ly}{N}\right)-\int_{0}^{\epsilon}\int_{0}^{\epsilon}\sin \left(\pi lz\right)\sin\left(\pi lw\right)dzdw\right|\] \[\leq\frac{5t}{\pi^{2}}\sum_{l=1}^{N-1}\frac{1}{l^{2}}\left|\sum_ {\begin{subarray}{c}x,y=1\\ y\neq x\end{subarray}}^{e(N-1)}\frac{1}{\epsilon^{2}N^{2}}\sin\left(\frac{\pi lx }{N}\right)\sin\left(\frac{\pi ly}{N}\right)-\frac{1}{\epsilon^{2}}\int_{0}^{ \epsilon}\int_{0}^{\epsilon}\sin\left(\pi lz\right)\sin\left(\pi lw\right)dzdw \right|.\] To finish the argument, it is enough to show that \[\lim_{\epsilon\downarrow 0}\limsup_{N\to+\infty}\sum_{l=1}^{N-1}\frac{1}{l^{2}} \left|\sum_{\begin{subarray}{c}x,y=1\\ y\neq x\end{subarray}}^{e(N-1)}\frac{1}{\epsilon^{2}N^{2}}\sin\left(\frac{ \pi lx}{N}\right)\sin\left(\frac{\pi ly}{N}\right)-\frac{1}{\epsilon^{2}} \int_{0}^{\epsilon}\int_{0}^{\epsilon}\sin\left(\pi lz\right)\sin\left(\pi lw \right)dzdw\right|=0. \tag{4.56}\] A simple computation shows that since \[\frac{1}{\epsilon^{2}}\int_{0}^{\epsilon}\int_{0}^{\epsilon}\sin\left(\pi lz \right)\sin\left(\pi lw\right)dzdw=\frac{1}{\epsilon^{2}}\sum_{x,y=0}^{e(N-1) }\int_{\frac{\pi l}{N}}^{\frac{\pi+1}{N}}\int_{\frac{\pi}{N}}^{\frac{\pi+1}{ N}}\sin\left(\pi lz\right)\sin\left(\pi lw\right)dzdw,\] and \(\sin(x)\) is a Lipschitz continuous function, then, for every \(l\in\Lambda_{N}\), \[\sum_{l=1}^{N-1}\frac{1}{l^{2}}\left|\sum_{\begin{subarray}{c}x, y=1\\ y\neq x\end{subarray}}^{e(N-1)}\frac{1}{\epsilon^{2}N^{2}}\sin\left(\frac{\pi lx}{N} \right)\sin\left(\frac{\pi ly}{N}\right)-\frac{1}{\epsilon^{2}}\int_{0}^{ \epsilon}\int_{0}^{\epsilon}\sin\left(\pi lz\right)\sin\left(\pi lw\right)dzdw\right|\] \[= \sum_{l=1}^{N-1}\frac{1}{l^{2}}\left|\frac{1}{\epsilon^{2}}\sum_ {\begin{subarray}{c}x,y=1\\ y\neq x\end{subarray}}^{e(N-1)}\int_{\frac{\pi l}{N}}^{\frac{\pi+1}{N}}\left[ \sin\left(\frac{\pi lx}{N}\right)\sin\left(\frac{\pi ly}{N}\right)-\sin\left( \pi lz\right)\sin\left(\pi lw\right)\right]dzdw-\sum_{x=1}^{e(N-1)}\frac{1}{ \epsilon^{2}N^{2}}\sin^{2}\left(\frac{\pi lx}{N}\right)\right|\] \[\leq \sum_{l=1}^{N-1}\frac{2}{l^{2}}\sum_{x=0}^{e(N-1)}\int_{\frac{\pi l }{N}}^{\frac{\pi+1}{N}}\left|\sin\left(\frac{\pi lx}{N}\right)-\sin\left(\pi lz \right)\right|dz+\frac{\pi^{2}}{6\epsilon N}\] \[\leq \sum_{l=1}^{N-1}\frac{2\pi}{l\epsilon}\sum_{x=0}^{e(N-1)}\int_{ \frac{\pi l}{N}}^{\frac{\pi+1}{N}}\left(z-\frac{x}{N}\right)dz+\frac{\pi^{2}}{ 6\epsilon N}\lesssim\frac{\log(N)}{N}+\frac{1}{eN}\longrightarrow 0\text{ as }N \rightarrow+\infty,\] which proves (4.56). Item \(v\). 
For the final estimate we observe that the result immediately follows from (4.47) when \(\theta<0\). For \(0\leq\theta<1\) the idea is to improve the estimates done in (4.47). Indeed we can write \[\frac{N^{\theta}}{\epsilon}\sum_{x\in\Lambda_{N}^{d}}\int_{0}^{t}\int_{0}^{s} \tilde{P}_{s\to v}^{N,0}(x,1)+\tilde{P}_{s\to v}^{N,0}(x,N-1)dvds \lesssim\frac{1}{N^{1-\theta}}\sum_{l\in\Lambda_{N}}\frac{t}{\pi al} \lesssim\frac{t}{N^{(1-\theta)/2}}\sum_{l\in\Lambda_{N}}\frac{1}{\pi al^{1+(1 -\theta)/2}}\] where in the first bound we used the same reasoning of item \(ii\). and that \(\sin(2x)=2\sin(x)\cos(x)\) while for the last one we used that \(l<N\). The result follows again by considering the limit as \(N\rightarrow\infty\), since \(1+(1-\theta)/2\) is bigger than one the series converges. ## 5 Results on occupation times In this section we collect some of the results that were necessary regarding occupation times of all the random walks we used in the article. The proof of our results uses an artefact that consists in comparing our random walk with another one for which explicit results are known. To that end, in the first subsection below we make a comparison with an absorbed random walk and in the following subsection we make a comparison with a reflected random walk. ### Comparison with an absorbed random walk **Lemma 5.1**.: _Recall the function \(T_{N}^{i}\) defined in (4.16). Then, for every \((x,y)\in V_{N}\)_ \[T_{N}^{i}(x,y)\lesssim\begin{cases}\frac{1}{N}\mathbb{1}((x,y)\notin U_{N})+ \frac{1}{N^{2}}\mathbb{1}((x,y)\in U_{N})+\frac{N^{4}}{N^{3}},\text{ if }\theta\leq 0,\\ \frac{1}{N}\mathbb{1}((x,y)\notin U_{N})+\frac{1}{N^{2}}\mathbb{1}((x,y)\in U _{N})+\frac{N^{4}}{N^{3}},\text{ if }\theta>0,\end{cases}\] _where \(U_{N}=\{(x,y)\in V_{N}\mid x=1\text{ or }y=N-1\}\)._ Proof of Lemma 5.1.: To prove the result we will use the random walk \((\mathcal{X}_{LN^{2}}^{i};t\geq 0)\) generated by the operator (4.10) with the choice \(\lambda^{\ell}=\lambda^{r}=\alpha\) and \(\theta=0\). Denote by \(\mathcal{Y}_{N}^{\text{a}}\) the expected occupation time of the diagonals \(\mathcal{D}_{N}^{+}\) by that random walk. A simple computation shows that \(\mathcal{Y}_{N}^{\text{a}}(x,y)\) is the solution of \[\begin{cases}N^{2}\Delta_{N}^{0,i}\mathcal{Y}_{N}^{\text{a}}(x,y)=-\delta_{y= x+1},\text{ if }(x,y)\in V_{N}\\ \mathcal{Y}_{N}^{\text{a}}(x,y)=0,\text{ if }(x,y)\in\partial V_{N}.\end{cases}\] where \(\Delta_{N}^{0,i}\) is the operator defined in (4.10) with the choice \(\lambda^{\ell}=\lambda^{r}=\alpha\) and \(\theta=0\). Solving explicitly the previous system of linear equations, we obtain \[\mathcal{Y}_{N}^{\text{a}}(x,y)=\frac{(N-y)x}{N^{2}(\alpha N-1)}-\frac{1}{2N( \alpha N-1)}\mathbb{1}(y=x), \tag{5.1}\] and therefore \[\max_{(x,y)\in V_{N}}\mathcal{Y}_{N}^{\text{a}}(x,y)\lesssim\frac{1}{N},\ \ \max_{x\in\mathcal{X}_{N}}\mathcal{Y}_{N}^{\text{a}}(x,N-1)\lesssim\frac{1}{N^{2 }}\ \ \ \text{ and }\ \ \ \max_{y\in\mathcal{X}_{N}}\mathcal{Y}_{N}^{\text{a}}(1,y)\lesssim\frac{1}{N^{ 2}}. \tag{5.2}\] Now, let us consider the function \[W_{N}^{i}(x,y):=T_{N}^{i}(x,y)-\mathcal{Y}_{N}^{\text{a}}(x,y)+C_{N}^{i}(x,y),\] where \(C_{N}^{i}\) is given on \((x,y)\in\overline{V}_{N}\) by \[C_{N}^{i}(x,y)=\big{(}\frac{N^{\theta}}{\lambda^{\ell}+\lambda^{r}}-1\big{)} \min_{(x,y)\in V_{N}^{\text{a}}}\text{sgn}(\theta)T_{N}^{0,i}(z,w)\mathbb{1}( (x,y)\in V_{N}).\] Recall the expression of \(\mathcal{T}_{N}^{\text{a}}\) given in (5.1). 
A simple computation shows that \[\max_{(x,y)\in V_{N}}\mathcal{Y}_{N}^{\text{a}}(x,y)=\begin{cases}\frac{2(N- \lfloor N/2\rfloor)(\mathbb{N}/2)-N}{2N^{2}(\alpha N-1)},\text{ if }N/2-\lfloor N/2\rfloor<\lceil N/2\rceil-N/2,&\text{( chosing in \eqref{eq:C_N} \ $x=y=\lfloor N/2\rfloor$)},\\ \frac{2(N-\lfloor N/2\rfloor)(\mathbb{N}/2)-N}{2N^{2}(\alpha N-1)},\text{ if }N/2- \lfloor N/2\rfloor\geq\lceil N/2\rceil-N/2,&\text{(chosing in \eqref{eq:C_N} \ $x=y=\lceil N/2\rceil$)}.\end{cases}\] and \[\min_{(x,y)\in V_{N}}\mathcal{Y}_{N}^{\text{a}}(x,y)=\frac{1}{N^{2}(\alpha N-1 )}\ \ \ \text{(chosing in \eqref{eq:C_N} \ $x=1,y=N-1$)}.\] Recall that \(T_{N}^{i}\) is the solution of \[\begin{cases}N^{2}\Delta_{N}^{i}T_{N}^{i}(x,y)=-\delta_{y=x+1},\text{ if }(x,y)\in V_{N},\\ T_{N}^{i}(x,y)=0,\text{ if }(x,y)\in\partial V_{N}.\end{cases}\] Then, a simple computation shows that \(W_{N}^{i}\) is solution to \[\begin{cases}N^{2}\Delta_{N}^{i}W_{N}^{i}(x,y)+\Big{(}N^{2}\Delta_{N}^{i}-N^{2 }\Delta_{N}^{0,i}\Big{)}\mathcal{Y}_{N}^{\text{a}}+N^{2}\Delta_{N}^{i}C_{N}^{i} (x,y)=0,\text{ if }(x,y)\in V_{N},\\ W_{N}^{i}(x,y)=0,\text{ if }(x,y)\in\partial V_{N},\end{cases} \tag{5.3}\] A simple computation shows that for every \((x,y)\in V_{N}\) \[\Big{(}N^{2}\Delta_{N}^{i}- N^{2}\Delta_{N}^{0,i}\Big{)}\mathcal{Y}_{N}^{\text{a}}(x,y)+N^{2} \Delta_{N}^{i}C_{N}^{i}(x,y)\] \[=(1+\mathbb{1}(y=x))N^{2}\Big{(}\alpha\Big{[}1-\frac{\lambda^{ \ell}}{N^{\theta}}\Big{]}\mathcal{Y}_{N}^{\text{a}}(1,y)\mathbb{1}(x=1)+ \alpha\Big{[}1-\frac{\lambda^{r}}{N^{\theta}}\Big{]}\mathcal{Y}_{N}^{\text{a}} (x,N-1)\mathbb{1}(y=N-1)\Big{)}\] \[+(1+\mathbb{1}(y=x))N^{2}\Big{(}\frac{\lambda^{\ell}\alpha}{N^{ \theta}}C_{N}^{i}(1,y)\mathbb{1}(x=1)+\frac{\lambda^{r}\alpha}{N^{\theta}}C_{N}^{ i}(x,N-1)\mathbb{1}(y=N-1)\Big{)}.\] Observe that the unique solution \(f\) of \[\begin{cases}N^{2}\Delta_{N}^{i}f(x,y)=0,\text{ if }(x,y)\in V_{N},\\ f(x,y)=0,\text{ if }(x,y)\in\partial V_{N},\end{cases}\] is \(f(x,y)=0\) for all \((x,y)\in\overline{V}_{N}\). Moreover, from the definition of \(C_{N}^{i}\), for every \((x,y)\in V_{N}\), it holds \[\begin{split}\Big{(}N^{2}\Delta_{N}^{i}-N^{2}\Delta_{N}^{0i} \Big{)}\mathcal{G}_{N}^{\alpha}(x,y)+N^{2}\Delta_{N}^{i}C_{N}^{i}(x,y)\leq 0. \end{split}\] Therefore, \(W_{N}^{i}\) is the solution of the initial value problem given by \[\begin{cases}N^{2}\Delta_{N}^{i}W_{N}^{i}(x,y)\geq 0,\text{ if }(x,y)\in V_{N},\\ W_{N}^{i}(x,y)=0,\text{ if }(x,y)\in\partial V_{N}.\end{cases}\] Applying a version of the maximum principle for discrete elliptic operators that are Markov generators, i.e. Theorem A.1 below, we get for every \((x,y)\in\overline{V}_{N}\) that \(W_{N}^{i}(x,y)\leq 0\), i.e. \[T_{N}^{i}(x,y) \leq\mathcal{G}_{N}^{\alpha}(x,y)-C_{N}^{i}(x,y)\] \[\lesssim\begin{cases}\frac{1}{N}\mathbb{1}((x,y)\notin U_{N})+ \frac{1}{N^{2}}\mathbb{1}((x,y)\in U_{N})+\frac{N^{\alpha}}{N},\text{ if } \theta\leq 0,\\ \frac{1}{N}\mathbb{1}((x,y)\notin U_{N})+\frac{1}{N^{2}}\mathbb{1}((x,y)\in U _{N})+\frac{N^{\alpha}}{N^{\alpha}},\text{ if }\theta>0,\end{cases}\] where \(U_{N}=\{(x,y)\in V_{N}\mid x=1\text{ or }y=N-1\}\). This ends the proof. ### Comparison with a reflected random walk **Lemma 5.2**.: _Recall (4.24). 
Then, for every \(t\in[0,T]\)_ \[\max_{(x,y)\in V_{N}\setminus\mathcal{G}_{N}}\widetilde{T}_{t}^{N}(x,y) \lesssim\frac{t+1}{N}.\] Proof of Lemma 5.2.: Recall that \(\{\tilde{\mathcal{X}}_{tN^{2}}\ ;\ t\geq 0\}\) represents a two-dimensional random walk on \(V_{N}\) that jumps to every nearest-neighbor site at rate \(\alpha\), except at the diagonal \(\mathcal{G}_{N}^{+}\) where it jumps left/up at rate \(\alpha\) and right/down at rate \(\alpha-1\) and moreover, it is reflected at \(\partial V_{N}\). Let \(\tilde{\mathbb{E}}_{(x,y)}\) denote the expectation given that \(\tilde{\mathcal{X}}_{tN^{2}}\) starts from the point \((x,y)\). From Dynkin's formula, for every function \(f:V_{N}\to\mathbb{R}\) and for every \((x,y)\in V_{N}\setminus\mathcal{G}_{N}\), \[0=\tilde{\mathbb{E}}_{(x,y)}\Big{[}M_{t}^{N}(f)\Big{]}=\tilde{\mathbb{E}}_{(x,y)}\bigg{[}f(\tilde{\mathcal{X}}_{tN^{2}})-f(\tilde{\mathcal{X}}_{0})-\int_{ 0}^{t}N^{2}\mathcal{C}_{N}f(\tilde{\mathcal{X}}_{tN^{2}})ds\bigg{]}. \tag{5.4}\] where \(\mathfrak{C}_{N}^{i}\) is, as defined in (4.12). From (5.4) we get \[\tilde{\mathbb{E}}_{(x,y)}\bigg{[}\int_{0}^{t}N^{2}\mathfrak{C}_{N}f(\tilde{ \mathcal{X}}_{tN^{2}})ds\bigg{]}\leq\max_{z,w\in V_{N}}\{f(z)-f(w)\}.\] For the choice \(f(x,y)=-(x-\frac{1}{2})^{2}-(y-(N-\frac{1}{2}))^{2}\), a long but elementary computation shows that for every \((x,y)\in V_{N}\): \[N^{2}\mathfrak{C}_{N}f(x,y)=\begin{cases}-4\alpha N^{2},\text{ if }|x-y|\geq 2 \text{ but }(x,y)\neq(1,N-1),\\ -2\alpha N^{2},\text{ if }(x,y)=(1,N-1),\\ N^{2}(2N-4\alpha-2),\text{ if }|x-y|=1,\\ 2aN^{2}(2N-1),\text{ if }y=x\text{ and }y,x\neq 1,N-1,\\ 2aN^{2}(2N-7),\text{ if }y=x=1\text{ or }y=x=N-1.\end{cases}\] From last display, we conclude that \[\tilde{\mathbb{E}}_{(x,y)}\bigg{[}\int_{0}^{t}N^{2}\mathfrak{C}_{N }f(\tilde{\mathcal{X}}_{tN^{2}})ds\bigg{]} =N^{2}(2N-4\alpha-2)\int_{0}^{t}\tilde{\mathbb{E}}_{(x,y)}\big{[} \mathbb{1}(\tilde{\mathcal{X}}_{tN^{2}}\in\mathcal{G}_{N}^{+}\big{]}ds\] \[+2aN^{2}(2N-1)\int_{0}^{t}\tilde{\mathbb{E}}_{(x,y)}\big{[} \mathbb{1}(\tilde{\mathcal{X}}_{tN^{2}}\in\mathcal{G}_{N}\setminus\{(1,1),(N-1,N -1)\})\big{]}ds\] \[+2aN^{2}(2N-7)\int_{0}^{t}\left(\tilde{\mathbb{E}}_{(x,y)}\big{[} \mathbb{1}(\tilde{\mathcal{X}}_{tN^{2}}=(1,1))\big{]}+\tilde{\mathbb{E}}_{(x,y)} \big{[}\mathbb{1}(\tilde{\mathcal{X}}_{tN^{2}}=(N-1,N-1))\big{]}\right)ds\] \[-2aN^{2}\int_{0}^{t}\tilde{\mathbb{E}}_{(x,y)}\big{[}\mathbb{1}( \tilde{\mathcal{X}}_{tN^{2}}=(1,N-1))\big{]}ds-4aN^{2}\int_{0}^{t}\tilde{ \mathbb{E}}_{(x,y)}\big{[}\mathbb{1}(\tilde{\mathcal{X}}_{tN^{2}}\in\mathcal{G} \big{]}ds,\] where \(\mathcal{C}=\{(x,y)\in V_{N}\mid|x-y|\geq 2\ \text{and}\ (x,y)\neq(1,N-1)\}\). 
By noting that the time integral of the rightmost term in the first line of last display is equal to \(T_{t}^{i,N}(x,y)\), we conclude that \[T_{t}^{i,N}(x,y) \leq-\frac{2aN^{2}(2N-1)}{N^{2}(2N-4a-2)}\int_{0}^{t}\tilde{\mathbb{ E}}_{(x,y)}\big{[}\mathbb{1}(\tilde{\mathcal{X}}_{iN^{2}}\in\mathcal{G}_{N} \setminus\{(1,1),(N-1,N-1)\})\big{]}ds\] \[\quad-\frac{2aN^{2}(2N-7)}{N^{2}(2N-4a-2)}\int_{0}^{t}\tilde{ \mathbb{E}}_{(x,y)}\big{[}\mathbb{1}(\tilde{\mathcal{X}}_{iN^{2}}\in\{(1,1),(N -1,N-1)\})\big{]}ds\] \[\quad+\frac{2aN^{2}}{N^{2}(2N-4a-2)}\int_{0}^{t}\big{(}\tilde{ \mathbb{E}}_{(x,y)}\big{[}\mathbb{1}(\tilde{\mathcal{X}}_{iN^{2}}=(1,N-1)) \big{]}+2\tilde{\mathbb{E}}_{(x,y)}\big{[}\mathbb{1}(\tilde{\mathcal{X}}_{iN^ {2}}\in\mathcal{C}\big{]}\big{)}ds\] \[\quad+\frac{1}{N^{2}(2N-4a-2)}\max_{z,w\in V_{N}}\{f(z)-f(w)\}.\] For \(N\geq 2a+1\), since the first two terms of the last bound for \(T_{t}^{i,N}(x,y)\) are negative, we have that \[\max_{(x,y)\in V_{N}}T_{t}^{i,N}(x,y) \lesssim\frac{t}{N}+\frac{(x,y)(z,w)\in V_{N}}{N^{2}(2N-4a-2)}\] \[\lesssim\frac{t+1}{N}.\] ## Appendix A Maximum principles **Theorem A.1**.: _Let \(\mathcal{E}\) be the Markov generator of the continuous time Markov chain \(\{X_{t}\}_{t\geq 0}\) and denote by \(\mathcal{G}(\mathcal{E})\) its domain. Let \(\Omega\) be a discrete set with a non-empty \(\partial\Omega\). If \(f\in\mathcal{G}(\mathcal{E})\) with domain \(\Omega\) is solution to_ \[\begin{cases}\mathcal{E}f\geq 0\text{ in }\Omega,\\ f(x)=0\text{ in }\partial\Omega,\end{cases}\] _then \(f\leq 0\) in \(\Omega\)._ Proof.: Let \(f\) be the solution of \[\begin{cases}\mathcal{E}f=h\text{ in }\Omega,\\ f(x)=0\text{ in }\partial\Omega,\end{cases}\] with \(h\geq 0\) in \(\Omega\). Then, given the stopping time \(\tau_{\partial\Omega}=\inf\{t\geq 0\mid X_{t}\in\partial\Omega\}\), \(f\) can be represented, for every \(x\in\Omega\cup\partial\Omega\) by \[f(x)=-\mathbb{E}_{x}\big{[}\int_{0}^{\tau_{\partial\Omega}}h(X_{t})dt\big{]}.\] Since \(h\geq 0\) in \(\Omega\) by assumption, the result is a simple consequence of the previous formula. **Theorem A.2**.: _Let \(A\) be a finite set. Define \(\mathcal{F}(A)\) as the set of functions \(f:A\to\mathbb{R}\). Consider a connected graph \((A,E)\) and define the non-empty subset of \(A\), that we denote by \(\partial A\), that is the set of vertices with degree one. Let \(\mathcal{E}:\mathcal{F}(A)\to\mathcal{F}(A)\) be an operator of the form_ \[\mathcal{E}f(\eta)=\sum_{\{\xi,\cdot\}\in E}c(\eta,\xi)[f(\xi)-f(\eta)],\] _where \(c(\cdot,\cdot)\) is a positive function. If there exists \(f\in\mathcal{F}(A)\) solution to \(\mathcal{E}f=0\) in \(A\setminus\partial A\), then_ \[\max_{x\in A}f(x)\leq\max_{w\in\partial A}f(w)\quad\text{and}\quad\min_{x\in A }f(x)\geq\min_{w\in\partial A}f(w).\] Proof.: We prove the maximum case, since, to obtain the minimum, we only have to take \(g=-f\) and the result follows. If \(f\) is constant, there is nothing to prove. So, assume this is not the case and let us proceed by contradiction. Since \(A\) is finite, if \(f\) was such that \(\max_{x\in A\setminus\partial A}f(x)>\max_{w\in\partial A}f(w)\), then there would exist \(y\in A\setminus\partial A\) such that \(f(y)=\max_{x\in A}f(x)\) and \(f(y)>f(w)\) for all \(w\in\partial A\). Then \[0=\mathcal{E}f(y)=\sum_{\{\xi,\cdot\}\in E}c(y,\xi)[f(\xi)-f(y)],\] (A.1) and, because \(c>0\), (A.1) imply that \[\max_{x\in\alpha}f(x)=\frac{1}{a_{y}}\sum_{\{\xi,y\}\in E}c(y,\xi)f(\xi),\] (A.2) where \(a_{y}:=\sum_{\{\xi,y\}\in E}c(y,\xi)\). 
Since the left-hand-side of last display is a weighted average, in order to an average to attain the maximum of a function, then all the points have to be equal to the maximum value. This means that, for all the vertices that are connected to \(y\) by an edges, the maximum of \(f\) is also attained there. Repeating the argument now for this vertices, we obtain that, for all the vertices that are connected to them by an edge, the maximum of \(f\) is also attained there, and so on. Because \(G\) is connected, we know that for every two points of the graph there must exists a path that connect them. Therefore, by the previous reasoning, we showed that the maximum of the function has to be attained in \(\partial A\), which is a contradiction. **Theorem A.3**.: _Let \(\mathcal{E}\) be the Markov generator of the continuous time Markov chain \(\{X_{t}\}_{t\geq 0}\) and denote by \(\mathcal{G}(\mathcal{E})\) its domain. Let \(\overline{\Omega}\) be a discrete set and \(\partial\Omega\) a non-empty subset of \(\overline{\Omega}\). Let \(\Omega=\overline{\Omega}\backslash\partial\Omega\). If \(f:[0,T]\times\overline{\Omega}\to\mathbb{R}\) is a function that it is differentiable in time and that is solution to_ \[\begin{cases}\partial_{t}f\leq\mathcal{E}f\text{ in }(0,T)\times\Omega,\\ f(t,x)=0\text{ in }[0,T]\times\partial\Omega,\\ f(0,x)=f_{0}(x),\text{ in }\Omega,\end{cases}\] _then \(f(y)\leq\max_{x\in\Omega}\{0,f_{0}(x)\}\), for every \(y\in[0,T]\times\overline{\Omega}\)._ The proof of the previous theorem can be obtained by adapting the proof of (A.1) for the time-dependent case. It is a simple combination of Feynman-Kac's representation of the solution to the problem \[\begin{cases}\partial_{t}f=\mathcal{E}f+h\text{ in }(0,T)\times\Omega,\\ f(t,x)=0\text{ in }[0,T]\times\partial\Omega,\\ f(0,x)=f_{0}(x),\text{ in }\Omega,\end{cases}\] where the function \(h\) is non-positive. ## Appendix B Details on the Chapman-Kolmogorov equation of \(\varphi_{t}^{N}\), when \(\alpha\geq 2\) For completeness we perform here some standard computations regarding one and two-point correlations, used in the proof of Proposition 4.2. For every \((x,y)\in V_{N}\), we have \[\partial_{t}\varphi_{t}^{N}(x,y) =\mathbb{E}_{\mu^{N}}[N^{2}\mathcal{L}_{N}(\bar{\eta}_{tN^{2}}( x)\bar{\eta}_{tN^{2}}(y))]\] \[=\mathbb{E}_{\mu^{N}}[N^{2}\mathcal{L}_{N}(\eta_{tN^{2}}(x)\eta_{ tN^{2}}(y))]-\rho_{t}^{N}(y)\mathbb{E}_{\mu^{N}}[N^{2}\mathcal{L}_{N}\eta_{tN^{2}} (x)]-\rho_{t}^{N}(x)\mathbb{E}_{\mu^{N}}[N^{2}\mathcal{L}_{N}\eta_{tN^{2}}(y)],\] by the forward Kolmogorov equation and the linearity of \(\mathcal{L}_{N}\). It is worthy to compute \(\mathbb{E}_{\mu^{N}}[\mathcal{L}_{N}(\eta(x)\eta(y))]\) and \(\mathbb{E}_{\mu^{N}}[\mathcal{L}_{N}\eta(x)]\) (resp. \(\mathbb{E}_{\mu^{N}}[\mathcal{L}_{N}\eta(y)]\)). We start with the latter. The action of the SEP\((\alpha)\) generator \(\mathcal{L}_{N}\) on the one-point correlation function is \[\mathcal{L}_{N}\eta(x) =\alpha[\eta(x-1)-\eta(x)]\mathbb{1}(x\neq 1)+\alpha[\eta(x+1)- \eta(x)]\mathbb{1}(x\neq N-1)\] \[\quad+\frac{\alpha\lambda^{t}}{N^{\theta}}\left[\rho^{t}-\eta(1) \right]\mathbb{1}(x=1)+\frac{\alpha\lambda^{t}}{N^{\theta}}\left[\rho^{t}- \eta(N-1)\right]\mathbb{1}(x=N-1)\,\] for \(x\in\Lambda_{N}\). 
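Before turning to the two-point function, note that taking expectations in the last display yields a closed, linear system of ODEs for the discrete profile \(\rho_{t}^{N}(x)=\mathbb{E}_{\mu^{N}}[\eta_{tN^{2}}(x)]\), which can be integrated numerically without difficulty. The following minimal Python sketch does this with an explicit Euler scheme; the values of \(N\), \(\alpha\), \(\theta\), \(\lambda^{\ell}\), \(\lambda^{r}\), \(\rho^{\ell}\), \(\rho^{r}\) and the time horizon are arbitrary illustrative choices, not quantities taken from the article.

```python
import numpy as np

# Illustrative parameters (assumptions made only to run this sketch)
N, alpha, theta = 50, 2.0, 0.5
lam_l, lam_r = 1.0, 1.5      # lambda^ell, lambda^r
rho_l, rho_r = 0.4, 1.6      # reservoir densities rho^ell, rho^r (values in [0, alpha])

def drift(rho):
    """N^2 * E[L_N eta(x)] for x = 1, ..., N-1 (rho is a vector of length N-1)."""
    d = np.zeros_like(rho)
    d[1:] += alpha * (rho[:-1] - rho[1:])    # exchange with the left neighbour (x != 1)
    d[:-1] += alpha * (rho[1:] - rho[:-1])   # exchange with the right neighbour (x != N-1)
    d[0] += alpha * lam_l / N**theta * (rho_l - rho[0])    # left reservoir term at x = 1
    d[-1] += alpha * lam_r / N**theta * (rho_r - rho[-1])  # right reservoir term at x = N-1
    return N**2 * d

rho = np.full(N - 1, 1.0)    # initial profile rho_0^N
dt, T = 0.1 / N**2, 2.0      # explicit Euler; dt resolves the diffusive scaling N^2
for _ in range(int(T / dt)):
    rho = rho + dt * drift(rho)

# For theta < 1 and N large, the long-time profile approaches the linear interpolation
# between rho_l and rho_r (compare with Remark 1 in Appendix D).
print(rho[0], rho[(N - 1) // 2], rho[-1])
```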
Similarly for \(x,y\in\Lambda_{N}\) the action on the two-point correlation function can be conveniently written as \[\mathcal{L}_{N}(\eta(x)\eta(y))=\eta(x)\mathcal{L}_{N}\eta(y)+\eta(y) \mathcal{L}_{N}\eta(x)+\Gamma\left(\eta(x),\eta(y)\right)\,\] (B.1) where \[\Gamma\left(\eta(x),\eta(y)\right)=\begin{cases}\frac{\lambda^{t}\rho^{t}}{N^ {\theta}}[\alpha-\eta(1)]+\frac{\lambda^{t}\eta(1)}{N^{\theta}}[\alpha-\rho^{ t}]+\alpha[\eta(1)+\eta(2)]-2\eta(1)\eta(2)&\text{for }x=y=1,\\ \alpha[\eta(x-1)+2\eta(x)+\eta(x+1)]-2\eta(x)[\eta(x-1)+\eta(x+1)]&\text{for }y=x \neq 1,N-1,\\ 2\eta(x)\eta(y)-\alpha[\eta(x)\eta(y)]&\text{for }y=x+1,\\ \frac{\lambda^{t}\rho^{t}}{N^{\theta}}[\alpha-\eta(N-1)]+\frac{\lambda^{t} \eta(N-1)}{N^{\theta}}[\alpha-\rho^{t}]+\\ \alpha[\eta(N-1)+\eta(N-2)]-2\eta(N-1)\eta(N-2)&\text{for }x=y=N-1,\\ 0&\text{otherwise}.\end{cases}\] Extention of \(\varphi_{t}^{N}\) to the diagonal The role of this section is to give two different approaches in order to extend the value of the correlation function to the diagonal \(\mathcal{D}_{N}\). We first start with an approach based on stochastic duality, while for the second one we use an analytic approach based on degree two functions. ### Stochastic Duality Based on properties of duality (see [6] for a survey on duality results for several boundary driven interacting systems), we show how to extend \(\varphi_{t}^{N}\) to the diagonal \(\mathcal{D}_{N}\). It is well known that the \(\operatorname{SEP}(\alpha)\) with open boundary has \(\operatorname{SEP}(\alpha)\) with absorbing boundary as its dual process with duality function \(D:\Omega_{N}\times\Omega_{N}^{dual}\to\mathbb{R}\) given by \[D(\eta,\hat{\eta})=\left[\rho^{t}\right]^{\hat{\eta}(0)}\prod_{x=1}^{N-1}\frac {\eta(x)!(\alpha-\hat{\eta}(x))!}{\{\eta(x)-\hat{\eta}(x)\}!{\rm i}\sigma!} \mathbb{1}\{\eta(x)\geq\hat{\eta}(x)\}[\rho^{t}\,\hat{\eta}]^{\hat{\eta}(N)}\,,\] (C.1) for every \((\eta,\hat{\eta})\in\Omega_{N}\times\Omega_{N}^{dual}\), where \(\Omega_{N}^{dual}=\mathbb{N}\times\{0,\dots,\alpha\}^{\Delta_{N}}\times \mathbb{N}\) is the state space of the absorbing dual process. If we now take \(\hat{\eta}=\delta_{x}+\delta_{y}\) in (C.1), we have that \[\mathbb{E}_{\mu^{N}}[D(\cdot,\delta_{x}+\delta_{y})]=\begin{cases}\mathbb{E}_ {\mu^{N}}\left[\frac{\eta(x)(\eta)}{a^{2}}\right],\text{ if }y\neq x\\ \mathbb{E}_{\mu^{N}}\left[\frac{\eta(x)(\eta(x)-1)}{a(a-1)}\right],\text{ if }y=x.\end{cases}\] (C.2) A simple computation shows that in fact \(\varphi_{t}^{N}(x,y)\) as defined in (2.23) for \(x\neq y\) and in (4.18) for \(x=y\) satisfies \[\varphi_{t}^{N}(x,y)=\alpha^{2}(\mathbb{E}_{\mu^{N}}[D(\eta_{tN^{2}},\delta_{x }+\delta_{y})]-\mathbb{E}_{\mu^{N}}[D(\eta_{tN^{2}},\delta_{x})]\mathbb{E}_{ \mu^{N}}[D(\cdot,\delta_{y})])\.\] (C.3) In other words, the function \(\varphi_{t}^{N}(x,y)\) can be written in a natural way in terms of the duality function (C.1) without distinguishing the case \(x=y\). ### Degree Two Functions Now we show an analytic argument to choose the extension of \(\varphi_{t}^{N}\) to \(\mathcal{D}_{N}\) as in (4.18). In this subsection, for simplicity of the presentation, we neglect the boundary dynamics of the process and we explain the argument for the bulk dynamics. The general case, follows from adapting the ideas we present here. Let us call \(\tilde{\varphi}_{t}^{N}\) the extension of \(\varphi_{t}^{N}\) to \(\mathcal{D}_{N}\) as \(\mathbb{E}_{\mu^{N}}[(\tilde{\eta}(x))^{2}]\), i.e. 
for every \((x,y)\in V_{N}\) \[\tilde{\varphi}_{t}^{N}(x,y)=\begin{cases}\varphi_{t}^{N}(x,y),\text{ if }y\neq x,\\ \mathbb{E}_{\mu^{N}}[(\tilde{\eta}(x))^{2}],\text{ if }x=y.\end{cases}\] (C.4) For \(\alpha=1\) and since \(\eta(x)\in\{0,1\}\) then there is no need to extend the correlation function to the diagonal \(\mathcal{D}_{N}\). Moreover, the Chapman-Kolmogorov equation for \(\varphi_{t}^{N}\) is very simple as we saw in (4.13). Nevertheless, if \(\alpha\geq 2\), the Chapman-Kolmogorov equation for \(\tilde{\varphi}_{t}^{N}\) is not as simple. In fact, \(\tilde{\varphi}_{t}^{N}\) is solution, for every \((x,y)\in V_{N}\) to \[\partial_{t}\tilde{\varphi}_{t}^{N}(x,y) =N^{2}\mathcal{H}_{N}\tilde{\varphi}_{t}^{N}(x,y)\] (C.5) \[+N^{2}\left\{2\tilde{\varphi}_{t}^{N}(x,x+1)-\tilde{\chi}_{a}^{N, t}(x,x+1)\right\}\mathbb{1}(y=x+1)\] (C.6) \[-N^{2}\left\{4\tilde{\varphi}_{t}^{N}(x,x)-\left[\tilde{\chi}_{a} ^{N,t}(x,x+1)+\tilde{\chi}_{a}^{N,t}(x,x-1)\right]\right\}\mathbb{1}(y=x),\] (C.7) where the operator \(\mathcal{H}_{N}^{i}\) is the generator of a two dimensional random walk that jumps to each neighbor at rate \(\alpha\), apart when it is on the diagonal \(\mathcal{D}_{N}\) that jumps at rate \(\alpha-1\) to each one of its neighbors, i.e. for every function \(f:\overline{V}_{N}\to\mathbb{R}\) such that \(f(x,y)=0\) if \((x,y)\in\partial V_{N}\), and for every \((x,y)\in V_{N}\), \[\mathcal{H}_{N}f(x,y)=\begin{cases}\alpha[f(x-1,y)+f(x+1,y)+f(x,y-1)+f(x,y+1)-4f (x,y)],\text{ if }|x-y|\geq 1,\\ 2(\alpha-1)[f(x-1,x)+f(x,x+1)-2f(x,x)],\text{ if }y=x,\end{cases}\] and, for every \((x,y)\in V_{N}\) such that \(y\neq x\), \[\tilde{\chi}_{a}^{N,t}(x,y)=\rho_{t}^{N}(x)[\alpha-\rho_{t}^{N}(y)]+\rho_{t}^{N} (y)[\alpha-\rho_{t}^{N}(x)].\] Since we have different signs for the extra terms that appear on the upper diagonal \(\mathfrak{I}_{n}^{N}\) and main diagonal \(\mathcal{G}_{N}\), i.e. (C.6) and (C.7), and also they are not uniformly bounded in \(N\), we observe that the argument used for the case \(\alpha=1\) can not be applied directly here. This motivates us to redefine the function on the diagonal values in such a way that it becomes the solution of an equation with a similar structure to (4.13). As we will see below, that function is exactly the function \(\varphi_{t}^{N}\) defined in (4.18). We now observe that, since we want \(h_{t}\) to not depend on \(\varphi_{t}^{N}\), then it cannot depend on \(\mathbb{E}_{\mu_{N}}[\eta(x)^{2}]\) nor on \(\mathbb{E}_{\mu_{N}}[\eta(x+1)^{2}]\), meaning that the second and fourth lines of last display have to be equal to zero. Then \((\alpha-1)A-C=0\), i.e. \(A=\frac{C}{\alpha-1}\). We can then simplify \(h_{t}\) to \[h_{t}(x,y) =-C[\widehat{\nabla}_{\mu}^{+}\varphi_{t}^{N}(x)]^{2}\mathbb{I}( y=x+1)\] \[\quad-N^{2}[(\alpha-1)B+\alpha C][\rho_{t}^{N}(x)+\rho_{t}^{N}(x +1)]+2(\alpha-1)D]\mathbb{I}(y=x+1)\] (C.9) \[\quad+\frac{\alpha}{\alpha-1}N^{2}[(\alpha-1)B+\alpha C][\rho_{t }^{N}(x-1)+\rho_{t}^{N}(x+1)+2\rho_{t}^{N}(x)]+4D]\mathbb{I}(y=x).\] (C.10) Now, by the fact that we want \(h_{t}\) to be uniformly (in \(N\)) bounded, from (C.10) we need \(D\leq 0\) and \((\alpha-1)B+\alpha C\leq 0\), but from (C.9) we also need \(D\geq 0\) and \((\alpha-1)B+\alpha C\geq 0\). To make these two requirements compatible, we finally obtain that \(D=(\alpha-1)B+\alpha C=0\), i.e. \(D=0\) and \(B=-\frac{\alpha C}{\alpha-1}\). This implies that \(h_{t}(x,y)=-C[\widehat{\nabla}_{\mu}^{+}\varphi_{t}^{N}(x)]^{2}\mathbb{I}(y=x +1)\). We impose that \(C\geq 0\). 
For simplicity, we will take \(C=1\), and this coincides with the definition of \(\varphi_{t}^{N}\) from (4.18). Proof of Lemma 4.1 The proof of last lemma follows exactly the same steps as in the proof of Lemma 6.2 of [17], which was done for the case \(\theta\geq 0\). For completeness and convenience of the reader we decided to present it here with the necessary adaptations to accommodate the case \(\theta<0\). In fact the proof we present below works for any \(\theta<1\) and we note that the proof for \(\theta>1\) follows exactly the same steps as the proof of Lemma 6.2 of [17]. Assume now that \(\theta<1\). The idea of the proof is to find a sequence of functions \(\{\phi_{N}\}_{N}\), such that \(\phi_{N}(t,\frac{x}{N})\) is close to \(\rho_{t}^{N}(x)\) with an error of order \(O(N^{-1})\). Therefore, we consider a sequence of functions of class \(C^{4}\) in space and for that we need to restrict to initial profiles \(\rho_{0}\) of class \(C^{6}\). To this end let \(\{\phi_{N}(t,u)\}_{N\geq 1}\) be the solution of \[\begin{cases}\partial_{t}\phi_{N}(t,u)\,=\,\alpha\partial_{u}^{2}\phi_{N}(t,u ),&\text{for $t>0$, $u\in(0,1)$}\,,\\ \partial_{u}\phi_{N}(t,0^{+})\,=\,\mu_{N}^{\prime}(\phi_{N}(t,0^{+})-\rho^{ \,t}),&\text{for $t>0$}\,,\\ \partial_{u}\phi_{N}(t,1^{-})\,=\,\mu_{N}^{\prime}(\rho^{\,t}-\phi_{N}(t,1^{-} )),&\text{for $t>0$}\,,\\ \phi_{N}(t,0)\,=\,\rho^{\,t}\,\,\,\phi_{N}(t,1)=\rho^{\,r}\,,&\text{for $t>0$}\,,\\ \phi_{N}(0,u)\,=\,g_{N}(u),&u\in[0,1]\,,\end{cases}\] (D.1) where, for \(j\in\{\ell,r\}\), we define \(\mu_{N}^{j}=\frac{N^{2}}{N^{2}-\lambda^{j}}\), and \(g_{N}\) is a function of class \(C^{6}\) and that satisfies (H3) and (H4). Repeating the proof of Section 6.4 of [17], we see that \(\phi_{N}\in C^{1,4}\), which is a consequence of the fact that the initial condition of the equation above is of class \(C^{6}\) and \(\phi_{N}\) satisfies (D.1). For \(x\in\overline{\Lambda}_{N}\), let \(\gamma_{t}^{N}(x):=\rho_{t}^{N}(x)-\phi_{N}(t,\frac{x}{N})\). A simple computation shows that \(\gamma_{t}^{N}\) is solution of \[\begin{cases}\partial_{t}\gamma_{t}^{N}(x)=(N^{2}\Delta_{N^{1}}^{i},\gamma_{t }^{N})(x)+F_{t}^{N}(x),\,\,\,x\in\Lambda_{N}\,,\,\,\,t\geq 0\,,\\ \gamma_{t}^{N}(0)=0\,,\,\,\,\,\,\gamma_{t}^{N}(N)=0\,,\,\,\,t\geq 0\,,\end{cases}\] (D.2) where \(\Delta_{N}^{i}\) was defined in (2.18) and \(F_{t}^{N}(x)=(N^{2}\Delta_{N}^{i}-\alpha\partial_{u}^{2})\phi_{N}(t,\frac{x}{ N})\). Since \(\phi_{N}(t,\cdot)\) is sufficiently regular, we are done if we show that \(\left|\gamma_{t}^{N}(x)\right|\lesssim\frac{1}{N}\). From Duhamel's formula, we have \[\gamma_{t}^{N}(x)\,=\,\mathbb{E}_{x}\Big{[}\gamma_{0}^{N}(X_{tN^{2}}^{i})+ \int_{0}^{t}F_{t\to}^{N}(X_{tN^{2}}^{i})\,ds\Big{]},\] where \(\{X_{s}^{i},s\geq 0\}\) is the random walk on \(\overline{V}_{N}\) with generator \(\Delta_{N}^{i}\), absorbed at the boundary \(\{0,N\}\) and \(\mathbb{E}_{x}\) denotes the expectation with respect to the probability induced by the generator \(\Delta_{N}^{i}\) and the initial position \(x\). Therefore, \[\sup_{t\geq 0}\max_{x\in\Delta_{N}}|\gamma_{t}^{N}(z)|\,\leq\,\max_{x\in \Lambda_{N}}|\gamma_{0}^{N}(x)|\,+\,\sup_{t\geq 0}\max_{x\in\Lambda_{N}}\Big{|} \mathbb{E}_{x}\Big{[}\int_{0}^{t}F_{t\to}^{N}(X_{tN^{2}}^{i})\,ds\Big{]}\Big{|}.\] (D.3) From (H3), we have that \[\max_{x\in\Lambda_{N}}|\gamma_{0}^{N}(x)|=\max_{x\in\Lambda_{N}}|\rho_{0}^{N} (x)-g_{N}(\frac{x}{N})|\lesssim\frac{1}{N}.\] Then, it remains to analyse the rightmost term in last display. 
Note that \[\Big{|}\mathbb{E}_{x}\Big{[}\int_{0}^{t}F_{t\to}^{N}(X_{tN^{2}}^{i})\,ds\Big{]} \Big{|}\leq\int_{0}^{t}\sum_{x\in\Lambda_{N}}\mathbb{P}_{x}\Big{[}X_{tN^{2}}^{ i}=z\Big{]}\big{|}F_{t\to}^{N}(z)\big{|}\,ds.\] (D.4) Since \(\phi_{N}\in C^{4}\), then \(F_{t}^{N}(x)\lesssim 1/N^{2}\) for \(x\in\{2,\ldots,N-2\}\) and for any \(t\geq 0\) and last display is bounded by \[\frac{C}{N}+\sum_{k\in\{1,N-1\}}\mathbb{E}_{x}\Big{[}\int_{0}^{\infty}\mathbf{ 1}_{(X_{tN^{2}}^{i}=k)}\,ds\Big{]}\cdot|F_{t}^{N}(k)|.\] (D.5) Last expectation is the average time spent by the random walk at the site \(k\) until its absorption. This is the solution of the elliptic equation \[-N^{2}\Delta_{N}^{i}T^{N}(x)=\delta_{x=k},\forall x\in\Lambda_{N}\] with null Dirichlet conditions \(T^{N}(0)=0\) and \(T^{N}(N)=0\). A simple computation shows that \[T^{N}(x)=\frac{N^{\theta}}{N^{2}}\Big{[}-A_{N}^{i}x+B_{N}^{i}\,\Big{]}\] where \[A_{N}^{i}:=\frac{\lambda^{r}}{\lambda^{\ell}\lambda^{r}(N-2)+\alpha N^{\theta}( \lambda^{\ell}+\lambda^{r}))}\quad\text{and}\quad B_{N}^{i}:=\frac{1}{\lambda^ {\ell}}\Big{(}1-\big{(}\alpha-\frac{\lambda^{\ell}}{N^{\theta}}\big{)}A_{N}^{i} N^{\theta}\Big{)}.\] From this it follows that \(\max_{x\in\Lambda_{N}}|T^{N}(x)|\lesssim\frac{N^{\theta}}{N^{\theta}}\). Now we analyse \(\max_{k\in[1,N-1]}|F_{t}^{N}(k)|\). We do the proof for the case \(k=1\) and we leave the case \(k=N-1\) to the interested reader. Note that \[F_{t}^{N}(1) =\big{(}N^{2}\Delta_{n}^{i}-\alpha\partial_{u}^{2}\big{)}\phi_{N} (t,\tfrac{1}{N})\] \[=\alpha N^{2}(\phi_{N}(t,\tfrac{2}{N})-\phi_{N}(t,\tfrac{1}{N})) +\alpha N^{2-\theta}\lambda^{\ell}(\phi_{N}(t,0)-\phi_{N}(t,\tfrac{1}{N}))- \alpha\partial_{u}^{2}\phi_{N}(t,\tfrac{1}{N}).\] Now we use the regularity of \(\phi_{N}\) and make a Taylor expansion to get \[F_{t}^{N}(1)=\alpha N\partial_{u}\phi_{N}(t,0^{+})+O(1)+\alpha N^{2-\theta} \lambda^{\ell}\Big{(}\phi_{N}(t,0)-\phi_{N}(t,0^{+})-\frac{1}{N}\partial_{u} \phi_{N}(t,0^{+})\Big{)}+O(N^{-\theta}).\] If we now use the condition \[\alpha N(1-\frac{\lambda^{\ell}}{N^{\theta}})\partial_{u}\phi_{N}(t,0^{+})= \alpha N^{2-\theta}\lambda^{\ell}\Big{(}\phi_{N}(t,0^{+})-\phi_{N}(t,0)\Big{)},\] which (by noting that \(\phi_{N}(t,0)=\rho^{\ell}\)) coincides with \(\partial_{u}\phi_{N}(t,0^{+})=\mu_{N}^{\ell}(\phi_{N}(t,0^{+})-\rho^{\ell})\), then we obtain \[\sup_{t\geq 0}|F_{t}^{N}(1)|\lesssim 1+N^{-\theta}.\] Putting all the estimates together we find the bound for (D.3) given by \[\sup_{t\geq 0}\max_{x\in\Lambda_{n}}|\gamma_{t}^{N}(x)|\lesssim\frac{1}{N}+ \frac{N^{\theta}}{N^{2}}+\frac{1}{N^{2}}\] from where the proof ends, since \(\theta<1\). **Remark 1**.: _We observe that, for each \(N\in\mathbb{N}\), the stationary solution of (D.1), that we denote by \(\tilde{\rho}_{\mu_{n}^{j}}\), under the assumption that \(\lambda^{\ell}=\lambda^{r}:=\lambda\), is given by_ \[\tilde{\rho}_{\mu_{N}^{j}}\big{(}u\big{)}:=\frac{\rho^{r}+\rho^{l}(1+\mu_{N}^{ j})}{2+\mu_{N}^{j}}+\frac{\mu_{N}^{j}(\rho^{r}-\rho^{l})u}{2+\mu_{N}^{j}}.\] (D.6) _So, taking \(g_{N}=\tilde{\rho}_{\mu_{N}^{j}}+f\in C^{6}\) where \(f\) is a \(C_{c}^{\infty}[0,1]\) function, we have that \(g_{N}\) satisfies (H3). 
Indeed, using (D.6) and the definition of \(\mu_{N}^{j}\), we get that_ \[\tilde{\rho}_{\mu_{N}^{j}}\big{(}u\big{)}=\frac{\big{(}N^{\theta}-\lambda \big{)}(\rho^{r}+\rho^{l})+N\lambda\rho^{l}}{2(N^{\theta}-\lambda)+N\lambda}+ \frac{N\lambda(\rho^{r}-\rho^{l})u}{2(N^{\theta}-\lambda)+N\lambda}=Na_{N}u+b_ {N}.\] _Therefore, because \(f\) has compact support, we have that_ \[\partial_{u}^{k}g_{N}\big{(}u\big{)}=\partial_{u}^{k}\tilde{\rho}_{\mu_{N}^{j}} \big{(}u\big{)},\] _for \(u\in\{0,1\}\) and \(k=0,1,2,3\). Moreover, if we restrict \(p_{0}^{N}\) to be such that \(\rho_{0}^{N}(x)=g_{N}\left(\frac{x}{N}\right)\), then (H4) is trivially satisfied and we can find \(\gamma\in C^{6}\) which satisfies (H2). Indeed,_ \[\tilde{\rho}_{\mu_{N}^{j}}\big{(}u\big{)}\xrightarrow[N\to+\infty]{}\tilde{ \rho}(u):=\begin{cases}\rho^{l}+(\rho^{r}-\rho^{l})u,\text{ if }\theta<1,\\ \frac{\rho^{r}+(1+\lambda)\rho^{l}}{2+\lambda}+\frac{\lambda(\rho^{r}-\rho^{l}) u}{2+\lambda},\text{ if }\theta=1,\\ \frac{\rho^{r}+\rho^{l}}{2},\text{ if }\theta>1.\end{cases}\] _where the limit is taken uniformly in \(u\). Taking \(\gamma=\tilde{\rho}+f\) we have that_ \[\frac{1}{N}\sum_{x\in\Lambda_{N}}\left|\rho_{0}^{N}(x)-\gamma\left(\frac{x}{N }\right)\right|=\frac{1}{N}\sum_{x\in\Lambda_{N}}\left|\tilde{\rho}_{\mu_{N}^{ j}}\left(\frac{x}{N}\right)-\tilde{\rho}\left(\frac{x}{N}\right)\right| \leq\sup_{u\in[0,1]}|\tilde{\rho}_{\mu_{N}^{j}}\big{(}u\big{)}-\tilde{\rho}(u )|\xrightarrow[N\to+\infty]{}0\] _and so (H2) is satisfied._ Replacement Lemma For a configuration \(\eta\in\Omega_{N}\) and \(x\in\Lambda_{N}\) we define the translation by \(x\) of \(\eta\) as \((\tau_{x}\eta)(y)=\eta(x+y)\). Recall (3.8). **Lemma E.1** (Replacement Lemma).: _Recall from Proposition 4.4 the definition of \(\Lambda_{N}^{c,\ell},\Lambda_{N}^{c,r}\). Fix \(x\notin\Lambda_{N}^{c,r}\) and let \(\varphi:\Omega_{N}\to\mathbb{R}\) be a function whose support does not intersects the set of points in \(\{x+1,\cdots,x+\epsilon N\}\). Then for any \(\theta\in\mathbb{R}\) and for any \(t\in[0,T]\), it holds_ \[\lim_{\epsilon\to 0}\lim_{N\to+\infty}\mathbb{E}_{\mu^{N}}\bigg{[}\bigg{|} \int_{0}^{t}\varphi(\tau_{x}\eta)\Big{(}\eta_{sN^{2}}(x)-\overrightarrow{\eta }_{sN^{2}}^{|\epsilon N|}(x)\Big{)}ds\bigg{|}\bigg{]}=0.\] (E.1) _If \(x\notin\Lambda_{N}^{c,\ell}\) and for \(\varphi:\Omega_{N}\to\mathbb{R}\) a function whose support does not intersects the set of points in \(\{x-\epsilon N,\cdots,x-1\}\), the same statement holds replacing \(\overrightarrow{\eta}_{sN^{2}}^{|\epsilon N|}(x)\) by \(\overleftarrow{\eta}_{sN^{2}}^{|\epsilon N|}(x)\)._ In the case \(\varphi\equiv 1\), the last result was proved in Lemma 4.3 of [14] but for sake of completeness we give here a sketch of the proof of the more general result stated above, by following the strategy of the proof of the Lemma 4.3 of [14]. Proof.: Our starting point is to change the measure \(\mu_{N}\) to a reference measure, which in fact should be the invariant state of the system that we do not know, but instead we consider another suitable measure that we define as follows. To this end, let \(\varrho:[0,1]\to(0,1)\) be a Lipschitz function, bounded away from zero and one, and let \[\nu_{\varrho(\cdot)}^{N}(\eta):=\prod_{x=1}^{N-1}\left(\begin{matrix}\alpha \\ \eta(x)\end{matrix}\right)\big{(}\varrho(\frac{x}{N})\big{)}^{\eta(x)}\big{(}1 -\varrho(\frac{x}{N})\big{)}^{\alpha-\eta(x)}\] (E.2) be the inhomogeneous Binomial product measure of parameter \(\varrho(\cdot)\). 
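The measure in (E.2) is easy to simulate, which can be convenient for sanity checks of the estimates below: under \(\nu_{\varrho(\cdot)}^{N}\) the coordinates are independent and \(\eta(x)\sim\mathrm{Binomial}(\alpha,\varrho(x/N))\). A minimal sketch follows; the profile \(\varrho\) used there is an arbitrary illustrative choice, not one appearing in the article.

```python
import numpy as np

rng = np.random.default_rng(0)

N, alpha = 100, 2                    # illustrative values
varrho = lambda u: 0.3 + 0.4 * u     # any Lipschitz profile bounded away from 0 and 1

x = np.arange(1, N)                  # sites 1, ..., N-1
p = varrho(x / N)

# One configuration sampled from nu^N_{varrho(.)}: independent Binomial(alpha, varrho(x/N))
eta = rng.binomial(alpha, p)

# Sanity check: the empirical mean of eta(x) over many samples is close to alpha * varrho(x/N)
samples = rng.binomial(alpha, p, size=(20_000, N - 1))
print(np.max(np.abs(samples.mean(axis=0) - alpha * p)))   # small Monte Carlo error
```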
From the entropy and Jensen's inequalities, the fact that \(e^{|x|}\leq e^{x}+e^{-x}\) and that for sequences of positive real numbers \((\alpha_{N})_{N},(D_{N})_{N}\) it holds \[\limsup_{N\to\infty}\frac{1}{N}\log(a_{N}+b_{N})=\max\left\{\limsup_{N\to \infty}\frac{1}{N}\log(a_{N}),\;\limsup_{N\to\infty}\frac{1}{N}\log(b_{N}) \right\},\] together with Feynman-Kac's formula, the expectation in (E.1) is bounded from above by \[\frac{H(\mu^{N}|\nu_{\varrho(\cdot)}^{N})}{BN}+t\sup_{f\,\mathrm{density}} \Big{\{}\pm\langle\varphi(\tau_{x}\eta)(\eta(x)-\overrightarrow{\eta}^{| \epsilon N|}(x)),f\rangle_{\nu_{\varrho(\cdot)}^{N}}+\frac{N}{B}\langle \mathcal{E}_{N}\sqrt{f},\sqrt{f}\rangle_{\nu_{\varrho(\cdot)}^{N}}\Big{\}},\] where \(B>0\). Now we note that a bound on the entropy can be obtained as \(H(\mu^{N}|\nu_{\varrho(\cdot)}^{N})\lesssim N\), see for example beginning of Section 4 of [14]). Moreover, we can use the estimate \(N^{2}\langle\mathcal{E}_{N}\sqrt{f},\sqrt{f}\rangle_{\nu_{\varrho(\cdot)}^{N}}\) given in Lemma 4.1 of [14] (where the parameters \(\epsilon,\gamma,\delta,\beta\) there have the correspondence given in (2.1)). Putting this all together, we get that the expectation in the statement of the lemma is bounded from above by a constant times \[\frac{1}{B}+t\sup_{f\,\mathrm{density}}\Big{\{}\pm\langle\varphi(\tau_{x}\eta )(\eta(x)-\overrightarrow{\eta}^{|\epsilon N|}(x)),f\rangle_{\nu_{\varrho( \cdot)}^{N}}-\frac{N}{B}D_{\nu_{\varrho(\cdot)}^{N}}(\sqrt{f})\Big{\}}+\frac{1 }{BN},\] where \[D_{\nu_{\varrho(\cdot)}^{N}}(\sqrt{f}):=D_{\nu_{\varrho(\cdot)}^{N}}^{f}( \sqrt{f})+D_{\nu_{\varrho(\cdot)}^{N}}^{bulk}(\sqrt{f})+D_{\nu_{\varrho(\cdot)} ^{N}}^{f}(\sqrt{f})\] with \[D_{\nu_{\varrho(\cdot)}^{N}}^{f}(\sqrt{f}) :=\int_{\Omega_{N}}\left[\frac{\lambda^{\ell}\varrho^{\ell}\eta( 1)}{N^{\theta}}\Big{\{}\sqrt{f}(\eta^{1,0})-\sqrt{f}(\eta)\Big{\}}^{2}+\frac{ \lambda^{\ell}[\alpha-\varrho^{\ell}][\alpha-\eta(1)]}{N^{\theta}}\Big{\{} \sqrt{f}(\eta^{0,1})-\sqrt{f}(\eta)\Big{\}}^{2}\right]d\nu_{\varrho(\cdot)}^{N},\] \[D_{\nu_{\varrho(\cdot)}^{N}}^{bulk}(\sqrt{f}) :=\sum_{x=1}^{N-2}D_{\nu_{\varrho(\cdot)}^{N}}^{x,x+1}(\sqrt{f})+D _{\nu_{\varrho(\cdot)}^{N}}^{x+1,x}(\sqrt{f})\] \[=\sum_{x=1}^{N-2}\int_{\Omega_{N}}\eta(x)[\alpha-\eta(x+1)] \Big{\{}\sqrt{f}(\eta^{x,x+1})-\sqrt{f}(\eta)\Big{\}}^{2}d\nu_{\varrho(\cdot)}^ {N}\] \[+\sum_{x=1}^{N-2}\int_{\Omega_{N}}\eta(x+1)[\alpha-\eta(x)] \Big{\{}\sqrt{f}(\eta^{x+1,x})-\sqrt{f}(\eta)\Big{\}}^{2}d\nu_{\varrho(\cdot)}^ {N}\] and the definition of \(D^{r}_{\varphi_{\varrho(\cdot)}^{N}}(\sqrt{f})\) is analogous to the one of \(D^{t}_{\varphi_{\varrho(\cdot)}^{N}}(\sqrt{f})\) by replacing \(0\) and \(1\) by \(N\) and \(N-1\), respectively, and also \(\lambda^{t}\) and \(\varrho^{t}\) by \(\lambda^{r}\) and \(\varrho^{t}\), respectively. We are now left with estimating \[\langle\varphi(\tau_{x}\eta)(\eta(x)-\overline{\eta}^{\lfloor\varrho N \rfloor}(x)),f\rangle_{\varphi_{\varrho(\cdot)}^{N}}\] for every \(f\) density with respect to \(\varphi_{\varrho(\cdot)}^{N}\). 
Note that \[\langle\varphi(\tau_{x}\eta)(\eta(x)-\overrightarrow{\eta}^{\lfloor\epsilon N\rfloor}(x)),f\rangle_{\nu_{\varrho(\cdot)}^{N}}=\frac{1}{\lfloor\epsilon N\rfloor}\sum_{y=x+1}^{x+\lfloor\epsilon N\rfloor}\sum_{w=x+1}^{y-1}\langle\left[\eta(w)-\eta(w+1)\right]\varphi(\tau_{x}\eta),f\rangle_{\nu_{\varrho(\cdot)}^{N}}.\] Since \[\langle\left[\eta(w)-\eta(w+1)\right]\varphi(\tau_{x}\eta),f\rangle_{\nu_{\varrho(\cdot)}^{N}}\] \[=\frac{1}{2}\int_{\Omega_{N}}[\eta(w)-\eta(w+1)]\,\varphi(\tau_{x}\eta)[f(\eta)-f(\eta^{w,w+1})]d\nu_{\varrho(\cdot)}^{N}\] (E.3) \[+\frac{1}{2}\int_{\Omega_{N}}[\eta(w)-\eta(w+1)]\,\varphi(\tau_{x}\eta)[f(\eta)+f(\eta^{w,w+1})]d\nu_{\varrho(\cdot)}^{N},\] (E.4) making a change of variables \(\eta\mapsto\xi=\eta^{w,w+1}\) in (E.4) (and noting that the support of \(\varphi\) does not overlap with the set of points where this change is done) and splitting the state space \(\Omega_{N}\) as is done in Lemma 4.3 of [14], we can bound each term \(\pm\langle\left[\eta(w)-\eta(w+1)\right]\varphi(\tau_{x}\eta),f\rangle_{\nu_{\varrho(\cdot)}^{N}}\) from above, for any \(A>0\), by a constant times \[\frac{1}{4A}\Big{[}D_{\nu_{\varrho(\cdot)}^{N}}^{w,w+1}(\sqrt{f})+D_{\nu_{\varrho(\cdot)}^{N}}^{w+1,w}(\sqrt{f})\Big{]}+A+\Big{|}\varrho\Big{(}\tfrac{w+1}{N}\Big{)}-\varrho\Big{(}\tfrac{w}{N}\Big{)}\Big{|}.\]
From this it follows that \[\pm\frac{1}{\lfloor\epsilon N\rfloor}\sum_{y=x+1}^{x+\lfloor\epsilon N\rfloor}\sum_{w=x+1}^{y-1}\langle\left[\eta(w)-\eta(w+1)\right]\varphi(\tau_{x}\eta),f\rangle_{\nu_{\varrho(\cdot)}^{N}}-\frac{N}{B}D_{\nu_{\varrho(\cdot)}^{N}}(\sqrt{f})\] \[\lesssim\frac{1}{\lfloor\epsilon N\rfloor}\sum_{y=x+1}^{x+\lfloor\epsilon N\rfloor}\sum_{w=x+1}^{y-1}\frac{1}{4A}\Big{[}D_{\nu_{\varrho(\cdot)}^{N}}^{w,w+1}(\sqrt{f})+D_{\nu_{\varrho(\cdot)}^{N}}^{w+1,w}(\sqrt{f})\Big{]}-\frac{N}{B}D_{\nu_{\varrho(\cdot)}^{N}}(\sqrt{f})+A\epsilon N+\frac{1}{\lfloor\epsilon N\rfloor}\sum_{y=x+1}^{x+\lfloor\epsilon N\rfloor}\sum_{w=x+1}^{y-1}\Big{|}\varrho\Big{(}\tfrac{w+1}{N}\Big{)}-\varrho\Big{(}\tfrac{w}{N}\Big{)}\Big{|}\] \[\lesssim\frac{1}{4A}D_{\nu_{\varrho(\cdot)}^{N}}(\sqrt{f})-\frac{N}{B}D_{\nu_{\varrho(\cdot)}^{N}}(\sqrt{f})+A\epsilon N+\frac{1}{\lfloor\epsilon N\rfloor}\sum_{y=x+1}^{x+\lfloor\epsilon N\rfloor}\sum_{w=x+1}^{y-1}\Big{|}\varrho\Big{(}\tfrac{w+1}{N}\Big{)}-\varrho\Big{(}\tfrac{w}{N}\Big{)}\Big{|}.\] Choosing \(A=\frac{B}{4N}\) and using the fact that \(\varrho(\cdot)\) is Lipschitz, then \[\limsup_{N\to\infty}\mathbb{E}_{\mu^{N}}\left[\left|\int_{0}^{t}\varphi(\tau_{x}\eta)\left(\eta_{sN^{2}}(x)-\overrightarrow{\eta}_{sN^{2}}^{\lfloor\epsilon N\rfloor}(x)\right)ds\right|\right]\lesssim\frac{1}{B}+\left[\frac{B\epsilon}{4}+\epsilon\right].\] Finally, taking the limit \(\epsilon\to 0\) and then \(B\to\infty\), we are done. The proof of the other average to the left is completely analogous and we leave it to the reader.
2305.07195
Simultaneous Modeling of In Vivo and In Vitro Effects of Nondepolarizing Neuromuscular Blocking Drugs
Nondepolarizing neuromuscular blocking drugs (NDNBs) are clinically used to produce muscle relaxation during general anesthesia. This paper explores a suitable model structure to simultaneously describe in vivo and in vitro effects of three clinically used NDNBs, cisatracurium, vecuronium, and rocuronium. In particular, it is discussed how to reconcile an apparent discrepancy that rocuronium is less potent at inducing muscle relaxation in vivo than predicted from in vitro experiments. We develop a framework for estimating model parameters from published in vivo and in vitro data, and thereby compare the descriptive abilities of several candidate models. It is found that modeling of dynamic effect of activation of acetylcholine receptors (AChRs) is essential for describing in vivo experimental results, and a cyclic gating scheme of AChRs is suggested to be appropriate. Furthermore, it is shown that the above discrepancy in experimental results can be resolved when we consider the fact that the in vivo concentration of ACh is quite low to activate only a part of AChRs, whereas more than 95% of AChRs are activated during in vitro experiments, and that the site-selectivity is smaller for rocuronium than those for cisatracurium and vecuronium.
Hikaru Hoshino, Eiko Furutani
2023-05-12T01:38:02Z
http://arxiv.org/abs/2305.07195v2
Simultaneous Modeling of In Vivo and In Vitro Effects of Nondepolarizing Neuromuscular Blocking Drugs ###### Abstract Nondepolarizing neuromuscular blocking drugs (NDNBs) are clinically used to produce muscle relaxation during general anesthesia. This paper explores a suitable model structure and its parameters to simultaneously describe _in vivo_ and _in vitro_ effects of three clinically used NDNBs, cisatracurium, vecuronium, and rocuronium. In particular, it is discussed how to reconcile the apparent discrepancy that rocuronium is less potent at inducing muscle relaxation _in vivo_ than predicted from _in vitro_ experiments. We develop a framework for estimating model parameters from published _in vivo_ and _in vitro_ data, and thereby compare the descriptive abilities of several candidate models. As a result, it is shown that dynamic modeling of the kinetics of competitive binding of acetylcholine (ACh) and NDNB molecules to ACh receptors (AChRs) is effective, and the above discrepancy can be resolved if we assume that the _in vivo_ concentration of ACh is low enough to activate only a part of the AChRs, whereas more than 95 % of AChRs are activated during _in vitro_ experiments, and that the site-selectivity is smaller for rocuronium than for cisatracurium and vecuronium. keywords: neuromuscular transmission, anesthesia, dynamic modeling, kinetic mechanism ## 1 Introduction Nondepolarizing neuromuscular blocking drugs (NDNBs) interrupt synaptic transmission at the neuromuscular junction and are clinically used during general anesthesia to produce muscle relaxation [1]. Neuromuscular transmission is initiated by the arrival of an impulse at the motor nerve terminal and the subsequent release of acetylcholine (ACh) molecules into the synaptic cleft. A part of the released ACh molecules bind to nicotinic ACh receptors (AChRs) on the post-junctional membrane and thereby cause a change in membrane conductance due to channel opening of the AChRs, followed by the generation of an action potential in the muscle fibers and muscle contraction. It is well known that NDNBs act by competing with ACh for post-junctional AChRs and preventing changes in membrane conductance [2]. Each AChR has two non-identical binding sites, and the binding of only one molecule of NDNB is needed to prevent activation of the receptor, whereas two molecules of ACh are necessary for activation. While clinical effects of NDNBs are usually modeled by pharmacokinetic and pharmacodynamic (PKPD) analysis [3; 4; 5; 6; 7], this is a rather black-box approach. To better understand clinical properties of NDNBs, several mechanism-based models have been proposed. One of the most basic models is the two-site binding model [8], which is derived based on the assumption that the effect of a drug is proportional to the fractional amount of receptors occupied by the antagonist. This assumption is valid in most _in vitro_ experiments, and the two-site binding model has been widely used to represent these experimental results [8; 9; 10; 11]. However, it is known that these _in vitro_ results do not directly explain clinical or _in vivo_ results.
For example, although the values of IC\({}_{50}\), the concentration needed to produce a 50% inhibition of the experimental current, are similar for three clinically used NDNBs, cisatracurium, vecuronium, and rocuronium (10 nM, 15 nM, and 17 nM, respectively [10]), the value of EC\({}_{50}\), the concentration needed to produce a 50% decrease of the clinically observed muscle response, is much higher for rocuronium (1.35 \(\mu\)M [4]) than for cisatracurium (0.12 \(\mu\)M [6]) and vecuronium (0.26 \(\mu\)M [5]). That is, rocuronium is less potent at inducing muscle relaxation _in vivo_ than directly predicted from _in vitro_ experiments. The purpose of this paper is to develop a model describing _in vivo_ and _in vitro_ experimental results in a consistent manner by considering the molecular mechanisms of the effects of NDNBs. Although the two-site binding model effectively describes _in vitro_ experimental results, it represents only static properties of an NDNB at the equilibrium condition realized in _in vitro_ experimental settings. Since the free concentration of ACh and the degree of ACh occupancy on the receptors do not reach equilibrium during a synaptic event [12], dynamic modeling is required to represent the molecular processes of competition between ACh and NDNB molecules. In this direction, Dilger and coworkers [12; 13; 14] conducted kinetic measurements using a rapid perfusion system to determine association and dissociation rate constants of NDNB binding. In particular, a dynamic simulation was performed in [12] to reproduce the time course of experimental currents by using an ordinary differential equation model. Furthermore, Nigrovic and Amann [15] proposed a model of neuromuscular transmission, which is termed the _competitive kinetic model_ in this paper, to simulate the _in vivo_ muscular response. Based on these studies, this paper addresses simultaneous modeling of _in vivo_ and _in vitro_ effects of NDNBs. The contribution of this paper is twofold. The first one is to discuss a suitable structure of the model. By developing a framework of parameter optimization, we compare the descriptive abilities of the two-site binding model and the competitive kinetic model, and it is clarified that the two-site binding model is insufficient for the consistent modeling of _in vivo_ and _in vitro_ effects. Furthermore, a modification to the competitive kinetic model is proposed in this paper. Specifically, we introduce a cyclic scheme for gate opening and closing of the AChR upon association and dissociation of ACh molecules: 1) agonists bind, 2) the AChR opens, 3) agonists dissociate, and then 4) the AChR returns to the resting condition. This modification is based on similar models developed for explaining the processes of desensitization of AChRs [16; 17]. Although a reciprocal gating scheme, where agonists dissociate after the AChR returns to the resting condition, has been the widely accepted mechanism [18; 19; 20; 21], the results of this paper indicate the possibility of the cyclic gating scheme (see Sec. 4 for details). The second one is to discuss how the difference in the values of IC\({}_{50}\) and EC\({}_{50}\) mentioned above can be understood.
For this, in our previous work [22], we investigated how the fraction of activated AChRs calculated by the competitive kinetic model differs from that calculated by the two-site binding model (i.e., _in vitro_ simulation results) and revealed that the relationship between simulated results by these two models depends on the concentration of released ACh, the dissociation rate constant of NDNB, and the site-selectivity of NDNB. In this paper, it is shown that the above apparent discrepancy in IC\({}_{50}\) and EC\({}_{50}\) can be resolved if we assume that the _in vivo_ concentration of ACh is relatively low to activate only a part of AChRs, whereas more than 95 % of AChRs are activated during _in vitro_ experiments, and that the site-selectivity is smaller for rocuronium than those for cisatracurium and vecuronium. The rest of this paper is organized as follows. Sec. 2 introduces the two-site binding model and the competitive kinetic model with reciprocal and cyclic gating scheme and presents a framework for estimating model parameters to compare these candidate models. Sec. 3 provides the result of parameter estimation for each model and perform an additional numerical analysis to clarify the difference among these models. In Sec. 4, we summarize the obtained results and discuss how the apparent discrepancy in IC\({}_{50}\) and EC\({}_{50}\) can be resolved, and finally Sec. 5 concludes this paper. ## 2 Methods ### Models of Neuromuscular Response Here we introduce the models of neuromuscular response studied in this paper. The overall structure, which is based on [15] and common to all the considered models, is shown in Fig. 1. The list of parameters and their standard values are provided in Tab. 1. As shown in the figure, the _in vivo_ effects of NDNBs are simulated by the following two steps: 1) calculation of the fraction of activated AChRs after the release of ACh due to a stimulus to the motor nerve and 2) calculation of the twitch strength, i.e. the strength of the clinically observed muscle response to a stimulus. In this paper, we consider three different model structures for the first step of the above procedure: a) two-site binding model, b) competitive kinetic model with reciprocal gating scheme, and c) competitive kinetic model with cyclic gating scheme. Among them, the two-site binding model is a simple receptor binding model [8], and the concentration of activated AChRs, \([\mathrm{R}^{*}]_{0}\), at the absence of NDNB is given by \[\frac{[\mathrm{R}^{*}]}{[\mathrm{R}^{*}]_{0}}=\frac{K_{\mathrm{D1}}K_{ \mathrm{D2}}}{K_{\mathrm{D1}}K_{\mathrm{D2}}+K_{\mathrm{D1}}[\mathrm{D}]+K_{ \mathrm{D2}}[\mathrm{D}]+\left[\mathrm{D}\right]^{2}} \tag{1}\] where [D] stands for the drug concentration, and \(K_{\mathrm{D1}}\) and \(K_{\mathrm{D2}}\) for the dissociation equilibrium constants for NDNBs binding to the first and second sites of an AChR, respectively. Note that the right hand side of Eq. (1) represents the fractional amount of free AChRs not occupied by NDNB at an equilibrium condition, which can be derived based on the law of mass action. For the competitive kinetic models, we consider the two different schemes for gate opening and closing of AChR as shown in Fig. 2. In the figure, the complexes formed by binding of ACh, denoted by A, and NDNB, by D, to AChR, by R, are represented by 3-letter symbols. The first and last letters denote the first and second ligands occupying the sites 1 and 2, respectively, and the middle letter represents the receptor R. 
Unoccupied sites are denoted by O, and ORO stands for the free AChR. The parameters \(k_{\mathrm{dissA}i}\) and \(k_{\mathrm{dissD}i}\) for site \(\#i\) (\(i=1\), 2) stand for the dissociation rate constants of ACh and NDNB from AChRs, respectively. The association constants \(k_{\mathrm{assocA}i}\) and \(k_{\mathrm{assocD}i}\) for ACh and NDNB for site \(\#i\) are given by \(k_{\mathrm{assocA}i}:=k_{\mathrm{dissA}i}/K_{\mathrm{A}i}\) and \(k_{\mathrm{assocD}i}:=k_{\mathrm{dissD}i}/K_{\mathrm{D}i}\), respectively, where the parameter \(K_{\mathrm{A}i}\) stands for the dissociation equilibrium constant of ACh for site \(\#i\). In addition, the time course of gate opening and closing of AChRs is characterized by the rate constants \(k_{\rm open}\) and \(k_{\rm close}\), respectively.

Figure 1: Overall structure of the studied models consisting of the two steps: 1) calculation of the fraction of activated AChRs after the release of ACh and 2) calculation of the strength of the clinically observed muscle response to a stimulus.

Figure 2: Two schemes for gate opening and closing of AChRs. The complexes formed by binding of ACh, denoted by A, and NDNB, by D, to AChR, by R, are represented by 3-letter symbols, and the symbol \(\mathrm{ARA}^{\star}\) stands for AChRs in the open state due to the conformational change of AChRs. The symbol RD represents the desensitized state.

Figure 2a shows the reciprocal gating scheme used in the preceding studies [12; 17]. The symbol ARA stands for AChRs bound with two ACh molecules but in the closed state, and the symbol ARA\({}^{*}\) for AChRs in the open state and thus activated due to the conformational change of AChRs. On the other hand, in the cyclic scheme shown in Fig. 2b, ACh molecules dissociate before the AChR closes. The dissociation and association constants \(k^{*}_{\rm dissAi}\) and \(k^{*}_{\rm assocAi}\) after the activation of AChRs are distinguished from those before the activation. In both gating schemes, the symbol RD represents the desensitized state, and \(k_{\rm d+}\) and \(k_{\rm d-}\) stand for the rate constants of desensitization and of recovery from desensitization, respectively. Finally, the decay of the concentration of free ACh molecules in the synaptic cleft, which is mainly due to rapid hydrolysis of ACh by acetylcholinesterase, is characterized by the rate constant \(k_{\rm decay}\). By using the rate constants introduced above, the time course of the competition of ACh and NDNB molecules can be described by a set of ordinary differential equations derived based on the framework of chemical kinetics (see e.g. [15; 22] for more details). As a result, the concentration \(\left[{\rm R}^{*}\right]\) of activated AChRs can be calculated as the peak concentration of AChRs in open states (\(\left[{\rm ARA}^{*}\right]\) for the reciprocal scheme and \(\left[{\rm ARA}^{*}\right]\), \(\left[{\rm ARO}^{*}\right]\), and \(\left[{\rm ORA}^{*}\right]\) for the cyclic scheme). After the calculation of the fraction of activated AChRs, the clinically observed muscle response can be simulated in the second step of Fig. 1.
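To make the chemical-kinetics formulation above concrete, the following is a minimal Python/SciPy sketch of such a mass-action ODE simulation. It deliberately uses a reduced scheme with a single agonist site and a single competing NDNB site (not the full two-site reciprocal or cyclic schemes of Fig. 2), the free NDNB concentration is held constant, and the rate constants are only loosely based on Tab. 1; it illustrates the structure of the simulation rather than reproducing the results of this paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative rate constants, loosely based on Tab. 1 (single-site toy scheme).
k_dissA, K_A = 1.8e4, 1.6e-4          # ACh dissociation rate (1/s) and equilibrium constant (M)
k_assocA = k_dissA / K_A               # ACh association rate (1/(M s))
k_dissD, K_D = 12.6, 7.0e-8            # NDNB dissociation rate (1/s) and equilibrium constant (M)
k_assocD = k_dissD / K_D
k_open, k_close = 5.0e4, 1.2e3         # gating rate constants (1/s)
k_decay = 1.2e4                        # decay of free ACh (1/s)
D = 1.0e-7                             # free NDNB concentration, assumed constant (M)

def rhs(t, y):
    # y = [A, R, AR, ARstar, DR]: free ACh, free receptor, ACh-bound closed,
    # open (activated), and NDNB-bound receptor concentrations (M).
    A, R, AR, ARstar, DR = y
    bindA = k_assocA * A * R - k_dissA * AR
    bindD = k_assocD * D * R - k_dissD * DR
    gate = k_open * AR - k_close * ARstar
    return [-bindA - k_decay * A,          # d[A]/dt
            -bindA - bindD,                # d[R]/dt
            bindA - gate,                  # d[AR]/dt
            gate,                          # d[AR*]/dt
            bindD]                         # d[DR]/dt

y0 = [7.75e-6, 7.75e-5, 0.0, 0.0, 0.0]     # [A]_init and [R]_total from Tab. 1
sol = solve_ivp(rhs, (0.0, 5e-3), y0, method="LSODA", max_step=1e-6)
R_star_peak = sol.y[3].max()               # peak concentration of activated receptors
print(f"peak [R*] = {R_star_peak:.3e} M")
```

In the full models, the state vector additionally distinguishes the two binding sites and, for the cyclic scheme, the singly occupied open states \(\left[{\rm ARO}^{*}\right]\) and \(\left[{\rm ORA}^{*}\right]\).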
By using the peak concentration \(\left[{\rm R}^{*}\right]\) calculated in the first step, the twitch strength is calculated as follows [15]: \[{\rm Twitch\ Strength}=\frac{\left[{\rm R}^{*}\right]^{\gamma_{\rm A}}}{\left[{\rm R}^{*}\right]^{\gamma_{\rm A}}+\left[{\rm R}^{*}\right]_{50}^{\gamma_{\rm A}}} \tag{2}\] where \(\left[{\rm R}^{*}\right]_{50}\) stands for the parameter representing the concentration of activated AChRs at the half-maximal twitch, and \(\gamma_{\rm A}\) for the exponent that determines the slope of the sigmoidal curve. The formulation of Eq. (2) is based on the assumption that activation of a defined number of receptors at an end plate triggers the contraction of the associated muscle fiber, and that the muscle response is proportional to the number of contracting muscle fibers, while each fiber contracts in an all-or-nothing manner.

\begin{table}
\begin{tabular}{c|l|c} \hline symbol & meaning & value \\ \hline \([{\rm R}]_{\rm total}\) & Concentration of AChRs in the synaptic cleft & \(7.75\times 10^{-5}\,{\rm M}^{*}\) \\ \([{\rm A}]_{\rm init}\) & Initial concentration of ACh immediately after the stimulus & \(7.75\times 10^{-6}\,{\rm M}^{*}\) \\ \(k_{\rm decay}\) & Rate constant of the decay of the concentration of free ACh & \(1.2\times 10^{4}\,{\rm s}^{-1*}\) \\ \(k_{\rm dissA1}\) & Dissociation rate constant for ACh with site 1 of AChR & \(1.8\times 10^{4}\,{\rm s}^{-1\dagger}\) \\ \(k_{\rm dissA2}\) & Dissociation rate constant for ACh with site 2 of AChR & \(1.8\times 10^{4}\,{\rm s}^{-1\dagger}\) \\ \(K_{\rm A1}\) & Dissociation equilibrium constant for ACh with site 1 of AChR & \(1.6\times 10^{-4}\,{\rm M}^{\dagger}\) \\ \(K_{\rm A2}\) & Dissociation equilibrium constant for ACh with site 2 of AChR & \(1.6\times 10^{-4}\,{\rm M}^{\dagger}\) \\ \(k_{\rm close}\) & Rate constant of channel closing of AChR & \(1.2\times 10^{3}\,{\rm s}^{-1\ddagger}\) \\ \(k_{\rm open}\) & Rate constant of channel opening of AChR & \(5.0\times 10^{4}\,{\rm s}^{-1\ddagger}\) \\ \(k_{\rm d+}\) & Rate constant of desensitization & \(26\,{\rm s}^{-1\ddagger}\) \\ \(k_{\rm d-}\) & Rate constant of recovery from desensitization & \(0.13\,{\rm s}^{-1\ddagger}\) \\ \(k_{\rm dissD1}\) & Dissociation rate constant for NDNB with site 1 of AChR & \(12.6\,{\rm s}^{-1\ddagger}\) \\ \(k_{\rm dissD2}\) & Dissociation rate constant for NDNB with site 2 of AChR & \(113\,{\rm s}^{-1\ddagger}\) \\ \(K_{\rm D1}\) & Dissociation equilibrium constant for NDNB with site 1 of AChR & \(7.0\times 10^{-8}\,{\rm M}^{\ddagger}\) \\ \(K_{\rm D2}\) & Dissociation equilibrium constant for NDNB with site 2 of AChR & \(6.3\times 10^{-7}\,{\rm M}^{\ddagger}\) \\ \([{\rm R}^{*}]_{50}\) & Concentration of activated AChRs at half-maximal muscle response & \(9.7\times 10^{-9}\,{\rm M}^{\S}\) \\ \(\gamma_{\rm A}\) & Slope of the activated AChRs vs muscle response curve & \(4.8^{\S}\) \\ \hline \end{tabular}
\end{table} Table 1: List of parameters in the models of neuromuscular transmission. The symbol \({}^{*}\) stands for the values reported by [15], \({}^{\dagger}\) reported by [18], \({}^{\ddagger}\) reported by [17], and \({}^{\S}\) reported by [12].

The above models of neuromuscular response can also be used to simulate the _in vitro_ effects of NDNBs. As a typical experimental setting, we postulate a situation where AChRs are expressed in clonal cells, and outside-out patches are prepared for voltage-clamp recordings of macroscopic currents (see [12; 13]). Then, on the condition that the membrane conductance is proportional to the fraction of activated AChRs, the peak current \(I_{\text{peak}}\) after a rapid application of ACh can be described as follows: \[\frac{I_{\text{peak}}}{I_{0}}=\frac{\left[\text{R}^{*}\right]}{\left[\text{R}^{*}\right]_{0}} \tag{3}\] where \(I_{0}\) stands for the control value of the experimental current in the absence of NDNB. Furthermore, some simulation settings are changed to account for the _in vitro_ environment. First, while free ACh molecules are rapidly hydrolyzed by acetylcholinesterase in the synaptic cleft, the concentration of ACh is kept constant in a typical experimental setting. Thus, the parameter \(k_{\text{decay}}\) is set to zero for simulating the _in vitro_ effects. Second, the concentration of ACh used in experiments, represented by the parameter \(\left[\text{A}\right]_{\text{init}}\), is higher than that used for simulating _in vivo_ effects. In this paper, a concentration of \(7.75\times 10^{-3}\,\text{M}\) is used for the _in vitro_ simulations, by which more than 95% of AChRs are activated, whereas a typical number of ACh molecules released upon a nerve stimulus is one tenth of the number of AChRs [23; 24].

### Method of Parameter Estimation

In this paper, a set of parameter estimates for each model is determined based on _in vivo_ and _in vitro_ experimental results reported in the literature. The clinical effects of NDNBs have been quantified based on pharmacokinetic and pharmacodynamic (PKPD) analyses, and the relationship between the estimated concentration of NDNBs in the effect compartment and the twitch strength is fitted to the so-called Hill equation [3]. As a result, the values of EC\({}_{50}\), the concentration needed to produce a 50% decrease of muscle response, and \(\gamma_{\text{E}}\), the Hill coefficient, are reported [4; 5; 6] as listed in Tab. 2. While other studies have also been reported, we selected these studies, considering that data were obtained from patients under propofol anesthesia rather than isoflurane anesthesia and that neuromuscular monitoring was performed using mechanomyography (a force transducer) rather than acceleromyography or electromyography, in order to utilize experimental results obtained under similar conditions. Similarly, the _in vitro_ effects of NDNBs have also been fitted by the Hill equation, and the values of IC\({}_{50}\), the concentration needed to produce a 50% inhibition of the current, and \(\gamma_{\text{I}}\), the associated Hill coefficient, are reported [11]. Although some rate constants have been measured and reported [12; 13; 14], these studies have used mouse AChRs. This paper determines these rate constants using the values of IC\({}_{50}\) and \(\gamma_{\text{I}}\) reported in [10], which were obtained using human adult AChRs rather than mouse adult or embryonic AChRs.
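As a concrete illustration of how the model-side pharmacologic parameters arise from Eqs. (1)–(3), the following minimal Python sketch evaluates the two-site binding curve and the twitch-strength mapping and reads off an IC\({}_{50}\)-like value from the resulting inhibition curve by interpolation. The parameter values are simply the standard values of Tab. 1, and the snippet is for illustration only; it is not the estimation code used in this paper.

```python
import numpy as np

def two_site_fraction(D, KD1=7.0e-8, KD2=6.3e-7):
    """Eq. (1): fraction of AChRs not occupied by NDNB at equilibrium."""
    return KD1 * KD2 / (KD1 * KD2 + KD1 * D + KD2 * D + D**2)

def twitch_strength(R_star, R50=9.7e-9, gamma_A=4.8):
    """Eq. (2): Hill-type mapping from activated AChRs to twitch strength."""
    return R_star**gamma_A / (R_star**gamma_A + R50**gamma_A)

# In vitro: by Eq. (3) the normalized peak current equals [R*]/[R*]_0, so the
# two-site IC50 is the NDNB concentration at which Eq. (1) drops to 0.5.
D = np.logspace(-10, -4, 400)               # NDNB concentrations (M)
frac = two_site_fraction(D)
ic50 = np.interp(0.5, frac[::-1], D[::-1])  # invert the monotonically decreasing curve
print(f"two-site IC50 ~ {ic50 * 1e9:.1f} nM")

# In vivo (second step of Fig. 1): map activated-receptor concentrations
# below and above [R*]_50 through the twitch-strength curve.
print(twitch_strength(5.0e-9), twitch_strength(2.0e-8))
```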
The problem of parameter estimation can be formulated as an optimization problem with the following objective function \(F\): \[F=\frac{1}{4N_{\rm D}}\sum_{k=1}^{N_{\rm D}}\Bigg\{\frac{(S_{\rm EC_{50},k}-E_{\rm EC_{50},k})^{2}}{\rm CI_{\rm EC_{50},k}^{2}}+\frac{(S_{\gamma_{\rm E},k}-E_{\gamma_{\rm E},k})^{2}}{\rm CI_{\gamma_{\rm E},k}^{2}}+\frac{(S_{\rm IC_{50},k}-E_{\rm IC_{50},k})^{2}}{\rm CI_{\rm IC_{50},k}^{2}}+\frac{(S_{\gamma_{\rm I},k}-E_{\gamma_{\rm I},k})^{2}}{\rm CI_{\gamma_{\rm I},k}^{2}}\Bigg\}+\frac{W}{4}\sum_{i=1}^{2}\big\{(\log_{10}k_{\rm dissAi}^{\rm est}/k_{\rm dissAi}^{\rm nom})^{2}+(\log_{10}k_{\rm assocAi}^{\rm est}/k_{\rm assocAi}^{\rm nom})^{2}\big\} \tag{4}\] where \(k\) represents the index of each NDNB (1: cisatracurium, 2: vecuronium, and 3: rocuronium in this paper), and \(N_{\rm D}\) stands for the number of NDNBs considered (\(N_{\rm D}=3\) in this paper). The symbol \(S\) stands for the simulated value of the pharmacologic parameter indicated by the subscript \(\rm EC_{50}\), \(\gamma_{\rm E}\), \(\rm IC_{50}\), or \(\gamma_{\rm I}\), and the symbol \(E\) for the corresponding experimental result. The symbol CI stands for the 95% confidence interval of the experimental result, used to normalize the errors between experimental and simulation results. The second term of \(F\) is a penalty term for the difference between the estimated and nominal values of the dissociation and association constants \(k_{\rm dissAi}\) and \(k_{\rm assocAi}\) shown in Tab. 1. Furthermore, considering that the two binding sites of the mouse adult AChR have similar affinities for ACh [18], we assume that this is also the case for the human adult AChR and estimate the parameters \(k_{\rm dissA}\) and \(K_{\rm A}\) with \(k_{\rm dissA}=k_{\rm dissA1}=k_{\rm dissA2}\) and \(K_{\rm A}=K_{\rm A1}=K_{\rm A2}\). The dissociation rate constants \(k_{\rm dissD1}\) and \(k_{\rm dissD2}\) for the NDNB are also considered to be equal (\(k_{\rm dissD}=k_{\rm dissD1}=k_{\rm dissD2}\)) to simplify the discussion.

\begin{table}
\begin{tabular}{c c c c} \hline & Cisatracurium & Vecuronium & Rocuronium \\ \hline In vivo results & & & \\ \(\text{EC}_{50}\) (\(\mu\)M) & \(0.12\pm 0.027\) & \(0.26\pm 0.10\) & \(1.35\pm 0.26\) \\ \(\gamma_{\text{E}}\) & \(6.9\pm 1.3\) & \(7.6\pm 3.8\) & \(4.79\pm 1.70\) \\ In vitro results & & & \\ \(\text{IC}_{50}\) (\(\mu\)M) & \(10\pm 1\) & \(15\pm 2\) & \(17\pm 2\) \\ \(\gamma_{\text{I}}\) & \(1.02\pm 0.09\) & \(1.03\pm 0.12\) & \(0.67\pm 0.05\) \\ \hline \end{tabular}
\end{table} Table 2: In vivo and in vitro experimental results used in this study. The values of EC\({}_{50}\) and \(\gamma_{\text{E}}\) are reported in [6], [5], and [4] for cisatracurium, vecuronium, and rocuronium, respectively, and the values of IC\({}_{50}\) and \(\gamma_{\text{I}}\) are reported in [10].

Finally, the proposed framework of parameter estimation was implemented in Python. To numerically solve the ordinary differential equations of the models, the Fortran-based solver LSODA provided by the Python package SciPy (version 1.5.2) was used, and the time courses of the competitive kinetics were simulated for \(5\,\rm ms\). For the nonlinear regression analysis to derive the pharmacologic parameters (such as \(\rm EC_{50}\) and \(\rm IC_{50}\)) from a calculated concentration-effect curve, a trust region reflective algorithm implemented in the least_squares function provided by the package SciPy was used.
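To illustrate this regression step, the following is a minimal sketch (not the authors' code) of fitting a concentration–effect curve to a Hill equation with scipy.optimize.least_squares, whose default method is the trust region reflective algorithm, in order to read off EC\({}_{50}\) (or IC\({}_{50}\)) and the Hill coefficient. Here the "simulated" responses are generated from a known Hill curve purely for demonstration.

```python
import numpy as np
from scipy.optimize import least_squares

def hill(conc, c50, gamma):
    """Fraction of effect remaining at drug concentration `conc` (inhibitory Hill curve)."""
    return c50**gamma / (c50**gamma + conc**gamma)

# Concentration-effect points as they would come out of the model simulation.
conc = np.logspace(-8, -5, 12)                       # drug concentrations (M)
response = hill(conc, c50=1.35e-6, gamma=4.79)       # placeholder "simulated" responses

def residuals(p):
    c50, gamma = p
    return hill(conc, c50, gamma) - response

fit = least_squares(residuals, x0=[1e-6, 2.0],
                    bounds=([1e-9, 0.1], [1e-3, 20.0]))
ec50_fit, gamma_fit = fit.x
print(f"EC50 = {ec50_fit:.2e} M, Hill coefficient = {gamma_fit:.2f}")
```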
Finally, the cost function \(F\) was minimized by the Nelder-Mead algorithm implemented in the minimize function of the same package. The weight of the penalty term was set to \(W=0.25\).

## 3 Results

The results of parameter estimation are summarized in Tab. 3. For the two-site binding model, the eight parameters shown in the table, including the dissociation equilibrium constants \(K_{\rm D1}\) and \(K_{\rm D2}\) for each NDNB, were estimated. The value of the objective function, \(F=4.88\), is larger than 1, implying that the average error between simulation and experimental results is larger than the 95% confidence interval of the experimental results. Figures 3a and 3b show the comparison between simulation and experimental results for _in vivo_ and _in vitro_ effects, respectively. The solid lines in the figures show simulation results, and the broken lines the sigmoidal curves based on the experimental results given in Tab. 2. The error bars in Fig. 3a show the 95% confidence intervals of \(\rm EC_{50}\) for each NDNB, and it can be seen that the simulated values of \(\rm EC_{50}\) for cisatracurium and rocuronium are outside the confidence intervals when the two-site binding model is used. This confirms that the two-site binding model is insufficient for consistent modeling of _in vivo_ and _in vitro_ effects. Next, for the competitive kinetic models with reciprocal and cyclic gating schemes, kinetic constants such as \(k_{\rm dissA}\) and \(k_{\rm dissD}\) are also estimated, and the numbers of estimated parameters are fifteen and seventeen for the reciprocal and cyclic schemes, respectively. Clearly, the errors between simulated and experimental results, quantified by the first term of the objective function \(F\), are smaller than the error for the two-site binding model (0.14 and 0.16 for the reciprocal and cyclic schemes, respectively). Figures 3c and 3d show the _in vivo_ and _in vitro_ simulation results for the reciprocal scheme, respectively, and Figs. 3e and 3f for the cyclic scheme. It can be seen that the simulated concentration-effect curves are close to the experimental results, and the values of \(\rm EC_{50}\) are all within the corresponding confidence intervals. A significant difference between the estimated results for the reciprocal and cyclic schemes lies in the second term of the objective function \(F\) (2.10 for the reciprocal scheme and 0.92 for the cyclic scheme). This means that the dissociation rate constant \(k_{\rm dissA}\) and the dissociation equilibrium constant \(K_{\rm A}=k_{\rm dissA}/k_{\rm assocA}\) for ACh estimated for the cyclic model are closer to the nominal values than those estimated for the reciprocal model. The implication of this difference will be discussed in Sec. 4 after a further analysis presented in the rest of this section. To better understand the differences among the descriptive abilities of the three model structures, here we analyze how the pharmacologic parameters \({\rm EC}_{50}\), \(\gamma_{\rm E}\), \({\rm IC}_{50}\), and \(\gamma_{\rm I}\) change depending on the parameters \(K_{\rm D1}\), \(K_{\rm D2}\), and \(k_{\rm dissD}\) characterizing the properties of an NDNB.
For this analysis, it is convenient to utilize the fact that the models can be represented in a dimensionless normalized form (see [22]), where the concentrations \({\rm EC}_{50}\) and \({\rm IC}_{50}\) are normalized by the dissociation equilibrium constant \(K_{\rm D1}\), and the properties of an NDNB can be identified by the ratio of the two dissociation equilibrium constants \(K_{\rm D1}\) and \(K_{\rm D2}\), i.e. the site-selectivity \(\mu:=K_{\rm D2}/K_{\rm D1}\), as well as the dissociation rate constants \(k_{\rm dissD}\). Figure 4 shows the results of pharmacologic parameters simulated by the two-site binding model with different values of \(\mu\) (the parameter \(k_{\rm dissD}\) is not used in this model). The values of \({\rm EC}_{50}\) and \({\rm IC}_{50}\) monotonically increase with an increase of the parameter \(\mu\), and the values of \(\gamma_{\rm E}\) and \(\gamma_{\rm I}\) decrease with an increase of the parameter \(\mu\). It can be seen that the value of \(\gamma_{\rm I}\) is in the range of \([1.0,1.2]\), and thus the experimental result of \(\gamma_{\rm I}\) for rocuronium (\(\gamma_{\rm I}=0.67\pm 0.05\)) can not be well described by the two-site binding model. The red, green, and blue points in the figure show the results for the cases of cisatracurium, vecuronium, and rocuronium, respectively. It can be seen that the site selectivity \(\mu\) is almost 1 for cisatracurium, and takes \begin{table} \begin{tabular}{l c c c} \hline \hline \multirow{2}{*}{Parameters} & \multirow{2}{*}{Two-site binding model} & (b) & (c) \\ & & Competitive kinetic model & Competitive kinetic model \\ & & (Reciprocal scheme) & (Cyclic scheme) \\ \hline \(F\) & 4.88 & 2.24 & 1.08 \\ 1st term of \(F\) & 4.88 & 0.14 & 0.16 \\ 2nd term of \(F\) & — & 2.10 & 0.92 \\ \hline \([{\rm ARA}]_{50}\) & \((1.33\times 10^{-6})[{\rm R}^{*}]_{0}\) & \(2.08\times 10^{-7}\,{\rm M}\) & \(1.82\times 10^{-8}\,{\rm M}\) \\ \(\gamma_{A}\) & 4.17 & 9.14 & 9.04 \\ \(k_{\rm dissA}\) & — & \(4.43\times 10^{2}\,{\rm s}^{-1}\) & \(2.62\times 10^{3}\,{\rm s}^{-1}\) \\ \(K_{\rm A}\) & — & \(1.58\times 10^{-8}\,{\rm M}\) & \(4.44\times 10^{-7}\,{\rm M}\) \\ \(k_{\rm close}\) & — & \(2.22\times 10^{7}\,{\rm s}^{-1}\) & \(4.24\times 10^{4}\,{\rm s}^{-1}\) \\ \(k_{\rm open}\) & — & \(1.48\times 10^{10}\,{\rm s}^{-1}\) & \(1.06\times 10^{4}\,{\rm s}^{-1}\) \\ \(k_{\rm dissA}^{*}\) & — & — & \(1.70\times 10^{4}\,{\rm s}^{-1}\) \\ \(K_{\rm A}^{*}\) & — & — & \(1.84\times 10^{-8}\,{\rm M}\) \\ \hline Cisatracrium & & & \\ \(K_{\rm D1}\) & \(2.19\times 10^{-8}\,{\rm M}\) & \(9.57\times 10^{-9}\,{\rm M}\) & \(1.02\times 10^{-8}\,{\rm M}\) \\ \(K_{\rm D2}\) & \(2.12\times 10^{-8}\,{\rm M}\) & \(6.75\times 10^{-6}\,{\rm M}\) & \(3.60\times 10^{-5}\,{\rm M}\) \\ \(k_{\rm dissD}\) & — & \(2.6\,{\rm s}^{-1}\) & \(4.0\,{\rm s}^{-1}\) \\ Vecuronium & & & \\ \(K_{\rm D1}\) & \(2.09\times 10^{-8}\,{\rm M}\) & \(1.58\times 10^{-8}\,{\rm M}\) & \(1.63\times 10^{-8}\,{\rm M}\) \\ \(K_{\rm D2}\) & \(9.29\times 10^{-8}\,{\rm M}\) & \(2.76\times 10^{-6}\,{\rm M}\) & \(1.58\times 10^{-6}\,{\rm M}\) \\ \(k_{\rm dissD}\) & — & \(1.9\,{\rm s}^{-1}\) & \(5.4\,{\rm s}^{-1}\) \\ Rocuronium & & & \\ \(K_{\rm D1}\) & \(1.88\times 10^{-8}\,{\rm M}\) & \(1.23\times 10^{-8}\,{\rm M}\) & \(1.22\times 10^{-8}\,{\rm M}\) \\ \(K_{\rm D2}\) & \(1.28\times 10^{-4}\,{\rm M}\) & \(1.37\times 10^{-7}\,{\rm M}\) & \(1.76\times 10^{-7}\,{\rm M}\) \\ \(k_{\rm dissD}\) & — & \(64.0\,{\rm s}^{-1}\) & \(61.5\,{\rm s}^{-1}\) \\ \hline \hline \end{tabular} \end{table} 
Table 3: Results of parameter estimation for the three modeling structures a large value for rocuronium (\(\mu=6.8\times 10^{3}\)). Figures 5 and 6 show the pharmacologic parameters simulated by the competitive kinetic model with reciprocal and cyclic schemes, respectively. The solid, broken, and dotted lines shows the simulation results with \(k_{\rm dissD}=1.0\,\mathrm{s}^{-1}\), \(10.0\,\mathrm{s}^{-1}\), and \(60.0\,\mathrm{s}^{-1}\), respectively. For _in vitro_ results, it can be seen that simulation results highly depend on the value of \(k_{\rm dissD}\) and the value of \(\gamma_{\rm I}\) decreases as the increase of \(k_{\rm dissD}\). This is due to the dissociation of NDNB molecules from AChRs, which has been experimentally observed in [12], and is important to explain the low \(\gamma_{\rm I}\) for rocuronium, which can not be described by the two-site binding model. Furthermore, regarding _in vivo_ effects, it can be seen that the values of \(\mathrm{EC}_{50}/K_{\rm D1}\) take the maximum values near \(\mu=10\), whereas it monotonically increases in the case of the two-site binding model. As a result, in contrast to the results of the two-site binding model, the site-selectivity \(\mu\) for rocuronium takes the lowest values (\(\mu=11.1\) for reciprocal scheme and \(\mu=14.4\) for cyclic scheme) among the three considered NDNBs. This is consistent with the finding for mouse AChRs in [11] that the site-selectivity for rocuronium is lowest Figure 3: Concentration-effect relationship for the two-site binding model (a, b), competitive kinetic model with reciprocal gating scheme (c,d), and competitive kinetic model with cyclic gating scheme (e,f). The solid lines show the simulation results, and the broken lines the sigmoidal curves plotted based on the experimental results given in Tab. 2. among the three NDNBs. Finally, when comparing the reciprocal and cyclic gating schemes, it can be seen that results for the cyclic scheme (Figs. 6a and 6b) are less dependent on the value of \(k_{\rm dissD}\) than for the reciprocal scheme. Its implication will be further discussed in Sec. 4. ## 4 Discussion This paper addressed simultaneous modeling of _in vivo_ and _in vitro_ effects of NDNBs. In particular, we explored a suitable model structure and its parameters to reconcile an apparent discrepancy seen among _in vivo_ and _in vitro_ experimental results for three clinically used NDNBs, cisatracurium, vecuronium, and rocuronium. Although the values of IC\({}_{50}\) are similar for these three NDNBs (10 nM, 15 nM, and 17 nM for cisatracurium, vecuronium, and rocuronium, respectively [10]), the value of EC\({}_{50}\) for rocuronium is much higher (1.35 \(\mu\)M [4]) than those for cisatracurium (0.12 \(\mu\)M [6]) and vecuronium (0.26 \(\mu\)M [5]). That is, rocuronium is less potent at inducing muscle relaxation _in vivo_ than directly predicted from _in vitro_ experiments. Regarding the difference between _in vivo_ and _in vitro_ effects of NDNBs, it is well known that neuromuscular transmission has a high margin of safety [25] due to copious density of AChRs, that is, only a small fraction of AChRs need to be activated to cause muscle contraction, and more than 80 % of the AChRs must be occupied by NDNBs before any diminution can be seen in twitch strength. In our simulations, neuromuscular response is calculated in the two step shown in Fig. 1, i.e., 1) calculation of the fraction of activated AChRs and 2) calculation of twitch strength induced by the activation of AChRs. 
Clearly, the second step is responsible for explaining the margin of safety, and the parameters \([\mathrm{R}^{*}]_{50}\) and \(\gamma_{\mathrm{A}}\) in Eq. (2) determine the amount of activated AChRs needed to induce muscle contraction. Here, although the low potency of rocuronium can be understood as if the margin of safety was higher for rocuronium than for cisatracurium and vecuronium, the properties of an NDNB would not affect the second step mentioned above. Thus, in this paper, we aim to discuss how the apparent difference in the margin of safety can be explained through the difference in the first step of the above framework. With this aim, this paper compared the following three different model structures for the first step in Fig. 1: (i) two-site binding model, (ii) competitive kinetic model with reciprocal gating scheme, and (iii) Figure 4: Pharmacologic parameters simulated by the two-site binding model with different values of \(\mu:=K_{\mathrm{D2}}/K_{\mathrm{D1}}\). The _red_, _green_, and _blue_ points show the calculated values for cisatracurium, vecuronium, and rocuronium, respectively. competitive kinetic model with cyclic gating scheme. Among them, the two-site binding model is the most basic one, and it has been used to describe concentration-effect relationships obtained by _in vitro_ experiments [8; 9; 10; 11] for the purpose of determining the fraction of ACh receptors occupied by each NDNB. When this model is used, the fraction of activated AChRs is directly calculated from the receptor occupancy, and there is no difference between _in vivo_ and _in vitro_ simulations at the first step. From the simulation results of this paper, it can be confirmed that using the two-site binding model is insufficient for simultaneous modeling Figure 5: Pharmacologic parameters simulated by the competitive kinetic model with the reciprocal scheme under various settings of \(\mu:=K_{\mathrm{D2}}/K_{\mathrm{D1}}\). The _red_, _green_, and _blue_ points show the calculated values for cisatracurium, vecuronium, and rocuronium, respectively. Figure 6: Pharmacologic parameters simulated by the competitive kinetic model with cyclic scheme under various settings of \(\mu:=K_{\mathrm{D2}}/K_{\mathrm{D1}}\). The _red_, _green_, and _blue_ points show the calculated values for cisatracurium, vecuronium, and rocuronium, respectively. of _in vivo_ and _in vitro_ experimental results. When the competitive kinetic model is used (both reciprocal and cyclic gating scheme), the simulated values of EC\({}_{50}\) are all in the range of 95% confidence intervals of the experimental results, implying that the low potency of rocuronium can be described. In these cases, dynamic simulations of competitive kinetics have been performed, and the fractions of activated AChRs for _in vivo_ and _in vitro_ simulations are different due to the difference in the concentration of ACh and in the parameter \(k_{\rm decay}\). In our previous study [22], we theoretically and numerically analyzed the relationship between the fraction of activated AChRs and the receptor occupancy by NDNB molecules. As a result, it has been shown that the fraction of activated AChRs simulated by the competitive kinetic model get closer to that described by the two-site binding model as 1) the concentration of ACh becomes higher and 2) the dissociation rate constant \(k_{\rm diss}\) becomes smaller. Conversely, small ACh concentration or large rate constant \(k_{\rm diss}\) is necessary for explaining _in vivo_ experimental results. 
Furthermore, it has been found in [22] that the difference between _in vivo_ and _in vitro_ simulations becomes more prominent as the value of \(\mu=K_{\rm D2}/K_{\rm D1}\) decreases, i.e., as the site-selectivity becomes small. Thus, as shown in Figs. 5 and 6, the value of EC\({}_{50}/K_{\rm D1}\) takes a maximum value because it increases with decreasing \(\mu\) in the range of \(\log_{10}\mu>1\) for the above reason, and decreases with decreasing \(\mu\) in the range of \(0<\log_{10}\mu<1\) due to a change in the receptor occupancy as described by the two-site binding model (as shown in Fig. 4). Owing to the fact that the value of EC\({}_{50}/K_{\rm D1}\) takes a large value near \(\mu=10\), where the value of IC\({}_{50}/K_{\rm D1}\) is relatively low as shown in Figs. 5 and 6, it is possible to explain the high ratio of EC\({}_{50}/\)IC\({}_{50}\) for rocuronium. When comparing the results for the reciprocal and cyclic gating schemes, the estimated values of \(k_{\rm dissA}\) and \(K_{\rm A}\) are quite different. The values estimated for the cyclic scheme are closer to the nominal values reported for mouse adult AChRs [18] than those for the reciprocal scheme. Furthermore, for the reciprocal scheme, it can be seen from simulations of the time course of [ARA\({}^{*}\)] (not shown in this paper) that it does not reach its peak concentration within the initial activation phase and that it has a peak at around several tens of ms, whereas, for the cyclic scheme, it reaches its peak in less than 1 ms. This result for the reciprocal scheme is due to the slow dissociation (\(k_{\rm dissA}=8.77\times 10^{2}\,{\rm s}^{-1}\)) and the high affinity (\(K_{\rm A}=3.33\times 10^{-9}\,{\rm M}\)) of ACh to AChRs, and this time course is too slow, given that an end-plate current has a typical time constant of about 1 ms. Thus, based on the above, the cyclic gating scheme is preferred in this paper. Furthermore, from the difference between Fig. 5 and Fig. 6, it can be seen that the reason for the low potency of rocuronium is explained differently by the models with reciprocal and cyclic gating schemes. When the reciprocal scheme is used, the _in vivo_ effects (EC\({}_{50}\) and \(\gamma_{\rm E}\) in Fig. 5) depend strongly on the dissociation rate constant \(k_{\rm dissD}\), and thus a large \(k_{\rm dissD}\) is necessary to explain the high EC\({}_{50}\) for rocuronium. On the other hand, when the cyclic scheme is used (Fig. 6), the _in vivo_ results are less dependent on the value of \(k_{\rm dissD}\). Thus, if the cyclic scheme is appropriate, it follows that \(k_{\rm dissD}\) is not an important factor in explaining the low potency of rocuronium, while it is still important for explaining the low \(\gamma_{\rm I}\) of rocuronium. Since it has been found in [22] that either a small ACh concentration or a large \(k_{\rm dissD}\) is necessary to explain the difference between the simulation results of the two-site binding model and the competitive kinetic model, a small ACh concentration would be a key assumption needed to explain the low potency of rocuronium. Although the cyclic gating scheme is preferred in this paper, the reciprocal gating scheme has been the widely accepted model [18, 19, 20, 21]. A key fact that has supported the reciprocal gating scheme is that the affinity of the AChR for ACh is much higher in the open than in the closed state [20, 21], which can lead to the thought that ACh would not dissociate from AChRs while the channel is open.
Interestingly, however, with the estimated parameters of the cyclic model, the affinity in the open state (\(1/K_{\rm A}^{*}=5.43\times 10^{7}\,{\rm M}^{-1}\)) is higher than that in the closed state (\(1/K_{\rm A}=2.25\times 10^{6}\,{\rm M}^{-1}\) ) and are consistent with the above fact. That is, the high affinity at the open state does not exclude the possibility of a cyclic model. However, there is a discrepancy between the dissociation rate \(k_{\rm dissA}^{*}\) estimated in this paper (\(5.05\times 10^{4}\,{\rm s}^{-1}\)) and reported in [19] (\(24\,{\rm s}^{-1}\)). Since the constant reported in [19] is estimated based on the premise of the reciprocal model, it may be possible to reconcile the discrepancy by re-estimating the constant using the cyclic scheme. However, it is beyond the scope of this paper and is in the future work. Thus, further consideration is needed to discuss whether the cyclic gating scheme is appropriate or not. Finally, it should be noted that the method of parameter estimation and the estimated results are dependent on the following simplifying assumptions. 1) In the kinetic simulation performed in this paper, we ignored the effect of the three-dimensional structure of the synapse as modeled in [26; 27] and simply assumed that both ACh and AChRs are distributed uniformly in the synaptic cleft. 2) The extent of plasma protein binding, which is different for each NDNB [28], was not considered to simulate the _in vivo_ effects. Although only the unbound NDNB molecules can cross cell membranes and reach the postsynaptic membrane at the neuromuscular junction, we simply considered the effect-site concentration in the PKPD modeling as the NDNB concentration at the neuromuscular junction. 3) All the kinetic constants were assumed to be the same between _in vivo_ and _in vitro_ environments. For example, it has been reported that the kinetic constants for NDNBs at physiological temperatures around \(37\,\mathrm{\SIUnitSymbolCelsius}\) are different from those at room temperatures around \(25\,\mathrm{\SIUnitSymbolCelsius}\)[14]. 4) Several parameters are directly taken from literature without any correction. For example, the concentration \(\left[\mathrm{R}\right]_{\mathrm{total}}\) of AChRs is based on the number of AChRs at the end plates of human deltoid muscle [29] and the volume of the synaptic cleft of rat diaphragm [30], and thus it may be different from the value for human adductor pollicis muscle for which _in vivo_ experimental results have been obtained. Regardless of these assumptions, the obtained model and simulation results of this paper are useful for exploring the molecular mechanisms of the relationship between _in vivo_ and _in vitro_ effects of NDNBs. ## 5 Conclusions This paper addressed simultaneous modeling of _in vivo_ and _in vitro_ effects of NDNBs. In particular, we explored a suitable model structure and its parameters to reconcile the fact that rocuronium is less potent at inducing muscle relaxation _in vivo_ than directly predicted from _in vitro_ experiments. By comparing the results of parameter estimation for three candidate models, it was shown that the competitive kinetic model with the cyclic gating scheme best described both the _in vivo_ and _in vitro_ experimental data. 
It was found that the above apparent discrepancy can be resolved if we assume that the _in vivo_ concentration of ACh is relatively low to activate only a part of AChRs, whereas more than \(95\,\mathrm{\char 37}\) of AChRs are activated during _in vitro_ experiments, and that the site-selectivity is smaller for rocuronium than those for cisatracurium and vecuronium. Although further consideration is needed to conclude that the cyclic gating scheme is appropriate for the modeling, the obtained model and simulation results in this paper are useful for exploring the molecular mechanisms of the relationship between _in vivo_ and _in vitro_ effects of NDNBs. ## Acknowledgements This work was partially supported by Grant-in-Aid for Scientific Research (KAKENHI) from the Japan Society for Promotion of Science (#20K04553). ## Competing Interest The authors declare that they have no competing interests. ## Ethics declaration This study was conducted by theoretical investigations and computer-based simulations. As such, the data employed in this study did not require ethical approval.
2304.01815
Consolidated Control Barrier Functions: Synthesis and Online Verification via Adaptation under Input Constraints
In this paper, we develop a novel adaptation-based approach to constrained control design under multiple state and input constraints. Specifically, we introduce a method for synthesizing any number of time-varying candidate control barrier functions (CBF) into one consolidated CBF (C-CBF) candidate, and propose a predictor-corrector optimization-based adaptation law for the weights of the constituent constraint functions that certifies the C-CBF as valid for a class of nonlinear, control-affine systems. We prove this result by showing that the adapted weights are guaranteed to confer sufficient control authority to meet the new, adaptive C-CBF condition in perpetuity despite input constraints, which thereby permits its use in a quadratic program based control law. We then illustrate the performance of our controller on an academic example, and further highlight that it is successful even for constraint functions with higher or mixed relative-degree by simulating a reach-avoid problem for bicycle robots, which we use to demonstrate how our approach out-performs two baseline approaches.
Mitchell Black, Dimitra Panagou
2023-04-04T14:14:12Z
http://arxiv.org/abs/2304.01815v1
# Consolidated Control Barrier Functions: ###### Abstract In this paper, we develop a novel adaptation-based approach to constrained control design under multiple state and input constraints. Specifically, we introduce a method for synthesizing any number of time-varying candidate control barrier functions (CBF) into one consolidated CBF (C-CBF) candidate, and propose a predictor-corrector optimization-based adaptation law for the weights of the constituent constraint functions that certifies the C-CBF as valid for a class of nonlinear, control-affine systems. We prove this result by showing that the adapted weights are guaranteed to confer sufficient control authority to meet the new, adaptive C-CBF condition in perpetuity despite input constraints, which thereby permits its use in a quadratic program based control law. We then illustrate the performance of our controller on an academic example, and further highlight that it is successful even for constraint functions with higher or mixed relative-degree by simulating a reach-avoid problem for bicycle robots, which we use to demonstrate how our approach out-performs two baseline approaches. Constrained control; nonlinear systems; adaptive control; autonomous systems. ## I Introduction Since the arrival of control barrier functions (CBFs) to the field of safety-critical systems [1], much attention has been devoted to the development of their viability for safe control design [2, 3, 4]. As a set-theoretic approach founded on the notion of forward invariance, CBFs certify adherence to constraints in that they ensure that any state beginning within a given set remains so for all future time. In the context of control design, CBF conditions are often used as constraints in quadratic program (QP)-based control laws, either as safety filters [5] or in conjunction with stability or liveness constraints (e.g., control Lyapunov functions) [6]. Their utility has been successfully demonstrated for a variety of safety-critical applications, including mobile robots [7, 8], unmanned aerial vehicles (UAVs) [9, 10], and autonomous driving [11, 12]. But while it is now well-established that synthesizing a CBF for a constraint set serves as a certificate of constraint adherence via set invariance, the verification of _candidate_ CBFs as _valid_ is in general a challenging problem. Though for a single candidate CBF there exist guarantees of validity under certain conditions for systems with unbounded [2] control authority, verifying a candidate CBF under input constraints poses significant challenges. In response, various works have demonstrated the success of verification tools (e.g., sum-of-squares optimization [13, 14], linear programming [15]) in synthesizing a valid CBF offline prior to deployment. These verification certificates, however, may be invalidated by unmodelled phenomena like exogenous disturbances or environmental changes. To address this drawback, the authors of [16] propose an online method for guaranteed constraint satisfaction specific to high-order (HO-) CBFs, though their approach typically requires forward simulation of the system trajectories. Alternatively, in [17] an adaptation-based approach is introduced for guaranteed feasibility of a HO-CBF-QP control law provided that the parameters adapt sufficiently quickly. The above results, however, break down in the presence of multiple constraints. 
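As a point of reference for how a single CBF condition is typically imposed in a QP-based safety filter, the following is a generic, minimal Python sketch for a single-integrator model with one circular obstacle and an unbounded input set; it is only an illustration of the standard CBF-QP filter, not the consolidated-CBF construction or the adaptation law developed in this paper, and the single-constraint QP admits the closed-form half-space projection used below.

```python
import numpy as np

def cbf_qp_filter(x, u_nom, x_obs, r, alpha=1.0):
    """Min-norm safety filter for xdot = u with h(x) = ||x - x_obs||^2 - r^2.
    Solves  min ||u - u_nom||^2  s.t.  grad_h(x) . u >= -alpha * h(x)."""
    h = np.dot(x - x_obs, x - x_obs) - r**2
    grad_h = 2.0 * (x - x_obs)
    slack = grad_h @ u_nom + alpha * h
    if slack >= 0.0:                    # nominal input already satisfies the CBF condition
        return u_nom
    # Otherwise project onto the boundary of the half-space constraint.
    return u_nom - slack * grad_h / (grad_h @ grad_h)

x = np.array([1.5, 0.0])                # current state
u_nom = np.array([-1.0, 0.0])           # nominal input pushing toward the obstacle
u_safe = cbf_qp_filter(x, u_nom, x_obs=np.array([0.0, 0.0]), r=1.0)
print(u_safe)                           # minimally modified, safety-preserving input
```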
The problem of safe control design under multiple constraints is especially relevant in practical applications involving autonomous vehicles and mobile robots, where there may be liveness-based and/or spatiotemporal specifications in addition to safety constraints. In many cases, the joint satisfaction of safety and liveness constraints has been treated by synthesizing CBF-CLF-QP controllers [18, 19, 20], wherein CBF conditions are hard constraints and CLF conditions are soft constraints in the QP. As a class of Lyapunov-like functions, however, CBFs have also been used to enforce the satisfaction of tracking [21] and spatiotemporal constraints using logic encodings like signal temporal logic (STL) [22, 23] and linear temporal logic (LTL) [24]. Whether there exists a control input capable of satisfying the full collection of CBF constraints, however, is very much still an open problem. Recent approaches to control design in the presence of multiple constraints have mainly circumvented the underlying problem by considering only one such constraint at a given time instance, either by assumption [25] or construction in a non-smooth manner [26, 27], all of which may result in performance degradation (including undesirable oscillatory behavior). In contrast, the authors of [22] and [28] each propose smoothly synthesizing one candidate CBF for the joint satisfaction of multiple constraints, but notably make no attempt to validate their candidate function. Additional proposed
2303.14501
Link Prediction for Flow-Driven Spatial Networks
Link prediction algorithms aim to infer the existence of connections (or links) between nodes in network-structured data and are typically applied to refine the connectivity among nodes. In this work, we focus on link prediction for flow-driven spatial networks, which are embedded in a Euclidean space and relate to physical exchange and transportation processes (e.g., blood flow in vessels or traffic flow in road networks). To this end, we propose the Graph Attentive Vectors (GAV) link prediction framework. GAV models simplified dynamics of physical flow in spatial networks via an attentive, neighborhood-aware message-passing paradigm, updating vector embeddings in a constrained manner. We evaluate GAV on eight flow-driven spatial networks given by whole-brain vessel graphs and road networks. GAV demonstrates superior performances across all datasets and metrics and outperformed the state-of-the-art on the ogbl-vessel benchmark at the time of submission by 12% (98.38 vs. 87.98 AUC). All code is publicly available on GitHub.
Bastian Wittmann, Johannes C. Paetzold, Chinmay Prabhakar, Daniel Rueckert, Bjoern Menze
2023-03-25T15:42:27Z
http://arxiv.org/abs/2303.14501v2
# Link Prediction for Flow-Driven Spatial Networks ###### Abstract Link prediction algorithms predict the existence of connections between nodes in network-structured data and are typically applied to refine the connectivity among nodes by proposing meaningful new links. In this work, we focus on link prediction for flow-driven spatial networks, which are embedded in a Euclidean space and relate to physical exchange and transportation processes (e.g., blood flow in vessels or traffic flow in road networks). To this end, we propose the Graph Attentive Vectors (GAV) link prediction framework. GAV models simplified dynamics of physical flow in spatial networks via an attentive, neighborhood-aware message-passing paradigm, updating vector embeddings in a constrained manner. We evaluate GAV on eight flow-driven spatial networks given by whole-brain vessel graphs and road networks. GAV demonstrates superior performances across all datasets and metrics and outperforms the current state-of-the-art on the ogbl-vessel benchmark by more than 18% (98.38 vs. 83.07 AUC). ## 1 Introduction and Motivation Networks (or graphs) can serve as efficient representations of real-world, ultra-complex systems and can be further classified into different categories. A prominent category is represented by undirected networks embedded in a Euclidean space constrained by geometry, called spatial networks [5]. In this work, we are focusing on spatial networks, where a form of physical exchange or _flow_ can be used to describe characteristic functional properties of the underlying physical system. Examples include road networks, water bodies, and global exchange networks, but they can also be found in biology (, vascular system, lymphatic system, and connectome). We will refer to such networks as _flow-driven spatial networks_. Predominantly, network representations of physical systems originate from imaging methodologies, such as nanometer-scale microscopy in biology or regional to continental scale satellite remote sensing for road networks. The generation of compact network representations from these images is a multi-stage and imperfect process, which often consists of segmentation, skeletonization, and subsequent graph pruning. Since, for flow-driven spatial networks, the correct connectivity is of utmost importance, the erroneous graph generation process clearly motivates the task of link prediction as a meaningful method to optimize the graph representation. Therefore, we bring a simplistic yet general definition of the principle of physical flow, characterized by a direction and magnitude, to link prediction in graph representation learning. Our hypothesis is that for flow-driven spatial networks, link prediction algorithms should heavily benefit from considering known functional properties, such as the aforementioned _physical flow_, which are defined by the structural properties of the network (, bifurcation angles [32]). To this end, we propose the _Graph Attentive Vectors_ (GAV) link prediction framework. GAV operates on _vector embeddings_ representative of the network's structural properties and updates them in a constrained manner, imitating simplified dynamics of physical flow in spatial networks (, blood flow in the vascular system or traffic flow in road networks). We summarize our contribution as follows: 1. We propose an attentive, neighborhood-aware message-passing layer, called GAV layer, which Figure 1: Flow-driven spatial network \(\mathcal{G}\), representing vasculature. 
\(\mathcal{G}\)’s nodes are embedded in a Euclidean space and represent spatial positions specified by \(x\)-, \(y\)-, and \(z\)-coordinates. updates vector embeddings, mimicking the (change in) direction and magnitude of physical flow in spatial networks. 2. We introduce a readout module that aggregates vector embeddings in a physically plausible way and thus facilitates the interpretability of results. 3. We formulate link prediction as a graph-level classification task on a line graph representation and propose a tailored node labeling trick. In extensive validation experiments, we prove our hypothesis by demonstrating superior performance across all metrics on eight flow-driven spatial networks, including the Open Graph Benchmark's ogbl-vessel benchmark (98.38 vs. 83.07 AUC). ## 2 Related Works This section commences by discussing previous work on link prediction algorithms, followed by an overview of message-passing layers. Particular emphasis is placed on methods featured in our experiments. ### Link Prediction Link prediction algorithms are applied in various fields, such as social network analysis [28, 25, 10], bioinformatics [24, 33, 19], recommender systems [2, 34, 17, 49], supply chain networks [26], and information retrieval [23]. Broadly speaking, different link prediction algorithms try to estimate link existence between two nodes either via heuristic or learned methods. We discuss these two families of algorithms in the following. Heuristic AlgorithmsHeuristic algorithms employ pre-defined heuristics to encode the similarity between two nodes. Some prominent candidates are represented by common neighbors, resource allocation [50], preferential attachment [4], Adamic-Adar [1], Jaccard [18], Katz [20], and average commute time [13]. However, all heuristic link prediction algorithms suffer from the same underlying issue. They exploit predefined heuristics, which can not be modified to account for different network types. _E.g._, the common neighbors heuristic has been developed for social networks and hence yields underwhelming results when applied to molecular graphs. Learned AlgorithmsOn the other hand, learned algorithms do not rely on fixed, predesigned heuristics but rather learn a data-driven heuristic suitable to the properties of individual graphs utilizing neural networks. Thus, learned algorithms can easily adapt to different network types. SEAL [45, 47] represents a prominent, learned link prediction framework, defining link prediction as a subgraph-level classification task by training a binary GNN-based classifier to map from subgraph patterns to link existence. To this end, SEAL first extracts a local subgraph around the link of interest, which is subsequently forwarded to DGCNN [46] for classification. DGCNN relies on GCN message-passing layers [22] followed by a SortPooling layer for subgraph aggregation. Moreover, SEAL incorporates an additional node labeling technique, known as labeling trick, to enhance the expressiveness of node features obtained from GNNs. SIEG [16] builds upon SEAL and introduces, inspired by Graphormer [44], a pairwise structural attention module between two nodes of interest to capture local structural information more effectively. Pury _et al_. [31] tried to improve the generalization power of multiple message-passing layers via a low-rank global self-attention module, short LRGA, and combined their approach with a simple link prediction framework. 
We would like to mention that none of the above-mentioned methods are tailored to flow-driven spatial networks. ### Message-Passing Layers GNNs utilize the concept of message-passing to encode semantically rich features within network-structured data. Over time, multiple variations of message-passing layers have been proposed [40, 11, 12, 15]. For instance, GCN's message-passing layer [22] weighs each incoming message with a fixed coefficient, the node degree, before aggregation. In contrast, GAT's message-passing layer [6] learns aggregation weights dynamically based on attention scores. GraphSAGE's message-passing layer [14] does not directly aggregate central node features with incoming messages. Instead, it distinguishes these two kinds of features and learns two different transformations, one on the central node and another on incoming messages. EdgeConv [38] aggregates the feature difference between the central node and its neighbors combined with the central node's features. Thus, EdgeConv draws parallels to aggregating spatial vectors if the nodes embed spatial positions. However, our proposed GAV layer differs significantly from EdgeConv, as our method explicitly constrains the update of vector embeddings to imitate the simplified dynamics of physical flow in spatial networks. Importantly, only a few works tried to adapt the message-passing paradigm to spatial networks [48, 9]. ## 3 The Graph Attentive Vectors Framework Our proposed Graph Attentive Vectors (GAV) link prediction framework is depicted in Fig. 2. GAV represents a simple yet effective, end-to-end trainable framework tailored to the task of link prediction for flow-driven spatial networks. It predicts the probability of link (or edge) existence between two nodes in a graph \(\mathcal{G}\) based on a binary classifier \(\mathcal{F}\), composed of a message-passing and a read out module. To this end, \(\mathcal{F}\) should be able to differentiate between positive (real) and negative (sampled) links by assigning high probabilities of existence to plausible and low probabilities of existence to implausible links. Following Zhang [45, 47], we treat the link prediction problem as a subgraph classification task. To determine the probability of existence of an individual target link between two target nodes, we, therefore, first extract an enclosing subgraph describing the target links local neighborhood in a subgraph extraction module. Subsequently, the subgraph is transformed into a line graph representation and forwarded to \(\mathcal{F}\), resulting in an iterative link prediction scheme predicting the existence of target links one at a time. In the following sections, we elaborate extensively on GAV's individual components (see Fig. 2) and the principal ideas forming its backbone. ### Subgraph Extraction Module The undirected input graph \(\mathcal{G}\) is defined by a set of nodes \(\mathcal{V}\) and a set of corresponding edges \(\mathcal{E}\). While nodes \(n_{i}\in\mathcal{V}\) embed individual continuous spatial entities in the form of spatial positions given by coordinates (\(n_{i}\in\mathbb{R}^{d_{\text{spatial}}}\)), edges \(e_{ij}\in\mathcal{E}\) describe the relations and thus the connectivity among nodes. As a first step, we extract an \(h\)-hop enclosing subgraph \(\mathcal{G}_{h}^{t}\) around the nodes \(\{n_{i}^{t},n_{j}^{t}\}\) affiliated to the target link \(e_{ij}^{t}\) from the original graph representation \(\mathcal{G}\) (please note that we refer to the target in our notations as \(t\)). 
This results in an expressive and efficient representation of the target link's local neighborhood, including the relevant structural patterns necessary to determine link existence. Further, the subgraph extraction results in a drastically reduced computational complexity, which is crucial for ultra-large graphs. Since link prediction is naturally pertinent to links rather than nodes, we formulate link prediction as a problem on a line graph. To this end, we subsequently transform \(\mathcal{G}_{h}^{t}\) into a line graph representation \(\mathcal{L}(\mathcal{G}_{h}^{t})\). In the line graph, each node \(n_{i}^{\prime}\) represents an edge \(e_{ij}\in\mathcal{E}_{h}^{t}\), while its edges \(e_{ij}^{\prime}\) indicate adjacency between edges \(e_{ij}\) iff they are incident in \(\mathcal{G}\). We encode edges \(e_{ij}\) as vectors between the involved nodes \(\{n_{i},n_{j}\}\) to generate node embeddings for the line graph representation's nodes \(n_{i}^{\prime}\). Therefore, \(\mathcal{L}(\mathcal{G}_{h}^{t})\)'s node embeddings are defined as vectors of unique length and direction, representing the network's structural properties via edges in \(\mathcal{G}_{h}^{t}\) (see Fig. 2). The line graph representation \(\mathcal{L}(\mathcal{G}_{h}^{t})\) formed around the target link \(e_{ij}^{t}\) is finally forwarded to the message-passing module. ### Message-Passing Module and GAV Layer To incorporate contextual information in node embeddings, we propose a novel message-passing layer, termed GAV layer. We perform \(k\) iterations of message-passing among the nodes of the line graph \(\mathcal{L}(\mathcal{G}_{h}^{t})\), obtained from our subgraph extraction module. The GAV layer's message-passing relies on a straightforward intuition inspired by principles of physical flow. To be precise, we treat nodes in \(\mathcal{L}(\mathcal{G}_{h}^{t})\) as vector embeddings and update them in a constrained manner, imitating simplified dynamics of physical flow in spatial networks. The detailed structure of a single GAV layer is graphically visualized in Fig. 3. In order to update an individual vector embedded in a node \(n_{i}^{\prime}\in\mathbb{R}^{d_{\text{spatial}}}\), we first project it together with a matrix \(N_{i}\in\mathbb{R}^{|\mathcal{N}(n_{i}^{\prime})\cup n_{i}^{\prime}|\times d_ {\text{spatial}}}\), consisting of directly neighboring nodes and the node itself, into a higher dimensional space \(d_{\text{message}}\); the projection is through a learnable function \(\phi_{\theta}^{(1)}:\mathbb{R}^{d_{\text{spatial}}}\rightarrow\mathbb{R}^{d_ {\text{measup}}}\). Subsequently, \(\phi_{\theta}^{(1)}(n_{i}^{\prime})\) and \(\phi_{\theta}^{(1)}(N_{i})\) are forwarded to a multi-head attention operation, where \(\phi_{\theta}^{(1)}(n_{i}^{\prime})\) represents a single query, while \(\phi_{\theta}^{(1)}(N_{i})\) Figure 2: Overview of the GAV link prediction framework. GAV is divided into three modules, namely the subgraph extraction module, the message-passing module, and the readout module. 
First, an \(h\)-hop enclosing subgraph \(\mathcal{G}_{h}^{t}\) is extracted around the target nodes \(\{n_{i}^{t},n_{j}^{t}\}\) (red and green) affiliated to the target link \(e_{ij}^{t}\) (orange) and subsequently transformed into a line graph representation \(\mathcal{L}(\mathcal{G}_{h}^{t})\); second, we perform iterative message-passing between vector embeddings in \(\mathcal{L}(\mathcal{G}_{h}^{t})\) via \(k\) GAV layers to incorporate contextual information; and, third, a final subgraph-level readout module aggregates relevant features and predicts the probability of link existence with regard to the target link \(e_{ij}^{t}\). To provide a concise visualization, \(h\) was set to 1. We would like to draw the reader’s attention to color coding and the vector embeddings in the nodes \(n_{i}^{\prime}\) of \(\mathcal{L}(\mathcal{G}_{h}^{t})\). represents the key and value sequence. \[\tilde{n}^{\prime}_{i}=\text{MultiHeadAtt}(\phi^{(1)}_{\theta}(n^{\prime}_{i}),\; \phi^{(1)}_{\theta}(N_{i}),\;\phi^{(1)}_{\theta}(N_{i})) \tag{1}\] This results in an intermediate node representation \(\tilde{n}^{\prime}_{i}\in\mathbb{R}^{d_{\text{mean}^{\prime}}}\), which incorporates not only information of the node itself but also its local structural neighborhood via the concept of attention. In the next step, we apply a residual connection for increased gradient flow and forward the result to the learnable function \(\phi^{(2)}_{\theta}:\mathbb{R}^{d_{\text{mean}^{\prime}}}\rightarrow\mathbb{R}\), followed by a tanh non-linearity. \[s_{i}=\text{tanh}(\phi^{(2)}_{\theta}(\tilde{n}^{\prime}_{i}+\phi^{(1)}_{ \theta}(n^{\prime}_{i}))) \tag{2}\] Finally, the scalar value \(s_{i}\in(-1,1)\) is utilized to update the original node representation \(n^{\prime}_{i}\) via scalar multiplication. \[\hat{n}^{\prime}_{i}=s_{i}\cdot n^{\prime}_{i} \tag{3}\] Hence, after one layer of message-passing, the updated, refined node representation is given by \(\hat{n}^{\prime}_{i}\in\mathbb{R}^{d_{\text{spatial}}}\). Importantly, \(\hat{n}^{\prime}_{i}\) preserves relevant properties of the original node representation \(n^{\prime}_{i}\). This is because scalar multiplication with \(s_{i}\in(-1,1)\) restricts the modification of vector embeddings given by nodes in \(\mathcal{L}(\mathcal{G}^{t}_{h})\). In essence, our message-passing paradigm can be geometrically interpreted as a scaling combined with a potential flipping operation of vectors. These constraints imposed by our message-passing align with our principal idea of modeling simplified dynamics of physical flow in spatial networks via potential, constrained changes in the direction and magnitude of vector embeddings, preserving the network's structural properties. ### Labeling Trick Following Zhang _et al_. [45, 47], we apply a labeling trick to enable the message-passing module to learn an expressive structural representation of the target link's local neighborhood. Since our method operates on a line graph, we propose a novel labeling trick tailored to line graph-based link prediction tasks. Our labeling trick ensures that vector embeddings created from the target link \(e^{t}_{ij}\) and edges connected to the target nodes \(\{n^{t}_{i},n^{t}_{j}\}\) are identifiable by a distinct label. This allows the message-passing module to distinguish between the target link, edges connected to target nodes, and other edges encoded in \(\mathcal{L}(\mathcal{G}^{t}_{h})\)'s vector embeddings, simplifying the link prediction task. 
Labels generated by our proposed labeling trick are shown in Fig. 4. The additional labels generated by our labeling trick are concatenated to \(\mathcal{L}(\mathcal{G}^{t}_{h})\)'s vector embeddings in the subgraph extraction module.

Figure 4: Labels generated by our labeling trick for a line graph representation (\(h\) set to two). Our labeling trick assigns the label 0 to the vector embedding representing the target link (orange), the labels 1 and 2 to vector embeddings representing edges connected to the target nodes \(n^{t}_{i}\) and \(n^{t}_{j}\) (purple and blue), and the label 3 to remaining vector embeddings.

Figure 3: Graphical visualization of a single GAV layer updating the vector embedding of node \(n^{\prime}_{j}\). We forward the vector embedding of node \(n^{\prime}_{j}\) together with \(N_{j}\in\mathbb{R}^{|\mathcal{N}(n^{\prime}_{j})\cup n^{\prime}_{j}|\times d_{\text{spatial}}}\), which represents the set of \(n^{\prime}_{j}\) and its neighbors \(n^{\prime}_{k}\) and \(n^{\prime}_{i}\), to the GAV layer. Please note that the GAV layer's structure draws parallels to the Transformer's encoder [7].

### Readout Module

The readout module consists of a custom node aggregation operation followed by a learnable function \(\phi^{(3)}_{\theta}:\mathbb{R}^{2\cdot d_{\text{spatial}}}\rightarrow\mathbb{R}\) and processes the refined vector embeddings obtained from the message-passing module. While the node aggregation operation aims to distill pertinent information from the refined graph representation in a fashion invariant to node ordering and quantity, \(\phi^{(3)}_{\theta}\) predicts the probability of link existence with regard to the target link. Equation 4 summarizes the readout module's functionality, where \(\mathcal{E}_{\mathcal{N}(n^{t}_{i})}\) defines the set of refined vector embeddings originally created from edges adjacent to \(n^{t}_{i}\) (see Fig. 2), \(\mathcal{E}_{\mathcal{N}(n^{t}_{j})}\) the set of refined vector embeddings originally created from edges adjacent to \(n^{t}_{j}\), \(\|\) the concatenation operation, and \(\hat{y}^{t}_{ij}\) the probability of existence of the target link.

\[\hat{y}^{t}_{ij}=\phi^{(3)}_{\theta}(\text{mean}(\mathcal{E}_{\mathcal{N}(n^{t}_{i})})\;\|\;\text{mean}(\mathcal{E}_{\mathcal{N}(n^{t}_{j})})) \tag{4}\]

Thus, our node aggregation operation consolidates refined vector embeddings located in the vicinity of the target nodes \(\{n^{t}_{i},n^{t}_{j}\}\) to define a simple yet effective aggregation scheme, as shown in Fig. 2. Specifically, we intend to exploit the relationship between vector embeddings aggregated around the two target nodes to predict whether the target nodes should be connected or not.

### Loss Function

Since our approach represents a binary classifier \(\mathcal{F}\) determining the probability of link existence with regard to the target link, we optimize a binary cross-entropy loss function during training. Here, \(y^{t}_{ij}\in\{0,1\}\) indicates whether the target links are negative (sampled) or positive (real).

\[\mathcal{L}_{\text{BCE}}=\frac{-1}{|\mathcal{E}|}\sum_{ij\in\mathcal{E}}y^{t}_{ij}\cdot\text{log}(\hat{y}^{t}_{ij})+(1-y^{t}_{ij})\cdot\text{log}(1-\hat{y}^{t}_{ij}) \tag{5}\]

We would like to highlight that our approach is trained in an entirely end-to-end manner, solely based on the information of link existence. Therefore, intermediate representations, such as the refined vector embeddings, are determined in a completely data-driven manner.
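To make Equations (1)-(4) concrete before moving on to the experiments, the following PyTorch-style sketch implements a single GAV layer and the readout for one target link. It processes one line-graph node at a time instead of a padded batch; the dimensions (\(d_{\text{message}}=32\), 4 attention heads, a two-layer MLP for \(\phi^{(2)}_{\theta}\)) follow Section 4.1, while the explicit sigmoid in the readout and the per-node processing are our own simplifying assumptions, not the reference implementation.

```python
import torch
import torch.nn as nn

D_SPATIAL, D_MESSAGE, N_HEADS = 7, 32, 4   # e.g. 3-D edge vector + 4 label dims


class GAVLayer(nn.Module):
    """One message-passing step following Eqs. (1)-(3)."""

    def __init__(self, d_spatial=D_SPATIAL, d_message=D_MESSAGE, heads=N_HEADS):
        super().__init__()
        self.phi1 = nn.Linear(d_spatial, d_message)                    # phi^(1)
        self.attn = nn.MultiheadAttention(d_message, heads, batch_first=True)
        self.phi2 = nn.Sequential(                                     # phi^(2)
            nn.Linear(d_message, 64), nn.LeakyReLU(), nn.Linear(64, 1))

    def forward(self, n_i, N_i):
        # n_i: (d_spatial,) vector embedding of one line-graph node.
        # N_i: (k, d_spatial) matrix of the node itself and its neighbors.
        q = self.phi1(n_i).view(1, 1, -1)            # single query
        kv = self.phi1(N_i).unsqueeze(0)             # key and value sequence
        n_tilde, _ = self.attn(q, kv, kv)            # Eq. (1)
        s_i = torch.tanh(self.phi2(n_tilde.squeeze() + self.phi1(n_i)))  # Eq. (2)
        return s_i * n_i                             # Eq. (3): scale and/or flip


# Readout, Eq. (4): mean-pool the refined embeddings around each target node,
# concatenate, and map to a link probability. phi3 could be, e.g.,
# nn.Sequential(nn.Linear(2 * D_SPATIAL, 128), nn.LeakyReLU(), nn.Linear(128, 1)).
def readout(phi3, emb_near_i, emb_near_j):
    pooled = torch.cat([emb_near_i.mean(dim=0), emb_near_j.mean(dim=0)])
    return torch.sigmoid(phi3(pooled))               # \hat{y}_{ij}^t
```

In the full model, \(k\) such GAV layers refine all line-graph embeddings before the readout; according to the ablations discussed below, \(k=1\) and \(h=1\) already suffice.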
## 4 Experiments and Results In this section, we demonstrate the performance of our proposed GAV framework on the ogbl-vessel benchmark [16] and on additional datasets sourced from publicly available flow-driven spatial networks. We first elaborate on baseline algorithms and the experimental setup, followed by a detailed description of our used datasets. Finally, we introduce the evaluation metrics, report quantitative results, investigate our design choices by conducting detailed ablation studies, and discuss GAV's interpretability. ### Baselines and Experimental Setup To evaluate GAV properly, we experimented with different baseline algorithms. We ultimately settled for SEAL [45, 47], which has shown to deliver results on par with or superior to the state-of-the-art on multiple link prediction benchmarks. Additionally, we propose a new _secondary baseline_ combining SEAL with the EdgeConv message-passing layer [38], following recent trends in graph-based object detection from point clouds [8, 42, 37, 43]. Equation 6 describes EdgeConv's update function in detail, where \(\phi_{\theta}\) represents a two-layer MLP. \[\hat{n}_{i}=\frac{1}{|\mathcal{N}(n_{i})|}\sum_{n_{j}\in\mathcal{N}(n_{i})} \phi_{\theta}(n_{i}\parallel n_{j}-n_{i}) \tag{6}\] This provides us with an improved, highly competitive secondary baseline for link prediction on spatial networks. An empirical analysis varying SEAL's message-passing layer confirmed this decision. We additionally refined SEAL's parameters via a hyperparameter search. GAV was trained using the Adam optimizer [21] with a learning rate of 0.001 and a batch size of 32 on a single Quadro RTX 8000 GPU until convergence. An ablation study on the number of hops \(h\) in the subgraph extraction module and the number of message-passing iterations \(k\) indicates that setting both to one is sufficient (see Table 5). In the GAV layer, the number of heads of the multi-head attention operation is set to 4, while \(\phi^{(1)}_{\theta}\) represents a single linear layer with an output dimension of \(d_{\text{message}}=32\), and \(\phi^{(2)}_{\theta}\) is given by a two-layer MLP with a hidden dimension of 64. The GAV layer makes use of leaky ReLU non-linearities [27] to increase gradient flow and simplify weight initialization. The readout module's learnable function \(\phi^{(3)}_{\theta}\) is represented by a two-layer MLP with a hidden dimension of 128. All hyperparameters were tuned on the validation set of the ogbl-vessel benchmark. ### Datasets We experiment with multiple 2D and 3D flow-driven spatial networks to demonstrate the generalizability of our approach (see Table 1). In total, we conduct experiments on eight networks, given by whole-brain vessel graphs of different mouse strains and road networks of various European countries. In this context, link prediction can be interpreted as predicting the probability of the existence of blood vessels and road segments. Whole-Brain Vessel GraphsBlood vessels represent fascinating structures forming complex networks that transport oxygen and nutrients throughout the human body. 
The vascular system is, therefore, intuitively represented as a \begin{table} \begin{tabular}{l|l l l l l l} \hline \hline Dataset Name & \# Nodes & \# Edges & Node Degree & Node Features & Edge Features & Description \\ \hline ogbl-vessel [16] & 3,538,495 & 5,345,897 & 3.02 & \(x\)-, \(y\)-, \(z\)-coordinates & — & BALB/c mouse strain\({}^{1}\) \\ c574-vessel [30] & 3,820,133 & 5,614,677 & 2.94 & \(x\)-, \(y\)-, \(z\)-coordinates & — & C57BL/6 mouse strain\({}^{1}\) \\ c41-vcessel [30] & 3,645,963 & 5,791,309 & 3.18 & \(x\)-, \(y\)-, \(y\)-, \(z\)-coordinates & — & CD1-mouse strain\({}^{1}\) \\ c574-cvessel [36] & 6,650,580 & 9,054,100 & 2.72 & \(x\)-, \(y\)-, \(z\)-coordinates & — & C57BL/6 mouse strain\({}^{2}\) \\ \hline belgium-road [3] & 1,441,295 & 1,549,970 & 2.15 & \(x\)-, \(y\)-coordinates & — & Belgium \\ taly-road [3] & 6,686,493 & 7,013,978 & 2.10 & \(x\)-, \(y\)-coordinates & — & Italy \\ netherlands-road [3] & 2,216,688 & 2,441,238 & 2.20 & \(x\)-, \(y\)-coordinates & — & Netherlands \\ luxembourg-road [3] & 114,599 & 119,666 & 2.09 & \(x\)-, \(y\)-coordinates & — & Luxembourg \\ \hline \hline \multicolumn{7}{l}{\({}^{1}\) tissue clearing (tc) and light-sheet microscopy imaging} & \multicolumn{7}{l}{\({}^{2}\) corrosion casting (cc) and SRuGT imaging} \\ \end{tabular} \end{table} Table 1: Properties of the raw datasets. Each dataset consists of exactly one ultra-large graph. flow-driven spatial network, where branching points of vessels typically represent nodes embedding \(x\)-, \(y\)-, and \(z\)-coordinates, while edges are defined as blood vessels running between branching points [30]. We report results on the Open Graph Benchmark's ogbl-vessel benchmark [16], which measures the performance of different link prediction algorithms with regard to whole-brain vessel graphs. The ogbl-vessel benchmark consists of millions of nodes and edges (see Table 1) and describes the murine brain vasculature all the way down to the microcapillary level. However, we not only experiment with the ogbl-vessel benchmark but also source three additional whole-brain vessel graphs of different mouse strains acquired via different imaging methodologies [35, 36] (see Table 1, footnote). Road NetworksFurther, we report results on diverse road networks representative of four European countries for a thorough evaluation of GAV's performance. To this end, we adopt publicly available road networks introduced in the DIMACS graph partitioning and clustering challenge [3]. These road networks correspond to the largest connected components of OpenStreetMap's [29] road networks and are vastly different in size (_e.g_., luxembourg-road constitutes roughly 100,000 edges, whereas italy-road has more than 7,000,000). In road networks, intersections and locations with strong curvature represent nodes in the form of \(x\)- and \(y\)-coordinates, while connecting roads represent edges. PreprocessingLink prediction datasets require positive (label 1) and negative links (label 0). Positive links correspond to existent edges in our datasets, whereas negative links represent artificially created, non-existent edges. As link prediction algorithms are commonly employed to improve the graph representation through the identification of absent connections and the reduction of local noise arising from graph generation, negative links should appear as authentic as possible. In light of the absence of negative links in our sourced datasets, we prepare our sourced datasets in a manner that aligns with the ogbl-vessel benchmark. 
Following the ogbl-vessel benchmark, we sample negative links using a spatial sampling strategy. To be precise, we randomly connect nodes in close proximity, taking a maximum distance threshold of \(\delta=\overline{e_{ij}}+2\sigma\) into account. Here, \(\overline{e_{ij}}\) denotes the average edge length estimated over the entire \begin{table} \begin{tabular}{l|l|l|l|l|l|l} \hline \hline Dataset & Model & \# Params \(\downarrow\) & AUC \(\uparrow\) (\%) & Hits@100 \(\uparrow\) (\%) & Hits@50 \(\uparrow\) (\%) & Hits@20 \(\uparrow\) (\%) \\ \hline \multirow{9}{*}{ogbl-vessel} & GCN [22] & 396,289 & 43.53 \(\pm\) 9.61 & - & - & - \\ & MLP & 1,037,577 & 47.94 \(\pm\) 1.33 & - & - & - \\ & Adamic-Adar [1] & 0 & 48.49 \(\pm\) 0.00 & - & - & - \\ & GraphsAGE [14] & 396,289 & 49.89 \(\pm\) 6.78 & - & - & - \\ & SAGE+ISAR [41] & 273 & 50.01 \(\pm\) 0.07 & - & - & - \\ & SGC [39] & 897 & 50.09 \(\pm\) 0.11 & - & - & - \\ & LRGA [31] & 265,577 & 54.15 \(\pm\) 4.37 & - & - & - \\ & SEAL [47] & 172,610 & 80.50 \(\pm\) 0.21 & - & - & - \\ & SEIG [16] & 407,338 & 83.07 \(\pm\) 0.44 & - & - & - \\ \cline{2-6} & SEAL (EdgeConv) & 49,346 & 97.53 \(\pm\) 0.32 & 16.09 \(\pm\) 10.48 & 9.37 \(\pm\) 6.18 & 4.99 \(\pm\) 4.24 \\ & GAV (ours) & 8,184 & **98.38 \(\pm\) 0.02** & **34.77 \(\pm\) 0.94** & **28.02 \(\pm\) 1.58** & **19.71 \(\pm\) 2.31** \\ \hline \multirow{2}{*}{c57-tc-vessel} & SEAL [47] & 43,010 & 78.21 & 0.12 & 0.06 & 0.01 \\ & SEAL (EdgeConv) & 49,346 & 97.23 & 16.71 & 10.39 & 5.01 \\ & GAV (ours) & **8,184** & **98.24** & **33.26** & **26.89** & **21.32** \\ \hline \multirow{2}{*}{c57-tc-vessel} & SEAL [47] & 43,010 & 83.60 & 0.27 & 0.16 & 0.06 \\ & SEAL (EdgeConv) & 49,346 & 97.91 & 17.05 & 11.57 & 2.98 \\ & GAV (ours) & **8,184** & **98.72** & **35.82** & **27.25** & **17.23** \\ \hline \multirow{2}{*}{c57-cc-vessel} & SEAL [47] & 43,010 & 83.75 & 0.65 & 0.44 & 0.24 \\ & SEAL (EdgeConv) & 49,346 & 97.49 & 7.21 & 3.35 & 1.06 \\ & GAV (ours) & **8,184** & **97.99** & **18.90** & **14.58** & **9.04** \\ \hline \multirow{2}{*}{begin{tabular}{l} Belgium-road \\ \end{tabular} } & SEAL [47] & 43,010 & 86.73 & 1.25 & 0.68 & 0.30 \\ & SEAL (EdgeConv) & 49,346 & 96.98 & 0.55 & 0.55 & 0.51 \\ & GAV (ours) & **8,184** & **99.29** & **47.44** & **38.60** & **22.11** \\ \hline \multirow{2}{*}{italy-road} & SEAL [47] & 43,010 & 90.07 & 0.32 & 0.16 & 0.08 \\ & SEAL (EdgeConv) & 49,346 & 90.24 & 0.26 & 0.17 & 0.07 \\ & GAV (ours) & **8,184** & **99.41** & **28.49** & **20.08** & **11.99** \\ \hline \multirow{2}{*}{netherlands-road} & SEAL [47] & 43,010 & 84.19 & 0.00 & 0.00 & 0.00 \\ & SEAL (EdgeConv) & 49,346 & 96.06 & 3.91 & 2.20 & 1.01 \\ & GAV (ours) & **8,184** & **99.44** & **37.55** & **26.97** & **10.77** \\ \hline \multirow{2}{*}{luxembourg-road} & SEAL [47] & 43,010 & 89.79 & 11.39 & 6.15 & 3.12 \\ & SEAL (EdgeConv) & 49,346 & 97.53 & 59.79 & 39.15 & 19.42 \\ \cline{1-1} & GAV (ours) & **8,184** & **99.31** & **85.88** & **76.84** & **61.95** \\ \hline \hline \end{tabular} \end{table} Table 2: Quantitative results achieved on the test sets. We report mean and standard deviation values on the ogbl-vessel benchmark based on ten different seeds. Please note that the ogbl-vessel benchmark’s evaluation metric is AUC. Therefore, Hits@\(k\) values of participating algorithms are not available. GAV outperforms the previous state-of-the-art across all metrics and datasets. graph \(\mathcal{G}\) and \(\sigma\) the standard deviation. 
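A minimal Python sketch of this spatial sampling strategy is given below; the k-d tree radius query and the uniform draw among admissible close-by pairs are our own assumptions about implementation details, chosen only to illustrate the \(\delta=\overline{e_{ij}}+2\sigma\) rule.

```python
import numpy as np
from scipy.spatial import cKDTree


def sample_negative_links(coords, edges, n_neg, seed=0):
    """Sample non-existent links between spatially close nodes.

    coords : (N, d) array of node coordinates (2-D or 3-D)
    edges  : (E, 2) integer array of existing (positive) links
    n_neg  : number of negative links to draw (typically n_neg = E)
    """
    rng = np.random.default_rng(seed)

    # delta = mean edge length + 2 * std, estimated over the whole graph
    lengths = np.linalg.norm(coords[edges[:, 0]] - coords[edges[:, 1]], axis=1)
    delta = lengths.mean() + 2.0 * lengths.std()

    # All node pairs closer than delta that are not already real edges.
    # (For ultra-large graphs this would be done in chunks or per neighborhood.)
    tree = cKDTree(coords)
    close_pairs = tree.query_pairs(r=delta, output_type="ndarray")
    existing = {tuple(sorted(e)) for e in edges.tolist()}
    candidates = np.array([p for p in close_pairs.tolist()
                           if tuple(sorted(p)) not in existing])

    # Assumes enough candidate pairs exist to draw n_neg without replacement.
    idx = rng.choice(len(candidates), size=n_neg, replace=False)
    return candidates[idx]
```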
The number of negative, sampled links corresponds to the number of positive, real links across all datasets. We finally split positive and negative links into training, validation, and test sets (split 80%/10%/10%).

### Evaluation Metrics

To compare GAV to existing baseline algorithms, we report quantitative results based on the area under the receiver operating characteristic curve (AUC), following the ogbl-vessel benchmark. The AUC metric indicates the performance of a classifier by plotting the true positive rate against the false positive rate at all possible classification thresholds. Therefore, AUC provides an aggregate performance measure indicating the classifier's ability to distinguish between positive and negative links. We introduce the evaluation metric Hits@\(k\) as an additional, stricter performance measure. Hits@\(k\) compares the classifier's prediction of every single positive link to a randomly sampled set of 100,000 negative links, resulting in a ranking among 100,001 links with respect to the probability of link existence. Based on this ranking, Hits@\(k\) indicates the ratio of positive links ranked at the \(k\)-th place and above.

### Quantitative Results

GAV demonstrates excellent, superior performance on the task of link prediction across all metrics and datasets, as can be observed in Table 2. We outperform the current state-of-the-art algorithm SIEG on the ogbl-vessel benchmark by **more than 18%** (98.38 vs. 83.07 AUC) while requiring a significantly smaller number of trainable parameters (8,184 vs. 407,338). However, GAV not only drastically outperforms the current state-of-the-art but also our introduced strong, secondary baseline, combining the SEAL framework with EdgeConv. The excellent performance and superiority of our GAV framework are even more pronounced when considering the strict evaluation metric of Hits@\(k\). Quantitative results reported in Table 2 additionally indicate the strong performance of our secondary baseline (see Section 4.1), surpassing previous state-of-the-art methods in AUC across all but one dataset, namely italy-road. It is of note that the luxembourg-road dataset's test set contains only 12,000 negative links. We, therefore, compare predictions of its positive links to 12,000 rather than 100,000 negative links (see Section 4.3). This explains the comparatively strong Hits@\(k\) performances on the luxembourg-road dataset.

### Ablation Studies

To further validate GAV, we conduct detailed ablation studies on the validation set of the ogbl-vessel benchmark. Table 3 investigates the importance of the readout module, the message-passing module, and the labeling trick. First, we exchange our readout module with a SortPooling layer followed by two convolutional layers and an MLP, resembling SEAL's readout operation. We note that our readout module is more applicable to flow-driven spatial networks, as it leads to a modest AUC increase of 0.11. Second, we completely deactivate the message-passing module by forwarding \(\mathcal{L}(\mathcal{G}_{h}^{t})\) directly to the readout module. We observe a drastic AUC decrease of 17.83, indicating the importance of modifying the vector embeddings via our proposed GAV layer. Finally, we evaluate the impact of our labeling trick. Excluding the additional labels results in an AUC decrease of 2.39. This proves the significance of link identification via additional, distinct labels.
In a second ablation study, we experiment with different message-passing layers, including EdgeConv, in our message-passing module. We report our findings in Table 4. Our proposed GAV layer outperforms the other message-passing layers across all metrics by a considerable amount. Lastly, we vary the number of hops \(h\) used to generate \(\mathcal{G}_{h}^{t}\) and the number of message-passing iterations \(k\) (see Table 5). We observe that simultaneously increasing \(k\) and \(h\) results in no discernible differences in performance. This finding is in line with the \(\gamma\)-decaying theory [45], proving the approximability of high-order heuristics from locally restricted subgraphs.

\begin{table} \begin{tabular}{c|c c c c} \hline \hline Message-Passing Layer & AUC \(\uparrow\) & Hits@100 \(\uparrow\) & Hits@50 \(\uparrow\) & Hits@20 \(\uparrow\) \\ \hline GAV layer (ours) & **98.39** & **34.46** & **26.30** & **19.81** \\ EdgeConv [38] & 97.43 & 17.30 & 5.97 & 0.78 \\ GAT layer [6] & 96.44 & 4.58 & 2.55 & 1.59 \\ SAGE layer [14] & 93.53 & 0.77 & 0.11 & 0.03 \\ GCN layer [22] & 89.31 & 0.39 & 0.22 & 0.16 \\ \hline \hline \end{tabular} \end{table} Table 4: Ablations with different message-passing layers.

\begin{table} \begin{tabular}{c c c|c c} \hline \hline Readout Module & Message-Passing & Labeling Trick & AUC \(\uparrow\) & \(\Delta\) \\ \hline ✓ & ✓ & ✓ & **98.39** & – \\ ✗ & ✓ & ✓ & 98.28 & -0.11 \\ ✓ & ✗ & ✓ & 80.56 & -17.83 \\ ✓ & ✓ & ✗ & 96.00 & -2.39 \\ \hline \hline \end{tabular} \end{table} Table 3: Ablations on main design choices.

### Interpretability and Analysis of Results

In Fig. 5, we visualize the behavior of our proposed GAV layer to facilitate interpretability. First, we investigate the correlation between the vector embeddings aggregated around the two target nodes represented by \(\text{mean}(\mathcal{E}_{\mathcal{N}(n_{i}^{t})})\) and \(\text{mean}(\mathcal{E}_{\mathcal{N}(n_{j}^{t})})\), which are constructed in the readout module (see Section 3.4). We find that the angle between these aggregated vector embeddings is highly correlated to the predicted probability of link existence \(\hat{y}_{ij}^{t}\) and hence provides decisive information. We observe a trend of high angles being associated with positive and low angles with negative predictions. In the context of flow, this observation draws parallels to the concept of sink and source flow. To be precise, GAV may attempt to assign the two target nodes to sink and source nodes for negative predictions (see Fig. 5, second row), which stands in contrast to the behavior of physical flow in spatial networks. The GAV layer's predicted scalar values \(s_{i}\in(-1,1)\) can not only flip but also scale vector embeddings. Based on Fig. 5, we identify \(|s_{i}|\) as a measure of uncertainty. This is because with decreasing certainty of \(\hat{y}_{ij}^{t}\), we observe a decrease in \(|s_{i}|\) (see Fig. 5, left to right).

## 5 Outlook and Conclusion

In this work, we present the simple yet effective Graph Attentive Vectors (GAV) link prediction framework.
GAV relies on the idea of modeling simplified physical flow in spatial networks by updating vector embeddings in a constrained manner. GAV achieves 97.99 to 99.44 AUC on the link prediction task, outperforming the previous state-of-the-art by an impressive margin on all metrics across multiple whole-brain vessel and road network datasets while requiring a significantly smaller amount of trainable parameters. This indicates the importance of developing link prediction algorithms tailored to flow-driven spatial networks. GAV's imitation of the dynamics of physical flow represents a simplified concept, which is not entirely representative of physical principles from, _e.g_., fluid dynamics (see Fig. 5). Future work should, therefore, aim to extend GAV's simplistic assumptions by incorporating different physical principles, such as conservation of mass and momentum, resulting in vector embeddings highly representative of physical flow in flow-driven spatial networks.
2309.01631
Evaluating the performance of ionic liquid coatings for mitigation of spacecraft surface charges
To reduce the impact of charging effects on satellites, cheap and lightweight conductive coatings are desirable. We mimic space-like charging environments in ultra-high vacuum (UHV) chambers during deposition of charges via the electron beam of a scanning electron microscope (SEM). We use the charge-induced signatures in SEM images of a thin ionic liquid (IL) film on insulating surfaces such as glass to assess the general performance of such coatings. In order to get a reference structure in SEM, the samples were structured by nanosphere lithography and coated with IL. The IL film (we choose BMP DCA, due to its beneficial physical properties) was applied ex situ and a thickness of 10 to 30 nm was determined by reflectometry. Such an IL film is stable under vacuum conditions. It would also only lead to an additional mass of below 20 mg/m$^2$. At about 5 A/m$^2 \approx 3\cdot10^{19}$ e/(s$\cdot$m$^2$), a typical sample charging rate in SEM, imaging is possible with no noticeable contrast changes over many hours; this electron current density is already 6 orders of magnitude higher than "worst case geosynchronous environments" of $3\cdot10^{-6}$ A/m$^2$. Measurements of the surface potential are used for further insights into the reaction of IL films to the electron beam of a SEM. Participating mechanisms such as polarization or reorientation will be discussed.
M. Wendt, R. Lange, F. Dorn, J. Berdermann, I. Barke, S. Speller
2023-09-04T14:26:30Z
http://arxiv.org/abs/2309.01631v1
# Evaluating the performance of ionic liquid coatings for mitigation of spacecraft surface charges

###### Abstract

To reduce the impact of charging effects on satellites, cheap and lightweight conductive coatings are desirable. We mimic space-like charging environments in ultra-high vacuum (UHV) chambers during deposition of charges via the electron beam of a scanning electron microscope (SEM). We use the charge-induced signatures in SEM images of a thin ionic liquid (IL) film on insulating surfaces such as glass to assess the general performance of such coatings. In order to get a reference structure in SEM, the samples were structured by nanosphere lithography and coated with IL. The IL film (we choose BMP DCA, due to its beneficial physical properties) was applied _ex situ_ and a thickness of 10 to 30 nm was determined by reflectometry. Such an IL film is stable under vacuum conditions. It would also only lead to an additional mass of below 20 mg/m\({}^{2}\). At about 5 A/m\({}^{2}\)\(\approx\) 3\(\times\)10\({}^{19}\) e/(s\(\cdot\)m\({}^{2}\)), a typical sample charging rate in SEM, imaging is possible with no noticeable contrast changes over many hours; this electron current density is already 6 orders of magnitude higher than "worst case geosynchronous environments" of 3\(\times\)10\({}^{-6}\) A/m\({}^{2}\)[1]. Measurements of the surface potential are used for further insights into the reaction of IL films to the electron beam of a SEM. Participating mechanisms such as polarization or reorientation will be discussed.

## 1 Introduction

Satellites are constantly subjected to an influx of highly energetic charged particles leading to, among other effects, the build-up of differential potentials on various parts of the satellite's surface, if these are not in electrical contact with each other (e.g., adjacent cover glass plates of the satellite's solar cells or main body). Especially at high altitude (\(>\) 10000 km) or orbits with high inclination (\(>\) 50\({}^{\circ}\)) this effect is dangerous, as the potential difference can exceed several thousand volts, leading to sensor malfunctions or discharges, which severely damage the satellite [2]. For example, such a discharge did severe damage to the solar array of the ESA EURECA mission [2]. Common solutions to mitigate the dangers of differential charging include coating the cover glass of the solar cells with indium tin oxide or design adaptations such as increasing the distance between individual solar cells. However, these come with their own disadvantages, mainly high cost and increased payload mass. An ideal solution would be a conductive coating for the cover glass that is inexpensive, transparent for visible light and parts of the infrared spectrum, easy to apply, stable under space conditions and would not lead to a significant mass increase. A class of materials that could satisfy these conditions are ionic liquids; thus, in this work we explore how thin films can be applied to glass surfaces and how those films behave under ultra-high vacuum when subjected to the electron beam of a scanning electron microscope (SEM), mimicking the conditions in space.

## 2 Sample Preparation

We used conventional glass cover slides (Menzel) as substrates. In order to have a reference structure for microscopy, gold nano-triangles were prepared on the surface of the samples using nanosphere lithography as developed by Fischer and Zingsheim [3].
In brief, 100 \(\upmu\)l of a 50% aqueous suspension of polystyrene (PS) micro or nanospheres (microparticles GmbH) were drop-casted on the substrate, resulting in a hexagonally arranged layer of PS spheres. Subsequently, a gold film of 30 nm (as indicated by a quartz microbalance) was evaporated on the PS sphere. These spheres, including their gold coverage, were then removed using tetrahydrofuran, leaving only the gold behind, that was deposited directly onto the glass. This results in an array of approximately triangular metal islands with occasional defects. By varying the diameter of the PS spheres used in this procedure, edge lengths of the triangles can be tuned from 1.5 \(\upmu\)m to 150 nm. The thickness of the triangles (\(\sim\)30 nm) has been verified by atomic force microscopy. Afterwards the samples were UV-ozone cleaned (PSD series, Novascan) for two hours to remove any residual organic substances. Furthermore, the ozone treatment leads to an increased hydrophilicity of the surface, making it easier to prepare uniform films of polar or polarizable species. After the ozone treatment, a solution of 100 \(\upmu\)l deionized water containing 3% of the ionic liquid 1-Butyl-1-methylpyrrolidinium dicyanamide (BMP DCA, Iolitec) was pipetted onto the surface. After 1 minute the excess liquid was removed by wiping it using a lab wipe. Finally, the sample was left to dry in air. ## 3 Coating Thickness ### Reflectometry The thickness of the BMP DCA thin film was determined by reflectometry using a commercial device (NanoCalc-XR, Ocean Optics). For this, a sample was placed on a silicon substrate and illuminated with a deuterium lamp. The resulting spectrum was fitted, assuming a stacked system made of BMP DCA, glass (n=1.46 [4]) and silicon (n=3.98 [5]) (from top to bottom), using the known refractive indices of glass, silicon and BMP DCA (n=1.5 [6]). This yielded a thickness for the BMP DCA layer of 12.8 \(\pm\)0.8 nm. Such layers would increase the mass of solar panels by about 15 mg/m\({}^{2}\) Figure 2: Scheme of the Ionic liquid thin layer preparation ### AFM measurements To confirm that the BMP DCA forms a homogeneous layer on the substrate, additional ex situ AFM analyses were performed in which a glass sample with gold nanotriangles was compared to a glass sample with gold nanotriangles and an additional coating of BMP DCA. Measurements were carried out in a commercial instrument (Park Systems XE-100) using non-contact silicon cantilevers (SSS-NCHR-50, Nanosensors), in dynamic mode. Figure 3 shows the AFM topography images of these uncoated (left) and coated (right) samples. For the uncoated sample, the gold structures are clearly visible with sharp edges and exhibiting heights of \(\sim\)25 nm. Between the gold structures residual material from the assembled polystyrene spheres is visible. As this residue is resistant to UV-ozone treatment, it probably is inorganic material. This material seems to be completely absent on the sample with BMP DCA coating, either because the application of the liquid and successive wiping of the sample removed it, or because the BMP DCA film now covers it and as the films surface is imaged it appears to be absent. The gold structures on the coated material have far less pronounced edges compared to the uncoated sample. 
Because of the repulsive interaction of the BMP DCA layer with silicon and the oscillation frequency of \(\sim\)350 kHz of the cantilever, the periodic formation of a meniscus of liquid between tip and sample should be suppressed [7]. Thus, the surface of the liquid can be imaged at the expense of resolution. This is supported by the observation of droplet hills on the gold. The height of the gold structures with respect to the glass substrate seems to have increased to up to 60 nm. Also, it no longer forms plateaus. The formation of these droplets can be explained by the BMP DCA forming menisci at the edges of the gold triangles, effectively smoothing them out. This points to the formation of curved droplets on top of the gold triangles with an increased ionic liquid layer thickness around them. However, as the general shape of the gold structures is still recognizable, the layer thickness must be on a scale of several tens of nanometers, in good agreement with the previous reflectometry measurements. As realistic cover glass for solar cells does not require the preparation of nanostructures, this effect is expected to be absent here.

Figure 3: AFM topography images of Au nanostructures prepared by nanosphere lithography (using 3 \(\upmu\)m spheres as mask) without (left) and with (right) a coating of BMP DCA. In this defect-rich region the effect on various shapes and sizes of Au islands is evident.

## 4 SEM Imaging

The idea to use an ionic liquid as a coating for insulating materials to be able to image them in an electron microscope has been tested in the literature before [8, 9]. However, those samples were usually biological samples such as cells or pollen, not flat samples such as glass, which require a thin and uniform film. Consequently, the conductive properties of such films were, to our knowledge, not yet tested in a SEM. To simulate the charging behavior in space environments, the samples were mounted on a metal carrier using conductive carbon pads and exposed to the electron beam of a scanning electron microscope (EVO MA 10, Zeiss) at high vacuum conditions (10\({}^{-6}\) mbar). At beam currents of \(\sim\)100 pA, the current density of 5 A/m\({}^{2}\)\(\simeq\) 3\(\times\)10\({}^{19}\) e/(s\(\cdot\)m\({}^{2}\)) is approximately six orders of magnitude higher than the 6\(\times\)10\({}^{12}\) e/(s\(\cdot\)m\({}^{2}\)) on average and 1.66\(\times\)10\({}^{13}\) e/(s\(\cdot\)m\({}^{2}\)) under storm conditions that have been detected by the ATS and SCATHA missions [1]. Hence, a material that is capable of mitigating charging effects at SEM current densities should easily be capable of performing that task under real space conditions. Figures 3(a) and 3(b) show a comparison between a glass sample with gold nanotriangles (3(a)) and a glass sample with gold nanotriangles and an additional coating of BMP DCA (3(b)), both subjected to the aforementioned conditions. When imaging the uncoated sample using a primary electron energy of 5 keV, no clear image of the surface could be achieved, even at comparatively low magnification, i.e., low electron densities of 2.8\(\times\)10\({}^{12}\) e/m\({}^{2}\). The primary feature visible is a bright region in the center, which is the result of charges accumulating on the surface and reflecting the primary electrons towards the detector [10]. Those bright areas are surrounded by dark band-like features which are the result of secondary electrons emitted from the surface being guided away due to lateral fields, which are a result of the charges on the surface in the bright regions [10].
On the coated sample however, the surface of the sample with its gold nanostructures, primarily the triangles with edge lengths of below 100 nm, is clearly visible. The same primary electron energy as before was used, however, at a far higher magnification corresponding to almost six orders of magnitude higher electron density of 1.5\(\times\)10\({}^{18}\) e/m\({}^{2}\) The gold appears brighter than the glass, because of its higher electron density, leading to an increased cross section and emission of secondary electrons. In the top left figure 4b a brighter cloud like feature can be seen, which is the onset of surface charging, which may be explained by a not entirely uniform film. Upon further increasing the magnification (thus the electron current density), these regions with possibly thinner films give rise to features like the one in 4a, albeit on a much smaller scale. However, as mentioned above, these are extreme conditions, unlikely to occur in space. ## 5 Surface Potential Measurements To better understand the conduction mechanism of BMP DCA, measurements of the surface potential were performed. These mechanisms could be that the electrons are conducted by "hopping" from one molecule to the other and thus the charging described in section 4 is actually the limit of this conduction channel or perhaps the electron beam might induce some irreversible chemical reactions, to name a few. In a first step glass samples containing gold nanostructures were compared to samples containing the nanostructures and a BMP DCA film again. The measurements were performed in the UHV chamber (\(\sim\)10\({}^{-10}\) mbar) of a tuning fork based scanning probe microscope (RHK, Duoprobe), equipped with a UHV compatible SEM (Orsay Eclipse+). Such a device detects the change of the eigenfrequency _df_ of a tuning fork, when the tip attached to it interacts with a surface. This change is proportional to the force gradient with \(z\) denoting the tip-sample-separation, as described in eq. (1). Assuming a model in which the tip and the sample form capacitor, the force \(F\) resulting from an applied voltage \(V\) can be described by eq. (2), with \(A(z)\) denoting a prefactor dependent on the tip-sample geometry. \[\mbox{(1)}\hskip 14.226378pt-\frac{dF}{dz}\propto df\hskip 56.905512pt\mbox{ (2)}\hskip 14.226378ptF=A(z)V^{2}\] If the applied voltage compensates the potential on the surface, the force acting on the tip and thus the frequency shift become minimal. The observed surface potential can directly be attributed to the presence of charges on the surface. Figure 5 shows such a surface potential measurement for glass with gold nanostructures (red) and glass with gold nanostructures and BMP DCA coating (blue) after exposure to \(\sim\)10\({}^{17}\) e/m\({}^{2}\) each. The measured frequency shift is plotted versus applied voltage. For the measurement on uncoated glass, the maximum of the curve is clearly shifted towards a positive voltage. As a positive voltage is applied, this means the actual potential of the surface is negative, which is what one would expect for a negatively charged surface. Fitting the curve, a minimal frequency shift is reached at 33.6 V bias Voltage, which means the surface potential is -33.6 V. For the coated sample, on the other hand, the fit parabola has its maximum at +0.22 V bias voltage and thus surface potential of -0.22 V is observed. 
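For illustration, the surface potential can be read directly off the vertex of such a parabolic fit. The short Python sketch below uses synthetic placeholder data generated from the published fit parameters of the uncoated sample (Fig. 5); it is meant only to show the extraction step, not the actual analysis pipeline.

```python
import numpy as np

# Placeholder data: replace with the measured df-V sweep of Fig. 5.
V = np.linspace(-20.0, 60.0, 81)
df = -0.00213 * (V - 33.6) ** 2 - 0.51      # published fit, uncoated sample

# Least-squares parabola df = a*V^2 + b*V + c. Its vertex V0 = -b / (2a) is the
# bias that compensates the charges on the surface (cf. Eqs. (1) and (2)), so
# the surface potential is -V0.
a, b, c = np.polyfit(V, df, deg=2)
V0 = -b / (2.0 * a)
print(f"vertex at {V0:.1f} V  ->  surface potential {-V0:.1f} V")
# Uncoated sample: about -33.6 V. The same procedure applied to the coated
# sample's curve yields only about -0.22 V.
```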
This small surface potential is not necessarily caused by the presence of charges but rather by the contact potential difference of tip and sample. Note that the different sign in the frequency shift also has a physical reason. As the charges on the surface induce image charges in the tip and both attract each other, for uncoated glass attractive forces are dominant (as long as the tip does not directly touch the sample in which case Pauli-repulsion becomes dominant), which result in negative frequency shifts. For the coated glass, these surface charges are absent meaning the interaction between tip and sample follows a general Lennard-Jones like behavior with attractive and repulsive regimes. Approaching in this repulsive regime corresponds to the positive frequency shift observed. ## 6 Conclusions BMP DCA can be used to generate nanoscopically thin conductive layers on glass substrates, which was confirmed by AFM and reflectometry measurements. By imaging the gold nanotriangles which were generated on top of the glass surface in a scanning electron microscope, we confirmed that these layers were indeed sufficient to compensate charge densities that are far beyond typical conditions encountered in space. Furthermore, measurements of the surface potential of BMP DCA coated surfaces indicated the absence of residual surface charges, even after exposure to comparatively high electron densities. Figure 5: df-V-spectroscopy curves for glass with gold nanostructures (red) and glass with gold nanostructures and BMP DCA coating (blue) after exposure to \(\sim\)10\({}^{17}\) e/m\({}^{2}\) each. Tip material: Pt/Ir Crosses indicate experimental data, continuous lines the parabolic least-square fits Red fit: -0.00213 Hz/V\({}^{2}\)\(\times\) (V-33.6 V)\({}^{2}\)- 0.51 Hz; blue fit: -0.00036 Hz/V\({}^{2}\)\(\times\) (V-0.22 V)\({}^{2}\)\(+\) 4.04 Hz This suggests, that BMP DCA coatings should be capable to be used as a coating to mitigate spacecraft surface charging. ## 7 Future Work In the future we plan to investigate, how these ionic liquid coatings behave under the influence of ionizing radiation, i.e., whether they retain their conductive capabilities, whether uniform films are damaged and whether their optical properties change. Studies indicate almost no absorbance above 250 nm [11]. Additionally, time and spatially resolved Kelvin Probe Force microscopy measurements of the surface potential seem promising for unravelling the nature of conduction mechanisms in ionic liquids. Ultimately, a performance test of these layers under real space conditions will be necessary.
2310.14439
Towards the automation of book typesetting
This paper proposes a generative approach for the automatic typesetting of books in desktop publishing. The presented system consists in a computer script that operates inside a widely used design software tool and implements a generative process based on several typographic rules, styles and principles which have been identified in the literature. The performance of the proposed system is tested through an experiment which included the evaluation of its outputs with people. The results reveal the ability of the system to consistently create varied book designs from the same input content as well as visually coherent book designs with different contents while complying with fundamental typographic principles.
Sérgio M. Rebelo, Tiago Martins, Diogo Ferreira, Artur Rebelo
2023-10-22T22:50:46Z
http://arxiv.org/abs/2310.14439v1
# Towards the automation of book typesetting ###### Abstract This paper proposes a generative approach for the automatic typesetting of books in desktop publishing. The presented system consists in a computer script that operates inside a widely used design software tool and implements a generative process based on several typographic rules, styles and principles which have been identified in the literature. The performance of the proposed system is tested through an experiment which included the evaluation of its outputs with people. The results reveal the ability of the system to consistently create varied book designs from the same input content as well as visually coherent book designs with different contents while complying with fundamental typographic principles. Design tools; Data-driven Design; Generative Design; Graphic design; Typography. ## 1 Introduction Typography is the art of giving our language a visual form. Thus, it is through typography that we materialise and store our knowledge and information [1]. Since the publication of the first typographic book, in the mid-fifteenth century, society has been looking for the most appropriate way to convey a message typographically. In the field of editorial design, this effort has focused on the search for the best principles for designing typographic compositions, such as books [2]. Advances in digital technologies have been changing the work process of graphic designers. The emergence of computational approaches in the design domain enabled designers to explore new perspectives, new conceptual and visual possibilities, and achieve new types of solutions. Furthermore, the emergence of computational approaches comes with a paradigm shift in the role of the designer, who begins to create processes that enable the creation of designs, instead of designing the final solution. In other words, the design concept is translated into a computer program that systematically explores various design possibilities from the original concept. That said, in the particular case of layout design, we consider that the potential of computational approaches is not yet being fully explored. In this work, we explore a computational generative approach for the automatic design of book layouts. The result is a computer system, which operates inside the Adobe InDesign environment, that automatically generates book designs from input content. Figure 1 shows different books created with the presented system. It starts by receiving the input content, namely text and images. Before generating compositions for the input content, the designer can specify restrictions on some of the visual characteristics of the output compositions, _e.g._ format, size, grid and font. Then, the system automatically typesets the content based on a set of typography rules and principles found in fundamental literature in the field. In the end, the system presents the generated composition to the user as an editable Adobe InDesign document. Overall, the system is capable of creating layout compositions that comply with specific fundamental typographic principles while matching the graphic preferences of the user. In addition, experiments conducted with the system demonstrate its ability to autonomously generate varied compositions with the same input content and also generate visually coherent compositions for different input contents. In addition, it creates functional layout designs in an almost unpredictable manner. 
This reveals the great potential of this approach to layout design, both in generating outputs that the designer uses as final solutions and in using the outputs as starting points for further explorations. This work is aligned with the experiments previously developed by Ferreira _et al._[3] The remaining of the article is organised as follows: Section 2 overviews related works; Section 3 describes the system, namely the interaction process, the engine that runs the system, as well as its inputs and outputs; Section 4 presents an experiment conducted to validate the designs created with the system and discusses the results obtained; Lastly, Section 5 summarises the main contributions of this work and identifies future work. ## 2 Computational Approaches in Editorial Design Systematic approaches have been popular in layout design since the mid-twentieth century when some creative practitioners designed layouts based on grids and the variation of the visual features of typographic elements [4, 5, 6]. Some works have explored algorithmic approaches for book layout design. In the 1960s, Gerstner introduced a selective and combinatorial method for the design of graphics, including layouts [7]. Afterwards, he translated it into a logical language that computers could understand in _Compendium for Literates: A system of Writing_[8]. LeWitt compiled, Figure 1: Examples of books generated with the presented system. All designs generated by the system for this paper, along with demonstration videos, can be found in the supplementary files. in 1971, a set of formal instructions to design a conceptual art exhibition catalogue [9]. Already in the 1980s, Knuth and Plass [10] presented a dynamic programming algorithm to page breaks avoiding widows and orphans and employed it to typeset a two-column dictionary. Soon thereafter, Knuth introduced the parametric typeface design language, _Metafont_[11] and the _TeX_ typesetting system [12], enabling anybody to produce a book using a structural markup language and a set of high-level commands. Cooper and her students, at the Visible Language Workshop, experimented with the generation of layouts, _e.g._ publications that resulted from the collaboration with IBM [13] and the cover for _Design Quarterly 142_[14]. In the mid-1990s, Maeda also designed a series of digital booklets, the _Reactive Book_ series, where the graphics are controlled interactively by user input [15]. Nevertheless, in the last two decades, we observed the increasing employment of the use of computational graphic design approaches, especially because of the development of easier-to-use creative code environments [16, 17]. A solid overview of the field is presented, for instance, by Reas _et al._[6] or Richardson [18]. These approaches have been explored in several artistic and creative domains, including the generation of visual and communication artefacts such as poster designs (_e.g._ Rebelo _et al._[19] or Guo _et al._[20]), banners (_e.g._ Gatarski [21] or Yin _et al._[22]), user interfaces (_e.g._ Quiroz _et al._[23] or Amitani _et al._[24]), visual identities (_e.g._ Levin [25] or Neue [26]), type designs (_e.g._ Ahn & Jin [27] or Martins _et al._[28]), among others. In the context of book typesetting, two frameworks are stimulating the adoption of computational practices. The library _Basil.js_[29] provides friendly and accessible tools for scripting and automation in Adobe InDesign. 
Then, the open-source framework _The Magic Book_[30] facilitates the design, production, and self-publishing of books. Tailor-made procedural and template-based approaches are employed in the generation and definition of layouts, modification of layouts based on their contents and/or inter-relationships between the elements. LettError type design studio used random processes and parametric design methods to design several typographic artefacts such as calendars, type specimens and even their portfolio [31, 32]. Oliveira [33] presented a recursive division method to place elements on both one and multiple-column grid layouts. Cleveland [34] proposed a method for generating style-based design layouts that explores the inter-relationships between text and graphics. Also, he presented a system to generate layouts employing these principles. LUST developed a set of scripts to layout and stylise the book "I Read Where I am" informed by its content [35]. Damera-Venkata _et al._[36] presented a template-based probabilistic framework for generating document layouts for variable content. Ahmadullin and Damera-Venkata [37] also presented a probabilistic model for newspaper typeset that, based on given content, divides the available layout into regions and optimises the content to fit within these regions. Flipboard developed _Duplo_[38], a layout engine that creates news magazines adapted to its contents and based on a set of heuristics such as the amount and flux of text or the existence and position of images. We can also observe the use of Artificial Intelligence approaches for typesetting. Evolutionary computation and greedy approaches have been used to create layouts with varied purposes. For example, Geigel and Loui [39] evolved layouts for photo books by evaluating different aesthetic criteria. Goldenberg [40] employed an evolutionary approach to automatically generate page layouts, minimising the waste of space on the page. Gonzalez _et al._[41] used a greedy simulated annealing algorithm to create multi-column newspaper layouts. Purvis _et al._[42] automatically evolved documents using a multi-objective optimisation approach, considering a set of layout constraints and aesthetic measures. Quiroz _et al._[43] evolved brochure documents according to user preferences and design guidelines. Strecker and Hennig [44] proposed a grid-based method for newspaper layouts, minimising the wasted space and bearing in mind newspaper design aesthetic measures. Boll _et al._[45] and Sandhaus _et al._[46, 47] evolved photo layouts based on rules of layout design and proposed a method to transform a blog into a photo book considering different aesthetic requirements. Onduygu [48] developed the system _Graphagos_ that generates compositions through the interactive evolution of specific features of visual elements. Klein [49] developed the tool _Crossing, Mixing, Mutating_ to create variations in a template using genetic operators. Later, an updated version of this tool was released as an InDesign plug-in named _Evolving Layout_[50]. Lopes _et al._[51] developed the system _evoDesigner_, which automatically creates and evolves designs in the InDesign environment. Recently, Machine Learning approaches have been employed in the layout design field taking into account the relation between elements on layout and the learning of specific typesetting and design styles, _e.g._ Zheng _et al._[52], Li _et al._[53], or Kikuchi _et al._[54]. 
Our analysis of the related work unveils a series of computational approaches that reveal great potential for the support and automatisation of the creation of book layouts. However, as far as we know, the existing approaches on this context rarely enable the subsequent modification of the generated designs, even less in the natural working environment of the designer. This way, it is difficult to introduce such computational technologies in the design process in an easy and fluid fashion. Also, we denoted that most of these approaches do not present a multipurpose objective, being developed to generate specific designs. ## 3 System We developed a computer system to automatically typeset books from content provided by the user. This system is developed as a computer script that operates inside Adobe InDesign software which is popular among graphic designers working in the field of typography. We idealised and implemented the system to take advantage of the typeset functionalities supplied with InDesign, which can be controlled via scripting. By integrating the system with InDesign, we allow users to generate design variations and easily edit them within a familiar working environment. One can find demonstration videos in the supplementary files. To install the system in the InDesign environment, the user only needs to copy the folder that contains the system files to the scripts folder in the application directory. To facilitate access to the system, we also made available a script that creates a dedicated tab for the system in the InDesign navigation menu. The system installation files and source code are available at [https://cdv.dei.uc.pt/2019/scriptedpages.1](https://cdv.dei.uc.pt/2019/scriptedpages.1) Footnote 1: We also made available supplementary materials, including demonstration videos and examples of books designed with the proposed system. In the following subsections, we describe the user interface of the system and explain how its engine works. ### Interface Figure 2 shows different snapshots of the user interface built to enable the interactive control of the inner workings of the system. The user interface is structured in a series of five tabs. In each tab, one can set specific properties of the composition or let the system choose automatically based on a set of predefined rule-based values for those properties. The first interface tab "Document" (Figure 2a) concerns the structural characteristics of the document. It allows the user to set the page size, margins, number of columns and gutter. There is also an option to import settings stored in a file. In the second tab "Input" (Figure 2b), the user provides the content of the book to be typeset. To that end, the interface offers two options. The first option is to load a Microsoft Word file containing only text, text and images, or only images. The second option is to load a Microsoft Word file containing only the text and a folder containing the images. For this second option, the place where each image should be inserted must be identified in the text using a tag @imageFileName@, which will be replaced by the image with the same name contained in the loaded folder. In addition, the user can choose whether to generate a table of contents and/or colophon for the book. The last option of this tab allows the user to select the language of the input text so that it is possible to correctly hyphenate words. The third tab "Styles" (Figure 2c) concerns the definition and mapping of paragraph styles. 
The user has three options to choose from. The first option is to keep the styles imported from the input Word document. The second option is to map each style of the input document to another paragraph style selected, manually or randomly, from a list created from all fonts installed on the computer, while keeping the remaining paragraph attributes. The last option is to let the system generate the styles, suggesting font combinations based on the rules entered in the system (these rules are explained in the next subsection). In the fourth tab "Experimental" (Figure 2d), the user can toggle experimental features that can be applied to the generated book. In the presented version of the system, there are four experimental features: (i) draw a colour background on half of each book page; (ii) draw a colour gradient along the inner and/or outer margins of the pages; (iii) apply a random indentation to each text paragraph; and (iv) make the cover title as large as possible. The purpose of these features is to increase the uniqueness of the resulting designs. The user can also opt to let the system randomly choose if any experimental features should be applied by selecting the checkbox "Surprise me." Furthermore, new features can be implemented and added to the system at any time. After interacting with these four tabs of settings, the user can start creating book designs by clicking on the button "Create" located at the bottom right corner of the interface. This button will start the engine of the system which will automatically create a new InDesign document and typeset a new book from the content and settings chosen by the user. Once the typeset process is complete, the result is presented to the designer as an editable InDesign document. From that moment on, the user can, for instance: (i) adopt the generated book as a final design; (ii) use the generated book as a starting design from which the designer can make any changes or refinements needed; or (iii) continue to use the system to generate more designs until a more suitable design is found. There is another interface tab, entitled "Properties" (Figure 2e), which not only overviews the settings used in the generation of a book but also enables the user to save those settings to a file. Later, this settings file can be imported to the system using the first interface tab, as mentioned earlier. This functionality can be useful, for example, to facilitate the typeset of different books using the same settings. This last tab is only accessible after a book is generated. After the generation, it is also made Figure 2: Snapshots of the system, showing the five different tabs of the user interface: (a) Document, (b) Input, (c) Styles, (d) Experimental and (e) Properties. Demonstration videos of the system can be found in the supplementary files. visible a button that enables the user to generate a new book, maintaining the same input document and predefined settings. ### Engine The system operates based on a series of typographic rules, styles and principles (Table 1) which were identified and collected from literature recognised in the field. This includes the work by Bringhurst [1], Muller-Brockmann [55], Haslam [56], Hochuli and Kinross [2], Hochuli [57], Lupton [58] and Tschichold [59]. The encoding of these rules into the system, enables the typeset process to go through them, one by one, and make typographic decisions on all composition attributes. 
The rules are stored in a JSON file, which contains the possible configurations (_i.e._ values or range of values) and, when applicable, their probability rate for each typographic attribute. This file enables easy access to all rules by the system and their quick editing by the designer at any time. \begin{table} \begin{tabular}{l|l|l|l} \hline **Attribute** & **Valid approaches or values** & **Influenced by** & **Based on** \\ \hline **Book type** & Short reading (\(<\)50,000 words) & n/a & Defined empirically \\ \cline{2-3} & Long reading (\(\geq\)50,000 words) & & based on examples \\ \cline{2-3} & Only images & & \\ \cline{2-3} & Text and images & & \\ \hline **Book Size and** & 105 \(\times\) 180 mm (portrait) & Book type & Defined empirically \\ **Format** & 110 \(\times\) 170 mm (portrait) & & based on examples \\ \cline{2-3} & 110 \(\times\) 180 mm (portrait) & & \\ \cline{2-3} & 110 \(\times\) 220 mm (portrait) & & \\ \cline{2-3} & 130 \(\times\) 200 mm (portrait) & & \\ \cline{2-3} & 150 \(\times\) 210 mm (portrait) & & \\ \cline{2-3} & 170 \(\times\) 240 mm (portrait) & & \\ \cline{2-3} & 180 \(\times\) 180 mm (square) & & \\ \cline{2-3} & 200 \(\times\) 110 mm (landscape) & & \\ \cline{2-3} & 200 \(\times\) 120 mm (landscape) & & \\ \hline **Margins** & Defined randomly based on a certain range for each margin. & n/a & Defined empirically \\ & Top and Bottom margins between 7 mm and 15 mm. Inside and outside margins between 7 mm and 30mm. & & \\ \hline **Grid** & Defined based on a random column size value calculated based on a certain range. Column sizes vary between 70 mm and 140 mm. & Book size & Muller-Brockmann [55] \\ \hline **Line length** & Between 45 and 75 characters (the ideal is 66); \(\geq\)48 characters for justified text & Grid: Alignment & Bringhurst [1] \\ \hline **Words per page** & \(\leq\)500 words (\(\approx\)45 lines) for one column & n/a & Bringhurst [1] \\ \cline{2-3} **Lines per page** & \(\leq\)1.000 words for multiple columns & & \\ \hline **Paragraph marks** & Oraments & Book type & Bringhurst [1] \\ \hline \end{tabular} \end{table} Table 1: Typography rules, styles and principles loaded into the system by default Following all the encoded typographic rules, the system performs the typesetting process in order to computationally design books. This process consists of seven different sequential steps: (i) Input processing; (ii) Document size and grid definition; (iii) Typeface definition; (iv) Typographic styles definition; (v) Document typesetting; (vi) Experimental features application; and (vii) Cover design. The next subsections further describe each step. #### 2.2.1 Input Processing The first step is to load and process the content provided by the user. Once the content is loaded, the system analyses it and extracts data concerning, for example, the text length, number of images and proportion of text in relation to images. This data is important since it allows the system to determine the type of book that it is typesetting, that is, the extracted data may indicate whether the task is to design a long reading book (_i.e._ 50,000 words or more), a short reading book (_i.e._ less than 50,000 words), a text and images book, or a book that only contains images. This information, in turn, will enable the system to make typographic decisions in the following steps of the typesetting process. #### 2.2.2 Document Size and Grid Definition The next step is to create a new document with the page format and size based on the type of book. 
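As a side note to the input-processing step described above, the book-type classification it performs might be sketched as follows. The 50,000-word threshold comes from the description of the system; the function name and the image-proportion test are illustrative assumptions of ours.

```typescript
// Hypothetical sketch of the book-type classification from the input-processing step.
type BookType = "longReading" | "shortReading" | "textAndImages" | "onlyImages";

interface InputStats {
  wordCount: number;     // total number of words in the input document
  imageCount: number;    // number of images found in the input
  wordsPerImage: number; // proportion of text to images (Infinity when there are no images)
}

function classifyBook(stats: InputStats): BookType {
  if (stats.wordCount === 0 && stats.imageCount > 0) return "onlyImages";
  // Assumed threshold: image-heavy documents are treated as "text and images".
  if (stats.imageCount > 0 && stats.wordsPerImage < 500) return "textAndImages";
  return stats.wordCount >= 50000 ? "longReading" : "shortReading";
}
```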
Thus, we encoded in the typographic styles, rules and principles a set of available book sizes as well as the probability of being selected for a certain book type. Then, the book size is selected at random based on those probability rates. In this step, the system also defines the document grid, that is, the size of the margins, the number of columns, and the document baseline. Margins are defined randomly within a present range. The number of columns in the grid is calculated by dividing the width of the available text block by a random integer value chosen within a predefined range. Also, it considers the predefined minimum text block width. Thus, in small page sizes, it will only create one-column grids. The template pages are defined by creating the template text boxes and placing the running header (with section name) and page numbering. The section name of a page is the last title found in the input document file. The position and size of these elements are defined empirically based on observation and analysis of examples. Currently, the system implements five different ways to compose the headers and page numbering. #### 2.2.3 Typeface Definition Then, the system defines the typefaces to be used based on the preferences of the user, who can choose to (i) keep the typefaces used on the input document, (ii) map those typefaces to new ones installed on the computer, or (iii) let the system select the typefaces. When users decide to map the typefaces of the input document to new ones, they must define the style for each typeface using a font installed on their computers. Typefaces that are not for mapping are defined with the same typeface as in the input document. Additionally, users can determine that the system should map one, or more, typefaces at random. In the last option, the typefaces are selected from a set of pairs or combinations of fonts which are defined and encoded on the configuration file mentioned above. When selecting the fonts, the system considers the type of book (_e.g._ long reading or short reading). Each font pair is composed of one typeface for the titles and another for the text body, along with other typographic features that are specific to each font. Table 2 presents the typeface pairing loaded into the system by default. 
\begin{table} \begin{tabular}{|p{113.8pt}|p{113.8pt}|p{113.8pt}|} \hline **Title font** & **Body font** & **Leading** & **Book type** \\ \hline BRRR bold (Swiss Typefaces, 2017) & PS Fournier Typofonderie (Typofonderie, 2012) & 1.17 & Long reading \\ \hline Founders Grotesk bold (Klim Type Foundry, 2010) & Arnhem regular (Fred Smeijers, 2002) & 1.25 & Long reading \\ \cline{2-4} & Domain regular (Klim Type Foundry, 2013) & 1.20 & Long reading \\ \cline{2-4} & Founders Grotesk regular (Klim Type Foundry, 2010) & 1.20 & Short reading \(|\) Text and images \\ \cline{2-4} & Tiempos Regular (Klim Type Foundry, 2010) & 1.20 & Long reading \\ \hline Futura PT bold (Paratype, 1995) & Didot regular (Linotype, 2009) & 1.30 & Long reading \(|\) Text and images \\ \cline{2-4} & Sabon regular (Lynotype, 1964) & 1.28 & Long reading \\ \cline{2-4} & Futura PT regular (Paratype, 1995) & 1.20 & Short reading \(|\) Text and images \(|\) Only images \\ \hline Gill Sans bold (Monotype, 1928) & Baskerville PT regular (ParaType, 2016) & 1.25 & Long reading \\ \cline{2-4} & Perpetua regular (Monotype, 1925) & 1.16 & Long reading \\ \cline{2-4} & Minion regular (Adobe, 1990) & 1.28 & Long reading \\ \cline{2-4} & Times New Roman regular (Monotype, 1931) & 1.20 & Long reading \\ \hline \end{tabular} \end{table} Table 2: Typeface pairing data loaded into the system by default. \begin{tabular}{|p{113.8pt}|p{113.8pt}|p{113.8pt}|p{113.8pt}|} \hline GT Walsheim & Adobe Cashion regular (Adobe, 1990) & 1.20 & Long reading \\ \cline{2-4} & Bembo regular (Monotype, 1929) & 1.20 & Long reading \\ \hline Helvetica bold (Lynotype, 1957) & Arno regular (Adobe, 2007) & 1.16 & Long reading \\ \cline{2-4} & Joanna regular (Monotype, 1931) & 1.22 & Long reading \\ \cline{2-4} & Helvetica regular (Lynotype, 1957) & 1.20 & Short reading \(|\) Text and images \(|\) Only images \\ \hline La Nord bold (Type Club Düsseldorf, 2017) & Lyon Text regular (Commercial Type, 2009) & 1.30 & Long reading \\ \cline{2-4} & Arno regular (Adobe, 2007) & 1.16 & Long reading \\ \cline{2-4} & La Nord Regular (Type Club Düsseldorf, 2017) & 1.20 & Short reading \(|\) Only images \\ \hline Neuzeit S bold (Linotype, 1959) & Antwerp regular (A2 Type, 2011) & 1.20 & Long reading \\ \hline Proxima Nova bold (Mark Simons Studio, 2005) & Arnhem regular (Fred Smeijers, 2002) & 1.25 & Long reading \(|\) Text and images \\ \hline FF Scala Sans bold (FontShop, 1990) & Arno regular (Adobe, 2007) & 1.16 & Long reading \\ \cline{2-4} & FF Scala Serif regular (FontShop, 1990) & 1.28 & Long reading \\ \hline Univers bold (Linotype, 1957) & Sabon regular (Monotype, 1967) & 1.28 & Long reading \\ \hline Akkurat bold (Lineto, 2004) & Akkurat regular (Lineto, 2004) & 1.20 & Short reading \(|\) Text and images \(|\) Only images \\ \hline Antique Olive bold (Linotype, 1960) & Antique Olive regular (Linotype, 1960) & 1.20 & Short reading \(|\) Only images \\ \hline Arnhem bold (Fred Smeijers, 2002) & Arnhem regular (Fred Smeijers, 2002) & 1.25 & Short reading \(|\) Text and images \(|\) Only images \\ \hline Fedra Sans bold (Typotheque, 2001) & Fedra Sans regular (Typotheque, 2001) & 1.20 (Typotheque, 2001) & Short reading \(|\) Text and images \(|\) Only images \\ \hline ATF Franklin Gothic bold (ATF, 2019) & ATF Franklin Gothic regular (ATF, 2019) & 1.20 (ATF, 2019) & Short reading \(|\) Text and images \(|\) Only images \\ \hline Scala Sans bold (FontShop, 1990) & Arno regular (Adobe, 2007) & 1.16 & Short reading \(|\) Only images \\ \hline \end{tabular} #### 2.2.4 
Typographic Styles Definition The process continues with the definition of the paragraph, character and image styles. For each paragraph style, the following properties are defined: font, weight, size, leading, first line indentation, paragraph indentation, space before and after the paragraph, alignment, vertical alignment, hyphenation, language and colour. The font, weight, text leading, language initial text size and colour are defined based on the preferences expressed beforehand by the user. The final text size is determined based on the chosen document grid. In this process, the system checks whether the initial typeface size is within the limits defined by the loaded rules and then it composes a text box and confirms that its median number of characters per line is within a predefined range. When the median number of characters is lower or higher than the limits, the system decreases or increases the text size, respectively. This process will continue until a proper text size is reached. When those values are achieved, if yet necessary, the document grid is also modified, namely the margins area and the number of columns. Finally, the first line indentation, paragraph indentation, space before and after the paragraph, text alignment, vertical alignment and hyphenation are defined at random based on the type of book. The character styles are then defined based on the input document and the paragraph styles, which will be useful for using italics, bolds, small caps, among others. Also, the styles for the images are decided based on the input document and the paragraph style, thus determining their positioning, size, text wrap and effects. #### 2.2.5 Document Typesetting Once the base document is created and all the styles are defined, the system proceeds to the actual typesetting of the book. It starts by typesetting the inside of the book. The typesetting of the inside of the book includes a sequence of steps. First, the system positions the body text. Then, the typographic styles defined earlier are applied to the entire content of the document. The titles on the document are formatted considering three levels of hierarchies defined based on their text size on the input document: (i) chapters titles, _i.e._ titles in the biggest size and preceded by a page break; (ii) section titles, _i.e._ titles in a size bigger than the main text font and preceded by a page or a column break; and (iii) subsection titles, _i.e._ titles in the same size that the body text. Chapter titles are composed isolated on one page. Section titles are typeset on the following page of the document. In multi-column documents, they are placed isolated on the first column; otherwise, they are placed at the beginning of the page. Subsection titles are composed inline on the text of the same size as body text. Once the typographic styles are applied, the images and the corresponding captions are created automatically. Initially, images are placed inline, with the same width as the column where they are placed. If the book has a multi-column grid and the image is placed on the leftmost column of the document, the system randomly decides whether to change its size to fulfil more than a column. The captions are created automatically based on the name of the input images. For each document, the system determines a caption style based on the available inner space on margins as well as the existence of headers and page numbers. 
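As an aside, the body-text size-fitting loop described in the typographic-styles step above can be sketched as follows. The 45–75 characters-per-line bounds come from the encoded rules (Table 1); the step size and the point-size limits are illustrative assumptions of ours, and the measurement callback stands in for composing a trial text frame in InDesign.

```typescript
// Sketch of the size-fitting loop: adjust the type size until the median
// number of characters per line falls inside the admissible range.
function fitBodyTextSize(
  initialSize: number,
  measureCharsPerLine: (fontSize: number) => number,
  minChars = 45,   // lower bound on characters per line (from the rules)
  maxChars = 75,   // upper bound on characters per line (from the rules)
  step = 0.25,     // point-size change per iteration (our choice)
  minSize = 8,     // assumed lower limit for the type size
  maxSize = 14     // assumed upper limit for the type size
): number {
  let size = initialSize;
  for (let i = 0; i < 100; i++) {  // hard cap so the loop always terminates
    const chars = measureCharsPerLine(size);
    if (chars < minChars && size - step >= minSize) {
      size -= step;  // too few characters per line: a smaller type fits more
    } else if (chars > maxChars && size + step <= maxSize) {
      size += step;  // too many characters per line: a larger type fits fewer
    } else {
      break;         // within range, or the size limits were reached
    }
  }
  return size;       // if still out of range, the grid itself is adjusted, as noted above
}
```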
By default, captions can be placed below the images, aligned to the left, or aside the images, centred and vertically rotated 90\({}^{\circ}\). Finally, the table of contents and colophon are created. As mentioned before, the system interface allows the user to choose whether to create a table of contents and/or colophon. When these options are activated, a table of contents is created based on the titles on the input document and with the same paragraph style as the titles. On the other hand, the colophon with information about the generated book is typeset at the end of the book in the same style as that body text. This information includes a description of this project and parametric details about the specific design of that book, such as size, margins, and the number of columns, among others. #### 2.2.6 Experimental Features The use of experimental features is optionally and performed when selected by the user through the interface. As mentioned earlier, new features can be developed and added to the system. The presented version of the system presents the following experimental features regarding the inside of the book. The first one draws a colour background on half of each book page, in a specific layer under the text. The second feature draws a gradient along the inner and/or outer margins of the pages. The margins where the gradient is drawn are defined randomly, being possible for the system to create gradients in both margins. In both experimental features, the background colour is defined randomly based on a set of predefined colours, as soon as the first feature is used. By default, the set of colours includes the following seven CMYK colours: cyan (100,0,0,0); light orange (0,40,100,0); orange (0,60,100,0); red (0,100,100,0); pink (0,39,3,0); yellow (0,0,100,0); and beige (2,14,38,0). Finally, the last feature applies a random indentation to each text paragraph. #### 2.2.7 Cover design The last step is the design of the book cover. We developed a method that generates simple typographic compositions with the title of the book (aligned to the top margin) and its author (aligned to the bottom margin) in uppercase letters and in the same font used in the text. The back cover includes information about the computer system that generated the book. The cover background colour is randomly selected from a set of predefined colours. The book title and author(s) need to be defined in the input document using a specific paragraph style. Alternatively, when this information is not specified, the system automatically sets the title to the first sentence of the paragraph composed in the biggest text size in the input document. Experimental features also can be developed and applied to the covers. For instance, the presented version of the system includes experimental features that typeset the cover title in the largest font size possible. ## 4 Experimentation and Discussion We performed an experiment to assess if, and to what extent, the proposed system is able to automatically design book layouts with different purposes and styles. In particular, we are interested in studying the ability of the system to perform two design tasks: (i) create books with distinguishable layout designs, _i.e._ to generate a set of books that present varied visual characteristics; and (ii) create visually coherent books, _i.e._ to generate a collection of books that follow and share the same visual style between them. 
Therefore, we conducted a survey to assess the visual diversity and coherence of designs created with the proposed system. The following subsections describe our experimentation process. First, we explain the conducted experiment. Then, we report and discuss the obtained results. ### Experimental Method For the evaluation of visual diversity, we performed the following actions. First, we selected one public domain book. Then, we input this content into the system and generated 15 books while not manually setting any visual or typography attribute, _i.e._ the attributes were defined at random by the system within the predefined ranges based on the typographic styles, rules and principles defined by default in the system (see Table 1). Lastly, we presented the 15 generated book designs to a group of 42 participants and asked them to assess the layout diversity and/or coherence of the set. The selected content was the book "Contos" written by the Portuguese author Eca de Queiroz and published in 1992.2 This book comprises thirteen short stories, each one structured as a chapter, composed of about 73.330 words. Footnote 2: The book “Contos” by Eca de Queíros was retrieved from the Project Gutenberg. One may download the book at the following address www.gutenberg.org/ebooks/31347 (visited: 26 July 2022). For the evaluation of visual coherence, we proceeded as follows. First, we selected a book from the set of 15 generated earlier to evaluate diversity. Then, we exported to file the settings used by the system to generate the selected book. Table 3 overviews the visual and typographic features of the chosen book design. Next, we input these settings into the system and created book designs for 5 different contents. Lastly, we presented the 5 generated designs to the same group of participants and asked them to evaluate the layout diversity and/or coherence of this set. The two sets of books were evaluated by the same group of testing participants through a survey. First, we presented to the participants a set of 15 designs generated at random and then a new set of 5 designs generated using the settings of one design selected from the first set. After observing each set of designs, each participant was asked to classify its layout diversity and/or coherence on a scale between 1 (very \begin{table} \begin{tabular}{l|l} \hline \hline **Page size** & 130 mm \(\times\) 200 mm \\ (width \(\times\) height) & \\ \hline **Page margins** & 12 mm \(|\) 12 mm \(|\) \\ (top \(|\) inside \(|\) bottom \(|\) outside) & 13.7 mm \(|\) 22 mm \\ \hline **Running header and page numbering position** & Top page margin \\ \hline **Grid** & 1 \(|\) n/a \\ (columns number \(|\) gutter size) & La Nord (Raoul Gottschling, 2006) \(|\) 24 pt \(|\) 27 pt \\ \hline **Title alignment** & Top page margin \(|\) centre \\ (text box alignment on page \(|\) text alignment) & \\ \hline **Body text** & Antwerp (A2 Type, 2011) \(|\) 10 pt \(|\) 13 pt \\ (typeface \(|\) font size \(|\) leading) & \\ \hline **Body text alignment** & justify \(|\) hyphenation on \\ (text alignment \(|\) hyphenisation) & \\ \hline **Cover colour** & CMYK (2, 14, 38, 0) \\ \hline **Experimental features** & none \\ \hline \hline \end{tabular} \end{table} Table 3: System settings employed to generate the book designs used in the second part of the experiment, where their visual coherence is evaluated coherent) and 5 (very diverse). As already mentioned, the testing group includes 42 individuals. The age of the participants ranged from 19 to 49 years old. 
### Results and Discussion Figure 3 presents several pages composed automatically by the system for the first part of the experiment, where the visual settings are defined at random. Looking at the resulting designs, we noticed that most of the designs (12 out of 15) are portrait-oriented. This is due to the automatic classification of the input content by the system as a long reading book and therefore the probability of selecting a portrait format is higher. Nevertheless, the format of the generated books exhibits slight variations between them. Concerning the text box, the grids and the typefaces used, we noticed the influence of the typographic principles and rules encoded in the system. Most designs present the body text typeset in the justified text (13 out of 15) over a one-column grid (12 out of 15), which are characteristics considered suitable for long reading books. Nevertheless, it is possible to observe diversity in other aspects. The dimensions of the margins and the size of the grid gutter, when it exists, vary between books. The used font, text size and leading and position of the running and page numbers also change. Additionally, it is possible to observe that different experimental features were used, alone or combined, in 6 of the generated designs. Figure 3: Examples of pages from books designed by the system using random settings and used in the first part of the experiment to assess their visual diversity. All designs generated by the system for this paper can be found in the supplementary files. Observing books created when the system settings are imported from a previously generated book, we can notice the share of visual characteristics among the resulting designs (_e.g._ book format and size, page margins, grid or used typefaces) and their similarity to the initial design. Figure 4 shows pages from the book that sourced the settings file (Figure 4a), which was selected from the set used in the first part of the experiment (Figure 3), along with pages of some books used in the second part of the experiment (Figure 4b). One should note that Figure 3 and Figure 4 depict only a small portion of the designs generated for the experiment, which can be all consulted in the supplementary files. The conducted survey indicates that the system is able to generate both diverse and cohesive book designs. The chart in Figure 5 shows the distribution of answers obtained in the survey. When we asked the survey participants to classify the diversity and/or coherence of the first set of books, which were generated at random by the system, the majority of participants (35 out of 42) considered them as diverse (19 participants) or very diverse (16 participants). Only a few participants considered that this set of books was very cohesive (2 participants) or cohesive (5 participants). When participants performed the same task for the second set of books, which were generated from a specific settings file, the majority of participants (38 out of 42) considered that the generated designs were visually coherent between them. Most of them (30 Figure 4: Examples of pages from books designed by the system using the same settings. The top row of pages (a) belongs to a book selected from the set used in the first part of the experiment. The other two rows of pages (b) belong to two books of the set used in the second part of the experiment, generated using the settings sourced by the book selected from the first set (a). 
All designs generated by the system for this paper can be found in the supplementary files. participants) considered that this second set of designs present a high-level coherence among them. The survey results revealed that variables and properties defined by the system can create visually diverse layouts. Although the system engine determines the features of books based on a set of predefined typographic rules and principles, it also employs probabilistic mechanisms to define some attributes. This allows the occasional definition of attributes in an unexpected manner, promoting visual variation on the resulting layouts. This is visible, for instance, in the first part of the experiment, where books are created with the different formats even though the system tends to avoid the use of landscape format in long reading. The obtained results expose the exploratory nature of the proposed system, which demonstrates high potential to stimulate and foster graphic designers' creativity and experimentation in the different stages of the design process. The system presents itself as a co-creativity tool which enables editorial designers to explore multiple conceptual and visual possibilities in an accessible, easy, and effortless manner. This is possible by enabling users to not only define values of the different attributes but also by enabling them to define the level of autonomy of the system and/or target the exploration of certain properties. Therefore, the system can be used in most stages of the design process, from the earlier and exploratory stages (when designers can take Figure 5: Distribution of the answers obtained in the user survey conducted to evaluate the visual coherency and diversity of different books generated by the system. Black bars regard user evaluations of books created using random settings chosen automatically by the system. Grey bars regard user evaluations of books created using the same settings imported from a selected settings file. advantage of random generation to look for new types of layouts) to the final stages (when designers need to fine-tune one or more graphic attributes). The system allows exporting data to settings files that encode the design of the book and, later, can be loaded into the system to create other books which are visually similar. As demonstrated by the survey results, the proposed system can automatically create highly coherent designs when the book properties and characteristics are prior determined and stored in these settings files. Users can also include in the system their desired book properties both directly on the system interface and/or by modifying the typographic principles and rules used by the system. The proposed system is also a useful tool to automate the design of books that need to follow a set of restricted typographic and visual attributes (_e.g._ when it is necessary to design new books that will be part of a collection or a series). Thus, besides its exploratory nature, the system also enables the automatisation of some editorial design tasks and routines. With the proposed system, users do not require other software tools to manipulate and produce the resulting book designs since it operates inside of the popular editorial design software Adobe InDesign and the generated books are made available as editable documents. This way, it is fully integrated into the typical working environment of editorial designers. 
This allows its use both as an exploratory and as an automation tool, thus empowering designers to easily edit or fine-tune the output designs directly in a familiar environment. The experimental features implemented by the system can be observed in some of the generated books and definitely contribute to their diversity and variation. Nevertheless, we noted that some generated designs exhibit certain graphic limitations. This may be related to the fact that the generated designs comply with the same default typographic rules and principles. Although users can manipulate these rules, this will primarily change the typeset of the book but it will not include new visual features. For this reason, the system facilitates the addition of new features as well as their control. In this sense, one can add new features to the system for exploration and automatisation purposes. We believe that this possibility may allow the system to solve some lack of distinguishable visual features of the output, including in the design of the covers. In summary, the experimental results demonstrate the ability of the proposed system to automatically generate finished and functional designs from scratch. Furthermore, the results reveal the potential of the system as a useful exploratory tool in the context of book typesetting and editorial design. It may be operated by graphic designers when they are searching and exploring new conceptual and visual perspectives, fine-tune a book characteristic, and/or design books that must be coherent with a given set of typographic rules. In addition, we consider that the automation provided by the system has great potential in varied graphic design commercial scenarios, _e.g._ the automatic design and production of customised books, which is relevant for print-on-demand applications, or the effortless typeset of books for an existing one-book collection. ## 5 Conclusion We have presented a novel approach to computationally design books. The presented system implements a generative design process which takes advantage of the scripting capabilities of Adobe InDesign to procedurally typeset books from content provided by the user. We have shown the ability of the system to (i) create book designs that consistently comply with a series of typographic rules, styles and principles identified in the literature; (ii) produce visually diversified books from the same input content; and (iii) produce visually coherent books with different contents. The work presented in the paper may challenge the typical roles of both the tool and the designer. First, by automatically creating and suggesting design alternatives, the tool ends up playing a more active role in the design process. Then, by modifying and developing custom tools, the designer is no longer a mere tool user and becomes the author of tools tailored to specific needs. We believe this shift can be fruitful since it enables the exploration and discovery of new technical and creative possibilities. This work can hopefully provide directions to further research on generative processes for supporting design exploration and finding unique designs. In the particular case of typography, generative approaches such as the one presented in the paper can be useful and reveal great potential, especially in the current print-on-demand market and digital publishing, where each publication may be unique. 
Our future work will move in the direction of employing Artificial Intelligence techniques, such as Evolutionary Computation and Machine Learning, to enable a deeper exploration of the vast space of book designs that can be achieved with the system and also to automatically suggest settings to designers according to their needs or goals. ## 6 Acknowledgments We would like to express our gratitude to all the participants in the evaluation sessions. This work is partially supported by the Foundation for Science and Technology, I.P./MCTES (Portugal) through national funds (PIDDAC), within the scope of project UIDB/00326/2020 or project code UIDP/00326/2020. Sergio M. Rebelo was funded by FCT under the grant SFRH/BD/132728/2017 and COVID/BD/151969/2021.
2301.08989
Note on Milnor numbers of irreducible germs
Let $({\bf V},0)\subset (\mathbb{C}^n,0)$ be a germ of a complex hypersurface and let $F: (\mathbb{C}^n,0)\to(\mathbb{C}^n,0)$ be a germ of a finite holomorphic mapping. If the germs $({\bf V},0)$ and ${\bf W}:=(F^{-1}({\bf V}),0)$ are irreducible and have isolated singularities, then $$\mu(F^{-1}({\bf V}))\ge \mu({\bf V}),$$ where $\mu$ denotes the Milnor number.
Zbigniew Jelonek
2023-01-21T18:41:12Z
http://arxiv.org/abs/2301.08989v1
# Note on Milnor numbers of irreducible germs ###### Abstract. Let \((\mathbf{V},\mathbf{0})\subset(\mathbb{C}^{n},\mathbf{0})\) be a germ of a complex hypersurface and let \(F:(\mathbb{C}^{n},0)\rightarrow(\mathbb{C}^{n},0)\) be a germ of a finite holomorphic mapping. If the germs \((\mathbf{V},\mathbf{0})\) and \(\mathbf{W}:=(F^{-1}(\mathbf{V}),\mathbf{0})\) are irreducible and have isolated singularities, then \[\mu(F^{-1}(\mathbf{V}))\geq\mu(\mathbf{V}),\] where \(\mu\) denotes the Milnor number. 2010 Mathematics Subject Classification: 14R15, 14R99, 14P10 ## 1. Introduction This paper is devoted to proving the following theorem: **Theorem 1.1**.: _Let \((\mathbf{V},\mathbf{0})\subset(\mathbb{C}^{n},\mathbf{0})\) be a germ of a complex hypersurface and let \(F:(\mathbb{C}^{n},0)\rightarrow(\mathbb{C}^{n},0)\) be a germ of a finite holomorphic mapping. If the germs \((\mathbf{V},\mathbf{0})\) and \(\mathbf{W}:=(F^{-1}(\mathbf{V}),\mathbf{0})\) are irreducible and have isolated singularities, then_ \[\mu(F^{-1}(\mathbf{V}))\geq\mu(\mathbf{V}),\] _where \(\mu\) denotes the Milnor number._ As a corollary we obtain the main result of [1]: **Corollary 1.2**.: _Let \((\mathbf{V},\mathbf{0})\subset(\mathbb{C}^{n},\mathbf{0})\) be a germ of a complex hypersurface and let \(F:(\mathbb{C}^{n},0)\rightarrow(\mathbb{C}^{n},0)\) be a germ of a finite holomorphic mapping. If the germ \((F^{-1}(\mathbf{V}),\mathbf{0})\), taken with the reduced structure, is smooth, then the germ \((\mathbf{V},\mathbf{0})\) is also smooth._ ## 2. Main results We start with **Definition 2.1**.: _Let \((\mathbf{V},\mathbf{0})\subset(\mathbb{C}^{n},\mathbf{0})\) be a germ of a complex hypersurface with an isolated singularity at \(0\) and let \(\mathbf{f}\in\mathcal{O}_{0}\) be a generator of the ideal \(I(\mathbf{V},\mathbf{0})\). Then the Milnor number \(\mu(\mathbf{f})=\dim\,\mathcal{O}_{0}/(\frac{\partial\mathbf{f}}{\partial x_{1}},\ldots,\frac{\partial\mathbf{f}}{\partial x_{n}})\) does not depend on \(\mathbf{f}\) but only on \((\mathbf{V},\mathbf{0})\). We write \(\mu(\mathbf{V},\mathbf{0}):=\mu(\mathbf{f}).\)_ It is easy to see that this definition is well posed, i.e., the number \(\mu\) does not depend on the choice of the generator \(\mathbf{f}\) but only on \((\mathbf{V},\mathbf{0}).\) Now we can prove our main result: **Theorem 2.2**.: _Let \((\mathbf{V},\mathbf{0})\subset(\mathbb{C}^{n},\mathbf{0})\) be a germ of a complex hypersurface and let \(F:(\mathbb{C}^{n},0)\rightarrow(\mathbb{C}^{n},0)\) be a germ of a finite holomorphic mapping. If the germs \((\mathbf{V},\mathbf{0})\) and \(\mathbf{W}:=(F^{-1}(\mathbf{V}),\mathbf{0})\) are irreducible and have isolated singularities, then_ \[\mu(F^{-1}(\mathbf{V}))\geq\mu(\mathbf{V}).\] Proof.: We can assume that \(n>1.\) Take \(\epsilon>0\) so small that all spheres \(S_{\eta}=\{z:|z|=\eta\}\) are transversal to \(V\) for \(0<\eta\leq\epsilon\). Let \(g\) be a reduced equation for \(V.\) We can assume that for \(|c|<\delta\) the fibers \(\{g(z)=c\}\) are transversal to \(S_{\epsilon}\) and they are Milnor fibers for \(g\) in \(B(0,\epsilon)\) (see [3]). Take a ball \(B^{\prime}\) so small that \(F(B^{\prime})\subset B(0,\epsilon)\). There is an \(\eta_{0}>0\) so small that \(F^{-1}(B(0,\eta_{0}))\subset B^{\prime}.\) We can take \(\delta_{0}\) so small that for every \(c\) with \(|c|\leq\delta_{0}\) the fibers \(g=c\) are transversal to the spheres \(S_{\eta}\) for all \(\eta_{0}\leq\eta\leq\epsilon\). 
Let \(h\) be a reduced (local) equation of \(W=F^{-1}(V).\) By the holomorphic Nullstellensatz we have \(g\circ F=h^{r}\), where \(r>0\) is a natural number. Hence \((g-c)(F)=\prod_{i=1}^{r}(h-\alpha_{i}c^{1/r})\), where the \(\alpha_{i}\) are the \(r\)-th roots of unity. In particular we see that the fiber \(\{h=\alpha_{i}c^{1/r}\}\) is mapped onto the fiber \(\{g=c\}.\) Let \(A_{c}\) be the Milnor fiber \(\{g=c\}\cap B(0,\eta_{0})\) and let \(B_{c}=F^{-1}(A_{c})\cap\{h=c^{1/r}\}.\) Since the mapping \(F:B_{c}\to A_{c}\) is finite, we have that \(F_{*}:H_{k}(B_{c})\to H_{k}(A_{c})\) is surjective for every \(k\) (see [4]). By the Milnor theorem, rank \(H_{n-1}(A_{c})=\mu(\mathbf{V})=s\geq 1\), where \(\mu(\mathbf{V})\) is the Milnor number of \(g.\) In particular there are \((n-1)\)-cycles \(\alpha_{1},\ldots,\alpha_{s}\subset A_{c}\) such that the \([\alpha_{i}]\) are generators of \(H_{n-1}(A_{c}).\) Let \(\beta_{i},\ i=1,\ldots,s,\) be cycles in \(B_{c}\) such that \(F(\beta_{i})=\alpha_{i}.\) Let \(A^{\prime}_{c}=\{g=c\}\cap B(0,\epsilon).\) By our assumptions the fiber \(A_{c}\) is a deformation retract of \(A^{\prime}_{c}.\) Hence the cycles \(\alpha_{i},\ i=1,\ldots,s,\) are generators of \(H_{n-1}(A^{\prime}_{c}),\) too. Let \(B^{\prime}_{c}=\{h=c^{1/r}\}\cap B^{\prime}\) be a Milnor fiber of \(h\) in \(B^{\prime}.\) Consider the mapping \[F_{*}:H_{n-1}(B^{\prime}_{c})\to H_{n-1}(A^{\prime}_{c}).\] We have \(F_{*}([\beta_{i}])=[\alpha_{i}]\). Hence rank \(H_{n-1}(B^{\prime}_{c})\geq\) rank \(H_{n-1}(A^{\prime}_{c}),\) i.e., \[\mu(F^{-1}(\mathbf{V}))\geq\mu(\mathbf{V}).\] **Corollary 2.3**.: _Let \((\mathbf{V},\mathbf{0})\subset(\mathbb{C}^{n},\mathbf{0})\) be a germ of a complex hypersurface and let \(F:(\mathbb{C}^{n},0)\rightarrow(\mathbb{C}^{n},0)\) be a germ of a finite holomorphic mapping. If the germ \((F^{-1}(\mathbf{V}),\mathbf{0})\), taken with the reduced structure, is smooth, then the germ \((\mathbf{V},\mathbf{0})\) is also smooth._ Proof.: We proceed by induction. Note that if \(\mathbf{V}\) has an isolated singularity, then it is smooth if and only if \(\mu(\mathbf{V})=0\). In this case our Corollary follows directly from Theorem 2.2; in particular, we get the claim for \(n=2.\) Now assume that \(n>2\) and that \(\mathbf{S}:=Sing(\mathbf{V},\mathbf{0})\) has dimension greater than \(0.\) Let \(\mathcal{L}\) be a linear system of hyperplanes in \(\mathbb{C}^{n}.\) On \(V\) let us consider the induced linear system \(\mathcal{L}_{V}\). Then a generic member \(L_{V}\) of \(\mathcal{L}_{V}\) is singular, because \(S\cap L\neq\emptyset\) (see Lemma 2.5 in [2]). Moreover, \(F^{*}(L_{V})\) (and \(F^{-1}(L)\)) are smooth, because the system \(F^{*}(\mathcal{L}_{V})\) has no base points on the smooth \(W=F^{-1}(V)\subset F^{-1}(L)\) (the system \(F^{*}(\mathcal{L})\) has no base points on \(\mathbb{C}^{n}\)). Since \(\dim\,L\cap V<\dim\,V\), this is a contradiction. _Acknowledgement_. The author is grateful to Prof. Maciej Denkowski and Prof. Karolina Zajac for helpful conversations.
2302.11988
Time Complexity of Broadcast and Consensus for Randomized Oblivious Message Adversaries
Broadcast and consensus are among the most fundamental tasks in distributed computing. These tasks are particularly challenging in dynamic networks where communication across the network links may be unreliable, e.g., due to mobility or failures. Indeed, over the last years, researchers have derived several impossibility results and high time complexity lower bounds (i.e., linear in the number of nodes $n$) for these tasks, even for oblivious message adversaries where communication networks are rooted trees. However, such deterministic adversarial models may be overly conservative, as many processes in real-world settings are stochastic in nature rather than worst case. This paper initiates the study of broadcast and consensus on stochastic dynamic networks, introducing a randomized oblivious message adversary. Our model is reminiscent of the SI model in epidemics; however, it revolves around trees (which renders the analysis harder due to the apparent lack of independence). In particular, we show that if information dissemination occurs along random rooted trees, broadcast and consensus complete fast with high probability, namely in logarithmic time. Our analysis proves the independence of a key variable, which enables a formal understanding of the dissemination process. More formally, for a network with $n$ nodes, we first consider the completely random case where in each round the communication network is chosen uniformly at random among rooted trees. We then introduce the notion of a randomized oblivious message adversary, where in each round, an adversary can choose $k$ edges to appear in the communication network, and then a rooted tree is chosen uniformly at random among the set of all rooted trees that include these edges. We show that broadcast completes in $O(k+\log n)$ rounds, and that the same holds for consensus as long as $k \le 0.1n$.
Antoine El-Hayek, Monika Henzinger, Stefan Schmid
2023-02-23T13:11:01Z
http://arxiv.org/abs/2302.11988v2
# Time Complexity of Broadcast and Consensus for ###### Abstract Broadcast and consensus are most fundamental tasks in distributed computing. These tasks are particularly challenging in dynamic networks where communication across the network links may be unreliable, e.g., due to mobility or failures. Indeed, over the last years, researchers have derived several impossibility results and high time complexity lower bounds (i.e., linear in the number of nodes \(n\)) for these tasks, even for _oblivious message adversaries_ where communication networks are rooted trees. However, such deterministic adversarial models may be overly conservative, as many processes in real-world settings are stochastic in nature rather than worst case. This paper initiates the study of broadcast and consensus on stochastic dynamic networks, introducing a _randomized oblivious message adversary_. Our model is reminiscent of the SI model in epidemics, however, revolving around trees (which renders the analysis harder due to the apparent lack of independence). In particular, we show that if information dissemination occurs along random rooted trees, broadcast and consensus complete fast with high probability, namely in logarithmic time. Our analysis proves the independence of a key variable, which enables a formal understanding of the dissemination process. More formally, for a network with \(n\) nodes, we first consider the completely random case where in each round the communication network is chosen uniformly at random among rooted trees. We then introduce the notion of randomized oblivious message adversary, where in each round, an adversary can choose \(k\) edges to appear in the communication network, and then a rooted tree is chosen uniformly at random among the set of all rooted trees that include these edges. We show that broadcast completes in \(O(k+\log n)\) rounds, and that this it is also the case for consensus as long as \(k\leq 0.1n\). ## 1 Introduction Broadcast and consensus are most fundamental operations in distributed computing which, in large-scale systems, typically have to be performed over a _network_. These networks are likely to be dynamic and change over time due, e.g., to link failures, interference, or mobility. Understanding how information disseminates across such dynamic networks is hence important for developing and analyzing efficient distributed systems. Over the last years, researchers have derived several important insights into information dissemination in dynamic networks. A natural and popular model assumes an _oblivious message adversary_ which controls the information flow between a set of \(n\) nodes, by dropping an arbitrary set of messages sent by some nodes in each round. Specifically, the adversary is defined by a set of directed communication graphs, whose edges determine which node can successfully send a message to which other node in a given round. Concretely, based on this set of graphs, the oblivious message adversary chooses a sequence of graphs over time, one per round, in such a way that the time complexity of the information dissemination task at hand is maximized. This model is appealing because it is conceptually simple and still provides a highly dynamic network model: The set of allowed graphs can be arbitrary, and the nodes that can communicate with one another can vary greatly from one round to the next. It is, thus, well-suited for settings where significant transient message loss occurs, such as in wireless networks. 
As information dissemination is faster on dense networks, we focus in this paper on sparse networks, in particular, on rooted trees, similar to prior work on the oblivious message adversary [13, 28]. Unfortunately, information dissemination can be slow in trees: broadcast can take time linear in the number of nodes under the oblivious message adversary[13, 28], even for constant-height trees (see Appendix A); and consensus can even take super-polynomial time until termination, if it completes at all [6, 18]. While this is bad news, one may argue that while the deterministic adversary model is useful in malicious environments, in real-word applications, the dynamics of communication networks is often more stochastic in nature. Accordingly, the worst-case model considered in existing literature may be overly conservative. This motivates us, in this paper, to study information dissemination, and in particular broadcast and consensus tasks, initially in the case where the communication network is purely stochastic: in each round, the communication network is chosen uniformly at random among all rooted trees. We then initiate the study of an extension of this model to a setting where an adversary has some limited control over the communication network, which we call the _randomized oblivious message adversary_. More specifically, we study the setting where first a worst-case adversary chooses \(k\) directed edges in the dynamic \(n\)-node network for \(0\leq k<n\), and then a rooted tree is chosen uniformly at random among the set of all rooted trees that include these edges. With our parameterized approach, we can get a smooth transition between the purely stochastic model (\(k=0\)) and the completely deterministic adversary (\(k=n-1\)) typically studied in prior work. We show that under our randomized oblivious message adversary broadcast completes in \(O(k+\log n)\) time with high probability. Note that for \(k=O(\log n)\) this is an exponential improvement over the deterministic setting. Furthermore, we also show that consensus completes and is fast, with high probability: namely in \(O(k+\log n)\) time for \(k\leq 0.1n\), and it only requires messages of constant size along each edge in each round (only 1 bit). It is useful to put our model into perspective with the SI model in epidemics [11]: while in the SI model interactions occur on a network that equals a clique, our model revolves around trees which are chosen by an adversary. This tree structure renders the analytical understanding of the information dissemination process harder, due to the lack of independence between the edges in the network in a particular round. A key insight from our paper is that we can prove the independence of a key variable, which is crucial for our analysis. Our proof further relies on stochastic dominance, which makes it robust to the specific adversarial objective, and applies to any adversary definition (e.g., whether it aims to maximize the minimal or expected number of rounds until the process completes). Model.In a first model, that we call the _Uniformly Random Trees_ model, let \(n\) be the number of nodes, and let each node have a unique identifier from \([n]\). Let \(\mathcal{T}_{n}\) be the set of all directed rooted trees on \(n\) vertices (where all edges are pointed away from the root). 
Time proceeds in a sequence of rounds \(t=1,2,\dots\), such that in each round \(t\) a network is chosen uniformly at random from \(\mathcal{T}_{n}\) independently from other rounds, and that network will be the communication network for the corresponding round. In each round, every node sends a message to all of its out-neighbors before receiving one from its in-neighbor. There is no message size restriction. In this setting, we will study broadcast, all-to-all broadcast and consensus. Broadcast.In the _Broadcast on Uniformly Random Trees_ problem, we start by giving a message to _one_ node, and _broadcast completes_ when that node has forwarded this message to all other nodes. Each node that received the message can replicate it as many times as needed, and start forwarding it as well. Communication networks are chosen according to the Uniformly Random Trees model. We prove the following theorem: **Theorem 1.1**.: _Broadcast on Uniformly Random Trees completes within \(16\ln n\) rounds with probability \(p>1-\frac{1}{n^{2}}\)._ We also show that this result is asymptotically tight. Indeed, we cannot hope for a similar probability for a number of rounds that is \(o(\ln n)\): **Theorem 1.2**.: _If \(n\geq 2\), then the probability that Broadcast on Uniformly Random Trees fails to complete within \(\log n\) rounds is at least \(\frac{1}{4}\)._ All-to-All Broadcast.In the _All-to-All Broadcast on Uniformly Random Trees_ problem, we start by giving a distinct message to _each_ node, and each node must forward this message to all other nodes. In each round, each node forwards all the messages it has received in previous rounds to all its out-neighbors. Communication networks are chosen according to the Uniformly Random Trees model. We prove the following theorem: **Theorem 1.3**.: _All-to-All Broadcast on Uniformly Random Trees completes within \(16\ln n\) rounds with probability \(p>1-\frac{1}{n}\)._ Consensus.In the _Consensus on Uniformly Random Trees_ problem, we start by giving a value \(v_{p}\in\{0,1\}\) to each node \(p\), and each node must decide on a value in \(\{0,1\}\). This should satisfy the following conditions: * Agreement: No two nodes decide differently. * Termination: Every node eventually decides. * Validity: The value the nodes agree on should be one of the input values \(v_{p}\). Communication networks are chosen according to the Uniformly Random Trees model. We prove the following theorem: **Theorem 1.4**.: _There exists a protocol for Consensus on Uniformly Random Trees that satisfies Agreement and Validity, terminates within \(16\ln n\) rounds with probability \(p>1-\frac{2}{n^{2}}\), and only requires messages of 1 bit over each edge in each round._ In our second model an adversary can influence the network that is chosen in each round. The setting where the adversary completely determines the tree was studied in [28, 17] and Broadcast in that model was recently solved: The required number of rounds is \(\Theta(n)\)[17, 13], while Consensus is unsolvable [6]. We generalize this model and consider the _Randomized Oblivious Message Adversary model,_ where the power of the adversary is controlled by a parameter \(k\). In that model, to construct the communication network for a round, the adversary chooses \(k\) directed edges to appear in the tree, and a rooted tree is chosen uniformly at random among the trees from \(\mathcal{T}_{n}\) that include those \(k\) edges. 
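For illustration only (this is not an algorithm from the paper), the model can be simulated as sketched below: a uniformly random rooted tree is obtained by decoding a uniformly random Prüfer sequence and choosing a uniform root, the \(k\) adversarial edges are enforced by naive rejection sampling, and one round of broadcast is applied. All function names are ours, and rejection sampling is only a simple stand-in for exact conditional sampling (it becomes very slow as \(k\) grows).

```typescript
// Decode a Prüfer sequence into the edge list of a labelled tree on nodes 0..n-1.
function decodePrufer(seq: number[]): [number, number][] {
  const n = seq.length + 2;
  const degree = new Array(n).fill(1);
  for (const s of seq) degree[s]++;
  const edges: [number, number][] = [];
  for (const s of seq) {
    const leaf = degree.findIndex((d) => d === 1); // smallest remaining leaf
    edges.push([leaf, s]);
    degree[leaf]--;
    degree[s]--;
  }
  const rest = degree.flatMap((d, i) => (d === 1 ? [i] : []));
  edges.push([rest[0], rest[1]]); // exactly two nodes of degree 1 remain
  return edges;
}

// Orient the tree away from a root; parent[v] = -1 for the root itself.
function orientFromRoot(n: number, edges: [number, number][], root: number): number[] {
  const adj: number[][] = Array.from({ length: n }, () => []);
  for (const [u, v] of edges) { adj[u].push(v); adj[v].push(u); }
  const parent = new Array(n).fill(-1);
  const seen = new Array(n).fill(false);
  seen[root] = true;
  const queue = [root];
  while (queue.length > 0) {
    const u = queue.shift()!;
    for (const v of adj[u]) {
      if (!seen[v]) { seen[v] = true; parent[v] = u; queue.push(v); }
    }
  }
  return parent;
}

// Uniform rooted tree on n >= 2 nodes containing all required directed edges (u -> v),
// obtained by rejection sampling from the uniform distribution on rooted trees.
function sampleRootedTree(n: number, required: [number, number][]): number[] {
  while (true) {
    const seq = Array.from({ length: n - 2 }, () => Math.floor(Math.random() * n));
    const root = Math.floor(Math.random() * n);
    const parent = orientFromRoot(n, decodePrufer(seq), root);
    if (required.every(([u, v]) => parent[v] === u)) return parent;
  }
}

// One communication round: a node becomes informed iff its parent was informed
// at the start of the round (messages travel a single hop per round).
function broadcastRound(informed: boolean[], parent: number[]): boolean[] {
  return informed.map((was, v) => was || (parent[v] >= 0 && informed[parent[v]]));
}
```

Sampling a uniform Prüfer sequence and then a uniform root yields the uniform distribution on rooted trees, matching the decomposition of a rooted tree into an undirected tree plus a root used in the counting arguments below.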
Note that the case \(k=n-1\) is exactly the case where the adversary chooses all edges in the tree for each round, while the case \(k=0\) is where the adversary has no influence. We allow the adversary to access the random tree of all rounds \(t^{\prime}<t\) before choosing its edges for round \(t\). In this model we analyze Broadcast and Consensus. Broadcast with a Randomized Oblivious Message AdversaryIn the _Broadcast with a Randomized Oblivious Message Adversary of parameter \(k\)_ problem, we start by giving a different message to each node, and the message of one of those nodes (no matter which one) must be forwarded to all other nodes. Each node can replicate and start forwarding any message it has received, and it forwards as many messages as it wants in any given round. Communication networks are chosen according to the Randomized Oblivious Message Adversary of parameter \(k\). We prove the following theorem: **Theorem 1.5**.: _Broadcast with a Randomized Oblivious Message Adversary of parameter \(k\) completes within \(O(k+\log n)\) rounds with probability \(p\geq 1-\frac{2}{n^{2}}\)._ We show that this overhead of \(k\) compared to the case where the adversary has no control is inevitable, as the adversary can always delay Broadcast for at least \(\Omega(k)\) rounds: **Theorem 1.6**.: _If the adversary controls \(k\) edges in each round, then there exists a strategy that, with probability \(1\), guarantees that at least \(\frac{k}{2}-1\) rounds are required._ Consensus with a Randomized Oblivious Message AdversaryIn the _Consensus with a Randomized Oblivious Message Adversary_ problem, we start by giving a value \(v_{p}\in\{0,1\}\) to each node \(p\), and each node must decide on a value in \(\{0,1\}\). This should satisfy Validity, Agreement and Termination as defined above. Communication networks are chosen according to the Randomized Oblivious Message Adversary of parameter \(k\). We prove the following theorem: **Theorem 1.7**.: _There exists a protocol for Consensus with a Randomized Oblivious Message Adversary that satisfies Agreement and Validity, and terminates in \(O(k+\log n)\) rounds with probability \(p\geq 1-\frac{2}{n^{2}}\), and only requires messages of 1 bit over each edge in each round, as long as \(k\leq 0.1n\)._ OrganisationThe paper is organized as follows: we first introduce combinatorial results on rooted trees in Section 2. We then explore the fully random case in Section 3. In Section 4, we explore the case where the adversary controls \(k\) edges in each round. We review related work in Section 5, then conclude in Section 6. Appendix A gives a lower bound for deterministic broadcast in constant-height trees. In Appendix B, we give some probability theory results that are useful throughout the paper. Finally, in Appendix C, we include omitted proofs from Section 4. Counting Trees In this section, we will present previously known and new results on the number of rooted trees that satisfy given properties. This will be helpful for computing probabilities in later sections. Namely, we are particularly interested in the two following results: **Theorem 2.1** (Lemma 1 of [26]).: _Let us be given a directed rooted forest \(F\) on \(n\) vertices, and let \(|E|\) be the number of edges in \(F\). 
Then, the number of directed rooted trees \(T\) over \(n\) vertices, such that \(F\) is contained by \(T\), is \(n^{n-1-|E|}\)._ **Theorem 2.2**.: _Let us be given a directed rooted forest \(F\) on \(n\) vertices, let \(v\in[n]\) be a vertex with no parent in \(F\), and \(f\) be the number of vertices of the component of \(F\) containing \(v\) (note that we can have \(f=1\) if \(v\) is an isolated vertex). Then the number of directed rooted trees \(T\) on \(n\) vertices, such that \(F\) is contained in \(T\), and such that \(v\) is the root of \(T\), is \(fn^{n-2-|E|}\)._ In this section, we will give a different proof to Theorem 2.1, as an analysis similar to that different proof will allow us to prove Theorem 2.2. To do so, we start by recalling Cayley's formula [2]: **Theorem 2.3** (Cayley's formula).: _The number of undirected trees on \(n\) vertices is \(n^{n-2}\)._ As a corollary of this theorem, we can compute the number of rooted trees on \(n\) vertices, as choosing a rooted tree is equivalent to choosing an undirected tree, and then choosing a root: **Corollary 2.4**.: _The number of rooted trees on \(n\) vertices is \(n^{n-1}\)._ Throughout this section we use \(F\) to denote an undirected or directed forest and \(C_{1},C_{2},\ldots,C_{m}\) of \(f_{1},\ldots,f_{m}\) vertices with integer \(m\geq 1\) to denote the connected components of (the undirected version of) \(F\). The next theorem on undirected trees gives the number of undirected trees which respect a set of fixed edges. It was shown by Lu, Mohr and Szekely [24]. **Theorem 2.5** (Lemma 6 of [24]).: _Let us be given an undirected forest \(F\) on \(n\) vertices, with connected components \(C_{1},C_{2},\ldots,C_{m}\) of \(f_{1},\ldots,f_{m}\) vertices with integer \(m\geq 1\). Let \(|E|\) be the number of edges in \(F\). Then, the number of undirected trees \(T\) on \(n\) vertices, such that \(F\) is contained in \(T\), is:_ \[\left(\prod_{i\in[m]}f_{i}\right)n^{n-2-|E|}\] In the rooted case the formula is simpler, as one can drop the product of \(f_{i}\). For this, let us first recall the definition of a directed rooted forest: **Definition 2.6** (Directed Rooted Forest).: _A directed rooted forest is a collection of disjoint directed rooted trees._ **Theorem 2.1** (Lemma 1 of [26]).: _Let us be given a directed rooted forest \(F\) on \(n\) vertices, and let \(|E|\) be the number of edges in \(F\). Then, the number of directed rooted trees \(T\) over \(n\) vertices, such that \(F\) is contained by \(T\), is \(n^{n-1-|E|}\)._ As stated above, we will give a new proof for this theorem. For simplicity, we will always require that \(\sum_{i\in[m]}f_{i}=n\), which is always achievable by putting isolated vertices in trivial components. For any directed graph \(G\), \(u(G)\) will represent its undirected version. For any directed rooted tree \(T\), its root is denoted by \(r(T)\). We will also use the following bijection. Recall that \(\mathcal{T}_{n}\) is the set of all directed rooted trees on \(n\) vertices. We use \(T_{n}\) to denote the set of all undirected trees on \(n\) vertices. **Definition 2.7**.: _Let \(T_{n}\) be the set of all undirected trees on \(n\) vertices. We define \(\pi\) to be the following bijection:_ \[\pi\colon\mathcal{T}_{n} \to T_{n}\times[n]\] \[T \mapsto(u(T),r(T))\] To prove Theorem 2.1, we will first look at all the rooted trees that agree with \(F\) if edge directions are ignored. Choosing such a tree is equivalent to choosing an undirected tree that contains \(F\), then choosing a root. 
This results in \(\left(\prod_{i\in[m]}f_{i}\right)n^{n-1-|E|}\) trees. However, while all of them agree with \(F\) on the undirected edges, the directions of those edges will not match those of \(F\) for most of them. We will then partition this set of trees such that only one element of each set of the partition agrees with \(F\) on the directed edges, and counting the number of sets in the partition will yield the desired result. To do so, we will use group actions. **Definition 2.8** (Group action).: _If \(G\) is a group with identity element \(e\), and \(X\) is a set, then a (left) group action \(\alpha\) of \(G\) on \(X\) is a function_ \[\alpha\colon G\times X\to X\] _that satisfies the following two axioms:_ * _Identity:_ \(\alpha(e,x)=x,\forall x\in X\)_, where_ \(e\) _is the identity element of_ \(G\)_._ * _Compatibility:_ \(\alpha(g,\alpha(h,x))=\alpha(gh,x),\forall g,h\in G,\forall x\in X\)_._ **Definition 2.9** (Rotations).: _Let \(k>0\) be an integer and let \(R_{k}\) be the group of all rotations of \([k]\), that is, the set of functions:_ \[\sigma_{i}^{k}\colon\mathbb{Z}/k\mathbb{Z} \to\mathbb{Z}/k\mathbb{Z}\] \[x \mapsto(x+i)\mod k\] **Definition 2.10**.: _Let \(F\) be a forest with vertices in \([n]\) (rooted and directed or undirected), and \(T\) a tree with vertices in \([n]\) (rooted and directed or undirected). We say that they are undirected-compatible if \(u(F)\subseteq u(T)\), where \(u(G)\) represents the undirected version of graph \(G\). If \(F\) and \(T\) are both rooted and directed or both undirected, we say that they are compatible if \(F\subseteq T\)._ **Definition 2.11**.: _Let us be given a directed rooted forest \(F\) with vertices in \([n]\). \(A_{F}\) is the set of directed rooted trees on \(n\) vertices that are undirected-compatible with \(F\)._ The following lemma follows almost immediately from Theorem 2.5. **Lemma 2.12**.: _Let \(F\) be a directed rooted forest with \(n\) vertices and \(|E|\) edges. Then \(|A_{F}|=\left(\prod_{i\in[m]}f_{i}\right)n^{n-1-|E|}\)._ Proof.: Let \(B_{F}\) be the set of all undirected trees that are undirected-compatible with \(F\). \(\pi\) induces a bijection between \(A_{F}\) and \(B_{F}\times[n]\). Therefore, \(|A_{F}|=|B_{F}|\cdot n\). By Theorem 2.5, \(|B_{F}|=\left(\prod_{i\in[m]}f_{i}\right)n^{n-2-|E|}\). **Definition 2.13**.: _For any \(i\in[m]\), there exists a bijection between \(\mathbb{Z}/f_{i}\mathbb{Z}\) and \(C_{i}\); we fix one such bijection and denote it by \(b_{i}\)._ Let \(R=R_{f_{1}}\times\ldots\times R_{f_{m}}\). Note that \(R\) is a group as a Cartesian product of groups. We now define a group action of \(R\) on \(A_{F}\). This group action will allow us to partition \(A_{F}\) as desired. **Definition 2.14** (Group Action of \(R\) on \(A_{F}\)).: _Given a forest \(F\) with connected components \(C_{i}\) with \(1\leq i\leq m\) and corresponding bijections \(b_{i}\), let \(\alpha\) be the group action of \(R\) on \(A_{F}\) defined as follows: Let \(\sigma=(\sigma_{a_{1}}^{f_{1}},\ldots,\sigma_{a_{m}}^{f_{m}})\) for some \((a_{1},\ldots,a_{m})\in\mathbb{Z}/f_{1}\mathbb{Z}\times\cdots\times\mathbb{Z}/f_{m}\mathbb{Z}\) be an element of \(R\) and let \(T\in A_{F}\). Then \(\alpha(\sigma,T)\) is obtained from \(T\) by making the following modifications to \(\pi(T)=(u(T),r(T))\):_ * _For every_ \(i\) _such that_ \(r(T)\notin C_{i}\)_, there is one (and only one) path from_ \(r(T)\) _to_ \(C_{i}\) _in_ \(u(T)\)_. Let_ \((x,y)\) _be the only edge on that path such that_ \(x\notin C_{i},y\in C_{i}\)_.
_Replace edge_ \((x,y)\) _with edge_ \((x,b_{i}\sigma_{a_{i}}^{f_{i}}b_{i}^{-1}(y))\)_._ * _For the_ \(i\in[m]\) _such that_ \(r(T)\in C_{i}\)_, set_ \(r(\alpha(\sigma,T))\) _to_ \(b_{i}\sigma_{a_{i}}^{f_{i}}b_{i}^{-1}(r(T))\)_._ _The group action returns this modified tree rooted at \(b_{i}\sigma_{a_{i}}^{f_{i}}b_{i}^{-1}(r(T))\)._ To prove that this is indeed a group action, we need to verify (1) that \(\alpha(\sigma,T)\) is indeed in \(A_{F}\), (2) that the identity element \(e=(\sigma_{0}^{f_{1}},\ldots,\sigma_{0}^{f_{m}})\) of \(R\) verifies \(\alpha(e,T)=T\) for any \(T\in A_{F}\), and (3) that for any two \(\sigma,\tau\in R\), for any \(T\in A_{F}\), we have \(\alpha(\sigma,\alpha(\tau,T))=\alpha(\sigma\tau,T)\). The second condition being trivial as \(\sigma_{0}^{f_{i}}\) is the identity function for any value of \(f_{i}\), we only prove the other two. **Lemma 2.15**.: \(\alpha(\sigma,T)\in A_{F}\)_._ Proof.: Let us first show that \(u(\alpha(\sigma,T))\) is an undirected tree. As it has \(n-1\) edges, we only need to show that it is connected. Let \(v\) be a vertex. We need to show that it can be reached from \(r(T)\). Let \(P\) be the (only) path from \(r(T)\) to \(v\) in \(T\), written as a sequence of vertices. Then we can split up \(P\) into \(P=P_{1}P_{2}\ldots P_{z}\), where each \(P_{j}\) is a sequence of vertices that all belong to the same \(C_{i}\) for some \(i\in[m]\). We will now replace each of the \(P_{j}\) by another path to make a path from \(r(T)\) to \(v\) in \(u(\alpha(\sigma,T))\). Consider every edge \((x,y)\) where \(x\) is the last vertex of \(P_{j}\) for some \(j\), and \(y\) is the first vertex of \(P_{j+1}\). There exists some \(k\) such that \(y\in C_{k}\). Then \(P_{1}P_{2}\ldots P_{j}y\) is the path from \(r(T)\) to \(C_{k}\) in \(u(T)\). Then \((x,b_{k}\sigma_{a_{k}}^{f_{k}}b_{k}^{-1}(y))\in u(\alpha(\sigma,T))\). Replace \(y\) by \(b_{k}\sigma_{a_{k}}^{f_{k}}b_{k}^{-1}(y)\) in \(P\). Let us now look at a particular \(P_{j}\), and let \(i\) be such that all of the vertices of \(P_{j}\) belong to \(C_{i}\). Then its first vertex has been changed to another vertex of \(C_{i}\), while all others are unchanged. Hence, the first and last vertex still belong to \(C_{i}\). As \(C_{i}\) is connected in \(u(\alpha(\sigma,T))\) since no edge inside \(C_{i}\) has been modified, there exists a path \(P_{j}^{\prime}\) in \(u(\alpha(\sigma,T))\) that connects the first and last vertex of \(P_{j}\). We can thus replace \(P_{j}\) by \(P_{j}^{\prime}\). The new path now correctly connects \(r(T)\) and \(v\) in \(u(\alpha(\sigma,T))\), which shows that it is connected. Rooting \(u(\alpha(\sigma,T))\) at \(r(\alpha(\sigma,T))\) gives \(\alpha(\sigma,T)\). Hence \(\alpha(\sigma,T)\) is a tree. Since no edge in any particular \(C_{i}\) has been modified, \(\alpha(\sigma,T)\) is undirected-compatible with \(F\). **Lemma 2.16**.: _For any \(T\in A_{F}\), and \(\sigma,\tau\in R\) we have that \(\alpha(\sigma,\alpha(\tau,T))=\alpha(\sigma\tau,T)\)._ Proof.: Let \(\sigma=(\sigma_{a_{1}}^{f_{1}},\ldots,\sigma_{a_{m}}^{f_{m}})\) and \(\tau=(\sigma_{c_{1}}^{f_{1}},\ldots,\sigma_{c_{m}}^{f_{m}})\).
Let \(k\) be such that \(r(T)\in C_{k}\), then \(r(\alpha(\tau,T))=b_{k}\sigma_{c_{k}}^{f_{k}}b_{k}^{-1}(r(T)),r(\alpha(\sigma \tau,T))=b_{k}\sigma_{a_{k}}^{f_{k}}\sigma_{c_{k}}^{f_{k}}b_{k}^{-1}(r(T))\), and \(r(\alpha(\sigma,\alpha(\tau,T)))=b_{k}\sigma_{a_{k}}^{f_{k}}b_{k}^{-1}b_{k} \sigma_{c_{k}}^{f_{k}}b_{k}^{-1}(r(T))=b_{k}\sigma_{a_{k}}^{f_{k}}\sigma_{c_{k}} ^{f_{k}}b_{k}^{-1}(r(T))=r(\alpha(\sigma\tau,T))\). And then, for every \(i\in[m]\setminus\{k\}\), the path from any of those roots to \(C_{i}\) in \(T\) will include the path from \(C_{k}\) to \(C_{i}\) which in turn will include the edge \((x,y)\) such that \(x\notin C_{i},y\in C_{i}\), then the corresponding edge is \((x,b_{i}\sigma_{c_{i}}^{f_{i}}b_{i}^{-1}(y))\) in \(\alpha(\tau,T)\), and \((x,b_{i}\sigma_{a_{i}}^{f_{i}}\sigma_{c_{i}}^{f_{i}}b_{i}^{-1}(y))\) in \(\alpha(\sigma\tau,T)\). Hence, it is \((x,b_{i}\sigma_{a_{i}}^{f_{i}}b_{i}^{-1}b_{i}\sigma_{c_{i}}^{f_{i}}b_{i}^{-1}(y))\) in \(\alpha(\sigma,\alpha(\tau,T))\). We thus have \(\alpha(\sigma,\alpha(\tau,T))=\alpha(\sigma\tau,T)\). As we plan to use Lagrange's theorem for group actions, we now compute the _stabilizer_ of a tree \(T\), which is the set of all rotations that do not modify the tree: **Lemma 2.17**.: \(R_{T}:=\{\sigma\in R:\alpha(\sigma,T)=T\}=\{e\}\)_, for every \(T\in A_{F}\)._ Proof.: Let \(\sigma\in R\) be a rotation such that \(\alpha(\sigma,T)=T\). We obviously have that \(r(T)=r(\alpha(\sigma,T))\). Let \(i\in[m]\), * Either \(r(T)\in C_{i}\), in which case \(r(\alpha(\sigma,T))=b_{i}\sigma_{a_{i}}^{f_{i}}b_{i}^{-1}(r(T))=r(T)\), which implies that \(\sigma_{a_{i}}^{f_{i}}b_{i}^{-1}(r(T))=b_{i}^{-1}(r(T))\), therefore \(b_{i}^{-1}(r(T))=b_{i}^{-1}(r(T))+a_{i}\), and hence \(a_{i}=0\). * Or \(r(T)\notin C_{i}\). In that case, we look at the path from \(r(T)\) to \(C_{i}\) in both \(T\) and \(\alpha(\sigma,T)\). These two paths must be the same. However, if the first element of that path in \(T\) that is in \(C_{i}\) is some vertex \(x\), then in \(\alpha(\sigma,T)\), it is \(b_{i}\sigma_{a_{i}}^{f_{i}}b_{i}^{-1}(x)\). We conclude that \(b_{i}\sigma_{a_{i}}^{f_{i}}b_{i}^{-1}(x)=x\) and thus \(a_{i}=0\). We therefore have that \(a_{i}=0\) for every \(i\in[m]\), which proves that \(\sigma=(\sigma_{0}^{f_{1}},\ldots,\sigma_{0}^{f_{m}})=e\). We now take a look at the orbit \(R\cdot T\) of a tree \(T\in A_{F}\). The group action ensures that the orbits in \(A_{F}\) form a partition of \(A_{F}\). **Theorem 2.18** (Corollary 10.23 of [27]).: _Let \(G\) be a group, \(X\) a set and \(\alpha\) a group action of \(G\) on \(X\). Let \(x\) be an element of \(X\), \(G_{x}:=\{g\in G:\alpha(g,x)=x\}\) and \(G.x:=\{y\in X:\exists g\in G,y=\alpha(g,x)\}\). Then we have that:_ \[|G.x|=\frac{|G|}{|G_{x}|}\] **Lemma 2.19**.: _Let, for every \(T\in A_{F}\), \(R\cdot T:=\{T^{\prime}\in A_{F}:\exists\sigma\in R,\alpha(\sigma,T)=T^{\prime}\}\). Then \(|R\cdot T|=\prod_{i\in[m]}f_{i}\)._ Proof.: By Theorem 2.18, we have that \(|R\cdot T|=\frac{|R|}{R_{T}}=\frac{\prod_{i\in[m]}f_{i}}{1}\) We now show that exactly one tree in each orbit is compatible with \(F\). **Lemma 2.20**.: _Let \(T\in A_{F}\). Then there exists exactly one \(T^{\prime}\in R\cdot T\) such that \(T^{\prime}\) is compatible with \(F\)._ Proof.: Let \(T^{\prime}\in R\cdot T\) be a tree such that \(T^{\prime}\) is compatible with \(F\), and let \(\sigma\) be the rotation such that \(T^{\prime}=\alpha(\sigma,T)\). 
Let, for every \(i\in[m]\), \(r_{i}\) be the root of \(C_{i}\) in \(F\) and let \(k\) be such that \(r(T)\in C_{k}\). Then we must have that \(r_{k}=r(T^{\prime})=b_{k}\sigma_{a_{k}}^{f_{k}}b_{k}^{-1}r(T)\) and thus \(a_{k}=b_{k}^{-1}r_{k}-b_{k}^{-1}r(T)\). For every \(i\) such that \(r(T)\notin C_{i}\), look at the path from \(r(T)\) to \(C_{i}\) in \(T\), and its corresponding path in \(T^{\prime}\), computed similarly to the proof of Lemma 2.15. In \(T^{\prime}\), the first vertex of that path in \(C_{i}\) must be \(r_{i}\), but it also is \(b_{i}\sigma_{a_{i}}^{f_{i}}b_{i}^{-1}(y)\), where \(y\) is the first vertex of the path in \(T\). Hence \(a_{i}=b_{i}^{-1}r_{i}-b_{i}^{-1}(y)\). These conditions uniquely determine \(\sigma\), and, thus, \(T^{\prime}\). Conversely, setting \(\sigma\) with each \(a_{i}\) defined as above gives a tree \(T^{\prime}\) that is compatible with \(F\). We can now prove Theorem 2.1, which we recall below: **Theorem 2.1** (Lemma 1 of [26]).: _Let us be given a directed rooted forest \(F\) on \(n\) vertices, and let \(|E|\) be the number of edges in \(F\). Then, the number of directed rooted trees \(T\) over \(n\) vertices, such that \(F\) is contained by \(T\), is \(n^{n-1-|E|}\)._ Proof.: Consider the set \(A_{F}\) as defined in Definition 2.11. We know that every directed rooted spanning tree \(T\) in \(K_{n}\) such that \(F\) is contained by \(T\) is in \(A_{F}\). We can partition \(A_{F}\) into orbits of the group action defined in Definition 2.14. By Lemma 2.19, each orbit has \(\prod_{i\in[m]}f_{i}\) elements, and thus we have \(\frac{|A_{F}|}{\prod_{i\in[m]}f_{i}}\) orbits, which is equal to \(n^{n-1-|E|}\) by Lemma 2.12. Lemma 2.20 ensures that exactly one element in each orbit is a directed rooted spanning tree \(T\) in \(K_{n}\) such that \(F\) is contained by \(T\). Using a very similar technique, we next prove Theorem 2.2: **Theorem 2.2**.: _Let us be given a directed rooted forest \(F\) on \(n\) vertices, let \(v\in[n]\) be a vertex with no parent in \(F\), and \(f\) be the number of vertices of the component of \(F\) containing \(v\) (note that we can have \(f=1\) if \(v\) is an isolated vertex). Then the number of directed rooted trees \(T\) on \(n\) vertices, such that \(F\) is contained in \(T\), and such that \(v\) is the root of \(T\), is \(fn^{n-2-|E|}\)._ Proof.: Let \(\hat{A}_{F}\) be the set of all _undirected_ trees on \(n\) vertices that are undirected-compatible with \(F\). Let \(C_{1},\ldots,C_{m}\) be the components of \(F\), with respective cardinalities \(f_{1},\ldots,f_{m}\), where \(v\in C_{1}\). This implies that \(f=f_{1}\). By Theorem 2.5, \(\left|\hat{A}_{F}\right|=\left(\prod_{i\in[m]}f_{i}\right)n^{n-2-|E|}\). Rooting all of those trees at \(v\) creates the set of all _rooted_ trees on \(n\) vertices that are undirected-compatible with \(F\), rooted at \(v\). Defining the group action as above (by using rotations on all \(C_{i}\) for \(i>1\)), we can partition \(\hat{A}_{F}\) into orbits. Each orbit has size \(\frac{|R|}{|R_{T}|}=\prod_{i\in[2,m]}f_{i}\), so we have \(f_{1}n^{n-2-|E|}=fn^{n-2-|E|}\) orbits, and in each orbit, exactly one tree is compatible with \(F\), hence the result.

## 3 The Uniformly Random Trees Model

We will now be able to give a precise description of how information flows in the random network over time. Indeed, the theorems of the previous section will allow us to find the probability that a set of edges exists in a uniformly chosen random tree.
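As a quick sanity check before we rely on them, Theorems 2.1 and 2.2 are easy to verify by brute force for small \(n\). The sketch below is our own illustration and not part of the formal development: it enumerates every directed rooted tree on \(n=5\) vertices and compares the counts against the two formulas; the vertex labels \(0,\ldots,n-1\) and the example forest \(F\) are arbitrary choices made only for this example.

```python
# Brute-force check of Theorems 2.1 and 2.2 for small n (illustration only).
# Vertices are 0..n-1; a directed rooted tree is represented by its root and
# its set of (parent, child) edges.
from itertools import combinations

def all_rooted_trees(n):
    """Yield (root, edge set) for every directed rooted tree on n vertices."""
    vertices = range(n)
    undirected_edges = list(combinations(vertices, 2))
    for edge_set in combinations(undirected_edges, n - 1):
        adj = {v: [] for v in vertices}
        for u, v in edge_set:
            adj[u].append(v)
            adj[v].append(u)
        for root in vertices:
            # orient the edges away from the root with a DFS; if the chosen
            # edge set is not a tree, the DFS does not reach all vertices
            parent = {root: root}
            stack = [root]
            directed = set()
            while stack:
                u = stack.pop()
                for w in adj[u]:
                    if w not in parent:
                        parent[w] = u
                        directed.add((u, w))
                        stack.append(w)
            if len(parent) == n:
                yield root, frozenset(directed)

n = 5
F = {(0, 1), (1, 2)}          # example forest: the directed path 0 -> 1 -> 2
trees = list(all_rooted_trees(n))
# Theorem 2.1: rooted trees containing F
count_any_root = sum(1 for _, t in trees if F <= t)
print(count_any_root, n ** (n - 1 - len(F)))
# Theorem 2.2: rooted trees containing F whose root is vertex 0 (here f = 3)
count_rooted_at_0 = sum(1 for r, t in trees if F <= t and r == 0)
print(count_rooted_at_0, 3 * n ** (n - 2 - len(F)))
```

For this choice of \(F\) both printed pairs coincide (\(25\) and \(15\), respectively), matching \(n^{n-1-|E|}\) and \(fn^{n-2-|E|}\).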
Since all nodes are symmetric, we will, at each step, divide the nodes into two sets: the set \(I\) of nodes that have received the message, called _informed_ nodes, and the set \(S\) of remaining nodes, called _uninformed_ nodes. We study how \(I\) grows over time. For the rest of the section, \(I_{t}\) and \(S_{t}\) will, respectively, be the set of nodes that are informed and uninformed after round \(t\). We set \(I_{0}=\{f\}\) and \(S_{0}=[n]-\{f\}\), where \(f\) is the node that initially holds the message, define \(N_{t}=|I_{t}|\) to be the number of informed nodes after \(t\) rounds, and let \(T_{t}\) be the tree chosen at random in round \(t\). For a tree \(T\), for each node \(p\), \(P_{T}(p)\) is the (unique) parent of node \(p\) in \(T\), unless \(p\) is the root of \(T\), in which case \(P_{T}(p)=p\). Simplifying the notation, we also use \(P_{t}(p)\) to denote \(P_{T_{t}}(p)\). We use \(A(S,x)\), where \(S\) is a set and \(x\) an integer, to represent the set of subsets of \(S\) of size \(x\). The central lemma of the proof is the following, which characterizes how many new nodes get informed in each round, depending on how many were informed after the previous round. This lemma shows that uninformed nodes get informed independently from each other. **Lemma 3.1**.: _For any \(t>0\), \(N_{t+1}-N_{t}\) follows a binomial distribution with parameters \(\left(\frac{N_{t}}{n},n-N_{t}\right)\)._ The proof of this lemma shows that every uninformed node has probability \(\frac{N_{t}}{n}\) of having an informed parent in round \(t+1\), independently of whether the other uninformed nodes have an uninformed parent. Proof.: Let \(I_{t}=\{i_{1},\ldots,i_{N_{t}}\}\) and \(S_{t}=\{s_{1},\ldots,s_{n-N_{t}}\}\). We then have, for any integer \(x\): \[\mathbb{P}(N_{t+1}-N_{t}=x)=\sum_{J\in A(S_{t},x)}\mathbb{P}\left(\bigcap_{y\in J}(P_{t+1}(y)\in I_{t})\bigcap_{y\in S_{t}\setminus J}(P_{t+1}(y)\notin I_{t})\right)\] Our goal is to show that the events \(P_{t+1}(y)\in I_{t}\) for different \(y\in S_{t}\) are mutually independent. Let us look at the event \(\bigcap_{y\in J}(P_{t+1}(y)\in I_{t})\) for any \(J\subseteq S_{t}\) (note that we do not require that \(J\) has a specific size here). We can then write, indexing \(a\) on \(J\): \[\mathbb{P}\left(\bigcap_{y\in J}(P_{t+1}(y)\in I_{t})\right) =\sum_{a\in[N_{t}]^{J}}\mathbb{P}\left(\bigcap_{y\in J}(P_{t+1}(y)=i_{a_{y}})\right)\] \[=\sum_{a\in[N_{t}]^{J}}\frac{\left|\{T\in\mathcal{T}_{n}:P_{T}(y)=i_{a_{y}},\forall y\in J\}\right|}{\left|\mathcal{T}_{n}\right|}\] Now consider the forest that is composed of stars whose centers are the \(i_{a_{y}}\) and whose leaves are the nodes \(y\). More specifically, consider the forest that contains the edges \((i_{a_{y}},y),\forall y\in J\). Note that \(\left|\{T\in\mathcal{T}_{n}:P_{T}(y)=i_{a_{y}},\forall y\in J\}\right|\) equals the number of rooted trees that are compatible with this forest. By Theorem 2.1, we have that \(\left|\{T\in\mathcal{T}_{n}:P_{T}(y)=i_{a_{y}},\forall y\in J\}\right|=n^{n-1-\left|J\right|}\). This allows us to compute the above probability as follows: \[\mathbb{P}\left(\bigcap_{y\in J}(P_{t+1}(y)\in I_{t})\right)=\sum_{a\in[N_{t}]^{J}}\frac{n^{n-1-\left|J\right|}}{n^{n-1}}=\left(\frac{N_{t}}{n}\right)^{\left|J\right|}\] This proves that the events \(P_{t+1}(y)\in I_{t}\) for different \(y\in S_{t}\) are mutually independent (Definition B.7), each having probability \(\frac{N_{t}}{n}\).
Going back to the first equation of this proof, we can now compute, using Lemma B.8: \[\mathbb{P}(N_{t+1}-N_{t}=x) =\sum_{J\in A(S_{t},x)}\prod_{y\in J}\mathbb{P}\left(P_{t+1}(y) \in I_{t}\right)\prod_{y\in S_{t}\setminus J}\mathbb{P}\left(P_{t+1}(y)\notin I _{t}\right))\] \[=\binom{n-N_{t}}{x}\left(\frac{N_{t}}{n}\right)^{x}\left(1-\frac {N_{t}}{n}\right)^{n-N_{t}-x}\] Our next goal is to show that \(N_{t}=n\) with high probability for all \(t\geq 16\ln n\). To do so we introduce a random variable \(X_{t}\) that we use to lower bound \(N_{t}\). **Definition 3.2**.: _Let \(X_{t}\) be the random variable that is defined as follows:_ \[X_{0} =1\] \[X_{t+1} =X_{t}+(n-X_{t})\cdot\frac{X_{t}}{n} \text{if}\quad N_{t+1}-N_{t}\geq(n-N_{t})\cdot\frac{N_{t}}{n}\] \[X_{t+1} =X_{t} \text{if}\quad N_{t+1}-N_{t}<(n-N_{t})\cdot\frac{N_{t}}{n}\] **Lemma 3.3**.: _For every \(t\in\mathbb{N}\), we have that \(n\geq N_{t}\geq X_{t}\)._ Proof.: Note that \(N_{t}\) cannot go higher than \(n\) because it is the number of nodes informed after round \(t\), which is at most \(n\). We will prove the rest by induction on \(t\). For the induction basis note that by definition \(N_{0}=1=X_{0}\). For the induction step let us assume that \(n\geq N_{t}\geq X_{t}\) for some \(t\in\mathbb{N}\). Consider first the case that \(N_{t+1}-N_{t}<(n-N_{t})\cdot\frac{N_{t}}{n}\). Since no informed node can become uninformed, we have that \(N_{t+1}\geq N_{t}\geq X_{t}=X_{t+1}\), as desired. Next consider the case that \(N_{t+1}-N_{t}\geq(n-N_{t})\cdot\frac{N_{t}}{n}\). Then \(N_{t+1}\geq N_{t}+(n-N_{t})\cdot\frac{N_{t}}{n}\) and \(X_{t+1}=X_{t}+(n-X_{t})\cdot\frac{X_{t}}{n}\). As the function \(x\mapsto x+(n-x)\frac{x}{n}\) is strictly increasing for \(x\leq n\), this proves that \(N_{t+1}\geq X_{t+1}\), as desired. **Lemma 3.4**.: _For every \(t\in\mathbb{N}\), we have that \(X_{t}\geq 1\)._ Proof.: We will again show this by induction. For the induction basis note that by definition \(1=X_{0}\). For the induction step let us assume that \(X_{t}\geq 1\) for some \(t\in\mathbb{N}\). We then have two cases, either \(X_{t+1}=X_{t}\) and the result holds trivially, or \(X_{t+1}=X_{t}+(n-X_{t})\cdot\frac{X_{t}}{n}\). Since \(1\leq X_{t}\leq n\), we have that \(X_{t+1}\geq X_{t}\geq 1\). **Lemma 3.5**.: _For every \(t\in\mathbb{N}\), we have that \(n>X_{t}\), if \(n>1\)._ Proof.: We show this claim by induction on \(t\). As \(n>1\) and \(X_{1}=1\), it is trivially true for \(t=1\). Assume it is true for \(t\in\mathbb{N}\). Then \(X_{t+1}\leq X_{t}+(n-X_{t})\cdot\frac{X_{t}}{n}=n(\frac{X_{t}}{n}+\frac{n-X_ {t}}{n}\frac{X_{t}}{n})<n\), where the last inequality holds by noting that \((\frac{X_{t}}{n}+\frac{n-X_{t}}{n}\frac{X_{t}}{n})\) is a convex combination of \(1\) and \(\frac{X_{t}}{n}\), the latter of which being strictly smaller than \(1\). Essentially, this means that \(X_{t}\) never reaches \(n\), and thus that \(X_{t+1}\) is always strictly larger than \(X_{t}\) if \(N_{t+1}-N_{t}\geq(n-N_{t})\cdot\frac{N_{t}}{n}\): **Corollary 3.6**.: _We have that \(X_{t+1}>X_{t}\) if and only if \(N_{t+1}-N_{t}\geq(n-N_{t})\cdot\frac{N_{t}}{n}\)._ **Lemma 3.7**.: _Let \(u_{t}\in\mathbb{N}\) be the \(t\)-th round such that \(X_{u_{t}+1}>X_{u_{t}}\) and let \(u_{0}=0\). Then \(X_{u_{t}}=n-n\left(\frac{n-1}{n}\right)^{2^{t}}\). Moreover, we have that \(X_{u_{t+1}}=X_{u_{t}}+(n-X_{u_{t}})\cdot\frac{X_{u_{t}}}{n}\)._ Proof.: We show the claim by induction on \(t\). By definition of \(u_{0}\) we have that \(X_{u_{0}}=1\). 
Thus the induction basis \(X_{u_{0}}=1=n-n\left(\frac{n-1}{n}\right)^{2^{0}}\) follows. For the induction step, assume the result is true for some \(t\in\mathbb{N}\). Note that for every \(t\in\mathbb{N}\), it holds that \(X_{u_{t+1}}=X_{u_{t}}+(n-X_{u_{t}})\cdot\frac{X_{u_{t}}}{n}\). Indeed, we have that \(X_{u_{t+1}}=X_{u_{t+1}-1}=\cdots=X_{u_{t}+1}=X_{u_{t}}+(n-X_{u_{t}})\cdot\frac{X_{u_{t}}}{n}\). Thus, \[X_{u_{t+1}} =X_{u_{t}}+(n-X_{u_{t}})\cdot\frac{X_{u_{t}}}{n}=n-n\left(\frac{n-1}{n}\right)^{2^{t}}+\left(n-n+n\left(\frac{n-1}{n}\right)^{2^{t}}\right)\frac{n-n\left(\frac{n-1}{n}\right)^{2^{t}}}{n}\] \[=n-n\left(\frac{n-1}{n}\right)^{2^{t}}+\left(\frac{n-1}{n}\right)^{2^{t}}\left(n-n\left(\frac{n-1}{n}\right)^{2^{t}}\right)\] \[=n-n\left(\frac{n-1}{n}\right)^{2^{t+1}}\] **Lemma 3.8**.: _If \(t\geq u_{2\ln n}\), then \(N_{t}=n\)._ Proof.: Since \(N_{t}\) is non-decreasing and upper-bounded by \(n\), it suffices to show that \(N_{u_{2\ln n}}=n\). We will do so by using its lower bound \(X_{u_{2\ln n}}\). We have that: \[X_{u_{2\ln n}} \geq n-n\left(\frac{n-1}{n}\right)^{2^{2\ln n}}=n-n\left(\frac{n-1}{n}\right)^{n^{2\ln 2}}=n-n\exp\left(n^{2\ln 2}\ln\left(\frac{n-1}{n}\right)\right)\] \[\geq n-n\exp\left(n^{2\ln 2}\left(\frac{n-1}{n}-1\right)\right)=n-n\exp\left(-n^{2\ln 2-1}\right)>n-1\] Here we used that \(\ln(x)\leq x-1\) and that, for \(x\geq 1\), \(x\exp(-x^{2\ln 2-1})<1\). Since \(n\geq N_{u_{2\ln n}}\geq X_{u_{2\ln n}}\) by Lemma 3.3, and since \(N_{u_{2\ln n}}\in\mathbb{N}\), we have that \(N_{u_{2\ln n}}=n\). We now state a result due to Greenberg and Mohri [19], which will give us an estimate of the probability of \(X_{t}\) strictly increasing in a given round. **Theorem 3.9** (Theorem 1 of [19]).: _For any positive integer \(m\) and any probability \(p\) such that \(p>\frac{1}{m}\), let \(B\) be a binomial random variable of parameters \((p,m)\). Then, the following inequality holds:_ \[\mathbb{P}(B\geq mp)>\frac{1}{4}\] **Lemma 3.10**.: _If \(n>4\), for every \(t\in\mathbb{N}\), we have that \(\mathbb{P}\left(X_{t+1}>X_{t}\right)\geq\frac{1}{4}\)._ Proof.: By Corollary 3.6, we have that \(X_{t+1}>X_{t}\) if and only if \(N_{t+1}-N_{t}\geq(n-N_{t})\cdot\frac{N_{t}}{n}\). This implies that \(\mathbb{P}\left(X_{t+1}>X_{t}\right)=\mathbb{P}\left(N_{t+1}-N_{t}\geq(n-N_{t})\cdot\frac{N_{t}}{n}\right)\). By Lemma 3.1, \(N_{t+1}-N_{t}\) follows a binomial distribution of parameters \(\left(\frac{N_{t}}{n},n-N_{t}\right)\) for any \(t>0\). Thus the expected value of \(N_{t+1}-N_{t}\) is \((n-N_{t})\,\frac{N_{t}}{n}\). We have multiple cases to consider: _Case 1:_ If \(2\leq N_{t}\leq n-2\), then \(N_{t+1}-N_{t}\) has expected value \((n-N_{t})\,\frac{N_{t}}{n}\). We will show below that \(\frac{N_{t}}{n}\geq\frac{1}{n-N_{t}}\), implying that, by Theorem 3.9, the result holds. The function \(x\mapsto x+1+\frac{1}{x-1}\) is strictly increasing between \(2\) and \(n-2\); using that \(\frac{1}{n-3}<1\) when \(n>4\), we have that: \[N_{t}+1+\frac{1}{N_{t}-1}\leq n-2+1+\frac{1}{n-3}<n\] Therefore \(N_{t}+1+\frac{1}{N_{t}-1}<n\), which implies that \(n>\frac{N_{t}^{2}}{N_{t}-1}\). This further implies that \(-n>N_{t}(N_{t}-n)\) and therefore \(\frac{N_{t}}{n}>\frac{1}{n-N_{t}}\). _Case 2:_ If \(N_{t}=1\), then \((n-N_{t})\cdot\frac{N_{t}}{n}=\frac{n-1}{n}\). Therefore \(\mathbb{P}(N_{t+1}-N_{t}\geq(n-N_{t})\cdot\frac{N_{t}}{n})=\mathbb{P}(N_{t+1}-N_{t}\geq 1)=1-\mathbb{P}(N_{t+1}-N_{t}=0)=1-(\frac{n-1}{n})^{n-1}>\frac{1}{4}\) since \(n>4\).
_Case 3:_ If \(N_{t}=n-1\), then \(\mathbb{P}(N_{t+1}-N_{t}\geq(n-N_{t})\cdot\frac{N_{t}}{n})=\mathbb{P}(N_{t+1}-N_{t}\geq\frac{n-1}{n})=\frac{n-1}{n}>\frac{1}{4}\) since \(n>4\). _Case 4:_ If \(N_{t}=n\), then \(\mathbb{P}(N_{t+1}-N_{t}\geq(n-N_{t})\cdot\frac{N_{t}}{n})=\mathbb{P}(N_{t+1}-N_{t}\geq 0)=1>\frac{1}{4}\). Let \((B_{t})_{t\in\mathbb{N}}\) be independent Bernoulli random variables of parameter \(\frac{1}{4}\). Let \(Z_{\leq t}^{B}=\sum_{z\in[t]}B_{z}\) and \(Z_{\leq t}=\sum_{z\in[t]}\mathbb{1}\left(X_{z+1}>X_{z}\right)\). **Corollary 3.11**.: _For any \(\ell\in\mathbb{N}\), we have that \(\mathbb{P}(Z_{\leq t}\leq\ell)\leq\mathbb{P}(Z_{\leq t}^{B}\leq\ell)\)._ **Lemma 3.12** (Hoeffding's inequality for binomial distributions [21]).: _Let \(Y\) be a binomial random variable with parameters \((t,p)\). We then have, for any \(x\leq tp\):_ \[\mathbb{P}(Y\leq x)\leq\exp\left(-2t\left(p-\frac{x}{t}\right)^{2}\right)\] **Lemma 3.13**.: _Let \(t=16\ln n\). Then \(\mathbb{P}(Z_{\leq t}\leq 2\ln n)\leq\frac{1}{n^{2}}\)._ Proof.: Note that \(Z_{\leq t}^{B}\) is a binomial distribution of parameters \((t,\frac{1}{4})\). Using Hoeffding's inequality, we have that: \[\mathbb{P}(Z_{\leq t}^{B}\leq 2\ln n)\leq\exp\left(-2t\left(\frac{1}{4}-\frac{2\ln n}{t}\right)^{2}\right)=\exp\left(-2\cdot 16\ln n\left(\frac{1}{4}-\frac{2}{16}\right)^{2}\right)=n^{-2}\] Corollary 3.11 then gives the desired result. We now have all the tools to prove Theorem 1.1, which we recall here: **Theorem 1.1**.: _Broadcast on Uniformly Random Trees completes within \(16\ln n\) rounds with probability \(p>1-\frac{1}{n^{2}}\)._ Proof.: By Lemma 3.13, we have that, with probability \(p\geq 1-\frac{1}{n^{2}}\), \(X_{t+1}>X_{t}\) for at least \(2\ln n\) many rounds within the first \(16\ln n\) rounds. Recall that \(u_{2\ln n}\) is the \(2\ln n\)-th round where \(X_{t+1}>X_{t}\). We thus have that \(\mathbb{P}\left(u_{2\ln n}\leq 16\ln n\right)\geq 1-n^{-2}\). But, by Lemma 3.8 the event \(u_{2\ln n}\leq 16\ln n\) implies the event \(N_{16\ln n}=n\), therefore \(\mathbb{P}\left(N_{16\ln n}=n\right)\geq 1-n^{-2}\). We now show that this result is asymptotically tight. Indeed, we can show that if at most \(\log n\) rounds are allowed, then with probability \(q\geq\frac{1}{4}\), Broadcast does not complete: **Theorem 1.2**.: _If \(n\geq 2\), then the probability that Broadcast on Uniformly Random Trees fails to complete within \(\log n\) rounds is at least \(\frac{1}{4}\)._ Proof.: We will first show by induction that \(\mathbb{E}(N_{t})\leq X_{u_{t}}\) for every \(t\in\mathbb{N}\). We will then conclude using Markov's inequality. The induction basis is clear as \(N_{0}=X_{0}=1\). For the induction step, assume that for some \(t\in\mathbb{N}\), we have that \(\mathbb{E}(N_{t})\leq X_{u_{t}}\). Let us show that this implies that \(\mathbb{E}(N_{t+1})\leq X_{u_{t+1}}\). Indeed, by Lemma 3.1, \(N_{t+1}-N_{t}\) has a binomial distribution of parameters \(\frac{N_{t}}{n}\) and \(n-N_{t}\). This implies that: \[\mathbb{E}[N_{t+1}|N_{t}]=N_{t}+\frac{N_{t}}{n}\cdot(n-N_{t})=2N_{t}-\frac{N_{t}^{2}}{n}\] Therefore: \[\mathbb{E}[N_{t+1}]=\mathbb{E}\left[\mathbb{E}[N_{t+1}|N_{t}]\right]=2\mathbb{E}[N_{t}]-\frac{\mathbb{E}[N_{t}^{2}]}{n}\] As \(Var(N_{t})=\mathbb{E}[N_{t}^{2}]-\mathbb{E}[N_{t}]^{2}\geq 0\), we have that \(-\mathbb{E}[N_{t}^{2}]\leq-\mathbb{E}[N_{t}]^{2}\).
This implies: \[\mathbb{E}[N_{t+1}]\leq 2\mathbb{E}[N_{t}]-\frac{\mathbb{E}[N_{t}]^{2}}{n}\] Note that we have that, by Lemma 3.7: \[X_{u_{t+1}}=2X_{u_{t}}-\frac{X_{u_{t}}^{2}}{n}\] Since \(x\mapsto 2x-\frac{x^{2}}{n}\) is strictly increasing between \(0\) and \(n\), with both \(X_{u_{t}}\) and \(\mathbb{E}[N_{t}]\) falling in that range (Lemmata 3.3 and 3.4), the induction hypothesis implies that \(2\mathbb{E}[N_{t}]-\frac{\mathbb{E}[N_{t}]^{2}}{n}\leq 2X_{u_{t}}-\frac{X_{u_{t}}^{2}}{n}\). This implies \(\mathbb{E}[N_{t+1}]\leq X_{u_{t+1}}\). We know the value of \(X_{u_{t}}\) from Lemma 3.7. We can thus give the upper bound \(\mathbb{E}[N_{\log n}]\leq X_{u_{\log n}}=n(1-((n-1)/n)^{n})\leq n(1-\frac{1}{4})\), since \(n\geq 2\). Using Markov's inequality, we thus have: \[\mathbb{P}(N_{\log n}\geq n)\leq\frac{\mathbb{E}[N_{\log n}]}{n}\leq 1-\frac{1}{4}\] Hence Broadcast fails to complete within \(\log n\) rounds with probability at least \(\frac{1}{4}\). We now use this result to get a similar result for All-to-all Broadcast. Using a union-bound, we obtain: **Theorem 1.3**.: _All-to-All Broadcast on Uniformly Random Trees completes within \(16\ln n\) rounds with probability \(p>1-\frac{1}{n}\)._ Proof.: Let \(N_{t}^{(i)}\) be the random variable that represents the number of nodes that are informed after round \(t\) of the message given to node \(i\). By Theorem 1.1, we know that \(\mathbb{P}\left(N_{16\ln n}^{(i)}<n\right)\leq n^{-2}\) for every \(i\in[n]\). Using a union-bound, we get that: \[\mathbb{P}\left(\bigcup_{i\in[n]}N_{16\ln n}^{(i)}<n\right)\leq n^{-1}\] And thus: \[\mathbb{P}\left(\bigcap_{i\in[n]}N_{16\ln n}^{(i)}=n\right)=1-\mathbb{P}\left(\bigcup_{i\in[n]}N_{16\ln n}^{(i)}<n\right)\geq 1-n^{-1}\] We now finally recall Theorem 1.4, which states a result on Consensus: **Theorem 1.4**.: _There exists a protocol for Consensus on Uniformly Random Trees that satisfies Agreement and Validity, terminates within \(16\ln n\) rounds with probability \(p>1-\frac{2}{n^{2}}\), and only requires messages of 1 bit over each edge in each round._ Proof.: Algorithm 1 is an algorithm where everyone agrees on \(v_{1}\), the input to node 1, and where only \(v_{1}\) is passed along. Thus every node outputs either \(v_{1}\) or \(\bot\). However, if \(v_{1}\) has broadcast within the first \(16\ln n\) rounds, then everyone outputs \(v_{1}\). This happens with probability \(p\geq 1-n^{-2}\), by Theorem 1.1. Note that Algorithm 1 can be adapted to different variants of Consensus. To keep our presentation concise, we do not explore them further in detail. For example, the version given here satisfies the condition that no node continues to communicate after it has decided on a value, but Consensus does not complete with probability 1 after everyone has decided, as some nodes might output \(\bot\). A different definition of Consensus could allow each node to send messages after it decides on a value, in which case a different version of the algorithm could be given, where each node can decide as soon as it receives the value \(v_{1}\).
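Before moving to the adversarial model of the next section, the behaviour established in Theorem 1.1 is easy to observe empirically. The following Monte Carlo sketch is our own illustration and makes several arbitrary choices (the value of \(n\), the number of trials, and the random seed): it samples each round's communication network as a uniformly random rooted tree, obtained from a random Prüfer sequence together with a uniformly chosen root (i.e., via the bijection \(\pi\) of Definition 2.7), and counts the rounds until the message of node \(0\) has reached all nodes.

```python
# Monte Carlo illustration of Broadcast on Uniformly Random Trees (n >= 2).
import heapq
import math
import random

def random_rooted_tree(n, rng):
    """Return the parent array of a uniformly random directed rooted tree on
    vertices 0..n-1 (parent[root] == root). A uniform undirected labeled tree
    is decoded from a random Pruefer sequence, then a uniform root is chosen."""
    seq = [rng.randrange(n) for _ in range(n - 2)]
    degree = [1] * n
    for x in seq:
        degree[x] += 1
    leaves = [v for v in range(n) if degree[v] == 1]
    heapq.heapify(leaves)
    edges = []
    for x in seq:
        leaf = heapq.heappop(leaves)   # smallest current leaf
        edges.append((leaf, x))
        degree[x] -= 1
        if degree[x] == 1:
            heapq.heappush(leaves, x)
    edges.append((heapq.heappop(leaves), heapq.heappop(leaves)))
    root = rng.randrange(n)
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    parent = [-1] * n
    parent[root] = root
    stack = [root]
    while stack:                       # orient all edges away from the root
        u = stack.pop()
        for w in adj[u]:
            if parent[w] == -1:
                parent[w] = u
                stack.append(w)
    return parent

def broadcast_rounds(n, rng):
    """Rounds until the message of node 0 reaches every node, with a fresh
    uniformly random rooted tree in each round."""
    informed = [False] * n
    informed[0] = True
    num_informed, rounds = 1, 0
    while num_informed < n:
        parent = random_rooted_tree(n, rng)
        rounds += 1
        # a node becomes informed iff its parent was informed after the
        # previous round (no cascading within a round)
        newly = [v for v in range(n) if not informed[v] and informed[parent[v]]]
        for v in newly:
            informed[v] = True
        num_informed += len(newly)
    return rounds

rng = random.Random(0)
n = 200
samples = [broadcast_rounds(n, rng) for _ in range(100)]
print(max(samples), 16 * math.log(n))  # worst observed run vs. the bound of Theorem 1.1
```

In such runs the observed broadcast times stay well below the \(16\ln n\) bound of Theorem 1.1, in line with the analysis above.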
```
Input: \(v_{p}\in\{0,1\}\)
Output: \(y_{p}\), the same for all nodes
if \(p=1\) then
    \(y_{p}\gets v_{p}\)
else
    \(y_{p}\leftarrow\bot\)
end if
for round \(k:1\leq k\leq 16\ln n\) do
    if \(y_{p}=\bot\) then
        Receive a message \(M\) from the in-neighbor, if any
        if \(M\neq\varnothing\) then
            \(y_{p}\gets M\)
        end if
    else
        Send \(y_{p}\) to the out-neighbors
    end if
end for
return \(y_{p}\)
```
**Algorithm 1** Consensus algorithm for node \(p\)

## 4 The Randomized Oblivious Message Adversary

In this section, we consider a more general model where a parametrized adversary controls a certain number of edges in every round, and the others are chosen randomly. More specifically, in each round, the adversary \(A\) chooses \(k\) edges such that the resulting graph is a directed rooted forest \(F\), and then a tree is chosen uniformly at random among the rooted trees that are compatible with \(F\). We consider the model where the adversary has access to the randomly chosen trees of all previous rounds, but has no information on the random coin flips of the current and future rounds. Let us start by understanding how Broadcast works in this model. Here, we start by giving each node a message, and in each round each node can make copies of all messages it has previously received and send them to all its out-neighbors. There is no restriction on the number of copies nor the size/number of messages that can be sent per round. The goal of the adversary is to maximize the number of rounds until one message is broadcast to all nodes. Note that in the case \(k=n-1\), this is the deterministic case where in each round the adversary gets to exactly choose which tree is the communication network of the round. This is exactly the model studied in [13], where it was shown that the adversary cannot delay broadcast for more than \(\left\lceil(1+\sqrt{2})n\right\rceil\approx 2.4n\) rounds. **Theorem 4.1** (Theorem 3.6 of [13]).: _The adversary cannot delay broadcast for more than \(\left\lceil(1+\sqrt{2})n\right\rceil\) rounds._ We will prove the following theorem: **Theorem 4.2**.: _If the adversary controls \(k\) edges in each round, then with probability \(p\geq 1-2n^{-2}\), broadcast completes within \(O(k+\log n)\) rounds._ In order to understand how tight this bound is, we first give a lower bound on how many rounds the adversary can delay broadcast: **Theorem 1.6**.: _If the adversary controls \(k\) edges in each round, then there exists a strategy that, with probability \(1\), guarantees that at least \(\frac{k}{2}-1\) rounds are required._ Proof.: Let the adversary choose the set of edges \((1,2),\ldots,(k,k+1)\) in all of the rounds. Then for any node \(p\in[2,k+1]\), every message it has received must have been received by \(p-1\) in a strictly smaller round, unless that message is the one given initially to \(p\). Let \(m\) be a message that has been broadcast. In particular, \(m\) has been received by all nodes in \([k+1]\). If \(m\) was given initially to some node \(p\) such that \(p\leq\left\lceil\frac{k}{2}\right\rceil\) or \(p>k+1\), then \(m\) must have needed \(\left\lfloor\frac{k}{2}\right\rfloor\) rounds to travel from node \(\left\lceil\frac{k}{2}\right\rceil+1\) to node \(k+1\). If, on the other hand, it was a message initially given to a node \(\left\lceil\frac{k}{2}\right\rceil+1\leq p\leq k+1\), then \(m\) must have needed \(\left\lceil\frac{k}{2}\right\rceil-1\) rounds to travel from node \(1\) to node \(\left\lceil\frac{k}{2}\right\rceil\). Let us now concentrate on the upper bound.
We will consider two cases, one case where \(k\) is large, and where we will use Theorem 4.1, and one where \(k\) is small, where we will use a similar analysis to Section 3. **Lemma 4.3**.: _If \(k\geq\frac{n}{10}\), then the adversary cannot delay broadcast for more than \(\left\lceil 10(1+\sqrt{2})k\right\rceil=O(k)\) rounds._ Proof.: By Theorem 4.1, the adversary cannot delay broadcast for more than \(\left\lceil(1+\sqrt{2})n\right\rceil\) rounds even if the adversary controls _all_ edges. Since \(n\leq 10k\), we have the result. We will now prove the result for \(k\leq n/10\). To do so, we will introduce an alternative adversary \(A^{\prime}\) whose goal is to maximize the number of rounds until node \(1\) has broadcast, independently of whether other nodes have broadcast or not. Clearly, this is in favour of the adversary and will not result in a smaller number of rounds than against \(A\). Thus any upper bound on the number of rounds needed by \(A^{\prime}\) is also an upper bound for \(A\). For the rest of the section, \(I_{t}\) and \(S_{t}\) will, respectively, be the set of nodes that are informed and uninformed after round \(t\). We set \(I_{0}=\{1\}\) and \(S_{0}=[n]-\{1\}\), \(N_{t}=|I_{t}|\) to be the number of informed nodes after \(t\) rounds, and \(T_{t}\) to be the tree chosen at random in round \(t\). For a tree \(T\), for each vertex \(p\), \(P_{T}(p)\) is the (unique) parent of node \(p\) in \(T\), unless \(p\) is the root of \(T\), in which case \(P_{T}(p)=p\). Simplifying the notation, we also use \(P_{t}(p)\) to denote \(P_{T_{t}}(p)\). We start by finding the best strategy \(A^{\prime}\) could use and then analyze that strategy. ### Best Strategy for the Alternative Adversary A' To find the best strategy the adversary \(A^{\prime}\) can use, we will use the notion of stochastic dominance. Intuitively, if a strategy yields more informed nodes than another one, then the adversary will choose the latter one. Stochastic dominance is the tool we use to formalize this. **Definition 4.4** (Stochastic Dominance).: _We say that a real random variable \(Y_{1}\) stochastically dominates another real random variable \(Y_{2}\), if, for every \(x\in\mathbb{R}\), we have that \(\mathbb{P}(Y_{1}\geq x)\geq\mathbb{P}(Y_{2}\geq x)\)._ For any set \(S\), let \(\mathcal{P}(S)\) be the set of all subsets of \(S\). **Definition 4.5** (Stochastic dominance).: _We say that a random variable \(Y_{1}\) with values in \(\mathcal{P}([n])\) stochastically dominates another random variable \(Y_{2}\) with values in \(\mathcal{P}([n])\), if, for every \(x\in\mathbb{N}\), we have that \(\mathbb{P}(|Y_{1}|\geq x)\geq\mathbb{P}(|Y_{2}|\geq x)\)._ With stochastic dominance, we will use a related notion, that is coupling. Coupling is a useful tool to compare two random variables, and in particular, it helps translate probabilistic events into deterministic ones, which are easier to analyze. 
**Definition 4.6** (Coupling).: _A coupling of two random variables \(Y_{1},Y_{2}\) is a third random variable \((\hat{Y_{1}},\hat{Y_{2}})\) such that \(Y_{1}\) has the same distribution as \(\hat{Y_{1}}\), and \(Y_{2}\) has the same distribution as \(\hat{Y_{2}}\)._ **Theorem 4.7** (Stochastic Dominance and Coupling, Theorem 7.1 of [7]).: _If a real random variable \(Y_{1}\) stochastically dominates another real random variable \(Y_{2}\), then there exists a coupling \((\hat{Y_{1}},\hat{Y_{2}})\) of \(Y_{1}\) and \(Y_{2}\) such that_ \[\mathbb{P}(\hat{Y_{1}}\geq\hat{Y_{2}})=1\] **Theorem 4.8** (stochastic dominance and coupling, Theorem 7.8 of [7]).: _If a random variable \(Y_{1}\) with values in \([n]\) stochastically dominates another random variable \(Y_{2}\) with values in \([n]\), then there exists a coupling \((\hat{Y_{1}},\hat{Y_{2}})\) of \(Y_{1}\) and \(Y_{2}\) such that_ \[\mathbb{P}\left(\left|\hat{Y_{1}}\right|\geq\left|\hat{Y_{2}}\right|\right)=1\] **Lemma 4.9**.: _[Distribution Domination] Let \(t\) be a round. Let \(E_{1},E_{2}\) be two sets of edges the adversaries could choose for round \(t\). Let \(N_{t}^{(1)}\) (resp. \(I_{t}^{(1)}\)) be the number (resp. set) of informed nodes after round \(t\) if \(E_{1}\) is chosen, and \(N_{t}^{(2)}\) (resp. \(I_{t}^{(2)}\)) if \(E_{2}\) is chosen. Then if \(\mathbb{P}(N_{t}^{(1)}\geq m)\geq\mathbb{P}(N_{t}^{(2)}\geq m)\) for every \(m\in\mathbb{N}\) (that is, if \(N_{t}^{(1)}\) stochastically dominates \(N_{t}^{(2)}\)), then choosing \(E_{2}\) is a better strategy for the adversary than choosing \(E_{1}\)._ Intuitively, the way to prove this is to build, for any strategy the adversary might use after choosing \(E_{1}\), another strategy that would work better if used after choosing \(E_{2}\). To prove that it is indeed the case, we couple these two strategies to prove that after any round, the number of informed nodes in one strategy stochastically dominates the number of informed nodes in the other one. The full details of the proof can be found in Appendix C. The next step is to show that the adversary will never force an edge from an informed node to an uninformed one. Indeed, intuitively, this means the adversary forces a node to be informed, which is against its interests. To do so, we introduce the notions of _non-increasing_ and _increasing_ trees, and show that \(A^{\prime}\) will never choose an increasing tree. **Definition 4.10**.: _A rooted tree \(U\) in a round \(t\) is said to be non-increasing in round \(t\) if all edges in \(U\) whose source is in \(I_{t-1}\) have their target in \(I_{t-1}\) as well. Otherwise a tree is (information)-increasing in round \(t\)._ To show that the adversary never uses an increasing tree, we introduce the notion of a _correction_ of an increasing tree, which will be non-increasing, and show that choosing the correction is a better strategy for the adversary than choosing the increasing tree. **Definition 4.11** (Isomorphism).: _We say that a rooted tree \(U\) on \(n\) nodes is isomorphic to a rooted tree \(U^{\prime}\) on \(n\) nodes if there exists a bijection \(b\) from \([n]\) to \([n]\) such that for every (directed) edge \((u,v)\in U\), we have that \((b(u),b(v))\in U^{\prime}\), and for every (directed) edge \((u,v)\in U^{\prime}\), we have that \((b^{-1}(u),b^{-1}(v))\in U\)._ In particular, if \(r\) is the root of \(U\), then \(b(r)\) is the root of \(U^{\prime}\). 
**Definition 4.12**.: _A correction of a tree \(U\) that is increasing in a round \(t\) is a tree \(U^{\prime}\) over the same nodes as \(U\) that is non-increasing in round \(t\), is isomorphic to \(U\), and whose root is a node \(u\in S_{t-1}\) such that \(P_{U}(u)\in I_{t-1}\)._ **Lemma 4.13**.: _For any increasing tree \(U\), there exists a correction \(U^{\prime}\)._ Proof.: Let \(V(U)\) be the set of nodes of \(U\) and let \(|V(U)\cap S_{t-1}|=\ell\). To show the lemma we will give a bijection \(b\) that maps the \(\ell\) uninformed nodes of \(V(U)\) to the first \(\ell\) nodes of \(U\) in BFS order and the informed nodes to the remaining nodes of \(U\). The resulting tree will be the correction \(U^{\prime}\). As a result of this bijection every uninformed node of \(U^{\prime}\) has only uninformed ancestors and, thus, \(U^{\prime}\) is non-increasing. More formally, if \(U\) is increasing, then there exists an edge \((i,s)\) such that \(i\in I_{t-1},s\in S_{t-1}\). Let \(\pi\) be a bijection from \([|V(U)|]\) to \(V(U)\) such that \(\pi(1)=s,\{\pi(2),\ldots,\pi(\ell)\}\subset S_{t-1}\), and \(\{\pi(\ell+1),\ldots,\pi(|V(U)|)\}\subseteq I_{t-1}\). On the other hand, let \(\rho\) be a bijection from \([|V(U)|]\) to \(V(U)\) such that \(\rho(j)\) is the \(j\)-th node encountered in a breadth-first traversal starting at the root of \(U\). Then let \(b=\pi\circ\rho^{-1}\). Note that the tree \(U^{\prime}\), whose set of edges is \(\{(b(u),b(v)):(u,v)\in U\}\), is a correction of \(U\). Indeed, it is clearly a tree as a relabeling of \(U\), over the same nodes as \(U\), and for every \((b(u),b(v))\in U^{\prime}\), \(u\) is encountered in a BFS before \(v\) in \(U\), therefore \(\rho^{-1}(u)<\rho^{-1}(v)\), and therefore if \(\rho^{-1}(u)>\ell\), we also have \(\rho^{-1}(v)>\ell\). This means that if \(b(u)\in I_{t-1}\), then \(b(v)\in I_{t-1}\). **Lemma 4.14**.: _Let \(t\) be a round and \(N_{t-1}\) be the number of informed nodes after round \(t-1\). Let \(E_{1},E_{2}\) be two sets of edges that the adversary could choose for round \(t\) such that_ 1. \(E_{1}\) _is a collection of rooted trees such that at least one tree_ \(U\) _is information-increasing, and_ 2. \(E_{2}\) _is obtained from_ \(E_{1}\) _by replacing_ \(U\) _with a correction_ \(U^{\prime}\) _of_ \(U\)_. Let \(N_{t}^{(1)}\) be the number of informed nodes after round \(t\) if \(E_{1}\) is chosen, and let \(N_{t}^{(2)}\) be that number if \(E_{2}\) is chosen. Then choosing \(E_{2}\) is a better strategy for the adversary than choosing \(E_{1}\)._ The proof can be found in Appendix C. This lemma proves that the adversary will never choose a set of edges such that one (or more) component is increasing. Indeed, if such components existed, then the adversary would have replaced all of them with non-increasing ones, as this will lead to no fewer and potentially more rounds. Therefore, we can assume in the following that all components are non-increasing. The next step is to show that if the adversary chooses a forest, all edges will be used in one component. For that, we introduce the notion of _merging trees_, and show that if the adversary chooses a forest with \(2\) or more non-trivial components, then merging two of those non-trivial components will yield a better strategy for the adversary.
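The relabeling used in the proof of Lemma 4.13 is constructive. The following sketch is our own illustration of it (the child-to-parent dictionary representation of a tree and the small example at the bottom are choices made only for this example): the uninformed vertices are moved onto a prefix of the BFS order, with an uninformed vertex whose parent in \(U\) is informed becoming the new root, and the final assertion checks the non-increasing property of Definition 4.10.

```python
# Illustration of the correction construction from the proof of Lemma 4.13.
# A tree is given as a dict child -> parent (the root has no entry) plus its
# root; `informed` is the set I_{t-1} restricted to the vertices of the tree.
from collections import deque

def correction(parent, root, informed):
    """Return (new_parent, new_root) of a non-increasing tree isomorphic to
    the increasing input tree, following the relabeling b = pi o rho^{-1}."""
    vertices = set(parent) | {root}
    children = {v: [] for v in vertices}
    for c, p in parent.items():
        children[p].append(c)
    bfs = []                                   # rho: BFS order from the root
    queue = deque([root])
    while queue:
        v = queue.popleft()
        bfs.append(v)
        queue.extend(children[v])
    uninformed = sorted(v for v in vertices if v not in informed)
    # s: an uninformed vertex whose parent in U is informed (exists since U is increasing)
    s = next(v for v in uninformed if v in parent and parent[v] in informed)
    # pi: s first, then the other uninformed vertices, then the informed ones
    pi = [s] + [v for v in uninformed if v != s] + sorted(v for v in vertices if v in informed)
    b = {bfs[j]: pi[j] for j in range(len(bfs))}
    new_parent = {b[c]: b[p] for c, p in parent.items()}
    return new_parent, b[root]                 # the new root is s

# small example: root 1, informed vertices {1, 2, 3}, edges point parent -> child
parent = {2: 1, 3: 1, 4: 2, 5: 2, 6: 3}
informed = {1, 2, 3}
new_parent, new_root = correction(parent, 1, informed)
print(new_root, new_parent)
# non-increasing: no remaining edge goes from an informed to an uninformed vertex
assert all(c in informed for c, p in new_parent.items() if p in informed)
```

On this example the new root is vertex \(4\) (an uninformed vertex whose parent \(2\) in the original tree is informed), and no edge of the corrected tree leaves an informed vertex towards an uninformed one.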
**Lemma 4.15**.: _Let \(t\) be a round, let \(E\) be the set of \(k\) edges forming a directed rooted forest over \([n]\) which the adversary chooses in round \(t\) such that each component of \(E\) is non-increasing, and let \(s_{1},\ldots,s_{x}\) be uninformed nodes that are roots of their component (which might have size only 1). Note that \(\{s_{1},\ldots,s_{x}\}\) need not be the set of the roots of all components, simply a collection of some of them. Let \(\eta_{1},\ldots,\eta_{x}\) be the numbers of informed nodes in the components of \(s_{1},\ldots,s_{x}\) respectively, and \(\eta\) the number of informed nodes outside the components of \(s_{1},\ldots,s_{x}\). Then we have that:_ \[\mathbb{P}\left(\cap_{j\in[x]}(P_{t}(s_{j})\in I_{t-1})\right)=\frac{\eta(\eta+\sum_{j\in[x]}\eta_{j})^{x-1}}{n^{x}}=\frac{\eta(N_{t-1})^{x-1}}{n^{x}}\] Proof.: We have that: \[\mathbb{P}\left(\cap_{j\in[x]}(P_{t}(s_{j})\in I_{t-1})\right)=\sum_{a\in(I_{t-1})^{x}}\mathbb{P}\left(\cap_{j\in[x]}(P_{t}(s_{j})=a_{j})\right)\] However, many terms of that sum are equal to \(0\). Indeed, for example, if \(a_{1}\) is one of the \(\eta_{1}\) informed nodes in the component of \(s_{1}\), then \(\mathbb{P}(P_{t}(s_{1})=a_{1})=0\). More generally, if the choice of \(a\) is so that \(E\cup\bigcup_{j\in[x]}\{(a_{j},s_{j})\}\) contains an (undirected) cycle, in other words, is incompatible with a rooted tree, then \(\mathbb{P}\left(\cap_{j\in[x]}(P_{t}(s_{j})=a_{j})\right)=0\). If, on the other hand, the choice of \(a\) is compatible with a rooted tree, then, applying Theorem 2.1, we have: \[\mathbb{P}\left(\cap_{j\in[x]}(P_{t}(s_{j})=a_{j})\right)=\frac{\left|\{T\in\mathcal{T}_{n}:(E\cup\bigcup_{j\in[x]}\{(a_{j},s_{j})\})\subset T\}\right|}{\left|\{T\in\mathcal{T}_{n}:E\subset T\}\right|}=\frac{n^{n-1-|E|-x}}{n^{n-1-|E|}}=n^{-x}\] We now have to count how many choices of \(a\) are compatible with a rooted tree. Let us first assume that neither \(\eta\) nor any of the \(\eta_{j}\) is equal to \(0\). Let \(\alpha\) denote the set of all such values of \(a\), and define \(\beta\) as follows: create a forest \(F\) with \(x+1\) (directed) line graphs, each line having respectively \(\eta_{1},\ldots,\eta_{x},\eta\) nodes. Then \(\beta\) is the set of all rooted trees that are compatible with \(F\), and whose root is the root of the last tree of \(F\). To determine \(|\alpha|\), we show that there is a bijection between \(\alpha\) and \(\beta\) and determine \(|\beta|\). To create the bijection, first take an arbitrary but fixed bijection \(b\) that maps every informed node from \(I_{t-1}\) to a node from \(F\), such that an informed node from the component of \(s_{j}\) is mapped to a node of the \(j\)-th line of \(F\). Then we can map a choice of \(a\in\alpha\) to a tree \(T\in\beta\) by setting the parent in \(T\) of the root of the \(j\)-th line to be \(b(a_{j})\) for every \(j\). Note that this uniquely identifies a tree of \(\beta\). Conversely, to find a choice \(a\in\alpha\) from a tree \(T\in\beta\), set \(a_{j}=b^{-1}(p_{j})\) where \(p_{j}\) is the parent of the root of the \(j\)-th line of \(F\) in \(T\). Now note that \(\beta\) is the set of all rooted trees that are compatible with \(F\), and whose root is the root of the last tree of \(F\). By Theorem 2.2, \(|\beta|=\eta(\eta+\sum_{j\in[x]}\eta_{j})^{x-1}\), which concludes the proof. If \(\eta=0\), it is easy to see that no choice of \(a\) is compatible with a rooted tree.
If there exist some values of \(j\) such that \(\eta_{j}=0\), then assume wlog that \(\eta_{1}=\cdots=\eta_{\ell}=0\), and \(\eta_{j}>0\) for every \(j>\ell\). As seen above, there will be \(\eta(\eta+\sum_{j\in[x]}\eta_{j})^{x-\ell-1}\) choices for \((a_{\ell+1},\ldots,a_{x})\). Once this choice is made, for every \(1\leq j\leq\ell\), \(a_{j}\) can take any value in \(I_{t-1}\), where \(|I_{t-1}|=\eta+\sum_{j\in[x]}\eta_{j}\). The total number of choices for \(a\) is \(\eta(\eta+\sum_{j\in[x]}\eta_{j})^{x-1}\). The following merge operation combines two trees so as to make an uninformed root the root of the merged tree, if at least one of the roots is uninformed. **Definition 4.16**.: _We say that we merge two non-trivial trees \(U\) and \(U^{\prime}\) with respective roots \(r\) and \(r^{\prime}\) in round \(t\) when we apply the following operation:_ * _If_ \(r\in I_{t-1}\)_, then for every_ \(p\in U\) _with_ \((r,p)\in U\)_, replace edge_ \((r,p)\) _with the edge_ \((r^{\prime},p)\)_._ * _If_ \(r\notin I_{t-1}\)_, then for every_ \(p\in U^{\prime}\) _with_ \((r^{\prime},p)\in U^{\prime}\)_, replace edge_ \((r^{\prime},p)\) _with the edge_ \((r,p)\)_._ **Lemma 4.17**.: _Let \(t\) be a round and \(N_{t-1}\) be the number of informed nodes after round \(t-1\). Let \(E_{1},E_{2}\) be two sets of edges that the adversary could choose for round \(t\), as follows: let \(E_{1}\) be a collection of rooted trees such that every tree is non-increasing, with at least two non-trivial components \(U\) with root \(r\) and \(U^{\prime}\) with root \(r^{\prime}\), and let \(E_{2}\) be obtained from \(E_{1}\) by merging \(U\) and \(U^{\prime}\). Let \(N_{t}^{(1)}\) be the number of informed nodes after round \(t\) if \(E_{1}\) is chosen, and \(N_{t}^{(2)}\) if \(E_{2}\) is chosen. Then choosing \(E_{2}\) is a better strategy for the adversary than choosing \(E_{1}\)._ The proof of this lemma being fairly technical, we delay it to Appendix C. This lemma implies that the adversary will never choose a set of edges with more than one non-trivial component, i.e., the adversary will choose _one_ tree with \(k+1\) nodes. We already showed that the adversary will only choose non-increasing components. Therefore, we are left with analyzing the case where the adversary chooses one non-trivial non-increasing tree with \(k+1\) nodes. **Lemma 4.18**.: _Let \(t\) be a round and \(N_{t}\) be the number of informed nodes after round \(t\). Let \(U\) be a non-increasing tree over \(k+1\) nodes in round \(t+1\). Let \(\sigma\) be the number of uninformed nodes in \(U\) and \(\eta\) the number of informed nodes in \(U\). Then the distribution of \(N_{t+1}-N_{t}\) equals the sum of \(n-N_{t}-\sigma\) independent Bernoulli random variables of parameter \(\frac{N_{t}}{n}\) plus one Bernoulli random variable of parameter \(\frac{N_{t}-\eta}{n}\)._ As this proof is similar to the proof of Lemma 3.1, we delay it to Appendix C. **Corollary 4.19**.: _Let \(t\) be a round and \(N_{t}\) be the number of informed nodes after round \(t\). Let \(U\) be a non-increasing tree over \(k+1\) nodes in round \(t+1\) and let \(\eta\) be the number of informed nodes in \(U\). The optimal strategy for the adversary is to minimize \(\eta\) in every round._ Proof.: Note that we always have \(\sigma+\eta=k+1\). Let us consider two non-increasing trees \(U\) and \(U^{\prime}\) over \(k+1\) nodes. Let \(\eta_{1}\) (resp. \(\sigma_{1}\)) be the number of informed (resp. uninformed) nodes in \(U\), and \(\eta_{2}\) (resp.
\(\sigma_{2}\)) be the number of informed (resp. uninformed) nodes in \(U^{\prime}\). Assume wlog that \(\eta_{1}>\eta_{2}\geq 0\). Then \(\sigma_{1}<\sigma_{2}\). Let \(N_{t+1}^{(1)}-N_{t}\) and \(N_{t+1}^{(2)}-N_{t}\) be the numbers of newly informed nodes after round \(t+1\) if the adversary chooses tree \(U\) or \(U^{\prime}\), respectively. The distribution of \(N_{t+1}^{(1)}-N_{t}\) is the sum of at least \(n-N_{t}-\sigma_{1}\) independent Bernoulli variables of parameter \(\frac{N_{t}}{n}\), while \(N_{t+1}^{(2)}-N_{t}\) is the sum of at most \(n-N_{t}-\sigma_{2}+1\) independent Bernoulli variables of parameter at most \(\frac{N_{t}}{n}\). The first distribution clearly dominates the second, and by the Distribution Domination Lemma (Lemma 4.9), the result holds. This shows that the optimal strategy for the adversary is always to choose \(\sigma=k+1\) for the tree \(U\) it chooses, unless \(N_{t-1}\) is so large that the number of available uninformed nodes is smaller than \(k+1\), in which case \(\sigma=n-N_{t-1}\). As the number \(N_{t}\) of informed nodes never decreases, this leads to the following partitioning of the rounds into two phases: one phase which contains all rounds \(t\) with \(n-N_{t-1}\geq k+1\), in which case \(\sigma=k+1\), and another phase which contains all rounds \(t\) with \(n-N_{t-1}<k+1\), in which case \(\sigma=n-N_{t-1}\). We will show that the first phase takes \(O(\log n)\) rounds, while the second one takes \(O(k+\log n)\) rounds.

### Phase 1

As the analysis of this phase is very similar to Section 3, we delay the proofs to Appendix C.1. We however state the main result here: **Lemma 4.20**.: _If \(n-k>4\) then Phase 1 ends within \(8(3+\sqrt{5})\log n\) rounds with probability \(p\geq 1-n^{-2}\)._

### Phase 2

Phase two starts when there are only \(k\) more nodes to inform. This essentially means that the adversary can protect all uninformed nodes but one: the tree it chooses will have an uninformed root, which might become informed in this round, but all uninformed nodes below it will not become informed in the current round. **Lemma 4.21**.: _Let \(\gamma=\frac{65}{32}+\frac{5\sqrt{105}}{32}\approx 3.63\). Phase 2 ends within \(\gamma(\log n+k)\) rounds with probability \(p\geq 1-n^{-2}\), if \(n\geq 10\)._ Proof.: In each round, by Lemma 4.18, the root of the tree the adversary chooses gets informed with probability \(\frac{n-k-1}{n}\geq\frac{8}{10}\), where the inequality holds as \(k\leq n/10\) and \(n\geq 10\). Viewing this as a flip of a coin that has probability \(\frac{8}{10}\) of landing on heads, and flipping the coin \(\gamma(k+\log n)\) times, we are asking what is the probability \(p\) that the coin lands on heads at least \(k\) times.
Again, using Hoeffding's inequality (Lemma 3.12), we have that: \[1-p\leq\exp\left(-2\times\gamma(k+\log n)\left(\frac{8}{10}-\frac{k}{\gamma(k+\log n)}\right)^{2}\right)\\ \leq\exp\left(-2\times\gamma\log n\left(\frac{8}{10}-\frac{k}{\gamma k}\right)^{2}\right)\leq\exp\left(-2\log n\right)\leq n^{-2}\]

### Combining Phase 1 and 2

We first combine the results for Phases 1 and 2 to show that broadcast completes in \(O(\log n+k)\) rounds if \(k\leq\frac{n}{10}\): **Theorem 4.22**.: _If the adversary can control \(k\leq n/10\) edges in each round, broadcast completes within \((24+\gamma+8\sqrt{5})\log n+\gamma k\) rounds with probability \(p\geq 1-2n^{-2}\)._ Proof.: This is a direct result of Lemmata 4.20 and 4.21. We then combine this result with Lemma 4.3, which dealt with the case \(k\geq\frac{n}{10}\), to obtain the general result: **Theorem 4.23**.: _If the adversary can control \(k\) edges in each round, broadcast completes within \(O(\log n+k)\) rounds, with probability \(p\geq 1-2n^{-2}\)._ Proof.: This is a consequence of Theorem 4.22 and Lemma 4.3.

### Consensus

Finally, we see that a direct application of Theorem 4.22 gives us a reliable algorithm for Consensus with a Randomized Oblivious Message Adversary of parameter \(k\), as long as \(k\leq\frac{n}{10}\): **Theorem 1.7**.: _There exists a protocol for Consensus with a Randomized Oblivious Message Adversary that satisfies Agreement and Validity, and terminates in \(O(k+\log n)\) rounds with probability \(p\geq 1-\frac{2}{n^{2}}\), and only requires messages of 1 bit over each edge in each round, as long as \(k\leq 0.1n\)._ Proof.: By Theorem 4.22, node 1 broadcasts within \((24+\gamma+8\sqrt{5})\log n+\gamma k\) rounds with probability \(p\geq 1-2n^{-2}\). Therefore, Algorithm 1 achieves consensus within \((24+\gamma+8\sqrt{5})\log n+\gamma k\) rounds with probability \(p\geq 1-2n^{-2}\).

## 5 Related Work

Information dissemination in general and broadcasting in particular are fundamental topics in distributed computing, also because of the crucial role they play for consensus [20]. In contrast to this paper, most classic literature on network broadcast, as well as on related tasks such as gossiping, considers a static setting, e.g., where in each round each node can send information to one neighbor [22, 16]. Kuhn, Lynch and Oshman [23] explore the all-to-all data dissemination problem (gossiping) in an undirected dynamic network, where nodes do not know beforehand the total number of nodes and must decide on that number. Ahmadi, Kuhn, Kutten, Molla and Pandurangan [1] study the message complexity of broadcast in an undirected dynamic setting, where the adversary pays a cost for changing the network. In dynamic networks, the oblivious message adversary is a commonly considered model, especially for broadcast and consensus problems, first introduced by Charron-Bost and Schiper [4]. The broadcast problem under oblivious message adversaries has been studied for many years. A first key result for this problem was the \(n\log n\) upper bound by Zeiner, Schwarz, and Schmid [28], who also gave a \(\left\lceil\frac{3n-1}{2}\right\rceil-2\) lower bound.
Another important result is by Fugger, Nowak, and Winkler [17] who presented an \(O(\log\log n)\) upper bound if the adversary can only choose nonsplit graphs; combined with the result of Charron-Bost, Fugger, and Nowak [3] that states that one can simulate \(n-1\) rounds of rooted trees with a round of a nonsplit graph, this gives the previous \(O(n\log\log n)\) upper bound for broadcasting on trees. Dobrev and Vrto [9, 8] give specific results when the adversary is restricted to hypercubic and tori graphs with some missing edges. El-Hayek, Henzinger, and Schmid [12, 13] recently settled the question about the asymptotic time complexity of broadcast by giving a tight \(O(n)\) upper bound, also showing the upper bound still holds in more general models. Regarding consensus, Coulouma, Godard and Peters in [6] presented a general characterization on which dynamic graphs consensus is solvable, based on broadcastability. Winkler, Rincon Galeana, Paz, Schmid, and Schmid [18] recently presented an explicit decision procedure to determine if consensus is possible under a given adversary, enabling a time complexity analysis of consensus under oblivious message adversaries, both for a centralized decision procedure as well as for solving distributed consensus. They also showed that reaching consensus under an oblivious message adversary can take exponentially longer than broadcasting. In contrast to the above works, in this paper we study a more randomized message adversary, considering a stochastic model where adversarial graphs are partially chosen uniformly at random. While a randomized perspective on dynamic networks is natural and has been considered in many different settings already, existing works on random dynamic communication networks, e.g., on the radio network model [14], on rumor spreading [5], as well as on epidemics [11], do not consider oblivious message adversaries. Note, however, that the information dissemination considered in this paper is similar to the SI model for virus propagation, with results having implications in both directions [15]. For example, Doerr and Fouz [10] introduced an information dissemination protocol inspired by epidemics. More generally, randomized information dissemination protocols can be well-understood from an epidemiological point-of-view, and are very similar to the SI model which has been very extensively studied. In contrast to the typical SI models considered in the literature [25], however, our model in this paper revolves around tree communication structures which introduce additional technical challenges. Furthermore, existing literature often provides results in expectation, while we in this paper provide tail bounds. ## 6 Conclusion We studied the fundamental problems of broadcast and consensus on dynamic networks from a randomized perspective, studying randomized oblivious message adversaries with parameter \(k\). We showed that for small values of \(k\) information dissemination is significantly faster compared to the deterministic setting. We believe that our work opens several interesting avenues for future research. In particular, it would be interesting to extend our study of randomized oblivious message adversaries to other information dissemination problems and network topologies. We also believe that our techniques can be useful to analyze other dynamic models, including the SI model in epidemics.
2303.17008
Magnetic Anisotropy and Its Structural Origins in Ru-Substituted Manganite Films
Controlling magnetic anisotropy (MA) is important in a variety of applications including magnetic memories, spintronic sensors, and skyrmion-based data distribution. The perovskite manganite family provides a fertile playground for complex, intricate, and potentially useful structure-magnetism relations. Here we report on the MA that emerges in 10% Ru substituted $La_{0.7}Sr_{0.3}MnO_{3}$ (Ru-LSMO) films for which strong perpendicular magnetization and anisotropic in-plane magnetization are found. These moderately compressively strained films possess a rich microstructure, consisting of coherently strained phase which evolves into a one dimensional (1D) periodically-modulated structure above a critical thickness. We illustrate how 10% Ru substitution plays a crucial role behind the observed MA, and how the structural distortion and 1D periodic structural modulation produce the anisotropic in-plane magnetization. We highlight the practical significance of the observed MA, which could pave the way towards the realization of cutting-edge oxide-based room temperature spintronic memory devices.
Brajagopal Das, Lena Wysocki, Jörg Schöpf, Lin Yang, Amir Capua, Paul H. M. van Loosdrecht, Lior Kornblum
2023-03-29T20:22:19Z
http://arxiv.org/abs/2303.17008v2
Magnetic Anisotropy and Its Structural Origins in Ru-Substituted Manganite Films ###### Abstract Controlling magnetic anisotropy (MA) is important in a variety of applications including magnetic memories, spintronic sensors, and skyrmion-based data distribution. The perovskite manganite family provides a fertile playground for complex, intricate, and potentially useful structure-magnetism relations. Here we report on the MA that emerges in 10% Ru substituted La\({}_{0.7}\)Sr\({}_{0.3}\)MnO\({}_{3}\) (Ru-LSMO) films for which strong perpendicular magnetization and anisotropic in-plane magnetization are found. These moderately compressively strained films possess a rich microstructure, consisting of coherently strained phase which evolves into a one dimensional (1D) periodically-modulated structure above a critical thickness. We illustrate how 10% Ru substitution plays a crucial role behind the observed MA, and how the structural distortion and 1D periodic structural modulation produce the anisotropic in-plane magnetization. We highlight the practical significance of the observed MA, which could pave the way towards the realization of cutting-edge oxide-based room temperature spintronic memory devices. ## 1 Introduction Mixed-valence manganites such as La\({}_{0.7}\)Sr\({}_{0.3}\)MnO\({}_{3}\) (LSMO) have attracted significant attention owing to their colossal magnetoresistance (CMR), metal-insulator transitions, ferromagnetism, high spin polarization near the Fermi level, and high Curie temperature (T\({}_{\rm c}\)\(\sim\)370 \({}^{\circ}\)C in bulk) [1, 2, 3, 4, 5]. This system provides a textbook example of the complexity of structure-property relationships in correlated oxide systems, where small structural details can result in dramatic changes in the macroscopic behavior. Various properties of LSMO thin films, such as coercivity, ferromagnetic domains, magnetoresistance, and magnetization switching are related to the magnetic anisotropy, which can be tuned by several parameters, such as the growth conditions, substrate surface engineering, atomic substitution, thickness, temperature, and applied magnetic field [6, 7, 8, 9, 10]. The microstructure plays crucial roles in the manifestation of magnetic anisotropy, and therefore epitaxial strain provides a direct route for tuning the magnetic properties [11]. In addition, the steps and terraces on vicinal substrates can break the 4-fold rotational symmetry and reduce it into 2-fold rotational symmetry, thereby affecting the growth mechanism and modifying the microstructure; this can lead to the manifestation of in-plane uniaxial magnetic anisotropy [8, 10]. These structural details and the resulting magnetic anisotropy can have a crucial impact on various magnetic anisotropy-based devices. Furthermore, atomic substitution can play a significant role in epitaxial strain engineering, thereby affecting the functional properties of a film. Konoto et al. demonstrated Ru substitution-induced magnetic anisotropy in 5% Ru-substituted manganite (La\({}_{0.6}\)Sr\({}_{0.4}\)Mn\({}_{0.95}\)Ru\({}_{0.05}\)O\({}_{3}\)) film grown on an STO substrate [12]. More recently, Nakamura et al. reported non-trivial magnetic topologies emerge in Ru-LSMO films when the perpendicular magnetization is controlled via Ru substitution and substrate-induced compressive strain [7]. They reported strong perpendicular magnetic anisotropy in 10% Ru-LSMO, with significantly reduced anisotropy when the Ru substitution is reduced to 5%. 
From a practical perspective, tilted magnetic anisotropy (TMA) with a strong perpendicular magnetization is attractive for several emerging memory and spintronic technologies, such as those based on spin-orbit torque (SOT) [13, 14]. In addition to TMA with a strong perpendicular component, anisotropic in-plane magnetization is also required for practical applications, such as deterministic perpendicular magnetization switching through SOT [13, 15]. Here we report TMA with strong perpendicular magnetization and anisotropic in-plane magnetization in 10% Ru-LSMO films. By detailed microstructural analysis, we unveil the microstructural origin of the MA. We observe and analyze one-dimensional (1D) periodic structural modulation in the thick 10% Ru-LSMO film. The relations between strain, microstructure and magnetism are discussed, illustrating the role of Ru and strain-induced structural mechanisms behind the MA. This study was carried out at low temperatures (30 K) where the various mechanisms can be readily identified. ## II Experimental Details Ru-LSMO films were epitaxially grown on LSAT (001) substrates (Crystec GmbH) using pulsed laser deposition (PLD). The substrates were held at 650 \({}^{\circ}\)C and the target was ablated using a KrF laser with a fluence of 2.4 J\(\cdot\)cm\({}^{-2}\) at a repetition rate of 3-5 Hz. The oxygen pressure was maintained at \(\sim\)0.13 mbar during growth, and it was increased to 100 mbar after growth while the samples were cooled down at a rate of 10 \({}^{\circ}\)C/min. Temperature and field-dependent magnetization measurements were performed using a superconducting quantum interference device (SQUID) magnetometer in a magnetic properties measurement system (MPMS3, Quantum Design). X-ray diffraction (XRD) measurements were performed at room temperature using a Rigaku SmartLab diffractometer with Cu K\({}_{a}\) radiation (\(\lambda=1.54\) A) and a 2-bounce incident monochromator. ## III Results and Discussion ### Magnetic Anisotropy The temperature dependence of the magnetization (M-T curves, Figs. 1a, S1a) and magnetic field-dependent magnetization loops (M-H curves, Fig. 1b) along the main lattice directions show that a 48 nm 10% Ru-LSMO film has strong perpendicular magnetization as well as anisotropic in-plane magnetization. This suggests that the easy axis of magnetization is tilted from the surface normal, implying TMA is found. We note that the in-plane projection of the easy axis does not lie along the main in-plane lattice directions. From the M-H curves (Fig. 1b, inset), the ascending order of magneto-crystalline anisotropy energy (E) can be estimated as: \(\rm E_{[001]pc}<\rm E_{[010]pc}<\rm E_{[100]pc}\), where 'pc' stands for pseudocubic lattice coordinates. Henceforth, we use the notation for the in-plane ([100], [010]) and the perpendicular ([001]) pseudocubic directions. While TMA is also observed in a thin (10.5 nm) 10% Ru-LSMO film (Figs. S2, S1b), at this lower thickness the M-T and M-H behavior (Figs. S2, S1b) indicate that the magnetization anisotropy between the two main in-plane lattice directions is significantly diminished. In addition, we note the flattening of the [100]pc M-T curve of the thick 10% Ru-LSMO below \(\sim\)200K, which includes a slight downturn (Fig. 1a) that is reproducible at higher magnetic fields (Fig. S1a). 
While further analysis is required to clarify the origins of this small feature, we observe it only in the sample featuring a 1D periodic structural modulation (to be discussed later on), suggesting a possible connection. Furthermore, we rule out any significant contribution of shape anisotropy by comparing the [001]pc M-H curves of the thick (48 nm) and thin (10.5 nm) 10% Ru-LSMO films from 0 T to 3 T (Fig. S4). Altogether, we observe a TMA with strong perpendicular magnetization and anisotropic in-plane magnetization in both the thick and thin Ru-LSMO films; however, the anisotropy of the in-plane component in the thin film is negligible compared to that of the thick film. Figure 1: Magnetic properties of a 48 nm 10% Ru-LSMO film. (a) Magnetization as a function of temperature (M-T) curves along the main pseudocubic lattice directions. Before each M-T measurement, the sample was field cooled under 0.1 T and the measurement was performed during warm up under 0.1 T. (b) Magnetic field-dependent magnetization (M-H) loops at 30 K; before each M-H measurement, the sample was zero field cooled to 30 K. The inset shows these loops extended to \(\pm\)3 T. To explain these observations, we first study the microstructure of these films in Section III.B, and then in Section III.C we combine the magnetism and microstructure to explain the origins of magnetic anisotropy. ### Microstructure To understand the structural origins of the observed TMA, off-specular XRD reciprocal space maps (RSMs) of the thick (48 nm) 10% Ru-LSMO film were acquired at room temperature. The results (Fig. 2) suggest that the main Bragg peaks of the film and the corresponding Bragg peaks of the substrate have the same in-plane momentum transfer Q\({}_{\parallel}\), confirming coherent growth of fully-strained Ru-LSMO films (see Fig. S5 for the 2\(\theta\)-\(\omega\) Bragg peaks). Bulk LSMO has a rhombohedral crystal structure with a pseudo-cubic lattice parameter of 3.875 Å [16], which is only slightly larger than the lattice parameter of 3.868 Å of the cubic LSAT substrate. The substitution of Mn by Ru increases the lattice parameter [17], resulting in increased compressive strain (to be discussed later). The second key observation from Fig. 2 is that the (0 1 3)\({}_{\rm pc}\) film peak has shifted upward and the (0 -1 3)\({}_{\rm pc}\) film peak has shifted downward with respect to the (\(\pm\)1 0 3)\({}_{\rm pc}\) film peaks. The Q\({}_{\perp}\) position of the latter is indicated by a dashed line for clarity. This behavior is consistent with in-plane crystallographic symmetry breaking, such as an orthorhombic distortion in the case of SrRuO\({}_{3}\) [18]. Figure 2: Off-specular RSMs of 48 nm 10% Ru-LSMO film around the (0 1 3)\({}_{\rm c}\), (1 0 3)\({}_{\rm c}\), (0 -1 3)\({}_{\rm c}\), and (-1 0 3)\({}_{\rm c}\) reflections of LSAT(001). Intensities are presented on logarithmic scale; pink arrows indicate the film’s main Bragg peaks, whereas red arrows indicate the corresponding satellites. The horizontal dashed line denotes the Q\({}_{\perp}\) position of the (\(\pm\)1 0 3)\({}_{\rm pc}\) film reflections. The upward shift of the (0 1 3)\({}_{\rm pc}\) peak and downward shift of the (0 -1 3)\({}_{\rm pc}\) peak
with respect to the (\(\pm\)1 0 3)\({}_{\rm pc}\) peaks imply that (010)\({}_{\rm pc}\) planes have tilted towards [0 -1 0]\({}_{\rm c}\) with respect to (010)\({}_{\rm c}\) ('c' indicates the substrate's cubic coordinates) by an angle \(\delta\) [19], resulting in \(\alpha_{\rm pc}\) = 90\({}^{\circ}\)+ \(\delta\), which is the angle between [010]\({}_{\rm pc}\) and [001]\({}_{\rm pc}\) (Fig. 3a). This in-plane symmetry breaking of the Ru-LSMO film is therefore ascribed to a monoclinic distortion, consistent with previous observations for LSMO [20, 21]. A third observation from Figure 2 is the emergence of satellite film peaks along specific directions. The satellites are distinct around the (\(\pm\)1 0 3)\({}_{\rm pc}\) film peaks, whereas no satellites are observed around the (0 \(\pm\)1 3)\({}_{\rm pc}\) peaks. This RSM feature is explained by a periodic structural modulation [10, 20, 21], and its exclusive appearance along the [100]\({}_{\rm pc}\) direction indicates the existence of a 1-dimensional (1D) periodic structural domain array along or near this axis (Fig. 3b). Altogether, the RSM analysis of the 48 nm 10% Ru-LSMO film points to a compressively-strained, coherent monoclinic (distorted orthorhombic) crystal structure (Fig. 3a) with a 1D crystallographic domain structure along [100]\({}_{\rm pc}\). In monoclinic notation (subscript \(m\)), [110]\({}_{\rm m}\) is parallel to [001]\({}_{\rm pc}\), [1 -1 0]\({}_{\rm m}\) is parallel to [010]\({}_{\rm pc}\), and [100]\({}_{\rm pc}\) is parallel to [001]\({}_{\rm m}\). In the interest of simplicity, we will continue with the 'pc' notation. These microstructural details play a key role in the magnetic anisotropy, to be discussed in the next section. During the coherent film growth, the biaxial compressive strain applied to the film by the substrate compresses the Ru-LSMO pseudocubic unit cells along [100]\({}_{\rm pc}\) and [010]\({}_{\rm pc}\) and hence expands the pseudocubic unit cells along [001]\({}_{\rm pc}\). This leads to the distortion of the lattice by tilting and rotating of the MnO\({}_{6}\) and RuO\({}_{6}\) octahedra, resulting in monoclinic unit cells (Fig. 3a), with the monoclinic angle \(\gamma_{\rm m}\) being less than 90\({}^{\circ}\). This interpretation agrees well with previous observations of similar lattice distortions and monoclinic unit cell formation in compressively strained LSMO films on LSAT substrates [20, 21]. This kind of crystallographic anisotropy affects the spin-orbit coupling in perovskite oxides, modifying the magnetic properties differently along various crystallographic directions [10, 21, 22, 23, 24]. In addition to its lattice parameter mismatch with LSAT, the rhombohedral (bulk) LSMO unit cell further features a lattice _angle_ mismatch with the cubic LSAT substrate. The lattice parameter mismatch induces biaxial compressive strain on the Ru-LSMO unit cells, whereas the lattice angle mismatch induces shear strain. The angle \(\gamma_{\rm m}\) becomes less than 90\({}^{\circ}\) to accommodate the lattice parameter mismatch. The monoclinic unit cells of Ru-LSMO can release a small amount of shear strain along [010]\({}_{\rm pc}\)/[1 -1 0]\({}_{\rm m}\) by changing the angle \(\gamma_{\rm m}\), resulting in an octahedral tilt. However, shear strain accumulates as the thickness of a Ru-LSMO film increases.
To release this shear strain, periodic structural lattice modulation of the film occurs along the lattice direction [100]\({}_{\rm pc}\), at the cost of deviation of the angle (\(\beta_{\rm pc}\)) between [001]\({}_{\rm pc}\) and [100]\({}_{\rm pc}\) from 90\({}^{\circ}\). This occurs while keeping the (100)\({}_{\rm pc}\) planes perpendicular to the substrate's surface plane (001)\({}_{\rm c}\) (Fig. 3b) [10, 20, 21], resulting in satellite peaks in the Ru-LSMO films (Fig. 2). We note that the variation of the angle \(\beta_{\rm pc}\) is the key reason behind the broadening of the satellites (see Fig. S6 and the discussion therein). The lattice modulation along [100]\({}_{\rm pc}\) induces periodic shifting of the centers of pseudocubic unit cells with periodicity \(\tau\) along the lattice direction [001]\({}_{\rm pc}\) (Fig. 3b). The separation between the main peak and satellite peaks of the 48 nm 10% Ru-LSMO film in reciprocal space is \(\Delta Q_{\parallel}=0.0045\ \text{\AA}^{-1}\) (red arrows in Fig. 2), yielding a 1D structural modulation period of \(\tau=(\Delta Q_{\parallel})^{-1}=22\) nm \(\pm\) 6 nm in real space (accounting for satellite broadening, see Fig. S6 and discussion therein). The Ru-LSMO (\(\pm\)1 0 3)\({}_{\text{pc}}\) main Bragg peaks are much more intense than their satellites, indicating that some volume of the film does not undergo the periodic lattice modulation. This is explained by the structural modulation starting above a critical thickness, releasing the (thickness-dependent) elastic energy. Indeed, an RSM analysis of the 10.5 nm 10% Ru-LSMO film does not show any satellite features (Fig. S7) while retaining the monoclinic crystal structure. In addition, similarly to the 48 nm film, the 10.5 nm film has a strong perpendicular magnetization component (Fig. S2, S1b), but it exhibits only weakly anisotropic in-plane magnetization. This suggests that the periodic structural domains are not necessary for the strong perpendicular component of magnetization, but they do play a role in the anisotropic in-plane magnetization, to be discussed later. Figure 3: Schematic microstructure of Ru-LSMO coherently grown on LSAT. (a) Schematic of a (magnified) monoclinic unit cell of Ru-LSMO on a vicinal LSAT substrate. (b) 1D periodic structural modulation of Ru-LSMO films on a vicinal LSAT substrate. The substrate has a miscut angle of 0.1\({}^{\circ}\) with the step edge direction being 8.0\({}^{\circ}\) clockwise from the lattice direction [010]\({}_{\text{c}}\). The emergence of such lattice modulation, only above a critical thickness, has been reported in LSMO films [21, 25, 26], in good agreement with our observation. 1D periodic structural modulation has also been reported in LaCoO\({}_{3}\), showing such modulation in the entire film thickness [10]. Substrate miscut can play a significant role in the manifestation and orientation of such structural modulations in complex oxide films. To determine the miscut angle and the miscut direction, rocking curve measurements were performed on the LSAT substrate Bragg peaks (Fig. S8). The calculated miscut angle of the substrate is 0.1\({}^{\circ}\), and the step direction is 8\({}^{\circ}\)\(\pm\)5\({}^{\circ}\) clockwise with respect to [010]\({}_{\rm c}\). Therefore, the 1D periodic structural modulation occurs along the terraces, and the structural domains are perpendicular to the terraces. This picture is consistent with step-edge nucleation during the initial stage of the film growth [21].
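As a quick arithmetic check of the modulation period quoted above (a restatement of the numbers already given in the text, not an additional measurement), the satellite spacing converts to a real-space period as \[\tau=\left(\Delta Q_{\parallel}\right)^{-1}=\frac{1}{0.0045\ \text{\AA}^{-1}}\approx 222\ \text{\AA}\approx 22\ \text{nm},\] with the quoted \(\pm\)6 nm uncertainty originating from the satellite broadening discussed in Fig. S6.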
The kind of structural modulation we observe in the Ru-LSMO films at room temperature has been observed in LSMO films on STO substrates as well [25]. In the case of STO substrates, the structural modulation disappears at low temperatures due to the structural phase transition (\(\sim\)105 K) and phonon softening in STO [25, 27]. However, unlike the case of LSMO on STO, the pattern of structural modulation of LSMO films on LSAT substrates does not change with temperature [25]. Also, the possibility of minute structural variations of LSAT substrates at low temperatures [28] is not reflected in the M-T behavior (Figs. 1, S1 and S2a), thus validating the room-temperature structural features for low temperatures as well. ### Microstructural origins of the magnetic anisotropy Having characterized the microstructure of the Ru-LSMO film, we now describe the microstructural mechanisms of its magnetic properties. The monoclinic (distorted orthorhombic) crystal structure reported here hosts the Glazer octahedral tilt system a\({}^{+}\)a\({}^{-}\)c\({}^{-}\), similarly to compressively strained LSMO and SRO films [20, 22, 29]. Therefore, we begin by considering two related magneto-crystalline anisotropy archetypes that host the same octahedral tilt system (a\({}^{+}\)a\({}^{-}\)c\({}^{-}\)) as the present case: E\({}_{[001]pc}\)\(>\) E\({}_{[010]pc}\)\(>\) E\({}_{[100]pc}\) in LSMO films [11, 21, 30] versus E\({}_{[001]pc}\)\(<\) E\({}_{[010]pc}\)\(<\) E\({}_{[100]pc}\) in SRO films [22], when both are under compressive strain and the monoclinic lattice direction [110]\({}_{\rm m}\) is along [001]\({}_{\rm pc}\). As shown in Fig. 1, the anisotropy energy in the present case is E\({}_{[001]pc}\)\(<\) E\({}_{[010]pc}\)\(<\) E\({}_{[100]pc}\), which suggests that the present Ru-LSMO system behaves more like SRO than LSMO, but with the distinct practical advantage of the much higher Curie temperature of LSMO. We will describe the atomic mechanisms of these archetypes, and from them we propose a mechanism for the presently observed TMA in Ru-LSMO. On one hand, LSMO films with the a\({}^{+}\)a\({}^{-}\)c\({}^{-}\) tilt system were shown to induce weakly anisotropic in-plane magnetization, with [100]\({}_{\rm pc}\) being magnetically easier than [010]\({}_{\rm pc}\) [21]. In contrast, SRO films with the same tilt system exhibit TMA with strong perpendicular magnetization and anisotropic in-plane magnetization, with [010]\({}_{\rm pc}\) being magnetically easier than [100]\({}_{\rm pc}\) [22]. This comparison therefore suggests that the single-ion anisotropy in the Ru ions is induced by compressive strain and plays a key role behind the strong perpendicular magnetization in the 10% Ru-LSMO films [7]. This in turn implies that compressive strain, together with strong spin-orbit coupling (SOC) of Ru ions, induces a preferred orientation in the Ru spins (to be discussed later), which then governs the orientation of Mn spins. The octahedral rotation (discussed in the next paragraph), which influences the Ru-Mn and Mn-Mn interactions, further plays an important role behind the TMA in Ru-LSMO films. While in-phase octahedral rotations enhance \(\mathrm{e_{g}}\)-\(\mathrm{e_{g}}\) orbital overlap, the out-of-phase rotations enhance six out of nine \(\mathrm{t_{2g}}\)-\(\mathrm{t_{2g}}\) orbital overlaps [21, 22]. The Mn-Mn magnetic interaction in LSMO is based on the overlap of the \(\mathrm{e_{g}}\) orbitals, whereas the Ru-Ru magnetic interaction in SRO is based on the overlap of the \(\mathrm{t_{2g}}\) orbitals.
The octahedral rotation \(\mathrm{c}^{\cdot}\) about the \([001]_{\mathrm{pc}}\) is out-of-phase in both LSMO and SRO films, but strong perpendicular magnetization is observed only in SRO. The in-plane magnetic easier axis of LSMO films is the axis (\([100]_{\mathrm{pc}}\)) around which the octahedral rotation (\(\mathrm{a}^{+}\)) is in-phase, whereas the in-plane magnetic easier axis of SRO films is the axis \([010]_{\mathrm{pc}}\) around which the octahedral rotation (\(\mathrm{a}^{\cdot}\)) is out-of-phase. The octahedral rotation induced orbital anisotropy, together with spin-orbit coupling (SOC), induce two opposite orders of magneto-crystalline anisotropy energy in these examples: \(\mathrm{E_{[001]pc}>E_{[010]pc}>E_{[100]pc}\) in LSMO films versus \(\mathrm{E_{[001]pc}<E_{[010]pc}<E_{[100]pc}\) in SRO films (both compressively strained). This difference is rooted in the different dominant orbitals (and their overlap): \(\mathrm{e_{g}}\) in LSMO versus \(\mathrm{t_{2g}}\) in SRO. Here the 10% Ru-LSMO films (Fig. 1, Fig S2) exhibit the same order of magneto-crystalline anisotropy energy as observed in SRO films, opposite to the LSMO case. This suggests that the orbital anisotropy of the Ru 4d \(\mathrm{t_{2g}}\) orbitals and their interaction with Mn 3d \(\mathrm{t_{2g}}\) orbitals play a crucial role in the magnetic behavior of Ru-LSMO films. Indeed, it has recently been suggested that an antiferromagnetic interaction between Ru and Mn ions via \(\mathrm{t_{2g}}\) orbital overlap governs the magnetic properties of Ru-LSMO films [17]. We therefore propose that the monoclinic crystal structure together with the octahedral tilt system \(\mathrm{a}^{+}\mathrm{a}^{-}\mathrm{c}^{\cdot}\) create the playground for Ru-Mn \(\mathrm{t_{2g}}\) interactions in 10% Ru-LSMO films, where the single ion anisotropy of the Ru ions dictates the spin orientation of Mn ions and determines the order of magneto-crystalline anisotropy energy here as \(\mathrm{E_{[001]pc}<E_{[010]pc}<E_{[100]pc}\). The Ru ion is driving the magnetic anisotropy by playing a dual role. First, the substitution of Mn by Ru increases the compressive strain which induces \(\mathrm{t_{2g}}\) orbital anisotropy by driving the monoclinic \(\mathrm{a}^{+}\mathrm{a}^{-}\mathrm{c}^{\cdot}\) tilt structure, resulting in both in-plane and out-of-plane orbital anisotropy. The Ru 4d orbitals have an order of magnitude stronger SOC compared to the Mn 3d orbitals [31, 32]. When strained, the RuO\({}_{6}\) octahedra are expected to translate their local orbital preference to the Ru spins via strong SOC of Ru ions, more effectively than the MnO\({}_{6}\) octahedra with the weaker SOC of Mn ions. Second, the Ru spins dictates the Mn spins via Ru-Mn \(\mathrm{t_{2g}}\) interactions [17]. The spatial distribution of the Ru 4d orbitals is wider than that of the Mn 3d orbitals, making Ru-Mn interactions stronger than Mn-Mn interactions (both of which occur via the oxygen anion) in Ru-LSMO. Moreover, the additional compressive strain induced by Ru substitution further increases the Ru-Mn interaction. Overall, the strain induced single ion anisotropy in Ru ions determines the magnetic anisotropy in 10% Ru-LSMO films via Ru-Mn \(\mathrm{t_{2g}}\) interactions. Having discussed the role of Ru in magnetic anisotropy, we now consider the 1D periodic structural modulation (Fig. 3b), and its role in the anisotropic in-plane magnetization. 
This structural modulation appears in the thick Ru-LSMO film where the anisotropy of in-plane magnetization is relatively strong (Fig. 1), compared to the thin Ru-LSMO film (Figs. S2, S1b) where the 1D structural modulation is absent (Fig. S7). The weaker anisotropy of the in-plane magnetization is therefore ascribed to the monoclinic structure which exists in both films, as discussed earlier. The stronger anisotropy of in-plane magnetization in the thick film therefore shows correlation with the existence of the structural modulation. We ascribe the increased anisotropy of in-plane magnetization in the thick Ru-LSMO film to the additional out-of-phase octahedral rotation a' around the [010]\({}_{\rm pc}\) axis as a result of the periodic variations in the angle \(\beta_{\rm pc}\) (Fig. 3b). From the above discussion, we note that Ru plays an important role in the manifestation of TMA with strong perpendicular magnetization in Ru-LSMO. However, this raises a question whether the TMA observed here can be explained purely by strain. TMA with strong perpendicular magnetization in LSMO can be achieved without Ru substitution, albeit with high compressive strain (-2.2%) using LaAlO\({}_{3}\) (LAO) substrates [33]. However, in the present case, the TMA with strong perpendicular magnetization is observed under a moderate compressive strain (-0.41% [17]) with LSAT substrates. This comparison illustrates that strain alone cannot account for the observed TMA, highlighting the importance of Ru in translating the orbital anisotropy into magnetic anisotropy through strong SOC and more spread-out 4d orbitals. Ru substitution significantly enhances the strain induced perpendicular magnetization. From a practical perspective, high strain is less desirable as it limits the growth, thickness, and processing parameter space. We now briefly highlight a possible technological implementation of the observed TMA in the 10% Ru-LSMO films. Field-free perpendicular magnetization switching through SOT holds promise for future low-power non-volatile magnetic memories. Practical implementation of such devices requires TMA with strong perpendicular magnetization component, which is usually achieved in ultrathin ferromagnetic metals (such as Co) or alloys (such as CoFeB) by complex geometries and structures [34, 35, 36, 37], hindering their practical application. Therefore, materials that have tilted magnetic anisotropy (TMA) with a strong perpendicular magnetization component are of considerable advantage for such devices. For example, strong perpendicular magnetization of SRO [22, 38] was recently utilized to demonstrate deterministic perpendicular magnetization switching through SOT in an all-oxide heterostructure at 70K [14]. However, the low Curie temperature (T\({}_{\rm C}\)) of SRO [14, 22] is a major hurdle towards practical realization. Nakamura et al. showed that 10% Ru substitution in the high-T\({}_{\rm C}\) material LSMO supports strong perpendicular magnetization up to much higher temperatures, but the in-plane magnetization was not addressed. Along with a strong perpendicular magnetization, anisotropic in-plane magnetization is crucial for deterministic perpendicular magnetization switching through SOT [13, 15]. Moreover, the Curie temperature of the manganites can be engineered above the room temperature [39]. 
Therefore, the strong perpendicular magnetization along with the anisotropic in-plane magnetization in 10% Ru-LSMO could be utilized to fabricate SOT switching devices, which could work much closer to (and potentially above) room temperature, paving the way towards practical applications. ## IV Summary and Conclusions We report TMA with strong perpendicular magnetization and anisotropic in-plane magnetization in 10% Ru-LSMO films under moderate compressive strain. The microstructure of the Ru-LSMO films was analyzed and correlated with their magnetic properties. We show how Ru magnifies the impact of strain, explaining the possible microstructural origin of the magnetic anisotropy. We further illustrate how shear strain relaxation occurs above a certain thickness via the formation of a 1D periodic structural modulation, which in turn plays a prominent role in the manifestation of the anisotropic in-plane magnetization. Demonstrating and understanding the microstructural origin of TMA with strong perpendicular magnetization and anisotropic in-plane magnetization in 10% Ru-LSMO paves the way towards the realization of practical oxide-based room-temperature spintronic memories. ## Acknowledgements This work was funded by the German Israeli Foundation (GIF Grant No. I-1510-303.10/2019). The authors thank Dr. Ionela Lindfors-Vrejoiu for growing the films used here and for fruitful discussions. We further thank Dr. Maria Koifman Khristosov and Dr. Anna Eyal for assistance with XRD measurements and magnetometry, respectively.
2308.02011
Silence Speaks Volumes: Re-weighting Techniques for Under-Represented Users in Fake News Detection
Social media platforms provide a rich environment for analyzing user behavior. Recently, deep learning-based methods have been a mainstream approach for social media analysis models involving complex patterns. However, these methods are susceptible to biases in the training data, such as participation inequality. Basically, a mere 1% of users generate the majority of the content on social networking sites, while the remaining users, though engaged to varying degrees, tend to be less active in content creation and largely silent. These silent users consume and listen to information that is propagated on the platform. However, their voice, attitude, and interests are not reflected in the online content, making the decision of the current methods predisposed towards the opinion of the active users. So models can mistake the loudest users for the majority. We propose to leverage re-weighting techniques to make the silent majority heard, and in turn, investigate whether the cues from these users can improve the performance of the current models for the downstream task of fake news detection.
Mansooreh Karami, David Mosallanezhad, Paras Sheth, Huan Liu
2023-08-03T20:04:20Z
http://arxiv.org/abs/2308.02011v1
# Silence Speaks Volumes: Re-weighting Techniques for Under-Represented Users in Fake News Detection ###### Abstract Social media platforms provide a rich environment for analyzing user behavior. Recently, deep learning-based methods have been a mainstream approach for social media analysis models involving complex patterns. However, these methods are susceptible to biases in the training data, such as _participation inequality_. Basically, a mere 1% of users generate the majority of the content on social networking sites, while the remaining users, though engaged to varying degrees, tend to be less active in content creation and largely silent. These silent users consume and listen to information that is propagated on the platform. However, their voice, attitude, and interests are not reflected in the online content, making the decision of the current methods predisposed towards the opinion of the active users. So models can mistake the loudest users for the majority. We propose to leverage re-weighting techniques to make the silent majority heard, and in turn, investigate whether the cues from these users can improve the performance of the current models for the downstream task of fake news detection. User Behavior, Participation Inequality, Social Media, Lurkers, Fake News. ## I Introduction In an age where people's opinions are often crowdsourced on Online Social Networks (OSN), a wide variety of methods have been proposed to extract patterns from these data for different tasks, such as fake news detection [1, 2], hate speech detection [3], and recommendations [4, 5]. Moreover, deep learning methods have recently become prevalent due to their ability to model the complex and non-linear relations between the input data. However, despite all the attempts to analyze social media data, these models are prone to various biases, such as _participation inequality_. The participation inequality states that only a small subset of all the users usually account for a disproportionately large amount of content creation activities in social networks. This phenomenon has been observed among OSN users and can be easily categorized into three types: (1) _lurkers_ who comprise 90% of the OSN users and hardly ever participate in creating the content on social media (\(\sim\)1% of the postings), (2) _engagers_ group that contain 9% of the social media users who occasionally contribute to content creation (\(\sim\)9% of the postings), and (3) _contributors_, who are only 1% of the OSN users but are responsible for more than 90% of the created content on social media. This phenomenon, which is also known as the _90-9-1 Rule_ or _1% Rule_ by web usability experts [6], demonstrates the biases in the data that are used in current social media analysis applications. Deep learning methods utilize the observed data to infer user behavior. However, since contributors generate most of the data, the inferred user behavior is inclined towards these users and cannot represent that of the remaining categories of users. Figure 1 shows the percentage of the interactions by each group of users - lurkers, engagers, and contributors - for two different datasets. For example, a data point (i.e., a news piece) in the lower right corner suggests that 100% of the interactions with the news on social media are from contributors and 0% from lurkers and engagers, respectively. 
Early studies in behavioral and social science literature often associate lurkers with names such as _passive actors_[7], _abusers of common good_[8], and _free-riders_[9] that only consume resources without giving back to the community. This also influences machine learning researchers to overlook the contributions of the lurkers. However, we argue that lurkers' behavior can provide additional cues for social media analysis methods as these users actively consume and listen to the relevant information, create connections, and are receptive [10, 11]. This can be corroborated by recent efforts to drive user participation in online social communities. For example, among reasons listed in [12] for the lurking behavior, a user's motivation to post is decreased if they are not able to offer any vital or novel information. Furthermore, the authors in [13] Fig. 1: Ternary plots of the percentage of the interactions on social media created by each of the lurker, engager, and contributor groups in fake news datasets: (a) GossipCop and (b) Politifact. In general, the percentage of the interactions recorded by the contributors is more than the other two groups. Out of the users who reacted to the news \(a\), 4% are lurkers, 26% are engagers, and 70% are contributors. mention that one of the reasons a lurker becomes active on social networking sites is when they can gain knowledge as well as propagate it outside the community. Given these reasons, a lurker might engage with a post when they have valuable information to add related to the topic. Thus, we hypothesize that giving importance to such interactions between the posts and lurkers may improve the performance of the different social media analysis applications. For instance, consider the task of fake news detection. This task entails classifying a news article as real or fake by benefiting from the user-news interactions obtained from social media data. However, directly utilizing this network may not be fruitful due to two reasons. First, as mentioned, this interaction may be biased toward the views of the contributors as they are the ones creating about 90% of the interactions. Second, unobserved interactions (i.e., unshared news) do not guarantee that the user was not exposed to the news. A user might be exposed to the article but may choose to refrain from expressing their opinions due to one or more reasons. For example, a user might doubt the post's veracity or a user may feel like they might not add value to the already propagated content. In compliance with the earlier stated hypothesis, if a lurker engages with a news article, they might have more information about the news article. Thus, by up-weighting the limited lurkers' interaction, one may improve the detection capabilities of the fake news detection model. Figure 2 shows a motivational example from the Politifact dataset that includes fact-checked news articles. The example includes the content of the fake news and different tweets that mention the news from three different types of users. In this example, the news provoked the lurker to comment on its falsity. In this work, we only utilized retweet interactions. We propose to leverage re-weighting techniques to verify whether silence speaks volumes. We use the task of fake news detection and evaluate its performance by differentiating between various interactions based on the user categories. 
Our approach learns a representation that reflects the actual landscape of the platform and assigns higher weights to news that triggered the silent users more, as they could potentially offer additional information for fake news detection. The main contribution of this work is three-fold: * To the best of our knowledge, this is the first attempt to consider the types of users based on their activity for fake news detection. * We design, implement, and experiment with two weighting techniques to upvalue the under-represented users on social media platforms and record the performance for the downstream task. * We extend two benchmark datasets in the field of fake news detection to also include the information of a user being a lurker, engager, and contributor which can be utilized for generalized user behavioral analysis. ## II Related Work The proposed methodology spans the subject domains of online participation, class imbalance, and fake news detection. The state-of-the-art in these areas is discussed in this section. ### _Online Participation and Lurking Behavior_ In the field of psychology, behavioral, and social science, there is a wide range of studies dedicated to extracting factors that drive user participation as well as lurking behavior in online social communities. These behavioral factors can be classified into three major categories: (1) individual-level, (2) community-level, and (3) environmental-level. Note that we did not include offline barriers such as _user's available time_ since they were not directly associated with lurking behavior. #### Ii-A1 Individual-level Factors Studies suggest that demographic features such as gender and age as well as personality traits play an important role in online participation [12, 14]. Four prevailing intrinsic characteristics are (1) _extraversion_ that captures quantity and intensity of interpersonal interactions, (2) _neuroticism_ that captures susceptibility to emotional instability, (3) _narcissism_ that captures excessive self-promotional behavior, and (4) _self-efficacy_ that captures self-confidence in one's own ability to successfully accomplish specific tasks or achieve desired outcomes. #### Ii-A2 Community-level Factors The prominent factor related to social and community for online participation is the _social identity_. Social identity is defined as how people perceive themselves as a part of a particular community [15]. In other words, members share information to obtain a sense of belonging and identification. Another influence is the _reciprocity_ factor that looks into how much the community can provide for its members as well as how much an individual can return the benefits and reduce the perceived indebtedness [13, 16]. #### Ii-A3 Environmental-level Factors The most influential factor in the active participation of social media users related to the platform is the high _perceived ease of use_ which is defined as the degree to which the technology is easily understood [17]. Fig. 2: Example of a piece of news content from the Politifact dataset showing the tweets of users from each group. We hypothesize that if a piece of news provokes a lurker to create content on social media, giving importance to such interaction might improve the performance of fake news detection models. On the other hand, the ease of use should not result in limited functionality of the platform as it would lead to a decrease in user engagement. 
Other factors include the privacy-preserving functionality and security-related issues of the platform. There are also multiple factors that would involve two or more of the above categories such as _privacy_ and _security_ of the communities as well as the platform. Nevertheless, if lurkers decide to break their voices, the above factors might play an important role. Out of which the need of giving back to the community (i.e., _reciprocity_) and the confidence in possessing the knowledge to contribute to the online content (i.e., _self-efficacy_) are the core motivations of this paper. ### _Class Imbalance and Long-Tail Distribution_ The natural data classes exhibit a long-tail distribution in which the sample counts across classes are imbalanced. In other words, there are a few classes with a large number of samples while most of the other classes include a relatively fewer number of examples. This poses a challenge as most models are typically trained on artificially balanced datasets, making them vulnerable in practice when applied to real-world data. Various approaches have been developed to address this performance bias, which can be broadly categorized into three groups: (1) re-sampling approaches that involve either under-sampling the majority class or over-sampling the minority class [18], (2) re-weighting methods that apply cost-sensitive learning or loss re-weighting for different classes or different samples [19], and (3) augmentation-based methods in which they artificially expand the dataset by applying transformation functions on the data samples [20]. The concept of imbalancedness in this work is similar to class imbalance problems but varies in terms of its source. In this paper, the task classes are different from the users' participation inequality. For example, sentences extracted from social media for the task of sentiment analysis can be balanced (or imbalanced) in terms of the number of samples for positive, negative, and neutral classes; while the number of users for each user type based on the intensity of their activity who created these sentences still be highly skewed. ### _Disinformation Spreader and Fake News Detection_ In the field of user-based fake news detection and fake news spreader profiling, researchers have utilized different conjunctions of user's profile information, user's activity, user's network connectivity, and user's generated content [1, 21, 22]. Cheng et al. [22] proposed a model to identify the causal relationships between users' profiles and their susceptibility to sharing fake news articles. The authors modeled the dissemination of fake news by creating implicit feedback based on the user's exposure and interest in specific fake news. The learned fake news sharing behavior is then used in improving the detection of fake news. Karami et al. [1] extracted some features from the user's profile information, generated content, and activity that represents their motivational behavior in spreading fake news. They showed the effectiveness of their model in determining which users are more likely to spread fake news. Cardaioli et al. [2] investigated how the behavioral-based features such as Big Five personality and stylometric features extracted from the content of a user's timeline can be used to profile fake news spreaders. Shu et al. 
[23] investigated the importance of explicit features such as register time, follower and following count as well as implicit user meta information such as location and political bias inferred from their online behaviors and historical tweets for the detection. Nevertheless, all the aforementioned methods do not distinguish between lurkers, engagers, and contributors, hence, generalizing the dissemination behavior for all types of users. ## III Problem Statement Let \(\mathcal{X}=\{(x_{1},y_{1}),(x_{2},y_{2}),...,(x_{n},y_{n})\}\) denote a set of \(n\) news articles with labels \(y=0\) for true and \(y=1\) for fake news. Each news article \(x_{i}\) consists of three components: (1) the news content, \(a_{i}\in\mathcal{A}\), which is a sequence of \(k\) words \(\{w_{1},w_{2},...,w_{k}\}\), (2) a set of \(m\) comments containing different views of the users' opinion related to the corresponding news article, \(c_{i}=\{c_{1i},c_{2i},...,c_{mi}\}\in\mathcal{C}\), and (3) a user-news interaction \(u_{ji}\in\mathcal{U}\) with \(p\) number of users. Typically, \(\mathcal{U}\) is a _binary_ matrix representing interaction between user \(j\) and news \(i\): if \(j\) interacts with \(i\) then \(u_{ji}=1\), otherwise \(u_{ji}=0\). Note that \(u_{ji}=0\) can be interpreted as either the user \(j\) was not exposed to the news article \(i\) or was exposed to but due to some reasons (e.g., not sure of the veracity of the news [24]) chose not to propagate it. Based on our hypothesis, to investigate the impact of interactions with under-represented users, we aim to design a fake news detection function that considers the type of users in terms of their activity, \(\mathcal{G}=\{L,E,C\}\). Formally, we can represent the model as follows: Given news articles \(\mathcal{A}\), users' comments \(\mathcal{C}\), and a user-news interaction \(\mathcal{U}\), learn a fake news detection function \(f(\mathcal{A},\mathcal{C},\mathcal{U},\mathcal{G})\rightarrow\hat{y}\) with respect to the users belonging to one of the lurkers (L), engagers (E), and contributors (C) groups \(\mathcal{G}\). ## IV Designing Fake News Detection Model Previous methods in fake news detection either do not consider user-news interaction in their model, or it is appended as a binary matrix with 1 showing the user tweeted or retweeted about specific news. Similar to other social media analysis studies, this news dissemination data in online environments is also biased toward the users who create the majority of the social media content. In other words, the user-news interaction matrix is biased towards the views of the users that are more eager on asserting their opinion about the news but belong to only 1% of the social media population - i.e. the contributors. The focus of this paper is to provide a fair representation by giving more value to the interactions created by lurkers. We design two approaches (Figure 3). The first method balances the user-news interaction matrix which later will be added to the baseline models as a weighted matrix. The second method will apply sample re-weighting based on the activity of the users to see whether this would improve the performance of the downstream task. In this section, we will briefly talk about the text representation learning for news articles as well as the news comments and then introduce our weighting mechanisms. 
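To make the notation of the problem statement concrete before detailing the model components, the following is a minimal Python sketch (our own illustration, not part of the original pipeline) of how the user grouping \(\mathcal{G}\) and the binary user-news interaction matrix \(\mathcal{U}\) could be assembled from raw activity logs. The toy data and variable names are invented, and the daily-activity cut-offs are the illustrative thresholds used later in Section V-A.

```python
import numpy as np

# Toy construction of G (user types) and U (binary user-news interactions).
# Activity logs: user id -> (number of posts observed, days observed).
activity = {
    "u1": (2, 400),    # ~0.005 posts/day -> lurker
    "u2": (30, 400),   # ~0.075 posts/day -> engager
    "u3": (900, 400),  # ~2.25 posts/day  -> contributor
}
LURKER_MAX, ENGAGER_MAX = 0.025, 0.15  # average-activity cut-offs (Section V-A)

def user_group(posts, days):
    rate = posts / days
    if rate <= LURKER_MAX:
        return "L"
    return "E" if rate <= ENGAGER_MAX else "C"

groups = {u: user_group(*stats) for u, stats in activity.items()}  # the map G
users = sorted(activity)                                 # p users
news_ids = ["n1", "n2"]                                  # n news articles
reposts = {("u1", "n1"), ("u3", "n1"), ("u3", "n2")}     # observed (user, news) pairs

# U[j, i] = 1 iff user j reposted news article i, and 0 otherwise.
U = np.zeros((len(users), len(news_ids)), dtype=int)
for j, u in enumerate(users):
    for i, nid in enumerate(news_ids):
        U[j, i] = int((u, nid) in reposts)

print(groups)   # {'u1': 'L', 'u2': 'E', 'u3': 'C'}
print(U)        # 3 x 2 binary interaction matrix
```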
### _News Articles and Users' Comments Representations_ To generate a vector representation of the news content as well as the users' comments, different models apply different text representations. In the task of fake news detection, earlier methods use word-level and sentence-level features such as bag-of-words and n-grams. Recent models use deep learning-based methods such as recurrent neural networks (RNNs), Long Short-Term Memory (LSTM) networks, and Transformers to model sequential data. Transformers use a self-attention mechanism to extract vital information from the input data. Both the news and the comment encoder inputs are text sequences, and they output the vector representation of the text. Formally, if we denote the article content encoder and the comment encoder by \(g_{a}(\cdot)\) and \(g_{c}(\cdot)\), respectively, then for each news article \(i\), \[z_{ia}=g_{a}(w_{1},w_{2},...,w_{k})\quad\text{and}\quad z_{ic}=g_{c}(c_{1},c_{2},...,c_{m}) \tag{1}\] where \(z_{ia}\) and \(z_{ic}\) are the embedding vectors for the news content and the user comments, respectively, \(w_{1},w_{2},...,w_{k}\) is the sequence of the words in the news article and \(c_{1},c_{2},...,c_{m}\) are its corresponding comments. Fig. 3: Two re-weighting strategies were used to learn a balanced representation for the task of fake news detection: (1) Edge Re-weighting (§IV-B) and (2) Sample-level Re-weighting (§IV-C). ### _Edge Re-weighting Mechanism for News Dissemination Network_ The news dissemination network consists of two different types of nodes: users and news. In Figure 4, users are denoted by circles while the news pieces are illustrated by squares. Each user node can belong to one category of lurkers, engagers, or contributors. To handle the imbalancedness of the user types on social media, we propose a weighting mechanism based on the 90-9-1 Rule. The calculated weight is applied to all the edges connected to a square-shaped node based on the type of all its connected circle-shaped nodes. Fig. 4: An example of a network with 11 users (1 lurker, 3 engagers, and 7 contributors) interacting with 6 pieces of news. The interaction vector is a binary vector with 1 indicating the existence of an interaction. The weights are calculated based on equation 3. Formally, we substitute the binary user-news interaction matrix (\(\mathcal{U}\)) in our formulation of the fake news detection function with a normalized weighted version (\(\overline{\mathcal{U}}\)). We propose the following weighting mechanism: \[\overline{u}_{i}=u_{i}\cdot\left(1+\frac{\omega_{i}}{\parallel\omega\parallel} \right)^{\alpha}\quad\forall i\in\{1,...,n\} \tag{2}\] where \(\omega_{i}\) is calculated as follows: \[\begin{split}\omega_{i}&=[0.9\cdot\sum_{j=1}^{p} \mathds{1}_{L}(j)\cdot u_{ji}+0.09\cdot\sum_{j=1}^{p}\mathds{1}_{E}(j)\cdot u _{ji}\\ &+0.01\cdot\sum_{j=1}^{p}\mathds{1}_{C}(j)\cdot u_{ji}]\end{split} \tag{3}\] In the above equations, \(u_{i}\) is the vector recording the interaction activity (i.e., 0 or 1) of every user with news article \(i\). \(n\) and \(p\) are the number of news articles and users, respectively. \(L\), \(E\), and \(C\) are the sets of lurkers, engagers, and contributors. The \(\alpha\geq 0\) is a hyperparameter that controls the intensity of the weighting mechanism. For example, \(\alpha=1\) will apply a weighting based on the 90-9-1 Rule on each user type while \(\alpha=\frac{1}{2}\) is a smoother version of it. Moreover, \(\mathds{1}_{S}(j)\) is an indicator function and is 1 if \(j\in S\), otherwise, it is 0, where
\(S\) is one of the user types. The indicator functions defines which type a specific user belongs to. An example is given in Figure 4. In this figure, for instance, four users interacted with news \(b\), out of which one is a lurker, one is an engager, and two are contributors. The weight is calculated as: \[\omega_{b} =0.9\cdot(\text{\text{\# of lurkers}})+0.09\cdot(\text{\text{\# of engagers}})\] \[+0.01\cdot(\text{\text{\# of contributors}})=0.9\cdot 1+0.09\cdot 1 \tag{4}\] \[+0.01\cdot 2=1.01\] ### _Sample-level Re-weighting Mechanism for News Representation_ Sample re-weighting has been a mainstream approach in creating a robust model when dealing with imbalanced training data [19, 25]. Inspired by this, we trained the models by applying a sample-level re-weighting method based on the users belonging to lurker, engager, or contributor groups. In other words, for the news article \(i\) and \(M\) number of samples in a batch, the normalized weight is integrated into the loss function to model a balanced fake news detection. Formally, \[\mathcal{L}_{balanced}=-\frac{1}{M}\sum_{i=1}^{M}\left(1+\frac{\omega_{i}}{ \parallel\omega\parallel}\right)^{\alpha}\cdot\mathcal{L}_{CE}(y_{i},\hat{y} _{i}) \tag{5}\] where \(y_{i}\) and \(\hat{y}_{i}\) is the true and the predicted labels, respectively. The weights are calculated as a batch-wise version of equation 3. Moreover, \(\mathcal{L}_{CE}\) is the cross-entropy loss, formulated as: \[\mathcal{L}_{CE}(y_{i},\hat{y}_{i})=y_{i}\log(\hat{y}_{i})+(1-y_{i})\log(1- \hat{y}_{i}) \tag{6}\] The batch-wise learning process of the balanced fake news detection and the weighting procedure is provided in Algorithm 1. ``` Input :\(\mathcal{X}_{tr}\); \(\theta^{0}\); epochs; UN Matrix \([u_{ji}]\); \(\alpha\); Lurkers (L), Engagers (E), and Contributors (C) sets. Output :\(\theta^{T}\) 1for\(e=0,...,\text{epochs}\)do: for\(t=0,...,T-1\)do: \(\mathcal{X}_{tr}^{t}\leftarrow\text{SampleMiniBatch}(\mathcal{X}_{tr},t)\) \(\hat{y}_{tr}^{t}\leftarrow\text{Forward}(\mathcal{X}_{tr}^{t},\theta^{t})\) \(\omega_{tr}^{t}\leftarrow\sum_{S\in\{L_{p},C\}}w_{S}\cdot\sum_{j=1}^{p}\mathds{1 }_{S}(j)\cdot u_{ji}\) \(loss=mean\left[\left(1+\frac{\omega_{tr}^{t}}{\parallel\omega\parallel} \right)^{\alpha}\mathcal{L}_{CE}(y_{tr}^{t},\hat{y}_{tr}^{t})\right]\) \(\nabla\theta^{t}\leftarrow\text{Backward}(loss,\theta^{t})\) \(\theta^{t+1}\leftarrow\text{OptimizerStep}(\theta^{t},\nabla\theta^{t})\) endfor endfor ``` **Algorithm 1**Learning to Re-weight News Representations Based on the User Types. ## V Experimental Setting In this section, we describe the details of the experimental setup including the benchmark datasets, dataset preparation, baseline methods, and implementation details. ### _Datasets and Dataset Preparation_ We used two datasets from the FakeNewsNet repository as the seed datasets for the evaluation: Politifact and GossipCop [26]. * Politifact1: a fact-checking website where reporters and editors from the media fact-check political news articles. The URLs of news articles are available on the Politifact website and are used to collect tweets related to them. Footnote 1: [https://www.politifact.com/](https://www.politifact.com/) * GossipCop2: a website for fact-checking entertainment stories aggregated from various media outlets. On the GossipCop website, articles get a score between 0 and 10 as the degree from fake to real. 
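For concreteness, the two weighting schemes defined above (Equations 2, 3, and 5) can be sketched in a few lines of NumPy. The interactions, labels, and predictions below are made up for illustration (this is not the authors' implementation), and the L2 norm is assumed for \(\parallel\omega\parallel\); the sketch reproduces the Figure 4 value \(\omega_{b}=1.01\) for a news article reposted by one lurker, one engager, and two contributors.

```python
import numpy as np

# Rows are users, columns are news articles (made-up interactions).
U = np.array([[0, 1, 0, 0],    # lurker
              [1, 1, 0, 0],    # engager
              [1, 1, 1, 0],    # contributor
              [0, 1, 1, 1]])   # contributor
group = np.array(["L", "E", "C", "C"])
w_type = {"L": 0.9, "E": 0.09, "C": 0.01}          # 90-9-1 weights of Eq. 3

# Eq. 3: omega_i sums, per news article i, the type weight of every interacting user.
per_user_w = np.array([w_type[g] for g in group])
omega = per_user_w @ U                             # one omega per news article
assert np.isclose(omega[1], 1.01)                  # the Fig. 4 / Eq. 4 example (news b)

alpha = 1.0
scale = (1 + omega / np.linalg.norm(omega)) ** alpha   # L2 norm assumed for ||omega||

# Eq. 2: edge re-weighting -- scale every column of U by its news-level factor.
U_weighted = U * scale

# Eq. 5: sample-level re-weighting -- weight the per-article cross-entropy loss.
# (In Eq. 5 the weights are batch-wise; here the four articles form one batch.)
y     = np.array([1.0, 0.0, 1.0, 0.0])             # toy labels (fake = 1)
y_hat = np.array([0.8, 0.3, 0.6, 0.2])             # toy model predictions
ce = -(y * np.log(y_hat) + (1 - y) * np.log(1 - y_hat))
balanced_loss = np.mean(scale * ce)

print(omega)           # [0.1  1.01 0.02 0.01]
print(balanced_loss)
```

Note that the `scale` factor grows with the number and, above all, the type of interacting users, so articles that triggered lurkers receive larger edge weights and contribute more to the loss, which is exactly the behavior the two mechanisms are designed to produce.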
Footnote 2: [https://www.gossipcop.com/](https://www.gossipcop.com/) In these datasets, along with the content of the news, the news comments and IDs of the Twitter users who reposted these fake and real stories are also included. The textual data (i.e., news content and news comments) were pre-processed to remove punctuation, out-of-vocabulary words, URLs, hashtags, and mentions. We utilized the Twitter user IDs to create the user-news interaction matrix. Footnote 1: [https://www.politifact.com/](https://www.politifact.com/) We also collected the history of the activities of each of the Twitter users identified in the Politifact and GossipCop datasets. Some of these users were deleted or suspended accounts and we were not able to access their activity and profile information anymore (9,537 of the GossipCop users and 13,181 of the Politifact users). We ignored these users in our matrix creation. For the rest, to categorize them into the three groups of lurkers, engagers, and contributors, we calculated the average number of activities per day. We set the thresholds on the average number of activities per day for identifying the lurkers and engagers to 0.025 and 0.15, respectively, such that the split approximately follows the 90-9-1 Rule [6] as well as the definition provided in social science behavioral papers [16]. Statistics of the created datasets are summarized in Table I. \begin{table} \begin{tabular}{c c c c} \hline & & **Politifact** & **GossipCop** \\ \hline \hline \multirow{2}{*}{\begin{tabular}{c} **Number of** \\ **News** \\ \end{tabular} } & _Real_ & 132 & 3,588 \\ & _Fake_ & 319 & 2,230 \\ \cline{2-4} & **Total** & 451 & 5,818 \\ \hline \hline \multirow{4}{*}{ \begin{tabular}{c} **Number of** \\ **Interactions** \\ \end{tabular} } & _Lurkers_ & 482 & 382 \\ & _Engagers_ & 4,295 & 3,945 \\ & _Contributors_ & 41,738 & 30,054 \\ \cline{1-1} \cline{2-4} & **Total** & 46,515 & 34,381 \\ \hline \hline **\# of Comments** & & 89,999 & 231,269 \\ \hline \end{tabular} \end{table} TABLE I: Statistics of the Datasets. ### _Baselines_ In this section, for evaluation, we consider state-of-the-art baselines that use both news content and users' comments. To also include the BERT [27] model in the group of baselines, we integrate BERT with a comment encoder for a fair comparison. The following are the details regarding each baseline: * CSI [28]: This method applies a hybrid deep model to capture the characteristics of fake news such as the text of the article, the set of tweets in which users commented about the fake news, and the source of the article such as the credibility of the media source. For a fair comparison, we disregarded the news source feature. * dEFEND [29]: This model applies a deep hierarchical sentence-comment co-attention network. dEFEND learns feature representations of the content and the comments for fake news detection and jointly discovers explainable sentences from these two sources. * TCNN-URG [30]: Based on the convolutional neural network idea for text classification [31], this model tries to capture semantic information from the article's text using a Two-level Convolutional Neural Network (TCNN). Moreover, it incorporates a User Response Generator (URG) module that learns a variational autoencoder to model the user responses to the article and generate responses for unseen news articles. * BERT+HAN: We created a variant of the BERT model that includes the comments to match the other baseline models.
We added the Hierarchical Attention Network for encoding the news comments, following Mosallanezhad et al. [32], which models the importance of each comment along with the salient word features. ### _Implementation Details_ Traditional fake news detection methods only utilize the text of the news to separate fake from real articles. However, integrating auxiliary information provides a more comprehensive representation of the samples and helps improve the performance of the models. For example, news comments provide useful signals for fake news detection [29, 32], since semantic cues, such as signals supporting or doubting the veracity of the content, can be extracted from the comments. On the other hand, user-news interactions can highlight the type of items a user interacts with and further improve the understanding of user behaviors [32, 33, 34]. Moreover, it has been well documented that fake news tends to spread faster than true news articles on social media sites such as Twitter. Thus, incorporating user-item interactions provides additional cues to enhance fake news detection. To study the effectiveness of our weighting mechanism in the task of fake news detection, we integrated this user-news interaction component into each of the baseline models. In other words, the outputs of the news and comment encoders were concatenated with the output of the user-news interaction encoder, which is a feed-forward network, and fed to a dense layer trained for the fake news detection task, similar to the illustration provided in Figure 3 (a minimal sketch of this integration is given after the research questions below). Table II shows the performance (accuracy) of these models with the original architecture, when the binary user-news interaction is added, and when we incorporate the two proposed weighting techniques. To reduce the training time of the BERT+HAN models, we initialize the news and comment encoders by fine-tuning them with the news content and users' comments, respectively. Due to BERT's input size limitation, we truncate each news article and comment to its first 512 words. The embedding dimension for the HAN architecture is set to 100. Both the news content and user comment networks were trained with a simple feed-forward fake news classifier on top, which was removed in the final architecture of the model. Once pre-trained, we merged the news and comment encoders of the BERT+HAN model with the user-news interaction encoder. By passing the news elements (i.e., news content, user comments, and the user-news interaction matrix) through this integrated network, we train the final fake news classifier. We trained the models with early stopping for all the baselines. For the edge re-weighting mechanism, we fed the weighted version of the user-news interaction matrix instead of the binary one, while for the sample-level re-weighting, we changed the loss based on Equation 5. Moreover, we tracked all the experiments using the Weights & Biases tool [35] where applicable. The tuned hyperparameters are the batch size, the number of epochs, and the learning rate. ## VI Experimental Results In this section, we review the designed experiments using the task of fake news detection. Specifically, we aim to answer the following research questions: * How much effect do the designed weighting mechanisms have on the performance of the models? * Which weighting mechanism would capture the voice of the silence better?
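As referenced above, the following is a minimal sketch of how the news encoder, comment encoder, and user-news (UN) interaction encoder could be fused into a single classifier. The module interfaces, layer names, and dimensions are illustrative assumptions and do not correspond to the exact configuration used in the experiments.

```python
import torch
import torch.nn as nn

class IntegratedFakeNewsModel(nn.Module):
    """News encoder + comment encoder + UN interaction encoder, concatenated
    and fed to a dense classification head (cf. Figure 3)."""

    def __init__(self, news_encoder, comment_encoder, num_users,
                 news_dim=768, comment_dim=100, un_dim=64):
        super().__init__()
        self.news_encoder = news_encoder        # e.g., a fine-tuned BERT
        self.comment_encoder = comment_encoder  # e.g., a HAN over comments
        # Feed-forward encoder for the (binary or re-weighted) UN column.
        self.un_encoder = nn.Sequential(nn.Linear(num_users, un_dim), nn.ReLU())
        self.classifier = nn.Sequential(
            nn.Linear(news_dim + comment_dim + un_dim, 128),
            nn.ReLU(),
            nn.Linear(128, 1),                  # logit for fake vs. real
        )

    def forward(self, news_tokens, comment_tokens, un_column):
        h_news = self.news_encoder(news_tokens)        # (batch, news_dim)
        h_comm = self.comment_encoder(comment_tokens)  # (batch, comment_dim)
        h_un = self.un_encoder(un_column)              # (batch, un_dim)
        fused = torch.cat([h_news, h_comm, h_un], dim=-1)
        return self.classifier(fused).squeeze(-1)
```

Swapping the binary `un_column` for its edge re-weighted version, or replacing the standard loss with the balanced loss sketched earlier, recovers the two variants evaluated below.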
Using the available data, one way to investigate whether the voices of the silent users make a difference is to up-weight the silent users' signals and compare the performance of the downstream task with the original case. To be able to apply the weighting procedures with the designed architecture, we first need to integrate the user-news interaction module (i.e., the UN interaction Embedding in Figure 3) into the different baselines introduced in §V-B and record their performance. Comparing the first two accuracy columns in Table II, we can see that the user-news interaction conveys valuable information when added to the current fake news detection algorithms. The average improvement in accuracy is +4.63% for the Politifact dataset, while an average improvement of +8.14% is observed in the GossipCop dataset. In the following sub-sections, we investigate each of the above questions (i.e., **Q1** in §VI-A and **Q2** in §VI-B) along with discussions of the results. ### _How much effect do the designed weighting mechanisms have on the performance of the models?_ To check whether the cues from the silent users in fact carry additional information and can improve the performance of the current models, we apply the proposed re-weighting techniques and examine the performance of the downstream task. As our first attempt at incorporating the type of users who retweeted the news into fake news detection, we started by re-weighting the edges of the user-news network, as described in Section IV-B. As a second re-weighting technique, we added the sample-level re-weighting to the loss of the deep neural network to learn a re-weighting of the inputs, as introduced in Section IV-C. This technique, based on the gradient direction, learns to up-weight those news articles that provoke silent users more, since they may contain additional cues for detection. By comparing the performance values with those of the models with the binary user-news interaction, we can infer how much of the increase in performance is due to the weighting procedure. In other words, we give more importance to the voice of the under-represented groups and check whether this changes the performance of the downstream task. Overall, across all models, the edge re-weighting technique yields average improvements of +2.82% and +1.23% for the Politifact and GossipCop datasets, respectively, compared to the model with the binary user-news interaction. The sample re-weighting technique behaves similarly, with average improvements of +1.66% and +0.55%. In conclusion, when the results of the two techniques are compared with the original architecture of the models and with the case where the binary user-news interaction matrix is added, both techniques provide evidence to support our hypothesis. The improvement, although slight, provides a representation that gives importance to the potential cues in silent users' interactions. The reason for this marginal improvement is mostly the limited positive interaction of the lurkers with the news. For example, out of the 34,381 users who reposted the news in the GossipCop dataset, only 382 are lurkers. Re-weighting these signals helps, but it is not expected to provide a significant improvement. In addition to these signals, if we were able to provide other cues, such as whether a user is interested in a piece of news or a topic, we would expect to see more improvement.
However, due to API limitations, such data is not accessible. ### _Which weighting mechanism would capture the voice of the silence better?_ To see which weighting mechanism is better at capturing the voice of the silence, we can look at the amount of improvement obtained with each method and compare them. Comparing the values in each row of Table II, the highest accuracy is achieved by the edge re-weighting technique in all but one case (i.e., sample re-weighting for the dEFEND model on the Politifact dataset). To better visualize the difference, Figure 5 shows the accuracy gain for both the edge re-weighting and sample re-weighting methods. On average, the edge re-weighting improvements were higher and more consistent across all models than the sample re-weighting values. As another observation, comparing the results on the different datasets used in our experiment, the models' improvement is more evident when the number of news articles is limited. Despite the power of deep neural networks for text classification, their effectiveness and performance highly depend on the quantity and quality of the labeled data. As listed in Table I, the number of news articles in Politifact is 451, while GossipCop contains about 13 times more, with 5,818 articles. However, the edge re-weighting technique provided the models with a more robust representation when the training data are limited and scarce. \begin{table} \begin{tabular}{c c c c c c} \hline \hline **Dataset** & **Model** & Original & With Binary User-News & Edge Re-weighting & Sample Re-weighting \\ & & **Accuracy** & **Accuracy** & **Accuracy** & **Accuracy** \\ \hline \hline \multirow{4}{*}{**Politifact**} & CSI & 81.10 \(\pm\) 1.07 & 85.93 \(\pm\) 2.63 & **87.25 \(\pm\) 1.40** & 86.59 \(\pm\) 1.76 \\ \cline{2-6} & dEFEND & 81.48 \(\pm\) 1.50 & 84.36 \(\pm\) 2.20 & 86.72 \(\pm\) 0.72 & **87.16 \(\pm\) 1.51** \\ \cline{2-6} & TCNN-URG & 80.32 \(\pm\) 2.06 & 86.92 \(\pm\) 1.24 & **92.41 \(\pm\) 2.22** & 88.57 \(\pm\) 0.53 \\ \cline{2-6} & BERT+HAN & 83.04 \(\pm\) 1.35 & 87.25 \(\pm\) 1.32 & **89.67 \(\pm\) 0.80** & 88.79 \(\pm\) 0.82 \\ \hline \hline \multirow{4}{*}{**GossipCop**} & CSI & 85.98 \(\pm\) 0.29 & 88.77 \(\pm\) 0.50 & **91.13 \(\pm\) 0.42** & 89.94 \(\pm\) 0.74 \\ \cline{2-6} & dEFEND & 78.34 \(\pm\) 1.55 & 87.62 \(\pm\) 0.84 & **88.81 \(\pm\) 0.32** & 88.79 \(\pm\) 0.22 \\ \cline{2-6} & TCNN-URG & 81.42 \(\pm\) 2.62 & 85.66 \(\pm\) 0.46 & **85.95 \(\pm\) 0.68** & 85.21 \(\pm\) 1.83 \\ \cline{2-6} & BERT+HAN & 71.86 \(\pm\) 0.00 & 88.14 \(\pm\) 0.41 & **89.21 \(\pm\) 0.17** & 88.42 \(\pm\) 0.33 \\ \hline \hline \end{tabular} \end{table} TABLE II: The performance of the original architecture of the baselines, a variation that includes the binary user-news interaction component (+UN), and variations that incorporate the proposed re-weighting techniques (i.e., user-news edge re-weighting and sample re-weighting). The highest accuracy is bolded for each row. Fig. 5: Accuracy gain of the proposed techniques in comparison with the model with binary UN interaction for (a) PolitiFact and (b) GossipCop. The edge re-weighting method has consistently yielded improvements across all the baselines.
## VII Conclusion and Future Work In this paper, we propose two weighting techniques to up-weight the under-represented users on social media. In our empirical results, the edge re-weighting method was consistent across all the baselines and improved the detection accuracy. It is worth mentioning that the weights assigned in the weighting formula can be adjusted based on the platform. Since some works report that the three-level Nielsen rule is extreme [21], the weights can be realigned, with some statistical analysis, based on users' behavior on different platforms. Moreover, since, to the best of our knowledge, this is the first attempt at considering user types in terms of their activity, more potential solutions can be investigated. Our priority with this work is to raise the issue of _participation inequality_ with the currently deployed models. In this work, due to API limitations, we could only label a user as a lurker if some minimal activity of theirs was recorded. In other words, we only examined the positive interactions and ignored negative ones (i.e., zeros in the UN matrix). Since some lurkers are highly active on social media (e.g., daily logins and content consumption) but do not post any content at all, future work can exchange the user-news interaction matrix with the user exposure matrix [36] and interpret the degree of interest of a user in a piece of news, thereby creating a less sparse user-news interaction matrix. ## Acknowledgment This material is based upon work supported by ONR (N00014-21-1-4002). Opinions, interpretations, conclusions, and recommendations are those of the authors.
2303.00574
Multichromatic Quantum Superpositions in Entangled Two-Photon Absorption Spectroscopy
Quantum information science is driving progress in a vast number of scientific and technological areas that cover molecular spectroscopy and matter-light interactions in general. In these fields, the ability to generate quantum mechanically-entangled photons is opening avenues to explore the interaction of molecules with quantum light. This work considers an alternative way of correlating photons by including energy superpositions. We study how the multichromatic quantum superposition, or color superposition of photon-pair states, influences the optical properties of organic chromophores. This work uses electronic structure calculations based on time-dependent density functional theory, and a simple modification of the standard entangled two-photon absorption theory. Our calculations show that it is possible to substantially modify the optical absorption cross section of molecules, where constructive and destructive interferences are computed. The quantum interference effects are more pronounced than the constructive ones. These quantum effects, or related ones, could be observed in quantum spectroscopic experiments where qudit photon states are generated.
M Wittkop, Juan M. Marmolejo-Tejada, Martín A. Mosquera
2023-03-01T15:16:39Z
http://arxiv.org/abs/2303.00574v1
# Multichromatic quantum superpositions in entangled two-photon absorption spectroscopy ###### Abstract Quantum information science is driving progress in a vast number of scientific and technological areas that cover molecular spectroscopy and matter-light interactions in general. In these fields, the ability to generate quantum mechanically-entangled photons is opening avenues to explore the interaction of molecules with quantum light. This work considers an alternative way of correlating photons by including energy superpositions. We study how the multichromatic quantum superposition, or color superposition of photon-pair states, influences the optical properties of organic chromophores. This work uses electronic structure calculations based on time-dependent density functional theory, and a simple modification of the standard entangled two-photon absorption theory. Our calculations show that it is possible to substantially modify the optical absorption cross section of molecules, where constructive and destructive interferences are computed. The quantum interference effects are more pronounced than the constructive ones. These quantum effects, or related ones, could be observed in quantum spectroscopic experiments where qudit photon states are generated. ## 1 Introduction Quantum spectroscopy offers tools to elucidate molecular systems and materials that both expand and complement techniques based on classical light [1, 2, 3]. This has motivated work that is constantly demonstrating the significant transformative potential of quantum light to understand molecular function and open technological opportunities. Such advances are taking place in parallel with scientific and engineering fields such as photonic quantum computing [4, 5], where the precise and accurate control of correlated photons could bring substantial advantages for diverse, cutting-edge applications. A specific phenomenon that has received widespread attention recently is the absorption and emission of entangled photon pairs by molecules. Entangled two-photon absorption (ETPA) [6], for instance, introduces physical correlations that are not possible in classical two-photon absorption (TPA) spectroscopy [7]. ETPA commonly relies on the generation of entangled photon-pairs, which usually occurs through the well-known spontaneous parametric down conversion (SPDC) process [8]. SPDC provides photon pairs whose polarizations are quantum mechanically correlated, and such entanglement can be verified in EPR-like (Einstein-Podolsky-Rosen) devices [9, 10, 11]. These photons are emitted within a quantum area, and the photons in each entangled pair are delayed with respect to one another [12, 13]. Such delay is expressed in terms of an entanglement time, which endows ETPA techniques with unique properties that continue to be explored by the community to date. Entangled photons can be emitted through other mechanisms such as molecular pathways [14, 15], quantum dots [16, 17], or semiconducting devices [18, 19]. A family of chromophores has been investigated recently: molecules such as Rhodamine 6G [20], Zinc-TPP [21], and thiophene dendrimers [22], among other dyes [23]. ETPA absorption of a molecular unit is commonly quantified through ETPA cross-sections, which have the same units as classical one-photon cross sections, usually expressed in cm\({}^{2}\) units.
Rigorous experimental efforts suggest that ETPA offers considerable quantum advantage over classical TPA (CTPA) [23], especially at low photon (quantum light) fluxes (at extremely low fluxes ETPA will dominate significantly over CTPA). However, the estimation of entangled cross sections with very high accuracy is the subject of current efforts [24, 23]. This is a strong motivation, in our opinion, to further the understanding of the interaction of molecules and materials with quantum light and their connection to quantum technologies. One can hypothesize that experimental and theoretical techniques will continue to advance in these directions, unlocking unexpected and highly beneficial quantum phenomena in a wide variety of systems. Experimental and theoretical studies so far have focused on the interplay between single entangled photon pairs and molecules, where the frequencies of the photons are assumed to be given. For example, in degenerate pumping the frequency of both photons are the same. Recently, however, energy superposition of photons has been achieved for individual photons [25, 26, 27] and for photon pairs [18]. Energy superpositions give rise to the well-known "qudit" states, which generalize the concept of the qubit. Also known as color superposition, in this phenomenon, the color of the photons is undetermined, and each single photon is in a superposition of two (or more) colors. Even though the interaction between photon qubit (or qudit) states and molecules has not been reported experimentally so far, their effects can be explored theoretically. We do so in this case for three chromophores of interest: flavin mononucleotide (FMN) [28, 29], topotecan (TPT) [30, 31, 32, 33], and lucifer yellow (LY) [34]. This work studies the molecular absorption of entangled photon-pairs that also feature multichromatic superpositions and polarization entanglement. These photon pairs could form qudits of six-fold dimensionality (or eight-fold if four colors are used); this work, however, focuses on a qubit representation, as specified herein, but a higher-dimensionality of quantum states is possible. We find that the cross sections in this case show signatures of constructive and destructive interferences, depending on the location of the quantum superposition in the Bloch sphere. For a special set of angles in the Bloch sphere, we notice that the destructive interference can be quite substantial, and for other angles, we observed that the absorption cross section can be enhanced by close to an order of magnitude, whereas the quantum interference can lower absorption by up to two orders of magnitude. These findings then suggest the phenomenon of color-superposition could be of interest for additional quantum control of ETPA cross sections and related experiments. ## 2 Theory This work focuses on the interaction of photon pairs with vertical electronic excited states only. This is a common approximation that is employed in CTPA spectroscopy because full vibronic transitions are quite demanding, computationally. 
For photon-pairs characterized by two unique frequencies \(\omega_{1}\) and \(\omega_{2}\), entanglement time \(T_{\rm e}\), and entanglement area \(A_{\rm e}\), the absorption cross-section is given by [35]: \(\sigma_{f,0}=(4\pi^{3}\alpha a_{0}^{5})/(A_{\rm e}\tau_{0}c)\times g(\omega_{ \rm T}-\Omega_{f})|W_{f,0}|^{2}\), where \(g\) is the line shape function, \(\omega_{\rm T}\) is the sum of the two photon frequencies, \(\tau_{0}\) is the atomic unit of time (\(\tau_{0}=m_{e}a_{0}^{2}/\hbar\)), \(\Omega_{f}\) is the excitation energy for the transition from the ground state to the excited state labeled \(f\) (the ground state is labeled as the 0-th state), \(a_{0}\) is the Bohr length, \(\alpha\) the fine structure constant, and \(c\) the speed of light. The cross-section then has the units that arise from the term \((4\pi^{3}\alpha a_{0}^{5})/(A_{\rm e}\tau_{0}c)\). This result for \(\sigma_{f,0}\) can be derived using second-order perturbation theory and assuming that the photons are modeled as uniform plane wave packets with fronts that are spatially separated by a distance of \(cT_{\rm e}\). In terms of random photon detection times, the time difference between photon detections is then on average the entanglement time, \(T_{\rm e}\). The function \(W_{f,0}\) reads: \[W_{f,0}(\omega_{1},\omega_{2},T_{e})=\sqrt{\frac{\omega_{1}\omega_{2}}{T_{e}}} S_{f,0} \tag{1}\] where \(\omega_{1}\) and \(\omega_{2}\) are the frequencies of the first and second incoming photons, respectively. In the expressed equations, all the frequencies, dipole moments, and entanglement times are expressed in atomic units (a.u.); this includes \(\kappa\) and \(\Gamma\). The term \(S_{f,0}\) represents the transition function (in atomic units): \[S_{f,0}=\sum_{j}\frac{(\vec{\mu}_{fj}\cdot\vec{\epsilon}_{2})(\vec{\mu}_{j0} \cdot\vec{\epsilon}_{1})}{\Omega_{j}-\omega_{1}-{\rm i}\kappa}\Big{\{}1-\exp \big{[}-{\rm i}(\Omega_{j}-\omega_{1}-{\rm i}\kappa)T_{\rm e}\big{]}\Big{\}}+( 1\leftrightarrow 2) \tag{2}\] where \(\kappa\) represents the inverse of the lifetime of the intermediate virtual state, and \(\vec{\epsilon}_{1}\) and \(\vec{\epsilon}_{2}\) are the polarizations of the first and second photon, respectively. The transition dipole vector for a transition from the ground state to excited state "\(j\)" is denoted as \(\vec{\mu}_{j0}\), whereas \(\vec{\mu}_{fj}\) denotes the transition dipole vector for a transition from the \(j\)-th excited state into the final state \(f\). The lineshape function \(g\) is described in terms of a Lorentzian profile of the form \(g(\Delta\omega)=\pi^{-1}(\Gamma/2)/[\Delta\omega^{2}+(\Gamma/2)^{2}]\). We now consider the two-photon packet as being in a superposition of two states: "MC", in which the photons are monochromatic (\(\omega_{1}=\omega_{2}=\omega_{\rm T}/2\)), and "BC", in which the photons are bichromatic, \(\omega_{1}^{\prime}\neq\omega_{2}^{\prime}\); the "primed" quantities are assigned to the BC state. It is assumed that the MC and BC photon-pair quantum states have entanglement times \(T_{\rm e}\) and \(T_{\rm e}^{\prime}\), respectively, but are assigned the same area \(A_{\rm e}\). The quantum superimposed two-photon state is thus described by: \[|\Psi_{\gamma}\rangle=\cos\Big{(}\frac{\theta}{2}\Big{)}|{\rm MC}\rangle+\sin \Big{(}\frac{\theta}{2}\Big{)}e^{{\rm i}\phi}|{\rm BC}\rangle \tag{3}\] We refer to this state as a "multichromatic superposition" (MCS).
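As a numerical illustration of Equations (1) to (3), the sketch below evaluates the transition function \(S_{f,0}\) and the amplitude \(W_{f,0}\) for a toy system with two intermediate states, and then combines a monochromatic and a bichromatic amplitude with the Bloch-sphere coefficients of the superposed state, anticipating the combined amplitude discussed next. All dipole projections, energies, linewidths, and entanglement times below are arbitrary illustrative values in atomic units, not data for the molecules studied in this work.

```python
import numpy as np

def S_f0(omega1, omega2, Te, Omegas, mu_j0_e1, mu_fj_e2, mu_j0_e2, mu_fj_e1, kappa):
    """Transition function of Eq. (2), including the (1 <-> 2) exchange term.
    Omegas   : intermediate excitation energies Omega_j (a.u.)
    mu_j0_e1 : dipoles <j|mu|0> projected on polarization e1, etc."""
    def term(w, d_in, d_out):
        z = Omegas - w - 1j * kappa
        return np.sum(d_out * d_in / z * (1.0 - np.exp(-1j * z * Te)))
    return term(omega1, mu_j0_e1, mu_fj_e2) + term(omega2, mu_j0_e2, mu_fj_e1)

def W_f0(omega1, omega2, Te, **pars):
    """Amplitude of Eq. (1)."""
    return np.sqrt(omega1 * omega2 / Te) * S_f0(omega1, omega2, Te, **pars)

# Toy two-intermediate-state parameters (atomic units), purely illustrative.
pars = dict(Omegas=np.array([0.10, 0.14]),
            mu_j0_e1=np.array([0.8, 0.3]), mu_fj_e2=np.array([0.5, 0.9]),
            mu_j0_e2=np.array([0.6, 0.2]), mu_fj_e1=np.array([0.4, 0.7]),
            kappa=4e-4)
wT, Te = 0.12, 4000.0                        # total two-photon energy and T_e
W_mc = W_f0(wT / 2, wT / 2, Te, **pars)      # monochromatic pair
W_bc = W_f0(wT / 3, 2 * wT / 3, Te, **pars)  # bichromatic pair

# Superposed amplitude following the state of Eq. (3); the cross section is
# proportional to |W|^2, so phi controls constructive/destructive interference.
theta, phi = np.pi / 3, 0.0
W_qs = np.cos(theta / 2) * W_mc + np.sin(theta / 2) * np.exp(1j * phi) * W_qs if False else \
       np.cos(theta / 2) * W_mc + np.sin(theta / 2) * np.exp(1j * phi) * W_bc
print(abs(W_mc) ** 2, abs(W_bc) ** 2, abs(W_qs) ** 2)
```

Scanning `theta` and `phi` over the Bloch sphere and repeating this evaluation yields the kind of constructive and destructive interference maps discussed in the results below.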
The MCS is thus controlled by the Bloch sphere parameters \(\theta\) and \(\phi\), where \(0\leq\phi\leq 2\pi\) and \(0\leq\theta\leq\pi\). Because the MCS photon configuration obeys the standard linear superposition of states, the function \(W_{f,0}\) must also transform as: \[W_{f,0}^{\rm QS}=\cos\Big{(}\frac{\theta}{2}\Big{)}W_{f,0}^{\rm MC}+\sin\Big{(} \frac{\theta}{2}\Big{)}e^{{\rm i}\phi}W_{f,0}^{\rm BC} \tag{4}\] The amplitude \(W_{f,0}^{\rm QS}\), as a quantum mechanical transition element, is also described in terms of the Bloch-sphere angles \(\theta\) and \(\phi\). The \(W\) functions are evaluated as \(W_{f,0}^{\rm MC}=W_{f,0}(\omega_{1},\omega_{2},T_{e})\) and \(W_{f,0}^{\rm BC}=W_{f,0}(\omega_{1}^{\prime},\omega_{2}^{\prime},T_{e}^{\prime})\). Figure 1 shows a pictorial summary of the theoretical concept explored in this work. Figure 1: Pictorial representation of the present theoretical model: A two-photon quantum packet is directed towards a molecule. The two-photon state is a quantum superposition of two possible entangled states, a monochromatic pair, labeled “MC”, and a bichromatic pair, referred to as the “BC” pair. The molecule can then be excited through different pathways where different intermediate states are involved in the overall two-photon excitation, as suggested in this figure. The cross section for a discrete MCS transition of interest is: \[\sigma_{0\to f}^{\rm QS}=\frac{4\pi^{3}\alpha a_{0}^{5}}{A_{e}\tau_{0}c}g(0)|W_{f,0}^{\rm QS}|^{2} \tag{5}\] Here \(\omega_{f,0}=\omega_{T}\), so \(g(0)=2/\pi\Gamma\). The above equation is a relatively simple extension of the standard formula used in standard ETPA spectroscopy, but it now includes the possibility of there being color-superposition. This formula supposes, as expected, that dissipation effects and thereby decoherence in the generation of these MCS photon states are minimal. It is important to notice that perturbation theory demands energy conservation, as the interaction time is assumed to last for a very long time. For this reason we have that \(\omega_{1}+\omega_{2}=\omega_{1}^{\prime}+\omega_{2}^{\prime}=\omega_{\rm T}\). ## 3 Computational Method The transition dipoles are calculated by means of linear-response time-dependent density functional theory (TDDFT). We use the so-called unrelaxed dipoles, or dipoles that come from "CIS-like" linear response TDDFT wavefunctions. This is a practical approximation that works reasonably well, and is computationally efficient. We extract these dipoles with an in-house code that is based on the quantum chemistry suite NWChem [36], version 7.0. Figure 2: Molecular structures considered in this work: Flavin mononucleotide, topotecan, and lucifer yellow. We use the standard B3LYP exchange-correlation functional, and the 6-31G* basis set, which is commonly applied to determine excited-state transitions in the optical regime. As discussed before, there is yet a source of ETPA enhancement that is not well-understood from a theory perspective. In Ref. [37], the use of radiative lifetimes was suggested to obtain ETPA cross-section values in the range of what has been experimentally observable. Motivated by the common practice in TPA theory of using a standard value for the broadening, for the ETPA calculations, we assume a value of \(\Gamma\) that corresponds to \(10^{-8}\) eV, reflecting a final state lifetime of approximately 1.0 ns. For the intermediate state we assume \(\kappa=0.01\) eV.
In all the ETPA calculations the entanglement area is \(A_{e}=1.0\times 10^{-8}\) cm\({}^{2}\). For the standard ETPA (non-color superposition), we assume \(T_{\rm e}=100\) fs. This same value, \(T_{\rm e}=100\) fs, is applied for the MCS (color-superimposed) ETPA: for the BC and MC quantum states of TPT and LY (Figure 2) we take \(T_{\rm e}=T_{\rm e}^{\prime}=100\) fs, but for FMN we set \(T_{\rm e}=100\) fs and \(T_{\rm e}^{\prime}=75\) fs (\(T_{\rm e}^{\prime}\) being the entanglement time of the BC quantum state). Classical TPA spectra are computed for comparison as well. For these we take the standard final-state broadening factor, the classical equivalent of \(\Gamma\), as 0.1 eV, and the intermediate linewidth, the analogue of \(\kappa\), as 0.05 eV. We assume that the photons have cross-polarization: if one photon has horizontal polarization, the other has vertical polarization. The cross sections are averaged with respect to all possible molecular orientations [38]. ## 4 Results and Discussion To illustrate the proof-of-concept for the effect of color/multichromatic superpositions on the ETPA cross sections, we first compute the conventional TPA spectrum and the standard ETPA spectra. Then, we proceed to examine the MCS-ETPA properties of the chromophores selected for our theoretical study. These chromophores are flavin mononucleotide (FMN), topotecan (TPT), and lucifer yellow (LY); their molecular drawings are shown in Figure 2. FMN is a biomolecule produced from riboflavin that forms part of NADH dehydrogenase, TPT is used as a chemotherapy drug, and LY is utilized in spectroscopic studies. The spectra in this work are reported in terms of half of the total two-photon frequency. This half-frequency is denoted as \(\omega_{\rm h}=\omega_{\rm T}/2\). The theoretical CTPA spectra of FMN, TPT, and LY are shown in Figures 3.a and 3.d (the latter as an inset plot). Figure 3: Entangled and classical two-photon absorption cross sections: a) classical TPA profiles of FMN and TPT; b) monochromatic ETPA cross section as a function of \(\omega_{\rm h}\) for FMN and TPT; c) ETPA cross section vs. entanglement time for FMN, TPT, and LY at energies 1.64, 1.94, and 2.33 eV, respectively; d) ETPA profile of LY, and CTPA cross section (inset). We note that FMN has a classical TPA cross section of about 220 GM at around a half-frequency of 1.64 eV (760 nm). As is commonly seen in theoretical TPA spectra, the classical TPA shows comparable cross-section values at higher \(\omega_{\rm h}\) frequencies. A similar numerical trend is noticed for TPT, where the CTPA cross section is about 188 GM at 1.94 eV (640 nm); it then slightly decreases to about 41 GM at 2.12 eV, but rises again to 70 GM around \(\omega_{\rm h}=2.3\) eV. LY, however, shows a different profile, as can be seen in Figure 3.d, where it is relatively low (under 1 GM) for half-frequencies between 2.1 and 2.7 eV, rising to about 8 GM at 2.9 eV. Regarding standard monochromatic ETPA (\(\omega_{1}=\omega_{2}=\omega_{\rm h}\)), Figure 3.b
shows the entangled cross section profiles of FMN and TPT, and Figure 3.d shows that of LY. For these three systems we use \(T_{\rm e}=100\) fs. We see a correlation between the CTPA and ETPA spectra; however, for TPT such a connection between classical and entangled TPA is somewhat less evident, as there are some changes in the relative peak heights. FMN and TPT have ETPA cross section values in the same order of magnitude, \(10^{-20}\) cm\({}^{2}\). The maximum values of FMN and TPT occur at the same frequencies as their classical TPA counterparts, with values close to \(3.0\times 10^{-20}\) cm\({}^{2}\) and \(1.8\times 10^{-20}\) cm\({}^{2}\), respectively. As mentioned earlier, for the three systems considered we use the same linewidth factors. The entanglement time is another variable that affects the magnitude of the ETPA cross sections; the effect of varying it is displayed in Figure 3.c, where we observe, as expected, that the cross section can vary by one order of magnitude, or more, if \(T_{\rm e}\) is increased further beyond the plotted time range. For the LY chromophore, we observe theoretical cross section values lower than those of FMN and TPT, which indicate ETPA activity around 2.3 eV (540 nm) and 2.5 eV (496 nm), and then higher values at 2.9 eV (430 nm). Figure 4 displays the standard ETPA spectra under a non-degenerate condition, where the two photons have different frequencies. In this case we choose \(\omega_{1}^{\prime}=\omega_{\rm T}/3\) and \(\omega_{2}^{\prime}=2\omega_{\rm T}/3\), and refer to this as "bichromatic ETPA". Figure 4: Purely bichromatic ETPA. The frequencies of the BC quantum state are \(\omega_{1}^{\prime}=1/3\times\omega_{T}\) and \(\omega_{2}^{\prime}=2/3\times\omega_{T}\): a) BC ETPA spectra of FMN and TPT, and their comparison to the monochromatic counterpart; b) variation of the FMN and TPT BC ETPA cross sections as a function of the entanglement time (the frequencies \(\omega_{\rm h}\) are the same as in Figure 3.c); c) BC ETPA spectrum of LY; the inset shows the dependency on \(T_{\rm e}\) at 2.9 eV. The bichromatic ETPA spectrum of each molecule shows a few differences with respect to the monochromatic ones. These are mainly a slight enhancement of the ETPA cross section, up to 30 %. However, between 2.0 and 2.4 eV, the degenerate-pumping ETPA spectrum of TPT is not improved much by the bichromatic condition. As in the degenerate case, the cross section at low photon frequency varies by about one order of magnitude for \(T_{\rm e}\) in the range 20 to 100 fs. Due to the parameter \(\kappa\), the ETPA cross-sections in all cases have their oscillations damped. The oscillatory pattern of the BC ETPA case is slightly different from the monochromatic case, but not too significantly. Having determined the difference between the separate MC and BC ETPA cases, we now proceed to examine the multichromatic quantum superposition effect, as expressed in Equation (4). We focus on the highest-intensity transitions, which occur at 1.64, 1.94, and 2.9 eV for FMN, TPT, and LY, respectively. Figure 5: ETPA cross sections based on quantum color superpositions, or quantum MCS. The Bloch sphere parameters are fixed and taken as \(\theta=60^{\circ}\) and \(\phi=0^{\circ}\). For TPT and LY the entanglement times of the BC and MC states are the same, \(T_{\rm e}=T_{\rm e}^{\prime}=100\) fs, but for FMN we study \(T_{\rm e}=100\) fs and \(T_{\rm e}^{\prime}=75\) fs. a) shows the quantum MCS ETPA spectrum of FMN, b) that of TPT, and c) that of LY. Figure 5.a shows the comparison between the quantum superposition effect and the standard MC and BC ETPA cases where there is no superposition. For a polar angle of \(\theta=60^{\circ}\), the two-photon state is a quantum superposition with 75 % weight on the monochromatic state and 25 % on the bichromatic state, yet we
A similar enhancement takes place for TPT and LY (Figures 5.b and.c, respectively), at 1.94 eV and 2.9 eV, correspondingly. This enhancement is also due to setting the phase factor as \(\phi=0^{\circ}\). In additional preliminary calculations, we have noted \(\phi=0^{\circ}\) leads to quantum constructive effects. For the three molecular systems, FMN, TPT, and LY, Figure 6 shows the variation of cross-section values with respect to the Bloch sphere parameters (again, we examine this at the frequencies 1.64 eV, 1.94 eV, and 2.9 eV, correspondingly). The most interesting behavior in terms of quantum constructive effects takes place at the points where theta is between \(60^{\circ}\) and \(120^{\circ}\), while \(\phi\) being close to \(0^{\circ}\) or \(180^{\circ}\). There is thus a region of constructive interference, where the enhancement in cross section doubles. A higher enhancement is possible for different combinations of entanglement times [39], or if the quantum superposition has an additional constructive effect that improves the lifetimes of the intermediate and final excited states involved in the ETPA quantum process (which are not explored in this work). On the other hand, a significant destructive quantum interference takes place in a region center around \(\theta=90^{\circ}\) and \(\phi=180^{\circ}\). The cross section value drops by up to two orders of magnitude under the current parameter selection. It drops by two orders of magnitude (from \(10^{-20}\) cm\({}^{2}\) to \(10^{-22}\) cm\({}^{2}\)) for FMN, one order of magnitude for TPT, and two for LY. Figure 6.b shows the dependency of \(\sigma\) with respect to \(\theta\) for \(\phi=90^{\circ}\). Clearly, the quantum superposition engenders behaviors that are not possible under the separate circumstances, which is to be expected from this type of phenomenon. Therefore, besides present in coherent electronic transfer [40], destructive quantum interference is also possible in photon absorption. As is common in quantum coherent phenomena, there are conditions that must be satisfied to observe quantum superposition effects. On of them is the suppression of decoherence related to vibrational degrees of freedom. These occur during the emission of entangled photons and the absorption of these. It is currently a subject of intensive research determining the influence of entangled photons on the lifetime of excited states. But there are indications that such lifetimes are enhanced by entangled photon absorption [41]. In this work, we assumed fixed linewidth parameters in line with values used before [37] to obtain theoretical values consistent with experimental measurements. However, as in standard ETPA, the effect of MCS on the lifetimes could be a subject of further research as well, as additional quantum correlations could Figure 6: Two-dimensional heat plots for FMN (subfigure a) at \(\omega_{\rm h}=1.64\) eV, TPT (subfigure c) at 1.94 eV, and LY (subfigure d) at 2.9 eV, and a “cut” for \(\phi=90^{\circ}\) for the FMN system. have unexpected consequences on electronic lifetimes and other quantum properties. ## 5 Conclusion In this work we investigated the interaction between entangled photon pairs, that in addition to having the standard polarization correlation, feature energy superpositions. That is, the frequency of the photons are undetermined prior to interaction with matter. We referred to this phenomenon as a multichromatic superposition. 
The state of the entangled photon pair was represented in the well-known Bloch sphere, or qubit space. In comparison to standard ETPA simulations, we observed the emergence of constructive and destructive effects in the quantum cross section profiles. The constructive enhancement of the cross section was computed to be nearly 100 %, whereas the destructive interference could reduce the cross section by around two orders of magnitude. Our work then suggests that these types of coherent effects could be added to the toolkits of quantum control within the context of optical quantum spectroscopy. ## 6 Acknowledgments The authors kindly thank the MonArk NSF Quantum Foundry supported by the National Science Foundation Q-AMASE-i program under NSF award No. DMR-1906383.
2307.02694
Loss Functions and Metrics in Deep Learning
When training or evaluating deep learning models, two essential parts are picking the proper loss function and deciding on performance metrics. In this paper, we provide a comprehensive overview of the most common loss functions and metrics used across many different types of deep learning tasks, from general tasks such as regression and classification to more specific tasks in Computer Vision and Natural Language Processing. We introduce the formula for each loss and metric, discuss their strengths and limitations, and describe how these methods can be applied to various problems within deep learning. We hope this work serves as a reference for researchers and practitioners in the field, helping them make informed decisions when selecting the most appropriate loss function and performance metrics for their deep learning projects.
Juan Terven, Diana M. Cordova-Esparza, Alfonso Ramirez-Pedraza, Edgar A. Chavez-Urbiola, Julio A. Romero-Gonzalez
2023-07-05T23:53:55Z
http://arxiv.org/abs/2307.02694v3
# Loss Functions and Metrics in Deep Learning ###### Abstract One of the essential components of deep learning is the choice of the loss function and performance metrics used to train and evaluate models. This paper reviews the most prevalent loss functions and performance measurements in deep learning. We examine the benefits and limits of each technique and illustrate their application to various deep-learning problems. Our review aims to give a comprehensive picture of the different loss functions and performance indicators used in the most common deep learning tasks and help practitioners choose the best method for their specific task. _Keywords:_ deep learning, loss functions, performance metrics ###### Contents * 1 Introduction * 2 Loss Functions vs. Performance Metrics * 2.1 Properties of loss functions * 3 Regression * 3.1 Regression Loss Functions * 3.1.1 Mean Squared Error (MSE) * 3.1.2 Mean Absolute Error (MAE) * 3.1.3 Huber Loss * 3.1.4 Log-Cosh Loss * 3.1.5 Quantile Loss * 3.1.6 Poisson Loss * 3.2 Regression Performance Metrics
* 6.1 Segmentation Loss Functions * 6.1.1 Cross Entropy Loss for Segmentation * 6.1.2 Intersection Over Union (IoU) loss for segmentation * 6.1.3 Dice Loss * 6.1.4 Tversky loss * 6.1.5 Lovasz Loss * 6.2 Segmentation Metrics * 6.2.1 Pixel Accuracy * 6.2.2 Boundary F1 Score (BF) * 6.2.3 Panoptic Quality (PQ) * 7 Face Recognition * 7.1 Face Recognition Loss Functions and Metrics * 7.1.1 Softmax Loss * 7.1.2 A-Softmax Loss * 7.1.3 Center Loss * 7.1.4 CosFace: Large-Margin Cosine Loss * 7.1.5 ArcFace. Additive Angular Margin Loss * 7.1.6 Triplet Loss * 7.1.7 Contrastive Loss * 7.1.8 Circle Loss * 7.1.9 Barlow Twins Loss * 7.1.10 SimSiam Loss * 8 Image Generation * 8.1 Image Generation Loss functions * 8.1.1 Reconstruction Loss * 8.1.2 Kullback-Leibler Divergence Loss * 8.1.3 Adversarial Loss * 8.1.4 Wasserstein loss * 8.1.5 Negative Log-likelihood in Normalizing Flows * 8.1.6 Contrastive Divergence * 8.2 Image Generation Metrics * 8.2.1 Peak Signal-to-Noise Ratio (PSNR) * 8.2.2 Structural Similarity Index (SSIM) * 8.2.3 Inception Score (IS) * 8.2.4 Frechet Inception Distance (FID) * 9 Discussion * 10 Conclusion ## 11 Acknowledgments ### Acronyms **AP**: Average Precision. **AUC-ROC**: Area under the Receiver Operating Characteristic curve. **BCE**: Binary Cross Entropy. **BF**: Boundary F1 Score. **CCE**: Categorical Cross Entropy. **COCO**: Common Objects in Context. **FDR**: False Discovery Rate. **FID**: Frechet Inception Distance. **FPR**: False Positive Rate. **IoU**: Intersection Over Union. **IS**: Inception Score. **KL**: Kullback-Leibler. **MAE**: Mean Absolute Error. **MAPE**: Mean Absolute Percentage Error. **MSE**: Mean Square Error. **NPV**: Negative Predictive Value. **PQ**: Panoptic Quality. **PSNR**: Peak Signal-to-Noise Ratio. **RMSE**: Root Mean Square Error. **SMAPE**: Symmetric Mean Absolute Percentage Error. **SSIM**: Structural Similarity Index. **TDR**: True Discovery Rate. **TPR**: True Positive Rate. **VOC**: Visual Object Classes. **WBCE**: Weighted Binary Cross Entropy. **YOLO**: You Only Look Once. ## 1 Introduction Deep Learning has become a powerful tool for solving complex problems in various fields, such as image [1, 2, 3, 4, 5] and speech recognition [6, 7, 8, 9, 10], natural language processing [11, 12, 13, 14, 15], and computer vision [16, 17, 18]. One of the critical components of Deep Learning is the choice of the loss function and performance metrics used to train and evaluate models.
Loss functions measure how well a model can approximate the desired output, while performance metrics evaluate the model's ability to make accurate predictions on unseen data. Selecting a suitable loss function and performance metric is crucial for achieving good performance in deep learning tasks. However, with a wide range of options available, it can be challenging for practitioners to choose the most appropriate method for their specific task. This paper aims to comprehensively review the most commonly used loss functions and performance metrics in deep learning. We will discuss the advantages and limitations of each method and provide examples of their application in various Deep Learning tasks. We begin by discussing regression and classification's most commonly used loss functions, including mean squared error, cross-entropy, and hinge loss. Then, we explain their advantages and limitations and when they are typically used. For example, mean squared error is widely used for regression tasks, while cross-entropy is used for classification tasks. We will also examine more complex tasks such as object detection, segmentation, face recognition, and image generation. Along the way, we review the most commonly used performance metrics in each category, explaining how these metrics are calculated, their advantages and limitations, and when they are typically used. ## 2 Loss Functions vs. Performance Metrics A loss function and a performance metric are both used to evaluate the performance of a deep learning model, but they serve different purposes. A loss function is used during training to optimize the model's parameters. It measures the difference between the predicted and expected outputs of the model, and the goal of training is to minimize this difference. On the other hand, a performance metric is used to evaluate the model after training. It measures how well the model can generalize to new data and make accurate predictions. Performance metrics also compare different models or configurations to determine the best-performing one. The following list describes the common differences between loss functions and performance metrics: * **Optimization vs. Evaluation**: As mentioned previously, loss functions optimize the model's parameters during training. In contrast, performance metrics evaluate the model's performance after training. * **Model-Dependence**: Loss functions depend on the model's architecture and the specific task. Performance metrics, however, are less dependent on the model's architecture and can be used to compare different models or configurations of a single model. * **Minimization vs. Maximization**: The goal of training a deep learning model is to minimize the loss function. However, evaluating a model aims to maximize the performance metric --except for error performance metrics such as Mean Squared Error. * **Interpretation**: Loss functions can be challenging to interpret as their values are often arbitrary and depend on the specific task and data. On the other hand, performance metrics are often more interpretable as they are used across different tasks. ### Properties of loss functions The loss functions have a series of properties that need to be considered when selected for a specific task: 1. **Convexity**: A loss function is convex if any local minimum is the global minimum. Convex loss functions are desirable because they can be easily optimized using gradient-based optimization methods. 2. 
2. **Differentiability**: A loss function is differentiable if its derivative with respect to the model parameters exists and is continuous. Differentiability is essential because it allows the use of gradient-based optimization methods.
3. **Robustness**: Loss functions should be able to handle outliers and not be unduly affected by a small number of extreme values.
4. **Smoothness**: A loss function should have a continuous gradient with no sharp transitions or spikes.
5. **Sparsity**: A sparsity-promoting loss function encourages the model to produce sparse output. This is useful when working with high-dimensional data where the number of important features is small.
6. **Multi-modality**: A loss function is considered multi-modal if it has multiple global minima. Multi-modal loss functions can be useful for tasks requiring the model to learn multiple data representations.
7. **Monotonicity**: A loss function is monotonic if its value decreases as the predicted output approaches the true output. Monotonicity ensures that the optimization process is moving toward the correct solution.
8. **Invariance**: A loss function is invariant if it remains unchanged under particular input or output transformations. Invariance is valuable when working with data that may be transformed in various ways, such as rotation, scaling, or translation.

The following sections review the loss functions and performance metrics for common deep learning tasks. Table 1 summarizes common vision-related tasks together with their commonly used loss functions and performance metrics.

## 3 Regression

Regression is a supervised learning problem in machine learning that aims to predict a continuous output value based on one or more input features. Common regression models include linear regression, polynomial regression, and regression trees.

_Linear regression_ assumes a linear relationship between the independent and dependent variables. It is represented by the equation

\[\hat{y}=\beta_{0}+\beta_{1}x_{1}+\beta_{2}x_{2}+\cdots+\beta_{n}x_{n}, \tag{1}\]

where \(\hat{y}\) is the predicted value, \(\beta_{0}\) is the intercept or bias, \(\beta_{1},\beta_{2},...,\beta_{n}\) are the coefficients or weights corresponding to the input features or independent variables \(x_{1},x_{2},...,x_{n}\), and \(n\) is the number of input features. The goal is to find the bias and the coefficient values that minimize the difference between the predicted and actual values, usually using a loss function such as Mean Squared Error (MSE) or Mean Absolute Error (MAE).

In _polynomial regression_, the relationship between the independent variable \(x\) and the dependent variable \(y\) is modeled as an \(n^{th}\) degree polynomial. This is useful for capturing complex, non-linear relationships between input and output variables. The general form of a polynomial regression equation is given by

\[\hat{y}=\beta_{0}+\beta_{1}x+\beta_{2}x^{2}+\beta_{3}x^{3}+\cdots+\beta_{n}x^{n}, \tag{2}\]

where \(\hat{y}\) is the predicted value, \(\beta_{0}\) is the intercept or bias, \(\beta_{1},\beta_{2},...,\beta_{n}\) are the coefficients corresponding to the powers of \(x\), and \(n\) is the degree of the polynomial. Like linear regression, the objective is to find the bias and the coefficients that minimize the difference between the predicted and the actual values. However, high-degree polynomials tend to overfit: the model becomes excessively complex, performing well on training data but poorly on unseen or test data.
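As a concrete illustration of the preceding paragraphs, the sketch below fits linear and polynomial regression models by minimizing the mean squared error with NumPy's least-squares solver. The synthetic dataset and the chosen degrees are illustrative assumptions, not taken from the text.

```python
import numpy as np

# Illustrative 1-D dataset: a noisy cubic relationship (purely synthetic).
rng = np.random.default_rng(0)
x = np.linspace(-3, 3, 200)
y = 0.5 * x**3 - x + rng.normal(scale=1.0, size=x.shape)

def design_matrix(x, degree):
    """Columns [1, x, x^2, ..., x^degree] for polynomial regression."""
    return np.vander(x, degree + 1, increasing=True)

def fit_least_squares(x, y, degree):
    """Estimate coefficients beta that minimize the squared error ||X beta - y||^2."""
    X = design_matrix(x, degree)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

def mse(y_true, y_pred):
    return np.mean((y_true - y_pred) ** 2)

for degree in (1, 3, 9):  # a linear, a cubic, and an over-flexible model
    beta = fit_least_squares(x, y, degree)
    y_hat = design_matrix(x, degree) @ beta
    print(f"degree {degree}: training MSE = {mse(y, y_hat):.3f}")
```

Increasing the degree always lowers the training MSE, but, as noted above, high-degree fits are prone to overfitting; evaluating on a held-out set would expose this.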
_Regression trees_, on the other hand, are a type of decision tree where the output is a continuous variable. Unlike linear and polynomial regression models that establish a single prediction equation, regression trees split the input space into smaller regions where a simple model is used. The tree is built during training through a process known as binary recursive partitioning. The output for a new instance is predicted by traversing the tree until a leaf node is reached. The value associated with the leaf node is typically the mean target value of the training samples in this node. Unlike polynomial regression, this model can capture complex, non-linear relationships and interactions between features without specifying them explicitly. However, regression trees can also overfit the training data if not properly pruned or controlled, leading to poor generalization performance on new, unseen data. Figure 1 shows a regression tree. Regression is used in various domains, including finance, healthcare, social sciences, sports, and engineering. Some practical applications include house price prediction [19], energy consumption forecasting [20], healthcare and disease prediction [21], stock price forecasting [22], and customer lifetime value prediction [23]. In the following subsections, we will review the most common lost functions and performance metrics used for regression. \begin{table} \begin{tabular}{l l l} \hline Deep Learning Task & Loss Functions & Performance Metrics \\ \hline Regression & MSE (3.1.1), MAE (3.1.2) & MSE (3.1.1), MAE (3.1.2) \\ & Huber loss (3.1.3), Log-Cosh (3.1.4) & RMSE (3.2.1), MAPE (3.2.2) \\ & Quantile loss (3.1.5) & SMAPE (3.2.3) \\ & Poisson loss (3.1.6) & \(R^{2}\) (3.2.4), Adjusted \(R^{2}\) (3.2.5) \\ \hline Binary Classification & BCE (4.1.1) & Accuracy (4.2.2), Precision (4.2.3) \\ & WBCE (4.1.2) & Recall (4.2.4), F1-Score (4.2.6) \\ & Hinge loss (4.1.7) & AUC-ROC (4.2.14) \\ & Focal loss (4.1.6) & PR Curve (4.2.13) \\ \hline Multi-Class Classification & CCE (4.1.3) & Accuracy (4.2.2) \\ & Sarse CCE (4.1.4) & Precision (4.2.3) \\ & CCE w/label smoothing (4.1.5) & Recall or TPR (4.2.4) \\ & Focal loss (4.1.6) & F1-Score (4.2.6), F2-Score (4.2.7) \\ & Hinge loss (4.1.7) & PR Curve (4.2.13) \\ \hline Object Detection & Smooth L1 (5.1.1) & AP (5.2.1) \\ & IoU loss (5.1.2) & AR (5.2.2) \\ & Focal loss (4.1.6) & \\ & YOLO loss (5.1.3) & \\ \hline Semantic Segmentation & CCE & IoU (5.1.2), \\ & IoU loss (5.1.2) & Pixel Accuracy (6.2.1), \\ & Dice Loss (6.1.3) & AP (5.2.1) \\ & Tversky loss (6.1.4) & BF (6.2.2) \\ & Lovasz loss (6.1.5) & \\ \hline Instance Segmentation & CCE (4.1.3) & AP (5.2.1) \\ & IoU loss (6.1.2) & \\ & Smooth L1 (5.1.1) & \\ \hline Panoptic Segmentation & CCE (4.1.3) & PQ (6.2.3) \\ & Dice Loss (6.1.3) & \\ \hline Face Recognition & A-Softmax (7.1.2) & Accuracy (4.2.2) \\ & Center loss (7.1.3) & Precision (4.2.3) \\ & CosFace (7.1.4) & Recall (4.2.4) \\ & ArcFace (7.1.5) & F1-Score (4.2.6) \\ & Triplet loss (7.1.6) & \\ & Contrastive loss (7.1.7) & \\ & Circle loss (7.1.8) & \\ \hline Image Generation & Adversarial Loss (8.1.3) & PSNR (8.2.1) \\ & Reconstruction loss (8.1.1) & SSIM (8.2.2) \\ & KL Divergence (8.1.2) & IS (8.2.3), \\ & Wasserstein Loss (8.1.4) & FID (8.2.4) \\ & Constrastive Divergence (8.1.6) & \\ \hline Image-to-Image Translation & Adversarial Loss (8.1.3), & Perceptual Measures, MOS \\ & Cycle-Consistency Loss & \\ \hline Style Transfer & Content Loss, & Perceptual Measures, MOS \\ & Style Loss & \\ \hline Image 
Super-resolution & MSE (3.1.1), & PSNR (8.2.1), SSIM (8.2.2) \\ & Perceptual Loss & \\ \hline Depth Estimation & MSE (3.1.1) & RMSE (3.2.1), Absolute Relative \\ & Depth Error & \\ \hline Pose Estimation & MSE (3.1.1) & PCK, OKS \\ \hline Optical Character Recognition (OCR) & CCE (4.1.3), & Accuracy (4.2.2), Precision (4.2.3), \\ & CTC Loss & Recall (4.2.4), F1-Score (4.2.6) \\ \hline \end{tabular} \end{table} Table 1: Loss functions and performance metrics for deep learning tasks. ### Regression Loss Functions Table 2 shows the common loss functions used for regression and their applications. The following subsections describe each of these loss functions in more detail. #### 3.1.1 Mean Squared Error (MSE) The Mean Square Error (MSE) measures the average of the squared differences between the predicted values and the true values [24]. The MSE loss function can be defined mathematically as \[MSE=\frac{1}{n}\sum_{i=1}^{n}(y_{i}-\hat{y}_{i})^{2}, \tag{3}\] where \(n\) is the number of samples, \(y_{i}\) is the true value of the \(i^{th}\) sample and \(\hat{y}_{i}\) is the predicted value of the \(i^{th}\) sample. The MSE loss function has the following properties: * Non-negative: Since the differences between the predicted and actual values are squared, MSE is always non-negative. A value of 0 indicates a perfect fit, while larger values correspond to higher discrepancies between predictions and actual values. \begin{table} \begin{tabular}{l l} \hline Loss Function & Applications \\ \hline Mean Squared Error (MSE) & Linear Regression, Ridge Regression, Lasso Regression, \\ Neural Networks, Support Vector Regression, Decision \\ Trees, Random Forests, Gradient Boosting \\ \hline Mean Absolute Error (MAE) & Quantile Regression, Robust Regression, L1 Regression, \\ Neural Networks, Decision Trees, Random Forests, Gradient Boosting \\ \hline Huber Loss & Robust Linear Regression, Robust Neural Networks, Gradient Boosting, Random Forests \\ \hline Log-Cosh Loss & Robust Regression, Neural Networks, Gradient Boosting \\ \hline Quantile Loss & Quantile Regression, Distributional Regression, Extreme \\ Value Prediction \\ \hline Poisson Loss & Poisson Regression, Count Data Prediction, Generalized \\ & Linear Models, Neural Networks, Gradient Boosting \\ \hline \end{tabular} \end{table} Table 2: Loss Functions and their applications in regression tasks. Figure 1: Regression Tree. * Quadratic: MSE is a quadratic function of the prediction errors, which means it places more emphasis on larger errors than smaller ones. This property makes it sensitive to outliers and can lead to models that prioritize reducing large errors over smaller ones. * Differentiable: MSE is a smooth and continuous function for the model parameters. This property allows for the efficient computation of gradients, which is essential for optimization algorithms like gradient descent. * Convex: MSE is a convex function, which means it has a unique global minimum. This property simplifies the optimization process, as gradient-based optimization techniques can converge to the global minimum without getting trapped in local minima. However, for deep neural networks, the error landscape is generally non-convex due to the multiple layers of non-linear activation functions, leading to a complex and highly non-linear optimization problem. * Scale-dependent: The value of MSE depends on the scale of the target variable, making it difficult to compare the performance of models across different problems or target variable scales. 
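To make the definition and properties above concrete, here is a minimal NumPy sketch of the MSE and of its gradient with respect to the predictions; the sample values are made up for illustration.

```python
import numpy as np

def mse_loss(y_true, y_pred):
    """Mean squared error, Eq. (3)."""
    return np.mean((y_true - y_pred) ** 2)

def mse_gradient(y_true, y_pred):
    """Gradient of the MSE with respect to the predictions: (2/n) * (y_pred - y_true).

    This simple closed-form gradient is what makes MSE convenient for
    gradient-based optimizers.
    """
    return 2.0 / y_true.shape[0] * (y_pred - y_true)

y_true = np.array([3.0, -0.5, 2.0, 7.0])
y_pred = np.array([2.5, 0.0, 2.0, 8.0])

print("MSE:", mse_loss(y_true, y_pred))          # 0.375
print("gradient:", mse_gradient(y_true, y_pred))

# One large residual dominates the average because errors are squared,
# which is why MSE is sensitive to outliers.
y_true_o = np.append(y_true, 7.0)
y_pred_o = np.append(y_pred, 100.0)
print("MSE with one outlier:", mse_loss(y_true_o, y_pred_o))
```

Note that the resulting value is expressed in squared units of the target, so it cannot be compared directly across targets with different scales.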
For this purpose, researchers often use the root mean squared error (RMSE) or mean squared percentage error (MSPE). The MSE, also called L2 loss, is computationally simple. However, it is not robust to outliers due to the square of the error term. Thus if the data includes outliers, it is better to use another loss function, such as Mean Absolute Error (MAE) which is more robust to outliers, or Huber Loss, which is a combination of MSE and MAE. The MSE is also used as a performance metric. #### 3.1.2 Mean Absolute Error (MAE) The Mean Absolute Error (MAE) is another commonly used loss function in regression problems. It measures the average of the absolute differences between the predicted values and the true values [25]. The MAE loss can be defined as \[MAE=\frac{1}{n}\sum_{i=1}^{n}|y_{i}-\hat{y_{i}}|, \tag{4}\] where \(n\) is the number of samples, \(y_{i}\) and \(\hat{y_{i}}\) are the true and predicted value of the \(i^{th}\) sample. The MAE loss function has the following properties: * Non-negative: Like MSE, MAE is always non-negative because it takes the absolute value of the differences between predicted and actual values. A value of 0 indicates a perfect fit, while larger values correspond to higher discrepancies between predictions and actual values. * Linear: MAE is a linear function of the prediction errors, which treats all errors equally regardless of their magnitude. This property makes MAE less sensitive to outliers than MSE, as it does not disproportionately emphasize large errors. * Robust: Due to its linear nature and reduced sensitivity to outliers, MAE is considered a more robust loss function than MSE. This makes it suitable for applications where the presence of outliers is expected or the distribution of errors is not symmetric. * Non-differentiable: Although MAE is continuous, it is not differentiable when the prediction error is zero due to the absolute value function. This property can complicate the optimization process for specific algorithms, particularly those relying on gradient-based techniques. However, subgradient methods[26, 27, 28, 29] can be employed to overcome this issue. * Convex: MAE is a convex function, which means it has a unique global minimum. This property simplifies the optimization process, as gradient-based optimization techniques can converge to the global minimum without getting trapped in local minima. Like the MSE, the MAE is non-convex for Deep neural networks due to the multiple layers with non-linear activation functions. * Scale-dependent: Like MSE, the value of MAE depends on the scale of the target variable, making it difficult to compare the performance of models across different problems or target variable scales. To address this issue, researchers often use scale-invariant metrics such as mean absolute percentage error (MAPE) or normalized mean absolute error (NMAE) to compare models across different scales or units. The MAE, called L1 loss, is often used as an evaluation metric. It is computationally simple and easy to understand, but it does not have the smooth and differentiable property of the MSE and is not sensitive to outliers. #### 3.1.3 Huber Loss The Huber loss combines the properties of both Mean Squared Error (MSE) and Mean Absolute Error (MAE). Huber loss is designed to be more robust to outliers than MSE while maintaining smoothness and differentiability [30]. 
The Huber loss function is defined as \[L(y,\hat{y})=\begin{cases}\frac{1}{2}(y-\hat{y})^{2}&\text{for }|y-\hat{y}|\leq \delta\\ \delta(|y-\hat{y}|-\frac{1}{2}\delta)&\text{otherwise},\end{cases} \tag{5}\] where \(y\) is the true value, \(\hat{y}\) is the predicted value, and \(\delta\) is a user-specified threshold value. When the error is small, the Huber loss function behaves like the MSE loss function, and when the error is large, the Huber loss function behaves like the MAE loss function. This property makes the Huber loss function more robust to outliers than the MSE loss function, as it is less sensitive to large errors. The Huber loss function is differentiable, which makes it suitable for use in gradient-based optimization algorithms such as stochastic gradient descent (SGD). It is commonly used in linear regression and time series forecasting, as it can handle outliers and noise in the data. It is also used in robust optimization problems where the data may contain outliers or noise. The threshold \(\delta\) can be chosen empirically by trying different values and evaluating the model's performance. However, common practice is to set \(\delta\) to a small value if the data has a lot of noise and to a large value if the data has outliers. #### 3.1.4 Log-Cosh Loss The Log-Cosh loss function is smooth and differentiable. It is commonly used in regression problems where the data may contain outliers or noise [31]. The Log-Cosh loss is defined as \[L(y,\hat{y})=\frac{1}{n}\sum_{i=1}^{n}\log(\cosh(y_{i}-\hat{y}_{i})), \tag{6}\] where \(y\) is the true value, \(\hat{y}\) is the predicted value and \(n\) is the number of samples. One of the advantages of the log-cosh loss function is that it is less sensitive to outliers than the mean squared error (MSE), as it is not affected by extreme data values. However, it is more sensitive to small errors than the Huber loss. #### 3.1.5 Quantile Loss Also known as quantile regression loss, this function is often used for predicting an interval instead of a single value [32]. If we denote the quantile as \(q\) where \(0<q<1\), and the predicted and actual values as \(\hat{y}\) and \(y\) respectively, then the quantile loss is given by \[L(y,\hat{y})=q\cdot\max(y-\hat{y},0)+(1-q)\cdot\max(\hat{y}-y,0) \tag{7}\] \(\max(a,b)\) represents the maximum of \(a\) and \(b\). The expression \(y-\hat{y}\) is used when the prediction underestimates, and \(\hat{y}-y\) is used when the prediction overestimates. The loss is scaled by \(q\) for underestimations and \((1-q)\) for overestimations. Note that when \(q=0.5\), the quantile loss is equivalent to the Mean Absolute Error (MAE), making it a generalization of MAE that allows for asymmetric penalties for underestimations and overestimations. Overestimation occurs when a model's prediction exceeds the actual value. Underestimation is the opposite of overestimation. It occurs when a model's prediction is lower than the actual value. Practical examples of quantile regression include: **Financial Risk Management**: To estimate Value-at-Risk (VaR) and Conditional Value-at-Risk (CVaR), which are measures of financial risk used in risk management. These quantile-based measures help to understand the potential for extreme losses [33]. **Supply Chain and Inventory Management**: Predicting demand for products can benefit from quantile loss as it can give a range of potential demand rather than a single point, which can help manage inventory and reduce stockouts or overstock situations [34]. 
**Energy Production**: To predict power output, having a range of potential outputs to manage grid stability [35]. **Economic Forecasting**: Predicting economic indicators can use quantile regression to give a range of possible values, which can help planning and policy-making [36]. **Weather Forecasting**: Can be useful for predicting variables like temperature or rainfall, where providing a range can be more informative than a single-point estimate [37, 38]. **Real Estate Pricing**: Predicting the price of a property within a range can be more useful than predicting a single price [39]. **Healthcare**: Quantile regression can predict a range of possible patient outcomes based on a set of features, which can assist doctors in making more informed decisions [40]. #### 3.1.6 Poisson Loss Poisson loss is used in regression tasks when the target variable represents count data and is assumed to follow a Poisson distribution. The Poisson loss is derived from the negative log-likelihood of the Poisson distribution. It maximizes the likelihood of observing the count data given the predicted values [41]. It is defined as \[L(y,\hat{y})=\frac{1}{n}\sum_{i=1}^{n}(\hat{y}_{i}-y_{i}\log(\hat{y}_{i})), \tag{8}\] where \(y_{i}\) represents the actual target value, \(\hat{y}_{i}\) is the predicted value, and \(n\) is the number of samples. When applying the Poisson loss function to model count data, we must ensure that the predicted values are non-negative since negative counts are not meaningful in real-world scenarios. To achieve this, it is common to use a link function that transforms the linear combination of input features to a non-negative output, which can then be interpreted as the expected count. A link function is a mapping from the linear predictor to the predicted value. In the context of Poisson regression, the exponential function is a common choice for the link function because it guarantees non-negative outputs. The exponential function has the following form: \[\hat{y}_{i}=\exp(\mathbf{w}^{\top}\mathbf{x}_{i}+b), \tag{9}\] where \(\mathbf{w}\) is a vector of weights, \(\mathbf{x}_{i}\) is a vector of input features for the \(i\)-th observation, and \(b\) is the bias term. Using the exponential function as a link function, we ensure that the predicted values \(\hat{y}_{i}\) are always non-negative. In this case, the Poisson loss function can be written as \[L(y,\hat{y})=\frac{1}{n}\sum_{i=1}^{n}\left(\exp(\mathbf{w}^{\top}\mathbf{x}_{ i}+b)-y_{i}\log(\exp(\mathbf{w}^{\top}\mathbf{x}_{i}+b))\right) \tag{10}\] The Poisson distribution is typically used for modeling the number of times an event occurred in an interval. Here are some examples of applications where Poisson loss can be useful. **Traffic Modeling**: Poisson regression can predict the number of cars that pass through a toll booth during a given time interval based on factors like the time of day, day of the week, and weather conditions [42]. **Healthcare**: Epidemiology can predict the number of disease cases in different regions based on variables like population density, vaccination rates, and social behavior patterns [43]. **Insurance**: In the insurance industry, it can be used to model claim counts for certain types of insurance policies [44]. **Customer Service**: Poisson regression can be used to predict the number of calls that a call center receives during different times of the day, to aid in staff scheduling [45]. 
**Internet Usage**: It can be used to model the number of website visits or clicks on an ad during a given time interval to help understand user behavior and optimize ad placement [46]. **Manufacturing**: It can predict the number of defects or failures in a manufacturing process, helping in quality control and maintenance planning [47]. **Crime Analysis**: Poisson regression can be used to model the number of occurrences of certain types of crimes in different areas to help in police resource allocation and crime prevention strategies [48]. ### Regression Performance Metrics Table 3 shows the most common metrics used in regression tasks. The following sections delve into more details on each of these metrics skipping the mean square error (MSE) and the mean absolute error (MAE) because they are the same discussed previously as loss functions. #### 3.2.1 Root Mean Squared Error (RMSE) The Root Mean Square Error (RMSE) is the square root of the mean squared error (MSE) defined as \[RMSE=\sqrt{\frac{1}{n}\sum_{i=1}^{n}(y_{i}-\hat{y}_{i})^{2}}, \tag{11}\] where \(y_{i}\) is the true value, \(\hat{y}_{i}\) is the predicted value, and \(n\) is the number of samples. The RMSE measures the average deviation of the predictions from the true values. This metric is easy to interpret because it is in the same units as the data. However, it is sensitive to outliers. Lower RMSE values indicate better model performance, representing smaller differences between predicted and actual values. #### 3.2.2 Mean Absolute Percentage Error (MAPE) The Mean Absolute Percentage Error (MAPE) measures the average percentage error of the model's predictions compared to the true values. It is defined as \[MAPE=\frac{1}{n}\sum_{i=1}^{n}\frac{|y_{i}-\hat{y}_{i}|}{y_{i}}\times 100, \tag{12}\] where \(y_{i}\) is the true value, \(\hat{y}_{i}\) is the predicted value, and \(n\) is the number of samples. One of the advantages of using MAPE is that it is easy to interpret, as it is expressed in percentage terms. It is also scale-independent, which can be used to compare models across different scales of the target variable. However, it has two limitations: it can produce undefined results when \(y_{i}\) is zero and is sensitive to outliers. #### 3.2.3 Symmetric Mean Absolute Percentage Error (SMAPE) The Symmetric Mean Absolute Percentage Error (SMAPE) is a variation of the Mean Absolute Percentage Error (MAPE) commonly used to evaluate the accuracy of predictions in time series forecasting [49]. 
SMAPE is defined as \[SMAPE=\frac{2}{n}\sum_{i=1}^{n}\frac{|y_{i}-\hat{y}_{i}|}{|y_{i}|+|\hat{y}_{i} |}*100, \tag{13}\] \begin{table} \begin{tabular}{l l} \hline Performance Metric & Applications \\ \hline Mean Squared Error (MSE) & General-purpose regression, model selection, \\ & optimization, linear regression, neural networks \\ \hline Root Mean Squared Error (RMSE) & General-purpose regression, model selection, \\ & optimization, linear regression, neural networks \\ \hline Mean Absolute Error (MAE) & General-purpose regression, model selection, \\ & optimization, robustness to outliers, \\ & time series analysis \\ \hline R-squared (R\({}^{2}\)) & Model evaluation, goodness-of-fit, linear regression, \\ & multiple regression with many predictors \\ \hline Adjusted R-squared & Model evaluation, goodness-of-fit, linear regression, \\ & multiple regression with many predictors \\ \hline Mean Squared Logarithmic Error (MSLE) & Forecasting, model evaluation, skewed \\ & target distributions, finance, sales prediction \\ \hline Mean Absolute Percentage Error (MAPE) & Forecasting, model evaluation, time series analysis, \\ & business analytics, supply chain optimization \\ \hline \end{tabular} \end{table} Table 3: Common performance metrics used in regression. where \(y_{i}\) is the true value, \(\hat{y}_{i}\) is the predicted value, and \(n\) is the number of samples. One of the advantages of using SMAPE is that it is symmetric, which means that it gives equal weight to over-predictions and under-predictions. This is particularly useful when working with time series data, where over-predictions and under-predictions may have different implications, and SMAPE helps to ensure that the model is equally penalized for both types of errors, leading to better overall performance in terms of how well it meets the business needs or objectives. However, SMAPE has some limitations; for example, it can produce undefined results when both \(y_{i}\) and \(\hat{y}_{i}\) are zero and can be sensitive to outliers. The implications of over-predictions and under-predictions varied depending on the application. In the following, we discuss real-world examples. **Inventory Management**: Over-predicting demand can lead to excess inventory, which ties up capital and can result in waste if products expire or become obsolete. Under-predicting demand can lead to stockouts, lost sales, and damage to customer relationships [50]. A symmetric error measure like SMAPE penalizes both cases because over-prediction and under-prediction have costly implications. **Energy Demand Forecasting**: Over-prediction of energy demand can cause unnecessary production, leading to waste and increased costs. Under-prediction can lead to insufficient power generation, resulting in blackouts or the need for expensive on-demand power generation [51]. **Financial Markets**: In financial markets, over-prediction of a stock price might lead to unwarranted investments resulting in financial loss, while under-prediction might result in missed opportunities for gains [52]. **Sales Forecasting**: Over-prediction of sales could lead to overstaffing, overproduction, and increased costs, while under-prediction could lead to understaffing, missed sales opportunities, and decreased customer satisfaction [53]. **Transportation and Logistics**: Over-predicting the demand for transportation might lead to underutilized vehicles or routes, resulting in unnecessary costs. Under-predicting demand might lead to overcrowding and customer dissatisfaction [54]. 
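The metrics from Sections 3.2.1 to 3.2.3 (RMSE, MAPE, and SMAPE) can be implemented directly from their formulas, as in the sketch below. The small `eps` guard against division by zero is an implementation choice rather than part of the definitions, and the data is invented for illustration.

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root mean squared error, Eq. (11)."""
    return np.sqrt(np.mean((y_true - y_pred) ** 2))

def mape(y_true, y_pred, eps=1e-12):
    """Mean absolute percentage error, Eq. (12); undefined when a true value is 0,
    so a small eps is used in the denominator as a practical safeguard."""
    return np.mean(np.abs(y_true - y_pred) / np.maximum(np.abs(y_true), eps)) * 100

def smape(y_true, y_pred, eps=1e-12):
    """Symmetric mean absolute percentage error, Eq. (13)."""
    denom = np.maximum(np.abs(y_true) + np.abs(y_pred), eps)
    return 2.0 * np.mean(np.abs(y_true - y_pred) / denom) * 100

y_true = np.array([100.0, 200.0, 300.0, 400.0])
y_pred = np.array([110.0, 190.0, 330.0, 360.0])

print(f"RMSE  = {rmse(y_true, y_pred):.2f}")
print(f"MAPE  = {mape(y_true, y_pred):.2f}%")
print(f"SMAPE = {smape(y_true, y_pred):.2f}%")
```

RMSE is reported in the units of the target, while MAPE and SMAPE are percentages, which is why the latter two are preferred when comparing forecasts across series with different scales.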
#### 3.2.4 Coefficient of Determination \(R^{2}\)

The Coefficient of Determination (\(R^{2}\)) measures how well the model can explain the variation in the target variable [55]. \(R^{2}\) is defined as the proportion of the variance in the target variable that the model explains. It ranges from 0 to 1, where 0 means that the model does not explain any variation in the target variable, and 1 means that the model explains all the variation in the target variable. The formula for R-squared is

\[R^{2}=1-\frac{\sum_{i=1}^{n}(y_{i}-\hat{y}_{i})^{2}}{\sum_{i=1}^{n}(y_{i}-\bar{y})^{2}}, \tag{14}\]

where \(y_{i}\) is the true value, \(\hat{y}_{i}\) is the predicted value, \(\bar{y}\) is the mean of the true values, and \(n\) is the number of samples.

**Benefits and Limitations of R-squared**

Some of the main benefits of \(R^{2}\) are:

1. **Measures the relationship between the model and the response variable**: R-squared describes the strength of the relationship between the model and the response variable on a convenient 0 to 1 scale.
2. **Interpretable**: It can be more interpretable than other statistics because it provides a percentage that can be intuitively understood.
3. **Helps in model selection**: If we have two models, we can compare their R-squared values as part of the selection process. The model with the higher R-squared could indicate a better fit.

The limitations of \(R^{2}\) include:

1. **Misleading with non-linear relationships**: \(R^{2}\) works as intended in a simple linear regression model with one explanatory variable but can be misleading with more complex, nonlinear, or multiple regression models.
2. **Influenced by the number of predictors**: \(R^{2}\) always increases as we add more predictors to a model, even if they are unrelated to the outcome variable. This can lead to overly complex models that overfit the data. This is the motivation for the adjusted \(R^{2}\), which adjusts the \(R^{2}\) value based on the number of predictors in the model.
3. **Sensitive to outliers**: \(R^{2}\) is sensitive to outliers.
4. **Does not check for biased predictions**: \(R^{2}\) cannot determine whether the coefficient estimates and predictions are biased, which is to say, whether the predictions systematically over- or underestimate the actual values.
5. **Limitation with small sample sizes**: When the sample size is small, the \(R^{2}\) value might be unreliable. It can be artificially high or low and might not represent the true strength of the relationship between the variables.

#### 3.2.5 Adjusted \(R^{2}\)

Adjusted \(R^{2}\) is a modified version of \(R^{2}\) that accounts for the number of predictors in the model. It increases only if a new term improves the model more than would be expected by chance, and it decreases when a predictor improves the model by less than expected by chance [56]. The adjusted R-squared is defined as

\[\text{Adjusted }R^{2}=1-\left(\frac{(1-R^{2})(n-1)}{n-k-1}\right), \tag{15}\]

where \(n\) is the number of observations and \(k\) is the number of predictors. The adjustment is a penalty for adding unnecessary predictors to the model, and the penalty increases with the number of predictors. This is particularly useful in multiple regression, where several predictors are used simultaneously. The Adjusted \(R^{2}\) is often used for model comparison because, unlike regular \(R^{2}\), it does not necessarily increase when more variables are added to the model. It is useful when we need to compare models of different sizes.
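A short sketch of both statistics, computed directly from Eqs. (14) and (15); the toy data and the assumed number of predictors are invented for illustration.

```python
import numpy as np

def r_squared(y_true, y_pred):
    """Coefficient of determination, Eq. (14)."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return 1.0 - ss_res / ss_tot

def adjusted_r_squared(y_true, y_pred, n_predictors):
    """Adjusted R^2, Eq. (15); penalizes extra predictors."""
    n = len(y_true)
    r2 = r_squared(y_true, y_pred)
    return 1.0 - (1.0 - r2) * (n - 1) / (n - n_predictors - 1)

# Toy example: 10 observations, predictions from a model assumed to use 3 predictors.
y_true = np.array([3.1, 4.0, 5.2, 6.1, 6.9, 8.2, 9.1, 9.8, 11.0, 12.1])
y_pred = np.array([3.0, 4.2, 5.0, 6.3, 7.1, 8.0, 9.3, 10.0, 10.8, 12.0])

print("R^2          :", round(r_squared(y_true, y_pred), 4))
print("Adjusted R^2 :", round(adjusted_r_squared(y_true, y_pred, n_predictors=3), 4))
```

With \(n=10\) observations and \(k=3\) predictors the adjustment lowers the score only slightly; the penalty grows as more predictors are added.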
Unlike \(R^{2}\), its value can be negative, meaning that the model is a poor fit for the data. ## 4 Classification Classification is a supervised machine learning task in which a model is trained to predict the class or category of a given input data point. Classification aims to learn a mapping from input features to a specific class or category. There are different classification tasks, such as binary classification, multi-class classification, and multi-label classification. Binary classification is a task where the model is trained to predict one of two classes, such as "spam" or "not spam," for an email. Multi-class classification is a task where the model is trained to predict one of the multiple classes, such as "dog," "cat," and "bird," for an image. Multi-label classification is a task where the model is trained to predict multiple labels for a single data point, such as "dog" and "outdoor," for an image of a dog in the park. Classification algorithms can be based on techniques such as decision trees, Naive Bayes, k-nearest neighbors, Support Vector Machines, Random Forest, Gradient Boosting, Neural Networks, and others. ### Classification Loss Functions Several loss functions can be used for classification tasks, depending on the specific problem and algorithm. In the following sections, we describe the most common loss functions used for classification: #### 4.1.1 Binary Cross-Entropy Loss (BCE) The Binary Cross Entropy (BCE), also known as log loss, is a commonly used loss function for binary classification problems. It measures the dissimilarity between the predicted probability of a class and the true class label [57]. Cross-entropy is a well-known concept in information theory commonly used to measure the dissimilarity between two probability distributions. In binary classification, the true class is usually represented by a one-hot encoded vector, where the true class has a value of 1, and the other class has a value of 0. The predicted probability is represented by a vector of predicted probabilities for each class, where the predicted probability of the true class is denoted by \(p(y=1|x)\) and the predicted probability of the other class is denoted by \(p(y=0|x)\). The loss function is defined as \[L(y,p)=-(y\log(p)+(1-y)\log(1-p)) \tag{16}\] Which intuitively can be split into two parts: \[\begin{cases}-log(p)&\text{if }y=1\\ -log(1-p)&\text{if }y=0,\end{cases} \tag{17}\] where \(y\) is the true class label (0 or 1) and \(p\) is the predicted probability of the positive class. The loss function is minimized when the predicted probability \(p\) equals the true class label \(y\). The binary cross-entropy loss has several desirable properties, such as being easy to compute, differentiable, and providing a probabilistic interpretation of the model's output. It also provides a smooth optimization surface and is less sensitive to outliers than other loss functions. However, it is sensitive to the class imbalance problem, which occurs when the number of samples of one class is significantly greater than the other. We can use the _Weighted Binary Cross Entropy_ for these cases. #### 4.1.2 Weighted Binary Cross Entropy (WBCE) Variation of the standard binary cross-entropy loss function, where the weight of each sample is considered during the loss calculation. This is useful in situations where the distribution of the samples is imbalanced [58]. 
In the standard binary cross-entropy loss, the loss is calculated as the negative log-likelihood of the true labels given the predicted probabilities. In the Weighted Binary Cross Entropy (WBCE), a weight is assigned to each sample, and the loss for each sample is calculated as

\[L=-w_{i}\left(y\log(p)+(1-y)\log(1-p)\right), \tag{18}\]

where \(w_{i}\) is the weight assigned to the \(i^{th}\) sample, \(y\) is the true label, and \(p\) is the predicted probability of the positive class. By assigning a higher weight to samples from under-represented classes, the model is encouraged to pay more attention to these samples, and the model's overall performance can be improved.

#### 4.1.3 Categorical Cross-entropy Loss (CCE)

The Categorical Cross Entropy (CCE), also known as the negative log-likelihood loss or multi-class log loss, is a function used for multi-class classification tasks. It measures the dissimilarity between the predicted probability distribution and the true distribution [59]. Given the predicted probability distribution, it is defined as the average negative log-likelihood of the true class. The formula for categorical cross-entropy loss is expressed as

\[L=-\frac{1}{N}\sum_{i=1}^{N}\sum_{j=1}^{C}y_{i,j}\log(p_{i,j}), \tag{19}\]

where \(N\) is the number of samples, \(C\) is the number of classes, \(y_{i,j}\) indicates whether sample \(i\) belongs to class \(j\), and \(p_{i,j}\) is the predicted probability that sample \(i\) belongs to class \(j\). The loss is calculated for each sample and averaged over the entire dataset. In traditional categorical cross-entropy loss, the true label is a one-hot encoded vector, where the element corresponding to the true class is 1 and all other elements are 0. However, in some cases it is more convenient to represent the true class as an integer, where the integer value corresponds to the index of the true class, leading to the _sparse categorical cross-entropy loss_ discussed next.

#### 4.1.4 Sparse Categorical Cross-entropy Loss

A variation of the categorical cross-entropy loss used for multi-class classification tasks where the classes are encoded as integers rather than one-hot encoded vectors [59]. Given that the true labels are provided as integers, we directly select the correct class using the provided label index instead of summing over all possible classes. Thus, the loss for each example is calculated as

\[H(y_{i},\hat{y}_{i})=-\log(\hat{y}_{i,y_{i}}) \tag{20}\]

and the final sparse categorical cross-entropy loss is the average over all the samples:

\[H(Y,\hat{Y})=-\frac{1}{n}\sum_{i=1}^{n}\log(\hat{y}_{i,y_{i}}), \tag{21}\]

where \(y_{i}\) is the true class of the \(i\)-th sample and \(\hat{y}_{i,y_{i}}\) is the predicted probability of the \(i\)-th sample for the correct class \(y_{i}\).

#### 4.1.5 Cross-Entropy loss with label smoothing

In the Cross-Entropy loss with label smoothing, the hard labels are _smoothed_ by subtracting a small value from the true class and distributing it across the other classes. This helps reduce the model's overconfidence by encouraging it to produce more uncertain predictions [60, 61]. The motivation behind this is that, when training a model, it is common for it to become overconfident in its predictions, particularly when trained on a large amount of data. This overconfidence can lead to poor performance on unseen data. Label smoothing helps to mitigate this problem by encouraging the model to make less confident predictions.
The formula for the Cross-Entropy loss with label smoothing is similar to the standard categorical cross-entropy loss but with a small epsilon added to the true label and subtracted from all other labels. The formula is given by \[L(y,\hat{y})=-\sum_{c=1}^{C}\left[(1-\epsilon)y_{c}\log\hat{y}_{c}+\frac{ \epsilon}{C}\log\hat{y}_{c}\right], \tag{22}\] where \(y\) is the true label, \(\hat{y}\) is the predicted label, \(C\) is the number of classes, and \(\epsilon\) is the smoothing value. Typically, \(\epsilon\) is set to a small value, such as 0.1 or 0.2. Label smoothing does not always improve performance, and it is common to experiment with different epsilon values to find the best value for a specific task and dataset. #### 4.1.6 Focal loss The focal loss introduced by Tsung-Yi Lin et al. [62] is a variation of the standard cross-entropy loss that addresses the issue of class imbalance, which occurs when the number of positive samples (objects of interest) is much smaller than the number of negative samples (background). In such cases, the model tends to focus on the negative samples and neglect the positive samples, leading to poor performance. The focal loss addresses this issue by down-weighting the easy negative samples and up-weighting the hard positive samples. The focal loss is defined as \[FL(p_{t})=-\alpha_{t}(1-p_{t})^{\gamma}log(p_{t}), \tag{23}\] where \(p_{t}\) is the predicted probability for the true class, \(\alpha_{t}\) is a weighting factor that controls the importance of each example, and \(\gamma\) is a focusing parameter that controls the rate at which easy examples are down-weighted. The weighting factor \(\alpha_{t}\) is usually set to the inverse class frequency to balance the loss across all classes. The focusing parameter \(\gamma\) is typically set to a value between 2 and 4 to give more weight to hard examples. In the original paper, the authors used a sigmoid activation function for binary classification and the cross-entropy loss for multi-class classification. The focal loss is combined with these loss functions to improve the performance of object detection and semantic segmentation models. In recent works, focal loss has been used in object detection, semantic [63], instance segmentation [64], and human pose estimation [65]. #### 4.1.7 Hinge Loss Hinge loss is a popular function used for _maximum-margin_ classification, commonly used for support vector machines (SVMs) for example in _one-vs-all_ classification where we classify an instance as belonging to one of many categories and situations where we want to provide a margin of error [66]. The hinge loss function for an individual instance can be represented as \[L(y,f(x))=\max(0,1-y\cdot f(x)), \tag{24}\] where \(y\) is the true label of the instance, which should be -1 or 1 in a binary classification problem. \(f(x)\) is the predicted output for the instance \(x\). The raw margin is \(y\cdot f(x)\). The hinge loss is 0 if the instance is on the correct side of the margin. The loss is proportional to the distance from the margin for data on the wrong side of the margin. ### Classification Performance Metrics Table 4 summarizes the common metrics used for classification. The following sections will delve into each of these metrics. #### 4.2.1 Confusion Matrix The confusion matrix is used to define a classification algorithm's performance. It contains the number of true positives (TP), true negatives (TN), false positives (FP), and false negatives (FN) that result from the algorithm. 
The confusion matrix for a binary classification problem is represented in a 2x2 table as shown in Table 5. For example, consider a binary classification problem where the algorithm tries to predict whether an image contains a cat. The confusion matrix for this problem would look like Table 6: Where: * TP: the number of images correctly classified as cats. * TN: the number of images correctly classified as not cats. \begin{table} \begin{tabular}{l c c} \hline \multicolumn{2}{c}{Predicted Positive} & Predicted Negative \\ \hline Actual Positive & True Positive (TP) & False Negative (FN) \\ \hline Actual Negative & False Positive (FP) & True Negative (TN) \\ \hline \end{tabular} \end{table} Table 6: Confusion Matrix \begin{table} \begin{tabular}{l l l l l} \hline Common Name & Other Names & Abbr & Definitions & Interpretations \\ \hline True Positive & Hit & TP & True Sample & Correctly labeled \\ & & & labeled true & True Sample \\ \hline True Negative & Rejection & TN & False Sample & Correctly labeled \\ & & & labeled false & False sample \\ \hline False Positive & False alarm & FP & False sample & Incorrectly labeled \\ & Type I Error & & labeled True & False sample \\ \hline False Negative & Miss, & FN & True sample & Incorrectly label \\ & Type II Error & & labeled false & True sample \\ \hline \hline Recall & True Positive & TPR & TP/(TP+FN) & \(\%\) of True samples \\ & Rate & & & correctly labeled \\ \hline Specificity & True Negative & SPC, & TN/(TN+FP) & \(\%\) of False samples \\ & Rate & TNR & & correctly labeled \\ \hline Precision & Positive Predictive Value & PPV & TP/(TP+FP) & \(\%\) of samples labeled \\ & & & & True that really are True \\ \hline Negative Predictive Value & & NPV & TN/(TN+FN) & \(\%\) of samples labeled \\ & & & & False that really are False \\ \hline \hline False Negative & & FNR & FN/(TP+FN)= & \(\%\) of True samples \\ & Rate & & 1-TPR & incorrectly labeled \\ \hline False Positive & Fall-out & FPR & FP/(FP+FN)= & \(\%\) of False samples \\ & & & 1-SPC & incorrectly labeled \\ \hline False Discovery & FDR & FP/(TP+FP)= & \(\%\) of samples labeled \\ & Rate & & 1-PPV & True that are really False \\ \hline True Discovery & TDR & FN/(TN+FN)= & \(\%\) of samples labeled \\ & Rate & & 1-NPV & False that are really True \\ \hline \hline Accuracy & ACC & \(\frac{(TP+TN)}{(TP+TN+FP+FN)}\) & Percent of samples \\ & & & correctly labeled \\ \hline F1 Score & F1 & \(\frac{(2*TP)}{((2*TP)+FP+FN)}\) & Approaches 1 as \\ & & & errors decline \\ \hline \end{tabular} \end{table} Table 4: Metrics used in classification task. \begin{table} \begin{tabular}{l c c} \hline \multicolumn{2}{c}{Predicted Positive} & Predicted Negative \\ \hline Actual Positive & True Positive (TP) & False Negative (FN) \\ \hline Actual Negative & False Positive (FP) & True Negative (TN) \\ \hline \end{tabular} \end{table} Table 5: Confusion Matrix * FP: the number of images incorrectly classified as cats. * FN: the number of images incorrectly classified as not cats. Using the values in the confusion matrix, we can calculate performance metrics such as accuracy, precision, recall, and F1-score. #### 4.2.2 Accuracy Accuracy is a commonly used metric for object classification. It is the ratio of correctly classified samples to the total number of samples [67]. 
Mathematically, it can be represented as \[Accuracy=\frac{Number\ of\ Correctly\ Classified\ Samples}{Total\ Number\ of\ Samples} \tag{25}\] Accuracy can be expressed in terms of the confusion matrix values as \[Accuracy=\frac{TP+TN}{TP+FP+TN+FN} \tag{26}\] It is a simple and intuitive metric, but it can be misleading when the class distribution is imbalanced, as it tends to favor the majority class. For example, let's assume that we want to predict the presence of cancer in a cell. If for every 100 samples, only one contains cancer, a useless model that always predicts "No cancer" will have an accuracy of 99%. Other metrics, such as precision, recall, or F1-score, are more appropriate in these cases. #### 4.2.3 Precision Precision measures the accuracy of positive predictions. It is defined as the number of true positive predictions divided by the number of true positive predictions plus the number of false positive predictions [68]. Mathematically, it can be represented as \[Precision=\frac{TP}{TP+FP}, \tag{27}\] where \(TP\) is the number of true positive predictions, and \(FP\) is the number of false positive predictions. Precision is useful when the cost of a false positive prediction is high, such as in medical diagnosis or fraud detection. A high precision means the model is not generating many false positives, so the predictions are reliable. However, it is important to note that precision is not the only metric to consider when evaluating a model's performance, as high precision can also be achieved by a model that is not generating many positive predictions at all, which would result in a low recall. #### 4.2.4 Recall, Sensitivity, or True Positive Rate (TPR) The recall metric, also known as sensitivity or True Positive Rate (TPR), measures the proportion of true positive instances (i.e., instances correctly classified as positive) out of the total number of positive instances [68]. Mathematically, it is represented as \[Recall=\frac{TruePositives}{TruePositives+FalseNegatives} \tag{28}\] It measures how well the model can identify all the positive instances in the dataset. A high recall value indicates the model has fewer false negatives, meaning it can correctly identify the most positive instances. However, a high recall value does not necessarily mean the model has a high precision, as the number of false positives can also influence it. #### 4.2.5 Precision-Recall Tradeoff The precision-recall tradeoff refers to the inverse relationship between precision and recall. As one metric increases, the other tends to decrease. Imagine a machine learning model trying to predict whether an email is spam. If the model is tuned to be very conservative and only marks an email as spam when confident, it is likely to have high precision (i.e., if it marks an email as spam, it is very likely to be spam). However, this conservative approach means it will probably miss many spam emails it is unsure about, leading to a lower recall. Conversely, if the model is tuned to be liberal and marks emails as spam more freely, it will probably identify most spam emails correctly, leading to a high recall. However, this approach will also incorrectly mark many non-spam emails as spam, leading to a lower precision. This tug-of-war between precision and recall is the crux of the tradeoff. An optimal balance between the two must be found depending on the use case. 
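The tradeoff is easy to see numerically by sweeping the decision threshold of a scoring classifier, as in the sketch below; the labels and scores are made up for illustration.

```python
import numpy as np

def precision_recall_at_threshold(y_true, scores, threshold):
    """Precision and recall when scores >= threshold are labeled positive."""
    y_pred = (scores >= threshold).astype(int)
    tp = np.sum((y_pred == 1) & (y_true == 1))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    precision = tp / (tp + fp) if (tp + fp) > 0 else 1.0  # convention when nothing is predicted positive
    recall = tp / (tp + fn) if (tp + fn) > 0 else 0.0
    return precision, recall

# Hypothetical ground-truth labels and classifier scores, invented for illustration.
y_true = np.array([0, 0, 0, 1, 0, 1, 0, 1, 1, 1])
scores = np.array([0.05, 0.15, 0.25, 0.35, 0.45, 0.55, 0.65, 0.75, 0.85, 0.95])

for t in (0.3, 0.5, 0.7, 0.9):
    p, r = precision_recall_at_threshold(y_true, scores, t)
    print(f"threshold={t:.1f}  precision={p:.2f}  recall={r:.2f}")
```

Raising the threshold makes the classifier more conservative: precision rises toward 1 while recall falls, which is exactly the tradeoff described above.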
For instance, in a medical context, a high recall might be prioritized to ensure that all possible disease cases are identified, even at the expense of some false positives. On the other hand, a spam detection system might aim for high precision to avoid annoying users with wrongly classified emails, accepting that some spam messages might slip through. The precision-recall tradeoff is a crucial consideration when tuning machine learning models. Maximizing both metrics is only sometimes possible; thus, a balance must be struck based on the requirements and constraints of the specific application. #### 4.2.6 F1-score The F1 score combines precision and recall to provide a single value representing a classification model's overall performance [68]. It is defined as the harmonic mean of precision and recall computed as \[F1=2*\frac{precision\cdot recall}{precision+recall} \tag{29}\] The F1 score considers both the model's ability to correctly identify positive examples (precision) and the ability of the model to identify all positive examples in the dataset (recall). A higher F1 score indicates that the model has a better balance of precision and recall, whereas a low F1 score indicates that the model may have a high precision or recall but not both. It is particularly useful when the class distribution is imbalanced, or we want to give equal weight to precision and recall. #### 4.2.7 F2-score The F2 score is a variation of the F1 score, with more weight given to the recall metric. The F2 score is the harmonic mean of precision and recall, with a weighting factor of 2 for recall [68]. The formula for the F2 score is \[F2=(1+2^{2})\frac{Precision*Recall}{2^{2}*Precision+Recall} \tag{30}\] Like the F1 score, the F2 score ranges from 0 to 1, with a higher score indicating better performance. However, the F2 score places a greater emphasis on recall, making it useful when it is important to minimize false negatives. For example, a false negative could mean a patient is not diagnosed with a serious disease in medical diagnosis, so the F2 score is often used in such scenarios [69]. #### 4.2.8 Specificity Specificity, also known as the true negative rate (TNR), is a metric that measures the proportion of actual negatives that are correctly identified as negatives by a classification model. It is defined as the number of true negatives (TN) divided by the number of true negatives plus the number of false positives (FP) [70]. The formula for specificity is \[Specificity=\frac{TN}{TN+FP} \tag{31}\] This metric is particularly useful in medical diagnostic testing, where it is important to minimize the number of false positives to avoid unnecessary treatments or interventions. High specificity indicates that the model is good at identifying negatives and has a low rate of false positives. It is often used with the Recall or TPR to evaluate the overall performance of a classification model. #### 4.2.9 False Positive Rate (FPR) The False Positive Rate (FPR) is used to evaluate the proportion of false positives (i.e., instances that are incorrectly classified as positive) to the total number of negatives (i.e., instances that are correctly classified as negative). It is also known as the _Type I Error rate_, which complements the Specificity metric. Formally, the FPR is calculated as \[FPR=\frac{FP}{FP+TN} \tag{32}\] FPR directly relates to the threshold classifying instances as positive or negative. 
A lower threshold will increase the number of false positives and thus increase the FPR, while a higher threshold will decrease the number of false positives and decrease the FPR. In practice, the FPR is often plotted on the x-axis of a Receiver Operating Characteristic (ROC) curve to visualize the trade-off between the TPR and FPR for different classification thresholds. See Section 4.2.14 for more details.

#### 4.2.10 Negative Predictive Value (NPV)

The Negative Predictive Value (NPV) measures the proportion of negative cases that are correctly identified as such [70]. It is calculated as

\[NPV=\frac{TN}{TN+FN} \tag{33}\]

The NPV is useful when the cost of a false negative (i.e., an actual positive case being classified as negative) is high. For example, a false negative result in medical diagnostics can delay treatment or even lead to death. In such cases, a high NPV is desired. Note that, unlike sensitivity and specificity, the NPV is affected by the prevalence of the condition in the population, so it should be interpreted with the class distribution in mind, particularly when the classes are imbalanced. The NPV can be interpreted as the complement of the false omission rate (FOR), the fraction of samples labeled negative that are actually positive:

\[NPV=1-FOR,\qquad FOR=\frac{FN}{TN+FN} \tag{34}\]

#### 4.2.11 True Discovery Rate (TDR)

True Discovery Rate (TDR) evaluates the proportion of true positive predictions a model makes among all the positive predictions. It is also known as the Positive Predictive Value (PPV) or precision of the positive class [70]. TDR is calculated as

\[TDR=\frac{TP}{TP+FP} \tag{35}\]

TDR is a useful metric for evaluating the performance of a model in situations where the number of false positive predictions is high and the number of true positive predictions is low. It is particularly useful in high-dimensional datasets where the number of features is large and the number of positive observations is low. TDR can provide a more accurate picture of the model's performance than accuracy or recall in such cases. There may be a trade-off between TDR and recall: TDR may be low when the recall is high, and vice versa. Therefore, it is important to consider both TDR and recall when evaluating the performance of a model.

#### 4.2.12 False Discovery Rate (FDR)

The False Discovery Rate (FDR) measures the proportion of false positives among all positive predictions made by a classifier [70]. It is defined as

\[FDR=\frac{FP}{TP+FP} \tag{36}\]

The FDR can be used as an alternative to the False Positive Rate (FPR) when the costs of false positives and false negatives differ. It is particularly useful in cases where false positives are more critical than false negatives, such as in medical testing or fraud detection. A lower FDR value indicates that the classifier makes fewer false positive predictions.

#### 4.2.13 Precision-Recall Curve

The precision-recall (PR) curve is a graphical representation of the trade-off between precision and recall for different threshold values of a classifier. Precision is the proportion of true positive predictions out of all positive predictions, while recall is the proportion of true positive predictions out of all actual positive instances. The precision-recall curve plots precision on the y-axis and recall on the x-axis for different threshold values of the classifier [68].

**Computing the Precision-Recall Curve**

1. Start with a binary classifier that can predict a binary outcome and estimate the probability of the positive class.
These probabilities are also known as _scores_. 2. For every possible threshold (from 0 to 1) on these scores, calculate the Precision (see Section 4.2.3) and the Recall (see Section 4.2.4). 3. Plot a curve with Recall on the X-axis and Precision on the Y-axis. Figure 2(a) shows an example of Precision-Recall curves for three models. **Interpretation of the Precision-Recall Curve** Figure 2(a) shows the precision/recall curves for three models trained on the same data. The dashed line shows the ideal performance. Each model reports its Average Precision metric (see Section 5.2.1 for more details on Average Precision). In the following, we explain how to interpret PR curves. _The closer the curve is to the top-right corner, the better the model's performance._ Ideal performance is indicated by a point at (1,1), which signifies perfect precision (no false positives) and recall (no false negatives). If the curve is closer to the top-right corner of the plot, it indicates that the model achieves a good balance of precision and recall for most threshold settings. _The area under the curve (AUC-PR)_ provides a single-number summary of the information in the curve. The maximum possible AUC is 1, which corresponds to a perfect classifier. A random classifier will have an AUC of 0.5. A model with a higher AUC is generally considered better. _Steepness of the curve._ Ideally, we want the recall to increase quickly as precision decreases slightly, resulting in a steep curve. This steepness reflects a good balance between precision and recall. If the curve is less steep, we are losing a lot of precision for small increases in recall. _Comparison of different models._ We can compare the PR curves of different models to understand their performance. If the PR curve of one model is entirely above that of another, it indicates superior performance across all thresholds. #### 4.2.14 Area Under the Receiver Operating Characteristic curve (AUC-ROC) The Area Under the Receiver Operating Characteristic Curve (AUC-ROC) is a commonly used performance metric for evaluating the performance of binary classification models [68]. It measures the ability of the model to distinguish between positive and negative classes by plotting the true positive rate (TPR) against the false positive rate (FPR) at various threshold settings. The AUC-ROC is a value between 0 and 1, with 1 indicating a perfect classifier and a value of 0.5 indicating a classifier that performs no better than random guessing. The Area under the Receiver Operating Characteristic curve (AUC-ROC) offers a single-value summary of the model's performance across all possible threshold values. This measure is particularly valuable when comparing the performance of different models, as its assessment is independent of threshold choice. However, in cases where the positive and negative class distributions are significantly imbalanced, the AUC-ROC, while still applicable, may not provide the most accurate performance representation. With a heavy imbalance, the ROC curve can appear overly optimistic, as a low false positive rate can still mean a large number of false positives if the total count of actual negatives is high, resulting in a misleadingly high AUC-ROC value. In such imbalanced scenarios, the Precision-Recall (PR) curve and its corresponding area under the curve (AUC-PR) can often provide a more nuanced and accurate performance assessment. 
As PR curves focus more on detecting positive instances, often the minority class in an imbalanced dataset, they can deliver a more insightful evaluation of a model's ability to detect positive instances, providing a more relevant representation of the model's performance. **Computing the ROC Curve** Start with a binary classifier that can predict a binary outcome and estimate the probability of the positive class. These probabilities are also known as _scores_. For every possible threshold (from 0 to 1) on these scores, calculate the TPR (see Section 4.2.4) and the FPR (see Section 4.2.9). Plot a curve with FPR on the X-axis and TPR on the Y-axis. Figure 2(b) shows an example of ROC curves for three models. **Interpretation of the ROC Curve** Figure 2(b) shows the ROC curves for three models trained on the same data. The dashed line shows random performance. Each model reports its Area under the curve (AUC) in the legend. In the following, we explain how to interpret ROC curves. _TPR and FPR on each axis:_ The True Positive Rate (TPR) is used for the vertical axis. It measures the proportion of actual positives that are correctly identified as such. The False Positive Rate (FPR), also known as the fall-out or Probability of False Alarm, measures the proportion of actual negatives that are incorrectly identified as positives. The ROC curve plots the TPR vs. FPR at different classification thresholds. Lowering the classification threshold classifies more items as positive, thus increasing both False Positives and True Positives. _Area Under the ROC Curve (AUC-ROC):_ AUC provides an aggregate performance measure across all possible classification thresholds. AUC-ROC of a model equals the probability that the model will rank a randomly chosen positive instance higher than a randomly chosen negative instance. Hence, the higher the AUC-ROC score, the better the model (from 0 to 1). _Diagonal line equals random guess:_ The diagonal line in the ROC curve plot has an AUC of 0.5 and represents a model with no discriminatory ability, i.e., one that predicts positives and negatives at random. _Towards the top-left corner:_ The more the curve sits in the top-left corner, the better the classifier, as it means the True Positive Rate is high and the False Positive Rate is low. _Compare Models:_ ROC curves are useful for comparing different models. The model with a higher AUC and its curve towards the top-left corner is generally considered better. ## 5 Object Detection Object detection in deep learning is a computer vision technique that involves localizing and recognizing objects in images or videos. It is common in various applications such as autonomous driving [71, 72, 73, 74], surveillance [75, 76, 77], human-computer interaction [78, 79, 80, 81], and robotics [82, 83, 84, 85]. Object detection involves identifying the presence of an object, determining its location in an image, and recognizing the object's class. ### Object Detection Loss Functions Since object detection involves localization (regression) and recognition (classification), object detection systems use a combination of multiple loss functions. Among these loss functions, we find: * Multi-Class Log Loss (also known as Cross-Entropy Loss): It is used for the multi-class classification part of the object detector. Penalizes the difference between the predicted class probabilities and the ground truth class labels. * Smooth L1 Loss: It is used for the regression part of the object detector. 
It aims to reduce the mean absolute error between the predicted and ground truth bounding box coordinates. * IoU Loss: It calculates the Intersection Over Union (IoU) between the predicted bounding box and the ground truth bounding box and penalizes the difference between the predicted IoU and the ground truth IoU. * Focal Loss: It is used to overcome the problem of class imbalance and focuses on the misclassified samples. It penalizes the samples that are easily classified with high confidence and gives more weight to the samples that are difficult to classify. * YOLO Loss: It is used for the You Only Look Once (YOLO) object detection family of algorithms and combines the prediction of bounding box coordinates, objectness scores, and class probabilities. In the following sections, we will delve into the loss functions that we have not touched on before or are defined differently. #### 5.1.1 Smooth L1 Loss The smooth L1 loss, also known as the smooth mean absolute error (SMAE) loss, is a commonly used loss function in object detection tasks; it was introduced in Fast R-CNN [86]. The smooth L1 loss is a modification of the mean absolute error (MAE) loss that aims to balance between being too sensitive to outliers and insensitive to small errors. The formula for the smooth L1 loss is given by \[L=\begin{cases}0.5*(x_{i}-y_{i})^{2}&\text{if }|x_{i}-y_{i}|<1\\ |x_{i}-y_{i}|-0.5&\text{otherwise},\end{cases} \tag{37}\] where \(x_{i}\) and \(y_{i}\) are the predicted and ground truth values, respectively. It is commonly used in the region proposal network (RPN) part of the two-stage object detectors [86, 87] to regulate the regression of the bounding box coordinates outperforming the mean square error (MSE) loss in terms of both accuracy and efficiency. #### 5.1.2 Intersection over Union (IoU) Loss Intersection Over Union (IoU) is a metric used in object detection that measures the overlap between two bounding boxes. Figure 3 depicts the IoU metric used in object detection. The IoU between two bounding boxes is calculated as \[IoU=\frac{area\ of\ intersection}{area\ of\ union} \tag{38}\] The IoU loss function is defined as \[L=1-IoU \tag{39}\] This function encourages the predicted bounding boxes to overlap highly with the ground truth bounding boxes. A high IoU value indicates that the predicted bounding box is close to the ground truth, while a low IoU value indicates that the predicted bounding box is far from the ground truth. Figure 2: Precision/Recall and ROC Curves. (a) Shows the precision/recall curves for three models trained on the same data. The dashed line shows the ideal performance. Each model reports its Average Precision metric. (b) Shows the ROC curves for three models trained on the same data. The dashed line shows random performance. Each model reports its Area under the curve (AUC) in the legend The IoU loss function is commonly used for one-stage detectors [88, 89] as part of a multi-task loss function that includes a classification loss and a localization loss. #### 5.1.3 YOLO Loss The You Only Look Once (YOLO) loss function is used in the YOLO object detection architecture. It was introduced by Redmon et al. in [90]. The YOLO loss function is a multi-part loss that consists of three components: 1. Localization loss: This component penalizes the network for misprediction of the object's coordinates in the image. It is calculated as the mean squared error between the predicted and ground-truth bounding box coordinates. 2. 
Confidence loss: This component penalizes the network for not detecting an object even when one is present. It is a binary cross-entropy loss calculated between the predicted objectiveness score and the ground-truth label. 3. Classification loss: This component penalizes the network for misclassifying the object. It is a multi-class cross-entropy loss calculated between the predicted class scores and the ground-truth label. The total YOLO loss is the weighted sum of these three components. Figure 4 explains the full YOLO loss function. ### Object Detection Metrics To compute the metrics in object detection, we also compute the True Positives, False Positives, True Negatives, and False Negatives. The definitions of these metrics are based on the IoU score as follows: **True Positives in object detection**: The match between the predicted location of an object and its actual location is measured using an Intersection Over Union (IoU) score. The IoU score ranges from 0 to 1, with a score of 1 indicating a perfect match between the predicted and ground-truth locations. Since a perfect match is hard to achieve, we define a threshold value to determine whether a prediction is a true positive. Common values for the threshold are 0.25, 0.5, and 0.75. These thresholds are not fixed and can be adjusted based on the application's requirements. If the IoU score between the predicted and ground-truth boxes is greater than or equal to the defined threshold, the prediction is considered a true positive. **False Positive in object detection**: Occurs when the model predicts the presence of an object, but the object is not present in the image. This affects the precision metric. **False Negative in object detection**: Occurs when the model fails to detect an object that is present in an image. This affects the recall metric. Figure 3: Intersection Over Union (IoU). a) The IoU is calculated by dividing the intersection of the two boxes by the union of the boxes; b) examples of three different IoU values for different box locations. **True Negative in object detection**: Refers to a case where the object detector correctly determines that an object is not present in an image. **Common IoU thresholds for object detection**: * 0.5: A threshold of 0.5 is commonly used as a balanced threshold for object detection. A predicted bounding box is considered a true positive if its IoU with the ground truth bounding box is greater than or equal to 0.5. * 0.75: A threshold of 0.75 is used for applications that require higher precision, such as autonomous driving, where false positive detections can lead to critical consequences. * 0.25: A threshold of 0.25 is used for applications that require higher recall, such as medical image analysis, where missing detections can lead to an incorrect diagnosis. Figure 4: The YOLO Loss function comprises three parts: a localization loss, a confidence loss, and a classification loss. The common object detection metrics are: * Average Precision (AP). * Intersection over union (IoU). See details in section 5.1.2. * Precision-Recall Curve. See details in section 4.2.13. #### 5.2.1 Average Precision (AP) Object detection models must identify and localize multiple object categories in an image. The AP metric addresses this by calculating each category's Average Precision (AP) separately and then taking the mean of these APs across all categories (that is why it is also called mean average precision or mAP). 
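Before turning to how AP is computed on specific datasets, the following minimal sketch illustrates the box IoU computation (Equation 38) and the IoU-thresholded matching of detections to ground truths described above. The corner box format (x1, y1, x2, y2), the greedy matching by confidence, and the 0.5 threshold are illustrative assumptions.

```python
# Minimal sketch (pure Python): box IoU and IoU-thresholded matching of
# detections to ground-truth boxes.

def box_iou(a, b):
    """IoU of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def match_detections(dets, gts, iou_thr=0.5):
    """Greedily match detections (sorted by confidence) to ground truths.

    dets: list of (box, score); gts: list of boxes.
    Returns the counts of true positives, false positives, and false negatives.
    """
    used = [False] * len(gts)
    tp = fp = 0
    for box, _score in sorted(dets, key=lambda d: d[1], reverse=True):
        best_iou, best_j = 0.0, -1
        for j, gt in enumerate(gts):
            if not used[j]:
                iou = box_iou(box, gt)
                if iou > best_iou:
                    best_iou, best_j = iou, j
        if best_iou >= iou_thr:
            used[best_j] = True
            tp += 1
        else:
            fp += 1  # unmatched or duplicate detection
    fn = used.count(False)
    return tp, fp, fn
```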
This approach ensures that the model's performance is evaluated for each category individually, providing a more comprehensive assessment of the model's overall performance. To accurately localize objects in images, AP incorporates the Intersection over Union (IoU) to assess the quality of the predicted bounding boxes. As described previously, IoU is the ratio of the intersection area to the union area of the predicted bounding box and the ground truth bounding box (see Figure 3). It measures the overlap between the ground truth and predicted bounding boxes. The COCO benchmark considers multiple IoU thresholds to evaluate the model's performance at different levels of localization accuracy. The two most common object detection datasets are The Pascal Visual Object Classes (VOC) [91] and Microsoft Common Objects in Context (COCO) [92]. The AP is computed differently in each of these. In the following, we describe how it is computed on each dataset. #### VOC Dataset This dataset includes 20 object categories. To compute the AP in VOC, we follow the next steps: 1. _Compute Intersection over Union (IoU)_: For each detected object, compute the IoU with each ground truth object in the same image (refer to section 5.1.2 for more details). 2. _Match Detections and Ground Truths:_ For each detected object, assign it to the ground truth object with the highest IoU, if the IoU is above the threshold. 3. _Compute Precision and Recall:_ For each category, calculate the precision-recall curve by varying the confidence threshold of the model's predictions (refer to section 4.2.13 for more details). This results in a set of precision-recall pairs. 4. _Sort and interpolate with 11-points:_ Sort the precision-recall pairs by recall in ascending order. Then, for each recall level \(r\) in the set \(\{0,0.1,0.2,...,1.0\}\), find the highest precision \(p(r)\) for which the recall is at least \(r\). This is known as interpolated precision. This process results in a precision-recall curve that is piecewise constant and monotonically decreasing. 5. _Compute Area Under Curve (AUC):_ The Average Precision is then defined as the area under this interpolated precision-recall curve. Since the curve is piecewise constant, this can be computed as a simple sum: \(AP=sum(p(r)/N)\), where the sum is over the \(N\) recall levels, and \(p(r)\) is the interpolated precision at recall level \(r\). #### Microsoft COCO Dataset This dataset includes 80 object categories and uses a more complex method for calculating AP. Instead of using an 11-point interpolation, it uses a 101-point interpolation, i.e., it computes the precision for 101 recall thresholds from 0 to 1 in increments of 0.01. Also, the AP is obtained by averaging over multiple IoU values instead of just one, except for a common AP metric called \(AP_{50}\), which is the AP for a single IoU threshold of 0.5. Table 7 shows all the metrics used to evaluate models in the COCO dataset. The steps for computing AP in COCO are the following: 1. _Compute the Intersection over Union (IoU):_ For each detected object, compute the IoU with each ground truth object in the same image. 2. _Match Detections and Ground Truths:_ For each detected object, assign it to the ground truth object with the highest IoU, if this IoU is above the threshold. 3. _Compute Precision and Recall:_ For each possible decision threshold (confidence score of the detection), compute the precision and recall of the model. This results in a set of precision-recall pairs. 4. 
_Interpolate Precision:_ For each recall level \(r\) in the set \(\{0,0.01,0.02,...,1.00\}\) (for the 101-point interpolation used in COCO), find the maximum precision \(p(r)\) for which the recall is at least \(r\). This is known as interpolated precision. 5. _Compute Area Under Curve (AUC):_ The Average Precision is then defined as the area under this interpolated precision-recall curve. Since the curve is a piecewise constant, this can be computed as a simple sum: \(AP=sum(p(r))/101\), where the sum is over the 101 recall levels, and \(p(r)\) is the interpolated precision at recall level \(r\). 6. _Average over IoU Thresholds:_ Repeat steps 2-5 for different IoU thresholds (e.g., 0.5, 0.55, 0.6,..., 0.95) and average the AP values. 7. _Average over Categories:_ Repeat steps 2-6 for each category and average the AP values. This is to prevent categories with more instances from dominating the evaluation. 8. _Average over Object Sizes:_ Finally, you can compute AP for different object sizes (small, medium, large) to see how well the model performs on different sizes of objects. #### 5.2.2 Average Recall (AR) Average Recall (AR) is used to evaluate the performance of object detection models. Unlike Precision or Recall, defined at a particular decision threshold, Average Recall is computed by averaging recall values at different levels of Intersection over Union (IoU) thresholds and, if needed, at different maximum numbers of detections per image. This metric is commonly used to report COCO data results [92, 93]. The general steps to compute AR are the following: 1. _Compute the Intersection over Union (IoU):_ For each detected object, compute the IoU with each ground truth object in the same image. 2. _Match Detections and Ground Truths:_ For each ground truth object, find the detected object with the highest IoU. If this IoU is above a certain threshold, the detection is considered a true positive, and the ground truth is _matched_. Each ground truth can only be matched once. 3. _Compute Recall:_ For each image, recall is the number of matched ground truths divided by the total number of ground truths. 4. _Average over IoU Thresholds:_ Repeat steps 2 and 3 for different IoU thresholds (e.g., from 0.5 to 0.95 with step size 0.05), and average the recall values. 5. _Average over Max Detections:_ Repeat steps 2-4 for different maximum numbers of detections per image (e.g., 1, 10, 100), and average the recall values. This step is necessary because allowing more detections per image can potentially increase recall but at the cost of potentially more false positives. 6. _Average over Images:_ Finally, compute the average recall over all the images in the dataset. For COCO, the Average Recall measure can also be computed separately for different object sizes (small, medium, and large) to evaluate how well the model works for objects of different sizes. ## 6 Image Segmentation Image segmentation aims to assign a label or category to each pixel in the image, effectively segmenting the objects at a pixel level. Segmentation is usually performed using deep learning models trained to classify each pixel in the image based on its features and context. Segmentation methods are mainly classified into three categories: semantic segmentation [94, 95, 96, 97, 98, 99, 100, 101], instance segmentation [102, 103, 104, 105, 106], and panoptic segmentation [107, 108, 109, 110, 111]. _Semantic Segmentation_ studies the uncountable stuff in an image. 
It analyzes each image pixel and assigns a unique class label based on the texture it represents. In a street image, the semantic segmentation output will assign the same label to all the cars and the same label to all the pedestrians; it cannot differentiate individual objects of the same class. _Instance Segmentation_ deals with countable things. It can detect each object or instance of a class present in an image and assigns it to a different mask or bounding box with a unique identifier. _Panoptic Segmentation_ presents a unified segmentation approach where each pixel in a scene is assigned a semantic label (due to semantic segmentation) and a unique instance identifier (due to instance segmentation). Segmentation applications include scene understanding [112, 113, 114, 115, 116], medical image analysis [117, 118, 119, 120, 121, 122, 123], robotic perception [124, 125, 126, 127, 128], autonomous vehicles [129, 130, 131, 132, 133, 134], video surveillance [135, 136, 137, 138, 139], and augmented reality [140, 141, 142, 143, 144]. ### Segmentation Loss Functions Common loss functions include cross-entropy loss, Intersection over union (IoU) loss, Focal loss, Dice loss, Tversky loss, and Lovasz Loss. The following sections will describe these loss functions and their applications. #### 6.1.1 Cross Entropy Loss for Segmentation The cross-entropy loss for segmentation measures the dissimilarity between the predicted and ground truth segmentation maps. The cross-entropy loss is calculated by comparing the predicted and ground truth segmentation maps pixel-by-pixel [95]. It is defined as the negative log-likelihood of the ground truth segmentation map given the predicted segmentation map. The cross-entropy loss is calculated using the following formula: \[-\frac{1}{N}\sum_{i=1}^{N}\sum_{c=1}^{C}y_{i,c}log(p_{i,c}), \tag{40}\] where \(N\) is the total number of pixels in the image, \(C\) is the number of classes, \(y\) is the ground truth segmentation map, and \(p\) is the predicted segmentation map. For each pixel, the values of \(y\) and \(p\) should be between 0 and 1 and sum up to 1 over the classes. The lower the cross-entropy loss, the better the prediction. #### 6.1.2 Intersection Over Union (IoU) loss for segmentation The Intersection Over Union (IoU) loss is a commonly used loss function and evaluation metric for semantic segmentation tasks, where the goal is to predict a per-pixel segmentation mask for a given image. The IoU score is also known as the Jaccard Index (JI) and is defined as the ratio of the intersection of the predicted and ground-truth masks to the union of the predicted and ground-truth masks. The IoU is computed on a per-pixel (or per-class) basis, and the final score is the average IoU across the image: \[IoU=\frac{1}{n}\sum_{i=1}^{n}\frac{y_{i}\cap\hat{y}_{i}}{y_{i}\cup\hat{y}_{i}}, \tag{41}\] where \(y_{i}\) is the ground-truth mask for pixel \(i\), \(\hat{y}_{i}\) is the predicted mask, \(y_{i}\cap\hat{y}_{i}\) is the intersection of the ground-truth and predicted masks, and \(y_{i}\cup\hat{y}_{i}\) is the union of the ground-truth and predicted masks. The corresponding loss, often called the Jaccard loss, is then taken as \(1-IoU\); a minimal sketch of the cross-entropy and soft IoU losses is given below.
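A minimal sketch, assuming NumPy arrays of per-pixel class probabilities and one-hot labels, of the per-pixel cross-entropy and a soft (differentiable) IoU loss; the array shapes and the epsilon constant are illustrative assumptions rather than part of any particular framework.

```python
# Minimal sketch (NumPy): per-pixel cross-entropy and a soft IoU (Jaccard)
# loss for semantic segmentation.
import numpy as np

def cross_entropy_loss(p, y, eps=1e-7):
    """p, y: arrays of shape (n_pixels, C); rows of p are class probabilities,
    rows of y are one-hot ground-truth labels."""
    return -np.mean(np.sum(y * np.log(p + eps), axis=1))

def soft_iou_loss(p, y, eps=1e-7):
    """Soft IoU per class, averaged over classes, returned as 1 - IoU."""
    inter = np.sum(p * y, axis=0)            # per-class intersection
    union = np.sum(p + y - p * y, axis=0)    # per-class union
    return 1.0 - np.mean((inter + eps) / (union + eps))
```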
\begin{table} \begin{tabular}{l l} \hline \multicolumn{2}{l}{Average Precision (AP)} \\ \hline AP & \(\%\) AP at IoU=.50:.05:.95 (primary challenge metric) \\ \hline \(AP^{IoU=.50}\) & \(\%\) AP at IoU=.50 (PASCAL VOC metric) \\ \hline \(AP^{IoU=.75}\) & \(\%\) AP at IoU=.75 (strict metric) \\ \hline \multicolumn{2}{l}{AP Across Scales:} \\ \hline \(AP^{small}\) & \(\%\) AP for small objects: area \(<32^{2}\) \\ \hline \(AP^{medium}\) & \(\%\) AP for medium objects: \(32^{2}<\) area \(<96^{2}\) \\ \hline \(AP^{large}\) & \(\%\) AP for large objects: area \(>96^{2}\) \\ \hline \multicolumn{2}{l}{Average Recall (AR):} \\ \hline \(AR^{max=1}\) & \(\%\) AR given 1 detection per image \\ \hline \(AR^{max=10}\) & \(\%\) AR given 10 detections per image \\ \hline \(AR^{max=100}\) & \(\%\) AR given 100 detections per image \\ \hline \multicolumn{2}{l}{AR Across Scales:} \\ \hline \(AR^{small}\) & \(\%\) AR for small objects: area \(<32^{2}\) \\ \hline \(AR^{medium}\) & \(\%\) AR for medium objects: \(32^{2}<\) area \(<96^{2}\) \\ \hline \(AR^{large}\) & \(\%\) AR for large objects: area \(>96^{2}\) \\ \hline \end{tabular} \end{table} Table 7: COCO Evaluation Metrics. IoU is commonly used in various semantic segmentation works as a loss function [145, 146] and as an evaluation metric [94, 147]. #### 6.1.3 Dice Loss The Dice loss, derived from the Dice similarity coefficient [148], is used to evaluate the similarity between the predicted segmentation mask and the ground truth mask. The loss function is defined as \[L=1-\frac{2\cdot intersection(pred,gt)}{|pred|+|gt|}, \tag{42}\] where \(pred\) is the predicted segmentation mask, \(gt\) is the ground truth segmentation mask, \(intersection(pred,gt)\) is the number of pixels that are in the intersection of the predicted and ground truth masks, and \(|pred|\) and \(|gt|\) are the total number of pixels in the predicted and ground truth masks, respectively. Dice loss is widely used in medical imaging, where the goal is to segment structures in images with high precision. #### 6.1.4 Tversky loss The Tversky loss [149] is a variation of the Dice loss, commonly used in image segmentation tasks. It is based on the Tversky index \[Tversky(A,B)=\frac{|A\cap B|}{|A\cap B|+\alpha|A\backslash B|+\beta|B\backslash A|}, \tag{43}\] where \(A\) and \(B\) are the predicted and ground truth segmentation masks, respectively, and \(\alpha\) and \(\beta\) are user-defined hyperparameters that control the weighting of false positives and false negatives; the loss is then taken as \(1-Tversky(A,B)\). This loss function is similar to the Dice loss (which is recovered with \(\alpha=\beta=0.5\)), but it allows the assignment of different weights to false positives and false negatives, which can be useful in certain scenarios where the imbalance between the two types of errors is significant. #### 6.1.5 Lovasz Loss The main idea behind the Lovasz Loss [150] is to directly optimize the Jaccard index (IoU) between the predicted segmentation and the ground-truth segmentation. This loss function is particularly useful in image segmentation tasks where the Intersection-over-Union (IoU) score is highly important. Because the Jaccard index is a discrete, non-differentiable set function, the Lovasz loss replaces it with its Lovasz extension, a convex surrogate evaluated on the vector of per-pixel prediction errors. For a class \(c\), let \(m(c)\) denote the vector of pixel errors, e.g., \(m_{i}(c)=1-p_{i}(c)\) if pixel \(i\) belongs to class \(c\) and \(m_{i}(c)=p_{i}(c)\) otherwise; the loss then averages the Lovasz extension \(\overline{\Delta J_{c}}\) of the Jaccard loss over the set of classes \(C\): \[L=\frac{1}{|C|}\sum_{c\in C}\overline{\Delta J_{c}}\big(m(c)\big), \tag{44}\] where \(p_{i}(c)\) is the predicted probability of class \(c\) at pixel \(i\).
The Lovasz loss provides a differentiable surrogate for the non-differentiable IoU metric, allowing it to be optimized directly. ### Segmentation Metrics The common metrics for evaluating segmentation are the Mean Intersection over union (mIoU), pixel accuracy, Average Precision (AP) (refer to section 5.2.1), BF score, and Panoptic Quality. The following sections explain each of them, skipping IoU and AP, which were already discussed. #### 6.2.1 Pixel Accuracy Measures the proportion of correctly classified pixels in the whole image. It is calculated by dividing the number of correctly classified pixels by the total number of pixels in the image. The formula for pixel accuracy is \[\text{Pixel accuracy}=\frac{\text{Number of correctly classified pixels}}{\text{Total number of pixels in the image}} \tag{45}\] Like regular accuracy, pixel accuracy can be misleading when the class imbalance is high, because a model can score well simply by predicting the dominant classes (for example, background) for most pixels. This is why other metrics, such as Intersection over Union (IoU) or the Dice coefficient, are commonly used to evaluate image segmentation models. #### 6.2.2 Boundary F1 Score (BF) The Boundary F1 Score (BF) [151], often abbreviated as BF score, is a metric used for evaluating image segmentation quality, particularly when the precision of the boundary location is important. The BF score applies the concept of the F1 score to the segmentation boundaries rather than the segment regions themselves. The computation of the BF score involves the following steps: 1. Extract the boundary pixels of the predicted segmentation and of the ground truth segmentation for the class of interest. 2. Compute the precision as the proportion of predicted boundary pixels that lie within a distance tolerance of some ground truth boundary pixel. Formally, this is \(P=TP/(TP+FP)\), where \(TP\) (True Positive) is the number of predicted boundary pixels close enough to the ground truth boundary, and \(FP\) (False Positive) is the number of predicted boundary pixels that are not. 3. Compute the recall as the proportion of ground truth boundary pixels that lie within the same distance tolerance of some predicted boundary pixel. Formally, this is \(R=TP/(TP+FN)\), where \(FN\) (False Negative) is the number of ground truth boundary pixels not close enough to any predicted boundary pixel. 4. The BF score is then the harmonic mean of precision and recall: \[F\_score=\frac{2*Precision*Recall}{Precision+Recall} \tag{46}\] A distance threshold defines what counts as _close enough_ when matching predicted and ground truth boundary pixels. The BF score ranges from 0 to 1, with 1 indicating a perfect match between the predicted and ground truth boundaries. #### 6.2.3 Panoptic Quality (PQ) Panoptic Quality (PQ) is a metric proposed for evaluating panoptic segmentation tasks [107]. The Panoptic Quality metric is defined as: \[PQ=\frac{\Sigma_{(p,g)\in TP}IoU(p,g)}{|TP|}\times\frac{|TP|}{|TP|+\frac{1}{2}|FP|+\frac{1}{2}|FN|}, \tag{47}\] where * IoU(p, g) denotes the intersection-over-union of prediction p and ground truth g. * TP (True Positive) is a set of matched pairs of predicted and ground truth segments. * FP (False Positive) is a set of predicted segments not matched with any ground truth segment. * FN (False Negative) is a set of ground truth segments not matched with any predicted segment. The PQ metric ranges between 0 and 1, where 1 signifies a perfect segmentation. It is a product of two terms: The first term, called _segmentation quality_ (SQ), calculates the average IoU of the correctly predicted segments (true positives).
This term measures how accurately each detected object has been segmented and thus rewards prediction quality. The second term, called _Recognition Quality_ (RQ), calculates the ratio of the number of true positives to the total number of segments, where unmatched predicted and ground truth segments are counted with a weight of one half; it is the F1 score of the segment matching. This term measures how accurately each object has been recognized. This metric can be more informative than mean IoU when dealing with complex scenes containing multiple object instances per class, as it considers both the segmentation quality and the correct recognition of distinct instances. ## 7 Face Recognition Face recognition aims to accurately match an individual's face in an image or video to a corresponding entry in a database of faces. This task is often performed using deep learning algorithms, such as Convolutional Neural Networks (CNNs) or Transformers [152], that are trained on large datasets of face images. The algorithms are trained to extract features from face images and then use these features to recognize faces that match those in the database. Face recognition has many applications, including security [153, 154], social media [155], and biometric identification systems [156, 157]. ### Face Recognition Loss Functions and Metrics The loss functions used in face recognition are typically aimed at preserving the structure and similarity of the input faces. They can be divided into two classes: loss functions based on classification and loss functions based on representation learning. Common loss functions based on classification are softmax loss, A-softmax loss, Center loss, Large-Margin cosine loss, and Additive Angular Margin loss. On the other hand, loss functions based on representation learning are Triplet loss, Contrastive loss, Circle loss, and the Barlow twins loss. The metrics commonly used for face recognition are the same as the ones used for classification, such as accuracy, precision, recall, F1-score, ROC, etc. In the following subsections, we will describe each of these loss functions. #### 7.1.1 Softmax Loss The Softmax Loss applies the softmax function to the class scores and computes the cross-entropy between the resulting class probabilities and the true class label. The final loss is obtained by averaging the per-sample cross-entropy over all samples. Let's denote the weight vectors for each class (or, in this case, each face identity) as \(W=\{w_{1},w_{2},...,w_{n}\}\), where \(n\) is the number of classes (or identities). For a given input image \(x\) of class \(y\), the linear classifier would compute a score \(f(x,W)=Wx+b\), where \(b\) is the bias term. The softmax function converts these scores into probabilities. The probability of the \(i^{th}\) class is computed as follows: \[P(y=i|x;W)=\frac{e^{w_{i}^{T}x+b_{i}}}{\sum_{j=1}^{n}e^{w_{j}^{T}x+b_{j}}} \tag{48}\] The softmax loss (also known as the cross-entropy loss) for an input-output pair \((x,y)\) is defined as the negative log-likelihood of the correct class, which can be expressed as \[L_{i}=-\log(P(y=y_{i}|x;W))=-f_{y_{i}}+\log\sum_{j=1}^{n}e^{f_{j}}, \tag{49}\] where \(f_{y_{i}}\) is the score for the correct class, and \(f_{j}\) are the scores for all classes. The total loss for a batch of data is the mean of \(L_{i}\) over all the examples in the batch. The disadvantage of this loss function is that it does not have fine-grained control over the intra-class and inter-class distances that come in handy for face recognition purposes. A minimal sketch of this loss is shown below.
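The following minimal sketch (NumPy) computes this loss from identity logits in a numerically stable way; the array shapes are illustrative assumptions, and the margin-based variants described in the next subsections modify the target logit before this step.

```python
# Minimal sketch (NumPy) of the softmax (cross-entropy) loss over identity logits.
import numpy as np

def softmax_loss(logits, labels):
    """logits: (batch, n_identities) scores f(x, W); labels: (batch,) integer ids."""
    logits = logits - logits.max(axis=1, keepdims=True)                  # stability shift
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(log_probs[np.arange(len(labels)), labels])          # mean over the batch
```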
#### 7.1.2 A-Softmax Loss The A-Softmax loss [158], also known as the SphereFace loss, was designed to address the limitations of the traditional softmax loss by considering the angular information between the features of face images and their corresponding labels. The A-Softmax loss aims to maximize the inter-class separability and minimize the intra-class variations of face features. Given a weight matrix \(W\), an input feature vector \(x\), and a margin parameter \(m\), the SphereFace loss is calculated as follows: 1. Compute the normalized weight matrix \(W_{norm}\) as \[W_{norm}=\frac{W}{||W||} \tag{50}\] where each column \(w_{i}\) in \(W_{norm}\) is a unit vector, i.e., \(||w_{i}||=1\). The normalization operation makes the weights for each class lie on the unit hypersphere. 2. Compute the margin \(M\) to be applied to the angles: \[M=(m-1)\cdot y_{true}+1 \tag{51}\] In this equation, \(y_{true}\) is the indicator of the true class (1 for the ground-truth class and 0 otherwise), so \(M\) equals \(m\) for the true class and 1 for all other classes. 3. Compute the cosine of the angle \(\theta\) between the feature vector \(x\) and the weight vector: \[cos(\theta)=\frac{W_{norm}\cdot x}{||x||}, \tag{52}\] where \(||\cdot||\) denotes the L2 norm. 4. Compute the new angle \(\theta^{\prime}\) after applying the margin, and take its cosine: \[\theta^{\prime}=\theta\cdot M,\qquad cos(\theta^{\prime})=cos(\theta\cdot M) \tag{53}\] 5. Compute the new prediction \(y^{\prime}_{pred}\) by rescaling with the norm of \(x\): \[y^{\prime}_{pred}=||x||\cdot cos(\theta^{\prime}) \tag{54}\] 6. Finally, compute the SphereFace loss \(L\), which is the negative log-likelihood of the true class: \[L=-\log\frac{\sum y_{true}\cdot e^{y^{\prime}_{pred}}}{\sum e^{y^{\prime}_{pred}}} \tag{55}\] Here, the summation is taken over all classes. The numerator is the exponentiated prediction for the true class, and the denominator is the sum of exponentiated predictions for all classes. Compared to the traditional softmax loss, the A-Softmax loss produces more discriminative and compact features, improving face recognition performance. #### 7.1.3 Center Loss The center loss [159] reduces the intra-class variance by penalizing the distances between the features of a sample and the center of its corresponding class; it is typically used together with the softmax loss, which takes care of inter-class separability. The center loss is inspired by the idea of having a center for each class in the feature space, and the loss function encourages the features of the same class to be close to their center. The center loss is defined as the squared Euclidean distance between the feature of a sample and the center of its class in the feature space. The center loss is usually added to the main loss function in the training process, and it is defined as \[L_{center}=\frac{1}{2}\sum_{i=1}^{n}\left|\mathbf{x_{i}}-\mathbf{c_{y_{i}}}\right|_{2}^{2}, \tag{56}\] where \(\mathbf{x_{i}}\) is the feature representation of the \(i^{th}\) sample, \(y_{i}\) is its corresponding class label, \(\mathbf{c_{y_{i}}}\) is the center of class \(y_{i}\), and \(n\) is the number of samples. #### 7.1.4 CosFace: Large-Margin Cosine Loss CosFace loss, also known as the Large Margin Cosine Loss, maximizes the decision margin in the cosine space to further enhance the discriminative power of the deeply learned features. The cosine of the angle between the feature vector \(x_{i}\) and the weight \(w_{j}\) of the \(j^{th}\) class is given by \[cos\theta_{j}=\frac{w_{j}^{T}x_{i}}{||w_{j}||||x_{i}||}, \tag{57}\] where \(||.||\) denotes the l2 norm.
The CosFace method adds a cosine margin \(m\) to the target logit, so the modified cosine of the angle \(\theta_{y_{i}}\) corresponding to the ground truth class \(y_{i}\) is given by: \[cos\theta_{y_{i}}-m \tag{58}\] Then the CosFace loss for an input-output pair \((x_{i},y_{i})\) is defined as \[L_{i}=-\log\frac{e^{s(cos\theta_{y_{i}}-m)}}{e^{s(cos\theta_{y_{i}}-m)}+\sum_{j\neq y_{i}}^{n}e^{s\,cos\theta_{j}}}, \tag{59}\] where \(s\) is a scaling factor. One advantage of this formulation is that, unlike the multiplicative angular margin of SphereFace, the additive cosine margin does not suffer from the non-monotonicity of \(cos(m\theta)\) over \([0,\pi]\). Also, because the feature vector is normalized, the model must learn better separation of the angles as it cannot reduce loss by learning a different norm [160]. #### 7.1.5 ArcFace. Additive Angular Margin Loss The ArcFace loss [161] enhances the discriminative power of the softmax loss by adding an angular margin penalty to the target logit. The ArcFace method adds an additive angular margin \(m\) to the target angle, so the modified cosine of the angle \(\theta_{y_{i}}\) corresponding to the ground truth class \(y_{i}\) is given by \[cos(\theta_{y_{i}}+m) \tag{60}\] Then the ArcFace loss for an input-output pair \((x_{i},y_{i})\) is defined as \[L_{i}=-\log\frac{e^{s\,cos(\theta_{y_{i}}+m)}}{e^{s\,cos(\theta_{y_{i}}+m)}+\sum_{j\neq y_{i}}^{n}e^{s\,cos\theta_{j}}}, \tag{61}\] where \(s\) is a scaling factor. The margin \(m\) can be interpreted as an additional arc length on the hypersphere of radius \(s\). Experiments show better inter-class discrepancy than Triplet Loss while having about the same intra-class similarity [160]. #### 7.1.6 Triplet Loss This is probably the best-known loss function for face recognition. The idea behind the triplet loss [162, 163] is to train the model to distinguish between a positive pair of images (two images of the same person) and a negative pair of images (two images of different persons). Given an anchor image, \(A\), a positive image, \(P\), and a negative image, \(N\), (see Figure 5), the loss is calculated from the distance between the anchor and positive embeddings minus the distance between the anchor and negative embeddings, plus a margin. The equation is defined as \[L_{triplet}=\max\left(0,\left\|\mathbf{f}_{A}-\mathbf{f}_{P}\right\|_{2}^{2}-\left\|\mathbf{f}_{A}-\mathbf{f}_{N}\right\|_{2}^{2}+\alpha\right), \tag{62}\] where \(\mathbf{f}_{A}\), \(\mathbf{f}_{P}\), and \(\mathbf{f}_{N}\) are the embeddings of the anchor image, positive image, and negative image, respectively, and \(\alpha\) is a hyperparameter known as the margin. By minimizing this loss, the anchor-positive embeddings are pulled closer together than the anchor-negative embeddings by at least the margin \(\alpha\). #### 7.1.7 Contrastive Loss The contrastive loss [164] learns a feature representation or image embedding that projects similar images close to each other and dissimilar images far apart. This loss is based on a Siamese architecture of the neural network. The contrastive loss function is defined as: \[L=\frac{1}{N}\sum_{i=1}^{N}\left[y_{i}\left\|f(x_{i}^{a})-f(x_{i}^{p})\right\|_{2}^{2}+(1-y_{i})\max\left(0,m-\left\|f(x_{i}^{a})-f(x_{i}^{n})\right\|_{2}^{2}\right)\right], \tag{63}\] where \(f(x_{i}^{a})\) is the deep feature representation of the anchor image \(x_{i}^{a}\), \(f(x_{i}^{p})\) is the deep feature representation of the positive image \(x_{i}^{p}\), \(f(x_{i}^{n})\) is the deep feature representation of the negative image \(x_{i}^{n}\).
\(y_{i}\) is a binary label indicating whether the anchor and positive images are similar (1) or dissimilar (0), \(m\) is a margin hyperparameter, and \(N\) is the number of triplets in the batch. The margin \(m\) controls how hard the model should work to push dissimilar embeddings apart. Extending a trained model to new/unseen classes is easy because it learns to create a semantic representation of the image rather than classify it among a predetermined set of classes. One limitation of this loss is that the margin \(m\) is the same constant for all dissimilar pairs, which implicitly pushes the model to have the same distance between all dissimilar pairs, even if some are more dissimilar [160, 165]. A second limitation is that the absolute notion of similar and dissimilar pairs is not transferable from one context to another [166]. Figure 5: Triplet Loss. Given an anchor image, a positive image, and a negative image, the triplet loss computes the distance between the embeddings of the anchor image and positive image and the distance between the embeddings of the anchor image and negative image. By minimizing this loss, the anchor-positive embeddings are pulled closer together than the anchor-negative embeddings. #### 7.1.8 Circle Loss The Circle Loss [167] pushes positive pairs closer and negative pairs farther away while maintaining a _circle-like_ decision boundary. This is achieved by adding margins to positive and negative pairs, increasing the separation of negative pairs and reducing the intra-class variance of positive pairs. By doing so, Circle Loss can effectively handle imbalanced data and complex distributions, which are common challenges in face recognition tasks. The circle loss can be expressed as \[\alpha_{pos_{i}}=\max(O_{pos_{i}}-s_{pos_{i}},0),\qquad\alpha_{neg_{j}}=\max(s_{neg_{j}}-O_{neg_{j}},0)\] \[sum_{pos}=\sum_{i}e^{-\gamma\cdot\alpha_{pos_{i}}\cdot s_{pos_{i}}},\qquad sum_{neg}=\sum_{j}e^{\gamma\cdot\alpha_{neg_{j}}\cdot s_{neg_{j}}}\] \[L=\log(1+sum_{pos}\cdot sum_{neg}) \tag{64}\] Where: * \(s_{pos_{i}}\) and \(s_{neg_{j}}\) represent the pairwise similarity between the positive and negative pairs. The positive pairs belong to the same class, and the negative pairs belong to different classes. * \(O_{pos_{i}}\) and \(O_{neg_{j}}\) represent the user-defined optima (margins) for positive and negative pairs, respectively. \(O_{pos}\) is the target similarity for positive pairs and \(O_{neg}\) is the target similarity for negative pairs; since positive pairs should end up more similar than negative pairs, one chooses \(O_{neg}<O_{pos}\). * \(\alpha_{pos_{i}}\) and \(\alpha_{neg_{j}}\) are slack variables (weights) that are non-zero only while the positive similarities are still below \(O_{pos_{i}}\) or the negative similarities are still above \(O_{neg_{j}}\); clipping them at 0 ignores similarities that have already met the margin requirement. * \(sum_{pos}\) and \(sum_{neg}\) are exponentiated and scaled sums of the positive and negative similarities, respectively. The exponential function ensures that all values are positive and emphasizes larger values, while \(\gamma\) is a scaling factor that controls the rate at which the emphasis increases. The slack variables \(\alpha_{pos_{i}}\) and \(\alpha_{neg_{j}}\) are used in the exponent to give more weight to pairs far from meeting the margin requirement.
* Finally, \(L\) is the Circle Loss computed as the logarithm of 1 plus the product of \(sum_{pos}\) and \(sum_{neg}\). This encourages both \(sum_{pos}\) and \(sum_{neg}\) to be small, which in turn encourages the positive similarities to be large and the negative similarities to be small. Adding one inside the logarithm ensures that the argument is always positive, and the logarithm itself helps dampen the effect of large values and reduce the effect of outliers. The circle loss has a more definite convergence target than the triplet loss because there is a single point in the (\(s_{neg}\), \(s_{pos}\)) space toward which the optimization is driven (\(O_{neg}\), \(O_{pos}\)). However, choosing (\(O_{neg}\), \(O_{pos}\)) is somewhat arbitrary. In practice, it is common to use cross-validation or a separate validation set to tune these hyperparameters. A common strategy is to start with small margins and gradually increase them. If the margins are too large, the model struggles to learn; if they are too small, it learns embeddings that are not discriminative enough. In some implementations of Circle Loss, \(O_{pos}\) and \(O_{neg}\) are not independent but related by a constant margin \(m\), which is set to ensure a sufficient gap between positive and negative pairs. In this case, we only need to tune one of the margins or the gap \(m\). #### 7.1.9 Barlow Twins Loss The Barlow Twins loss [168] is a self-supervised learning approach. The key idea is to make the outputs of two twin networks, processing two different views of the same image, as similar as possible while reducing the redundancy between the dimensions of these representations. It encourages the network to learn features that are highly informative about the input and mutually non-redundant. Given the batch size \(N\) and the dimensionality \(D\) of the embeddings, the network processes two different augmentations of the same input data, producing the embeddings \(z_{a}\) and \(z_{b}\). These embeddings are then normalized to have zero mean and unit variance along the batch dimension. The computation of the Barlow Twins loss can be formulated as follows: 1. Compute the cross-correlation matrix \(C\): \[C=\frac{1}{N}z_{a_{norm}}^{T}\cdot z_{b_{norm}} \tag{65}\] 2. Compute the difference matrix \(C_{diff}\) between the cross-correlation matrix and an identity matrix and square it: \[C_{diff}=(C-I)^{2} \tag{66}\] 3. Scale the off-diagonal elements of \(C_{diff}\) by a factor \(\lambda\): \[C_{diff_{ij}}=\begin{cases}C_{diff_{ij}},&\text{if }i=j\\ \lambda C_{diff_{ij}},&\text{if }i\neq j\end{cases} \tag{67}\] 4. Finally, the Barlow Twins loss \(L\) is the sum of all elements in the updated \(C_{diff}\): \[L=\sum_{i,j}C_{diff_{ij}} \tag{68}\] By backpropagating this loss and updating the model's parameters, the network is trained to minimize redundancy and increase the similarity between two different views of the same image, learning more robust and informative features as a result. This method does not require a fixed number of classes and does not require explicit negative examples. However, the model in the paper required a large dimensionality of the final representation for good performance, and the performance is not robust to removing certain input distortions [160]. #### 7.1.10 SimSiam Loss SimSiam [169] is a self-supervised learning method aimed at learning representations by pushing two views of the same image to be as similar as possible.
While it is not explicitly designed for face recognition, it can be used to learn meaningful representations of faces. Given two different augmentations \(x_{1}\) and \(x_{2}\) of the same image, they are forwarded through a neural network and a prediction head to obtain the features \(z_{1},z_{2}\) and the predictions \(p_{1},p_{2}\). The loss is defined as the negative cosine similarity between the predictions and the features of the different augmentations. \[L=-\frac{1}{2}[\frac{(p_{1}^{T}z_{2})}{(||p_{1}||\cdot||z_{2}||)}+ \frac{(p_{2}^{T}z_{1})}{(||p_{2}||\cdot||z_{1}||)}], \tag{69}\] where \({}^{T}\) denotes transpose, and \(||\cdot||\) denotes the L2 norm. This loss encourages the model to make the predictions \(p_{1}\) and \(p_{2}\) as similar as possible to the features \(z_{2}\) and \(z_{1}\), respectively. An important part of the SimSiam approach is a stop-gradient operation applied to \(z_{2}\) in the first term and \(z_{1}\) in the second term of the loss function. This means that the gradients are not backpropagated through these variables during training. The stop-gradient operation is critical to avoid the model collapsing into trivial solutions where the features and the predictions are the same. The advantages of the SimSiam loss are: 1. _No Negative Pair:_ Unlike contrastive learning methods that require negative examples, SimSiam does not require any. This simplifies the model training process and can make it more efficient. 2. _Stop-Gradient Operation:_ The use of stop-gradient operation in SimSiam makes it less prone to collapsing into trivial solutions. This is a significant advantage because many self-supervised learning models struggle with this problem. 3. _Simplicity:_ SimSiam is simpler compared to other self-supervised learning methods. It uses a symmetric architecture and a simple loss function which encourages two views of the same image to have similar representations. The disadvantages include: 1. _Hyperparameter Sensitivity:_ SimSiam has a few crucial hyperparameters, such as the learning rate and weight decay, which require careful tuning to get the best performance. Incorrect settings can significantly degrade the model's performance. 2. _Dependence on Data Augmentation:_ The success of SimSiam, like many other self-supervised learning models, heavily relies on the choice and quality of data augmentations. This requires domain knowledge and potentially significant effort to determine the most effective augmentations. 3. _Non-semantic Features:_ One common issue with self-supervised learning methods, including SimSiam is that the features learned may not necessarily be semantically meaningful. They are good at capturing low-level features but may not be as effective in capturing high-level semantic information. ## 8 Image Generation Image generation in deep learning involves using artificial neural networks to generate new images. This task has been revolutionized by developing various models such as Variational Autoencoders (VAEs) [170, 171, 172, 173], Generative Adversarial Networks (GANs) [174, 175, 176, 177, 178], Normalized Flow models (NFs) [179, 180, 181, 182, 183], Energy-Based Models (EBMs) [184, 185, 186, 187, 188, 189], and Diffusion Models [190, 191, 192, 193, 194]. These models allow the generation of high-quality images that can be used in various applications such as image super-resolution [195, 196, 197, 198], denoising [199, 200, 201], inpainting [202, 203, 204, 205], and style transfer [206, 207, 208, 209]. 
_Variational Autoencoders (VAEs)_ are generative models that use deep learning techniques to create new data and learn latent representations of the input data. VAEs consist of two primary components: an encoder and a decoder. The encoder takes input data and compresses it into a lower-dimensional latent space, capturing the essential features of the data. This is typically a probabilistic process, producing a mean and a standard deviation that parameterize a distribution over possible latent values. The decoder then takes a point from this latent space distribution and reconstructs the original data. The entire process is trained in such a way as to minimize the difference between the original and the reconstructed data, as well as to ensure the latent space approximates a standard Gaussian distribution. VAEs can generate new data by feeding the decoder points sampled from the latent space. _Generative Adversarial Networks (GANs)_ involve two neural networks, a Generator and a Discriminator, playing a game against each other. The Generator tries to create data similar to the training data, while the Discriminator tries to distinguish between the real and generated data. Through this process, both networks improve: the Generator learns to produce increasingly realistic data, while the Discriminator becomes better at distinguishing between real and artificial data. This adversarial process continues until an equilibrium is reached, at which point the Generator is producing realistic data and the Discriminator is, at best, randomly guessing whether the data is real or generated. This equilibrium is conceptually referred to as a Nash equilibrium in game theory [210]. _Normalizing Flow Models (NFs)_ can construct complex probability distributions by transforming a simple base distribution through a sequence of invertible and differentiable transformations, or _flows_. These flows warp and twist the data space, allowing the model to generate diverse outputs. The parameters of these transformations are learned from data using maximum likelihood estimation. An advantage of Normalizing Flows over other generative models is their ability to provide an exact and tractable likelihood for a given sample, enabling efficient sampling and density estimation. However, they can be computationally intensive to train due to the need to compute and backpropagate through the Jacobian of the flows. _Energy-Based Models (EBMs)_ learn a scalar energy function to distinguish real data points from unlikely ones, assigning lower energy values to points similar to the training data and higher values otherwise. A neural network often parameterizes this function and learns from the data. Sampling in EBMs is typically done via Markov Chain Monte Carlo (MCMC) [211] methods, producing samples distributed according to the learned energy function. While EBMs can represent a wide variety of data distributions, they can be challenging to train due to the intractability of the partition function and the computational expense of MCMC sampling. _Diffusion models_ can create new data by simulating a random process, specifically a diffusion process. The process starts with a simple data distribution (e.g., Gaussian noise) and gradually transforms it towards the target data distribution, like a particle undergoing diffusion or a random walk. This generation is controlled by a neural network trained to invert a fixed forward process that takes real-world data and gradually adds noise to it until only noise remains.
During generation, an initial noise sample is iteratively updated using this reverse process model, running the process backward, leading to a sample from the target data distribution. This approach allows the generation of complex and high-quality data, such as images, by simulating a smooth transition from noise to the desired data. The following sections will review the common loss functions and performance metrics used for image generation. ### Image Generation Loss functions The loss function in a VAE consists of the reconstruction loss and the Kullback-Leibler (KL) divergence (refer to section 8.1.2). The reconstruction loss measures how well the decoded data matches the original input data. The KL divergence measures how much the learned distribution in the latent space deviates from a target distribution, usually a standard normal distribution. KL-divergence is used as a regularization term to ensure that the distributions produced by the encoder remain close to a unit Gaussian, penalizing the model if the learned distributions depart from it. The most common loss function used in GANs is the adversarial loss, which is the cross-entropy loss between the discriminator's predictions and the real or fake labels. More recently, WGAN [212] applied the Wasserstein distance as an alternative objective for training GANs, improving stability and mitigating mode collapse, which occurs when the generator network stops learning the underlying data distribution and begins to produce a limited variety of outputs rather than a diverse range of outputs that accurately represent the true data distribution. Normalizing Flows are typically trained using maximum likelihood estimation: given a dataset, the aim is to maximize the log-likelihood of the data under the model, or equivalently to minimize the negative log-likelihood of the data. During training, Energy-based models (EBMs) minimize a loss function that encourages the energy function to assign lower energy values to data points from the training data and higher energy values to other points. Different types of EBMs use different loss functions, such as Contrastive Divergence (CD) [213], Maximum Likelihood Estimation (MLE) [214], and Noise-Contrastive Estimation (NCE) [215]. Diffusion models use a denoising loss function based on the Mean Absolute Error (MAE) or the Mean Squared Error (MSE) between the original and reconstructed data (in practice, often between the added noise and the noise predicted by the network). In the next sections, we describe in detail each of these losses. #### 8.1.1 Reconstruction Loss The purpose of the reconstruction loss is to ensure that the decoder network of the VAE can reconstruct an input image that is as close as possible to the original image. In essence, the reconstruction loss compares the original image to the reconstructed image and penalizes any difference between the two. The reconstruction loss is calculated as the mean squared error (MSE) between the original image and the reconstructed image, which is defined as follows \[L_{recon}=\frac{1}{N}\sum_{i=1}^{N}||x_{i}-\hat{x_{i}}||^{2}, \tag{70}\] where \(x_{i}\) is the original image, \(\hat{x_{i}}\) is the reconstructed image, and \(N\) is the number of images. Another metric that can be used for reconstruction loss is binary cross-entropy (BCE), a common choice for binary images. The BCE loss can be defined as: \[L_{BCE}=-\sum_{i=1}^{n}\left[y_{i}\log(\hat{y_{i}})+(1-y_{i})\log(1-\hat{y_{i}})\right], \tag{71}\] where \(y_{i}\) is the binary value of the original image and \(\hat{y_{i}}\) is the binary value of the generated image. A minimal sketch of a VAE-style objective combining a reconstruction term with the KL regularizer is given below.
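The following minimal sketch (NumPy), assuming a diagonal-Gaussian encoder that outputs \(\mu\) and \(\log\sigma^{2}\), combines the MSE reconstruction term with the closed-form KL divergence to a standard normal prior; the array shapes and the weighting factor `beta` are illustrative assumptions.

```python
# Minimal sketch (NumPy): VAE-style objective = reconstruction + KL regularizer.
import numpy as np

def vae_loss(x, x_hat, mu, log_var, beta=1.0):
    """x, x_hat: (batch, dim) original and reconstructed data;
    mu, log_var: (batch, latent_dim) encoder outputs; beta weights the KL term."""
    recon = np.mean(np.sum((x - x_hat) ** 2, axis=1))                    # MSE reconstruction
    # KL( N(mu, sigma^2) || N(0, I) ) in closed form for a diagonal Gaussian
    kl = np.mean(-0.5 * np.sum(1 + log_var - mu ** 2 - np.exp(log_var), axis=1))
    return recon + beta * kl
```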
In the context of deep learning, reconstruction loss has been used in various image generation tasks such as image restoration, image generation, and image synthesis. #### 8.1.2 Kullback-Leibler Divergence Loss The Kullback-Leibler (KL) divergence loss, also known as relative entropy, measures the difference between the predicted probability distribution and the true probability distribution of the classes [216]. The KL divergence loss ensures that the predicted probabilities are close to the true probabilities, which can be useful in cases where the true probabilities are known or can be approximated. The KL divergence loss is defined as \[KL(p||q)=\sum_{i=1}^{n}p(x_{i})log(\frac{p(x_{i})}{q(x_{i})}), \tag{72}\] where \(p(x_{i})\) is the true probability of class \(x_{i}\) and \(q(x_{i})\) is the predicted probability of class \(x_{i}\). The KL divergence loss is often used in generative models such as variational autoencoders, which aim to approximate the true data distribution. It is also used in reinforcement learning, for example to keep the updated policy close to the previous policy during optimization. One disadvantage of using the KL divergence loss is that it is sensitive to zero probabilities in the predicted distribution: if \(q(x_{i})=0\) while \(p(x_{i})>0\), the divergence becomes infinite. This can lead to numerical instability and can make the loss function difficult to optimize. To overcome this issue, a common practice is to add a small value (e.g., \(1e-7\)) to the predicted probabilities to avoid zeros. #### 8.1.3 Adversarial Loss The adversarial loss is the main loss function used in Generative Adversarial Networks (GANs). It is based on the idea of a minimax game between the two neural networks: the discriminator is trained to minimize the loss below, while the generator is trained to maximize it (in practice, the generator often minimizes \(-\log D(G(z))\) instead). Goodfellow et al. first introduced the adversarial loss in [174]. The authors showed that, under idealized assumptions, training with this loss drives the two networks toward a Nash equilibrium, where the generator produces realistic images that the discriminator cannot distinguish from real images. The adversarial loss can be defined as: \[L_{adv}(G,D)=-\mathbb{E}_{x\sim p_{data}(x)}[\log D(x)]-\mathbb{E}_{z\sim p_{z}(z)}[\log(1-D(G(z)))], \tag{73}\] where \(G\) is the generator network, \(D\) is the discriminator network, \(x\) are real images from the data distribution \(p_{data}(x)\), \(z\) are random noise inputs from the noise distribution \(p_{z}(z)\), and \(G(z)\) are generated images. The adversarial loss encourages the generator to generate images indistinguishable from real images and the discriminator to correctly classify real and generated images. The training process alternates between updating the generator and the discriminator until the generator produces realistic images. #### 8.1.4 Wasserstein loss The Wasserstein loss [212] is used in generative models, such as Generative Adversarial Networks (GANs), to measure the difference between the real and generated distributions. The Wasserstein distance is the minimum work required to transform one probability distribution into another, where the amount of work is the sum of the distances by which probability mass is moved, weighted by that mass. The Wasserstein loss can be written as \[W(p_{data},p_{gen})=\inf_{\gamma\in\Gamma(p_{data},p_{gen})}\int_{x,y}||x-y||d\gamma(x,y) \tag{74}\] where \(p_{data}\) and \(p_{gen}\) represent the real and generated distributions, respectively.
The Wasserstein loss has been widely used in GANs for its ability to provide a more stable training process and to avoid the mode collapse problem, where the generator produces a limited number of similar outputs.

#### 8.1.5 Negative Log-likelihood in Normalizing Flows

In Normalizing Flows, the objective is to learn a transformation (or a sequence of transformations) that maps a complex data distribution to a simpler distribution, for example, a standard Gaussian. The transformations are chosen such that their Jacobian determinant is easy to compute, which allows using the change-of-variables formula to compute the likelihood of the data under the model. Given a data point \(x\), let us denote by \(z=f_{\theta}(x)\) the mapping of \(x\) to the base distribution under the flows parameterized by \(\theta\), and by \(p_{z}(z)\) the density of the base distribution. Then, the log-likelihood of \(x\) under the model is given by

\[\log p_{\theta}(x)=\log p_{z}(f_{\theta}(x))+\log\left|\det\frac{\partial f_{\theta}(x)}{\partial x}\right| \tag{75}\]

The second term is the log absolute determinant of the Jacobian of the transformation, which accounts for the change in volume induced by the transformation. The loss function that we minimize during training is the negative log-likelihood of the data. If our dataset consists of \(N\) points \(x_{1},x_{2},\ldots,x_{N}\), then the loss function \(L(\theta)\) is expressed as

\[L(\theta)=-\frac{1}{N}\sum_{i=1}^{N}\log p_{\theta}(x_{i})=-\frac{1}{N}\sum_{i=1}^{N}\left[\log p_{z}(f_{\theta}(x_{i}))+\log\left|\det\frac{\partial f_{\theta}(x_{i})}{\partial x_{i}}\right|\right] \tag{76}\]

In practice, this loss is minimized with stochastic gradient descent (or a variant) to learn the parameters \(\theta\) of the flows. Normalizing Flows are computationally intensive to train, due to the need to compute and backpropagate through the Jacobian of the flows. For this reason, methods such as RealNVP [179] and Glow [180] are designed so that the determinant of the Jacobian can be computed efficiently.

#### 8.1.6 Contrastive Divergence

Contrastive Divergence (CD) can be used in training Energy-Based Models (EBMs), specifically for estimating the log-likelihood gradient. As with Restricted Boltzmann Machines (RBMs), the objective is to maximize the likelihood of the data under the model; the main difference is that a general EBM may define its energy function directly over the observed data, whereas an RBM defines it over the joint configuration of visible and hidden units. In many energy-based models, the log-likelihood gradient involves an expectation over all possible data configurations, which is generally intractable to compute exactly. CD approximates this expectation by running a Markov chain for a small number of steps. Given a dataset consisting of \(N\) data points \(x_{1},x_{2},...,x_{N}\), the log-likelihood of the data under the model is:

\[L(\theta)=\frac{1}{N}\sum_{i=1}^{N}\log p(x_{i};\theta), \tag{77}\]

where \(\theta\) represents the parameters of the model, and \(p(x;\theta)\) is the probability of data point \(x\), defined as \(p(x;\theta)=\frac{e^{-E(x;\theta)}}{Z(\theta)}\), with \(E(x;\theta)\) being the energy function and \(Z(\theta)\) the partition function.
The gradient of the log-likelihood with respect to the parameters is

\[\frac{\partial}{\partial\theta}L(\theta)=\frac{1}{N}\sum_{i=1}^{N}\frac{\partial}{\partial\theta}\log p(x_{i};\theta) \tag{78}\]

This gradient can be decomposed into two terms: the data-dependent term (positive phase), which is easy to compute,

\[-\frac{1}{N}\sum_{i=1}^{N}\frac{\partial E(x_{i};\theta)}{\partial\theta} \tag{79}\]

and the model-dependent term (negative phase), which involves an expectation over all model configurations and is generally intractable,

\[\left\langle\frac{\partial E(x;\theta)}{\partial\theta}\right\rangle_{p}, \tag{80}\]

where \(\langle\cdot\rangle_{p}\) denotes an expectation with respect to the model distribution. In CD, the negative phase is approximated by running a Markov chain starting from a data point for a few steps and using the resulting sample to estimate the expectation. This leads to the CD-k algorithm, where k is the number of Gibbs sampling steps. The (gradient-ascent) update to the parameters is then

\[\Delta\theta=\eta\left(-\frac{1}{N}\sum_{i=1}^{N}\frac{\partial E(x_{i};\theta)}{\partial\theta}+\left\langle\frac{\partial E(x^{\prime};\theta)}{\partial\theta}\right\rangle_{CD}\right), \tag{81}\]

where \(\eta\) is the learning rate, \(\langle\cdot\rangle_{CD}\) denotes an expectation with respect to the distribution defined by the Markov chain after k steps, and \(x^{\prime}\) is the sample obtained after k steps of Gibbs sampling starting from a data point \(x\). This update lowers the energy of the training data and raises the energy of the samples produced by the chain. Contrastive Divergence is often more computationally efficient than other methods for training energy-based models, such as Persistent Contrastive Divergence (PCD) [217] or Mean-Field methods [218], because it requires running the Markov chain for only a few steps rather than until it reaches equilibrium. However, this introduces a bias in the estimate of the gradient of the log-likelihood. CD is a suitable choice when the bias is acceptable or can be mitigated by running the chain for more steps.

### Image Generation Metrics

Some of the common metrics used for image generation are:

* Peak Signal-to-Noise Ratio (PSNR)
* Structural Similarity Index (SSIM)
* Inception Score (IS) [219]
* Frechet Inception Distance (FID) [220]

In the following sections, we describe each of these metrics.

#### 8.2.1 Peak Signal-to-Noise Ratio (PSNR)

Peak Signal-to-Noise Ratio (PSNR) is a traditional metric often used to assess the quality of image and video codecs, comparing the original and the reconstructed (compressed and then decompressed) images or videos. It is a measure of the quality of an approximation of an image. In image generation, PSNR can be used to evaluate the quality of the images generated by models, particularly for tasks like image super-resolution, denoising, and inpainting, where the generated image is compared to a ground-truth high-quality image. The PSNR is defined in terms of the mean squared error (MSE), which for two \(m\times n\) monochrome images \(I\) and \(K\), where one of the images is considered a noisy approximation of the other, is defined as

\[MSE=\frac{1}{mn}\sum_{i=0}^{m-1}\sum_{j=0}^{n-1}[I(i,j)-K(i,j)]^{2} \tag{82}\]

The PSNR is then defined as

\[PSNR=10\cdot\log_{10}\left(\frac{(MAX_{I})^{2}}{MSE}\right), \tag{83}\]

where \(MAX_{I}\) is the maximum possible pixel value of the image. For instance, for an 8-bit grayscale image, this value would be 255. PSNR provides an easily interpretable score, expressed on a logarithmic decibel scale.
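A minimal NumPy sketch of Eqs. (82) and (83) is given below; the function and variable names are illustrative and the random images are only placeholders.

```python
import numpy as np

def psnr(reference, test, max_val=255.0):
    """Peak Signal-to-Noise Ratio of Eqs. (82)-(83) for two images of the same shape.
    max_val is the maximum possible pixel value (255 for an 8-bit image)."""
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

# Toy usage with a hypothetical 8-bit grayscale image and a noisy copy of it
img = np.random.randint(0, 256, size=(64, 64))
noisy = np.clip(img + np.random.normal(0.0, 5.0, size=img.shape), 0, 255)
print(psnr(img, noisy))
```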
Higher PSNR generally indicates that the reconstruction is of high quality; however, in some cases, perceptually similar images may have low PSNR where simple pixel-wise errors do not well capture structural similarity. While PSNR could be useful in the context of tasks like super-resolution or denoising where there is a clear ground truth to compare to, it is not particularly effective for tasks like generative image modeling where the goal is to produce new, high-quality images that are not simply reconstructions of existing images [221]. More sophisticated metrics like the Inception Score or Frechet Inception Distance are typically used for these tasks. #### 8.2.2 Structural Similarity Index (SSIM) The Structural Similarity Index (SSIM) [222] can be used in image generation tasks to compare the generated images with the target (real) images. In other words, it measures how similar the synthetic images produced by a generative model are to the actual images. The SSIM index is based on three main factors: luminance, contrast, and structure. The SSIM value ranges from -1 to 1, where 1 indicates perfect similarity, and -1 indicates no similarity. To calculate the SSIM index, the first step is to compute each image's mean and standard deviation, then the cross-covariance between the two images, and the product of the standard deviations. The SSIM index is then computed as \[SSIM(x,y)=\frac{(2\mu_{x}\mu_{y}+C_{1})(2\sigma_{xy}+C_{2})}{(\mu_{x}^{2}+\mu_{ y}^{2}+C_{1})(\sigma_{x}^{2}+\sigma_{y}^{2}+C_{2})}, \tag{84}\] where \(\mu_{x}\) and \(\mu_{y}\) are the means of images \(x\) and \(y\), \(\sigma_{x}\) and \(\sigma_{y}\) are the standard deviations of images \(x\) and \(y\), \(\sigma_{xy}\) is the cross-covariance between the two images, and \(C_{1}\) and \(C_{2}\) are constants used to avoid instability. The SSIM metric is more robust to brightness and contrast changes than other popular image quality metrics, such as the Mean Squared Error (MSE) and the Peak Signal-to-Noise Ratio (PSNR). However, using SSIM alone as an evaluation metric for generative models is insufficient because the ultimate goal of a generative model is not just to reproduce an exact copy of the input image but to understand the underlying data distribution. For this reason, SSIM is often used in conjunction with other metrics, such as Inception Score (IS) and Frechet Inception Distance (FID), to evaluate the performance of generative models more comprehensively. #### 8.2.3 Inception Score (IS) The Inception Score (IS) quantifies the quality and diversity of generated images [219]. It is computed by combining two scores: the marginal likelihood of the generated images and their quality or diversity. The marginal likelihood is calculated by feeding the generated images into an Inception-v3 classifier trained on the ImageNet dataset and taking the average of the softmax probabilities. Their overall scores determine the quality of the images, while the entropy of these scores reflects the diversity. The formula for the inception score can be expressed as \[IS=e^{(\mathbb{E}_{x\sim p_{g}(x)}[KL(p(y|x)||p(y))])}, \tag{85}\] where \(p_{g}(x)\) is the distribution of generated images, \(p(y|x)\) is the class conditional probability for each image and class, and \(p(y)\) is the marginal likelihood of each class. 
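As a concrete illustration of (85), the following is a minimal NumPy sketch that computes the Inception Score from a matrix of class probabilities, assuming these have already been obtained from a pretrained classifier such as Inception-v3. The common practice of averaging the score over several splits of the generated set is omitted, and all names are illustrative.

```python
import numpy as np

def inception_score(probs, eps=1e-12):
    """Inception Score of Eq. (85).
    probs: array of shape (N, C); row i holds the classifier's softmax p(y | x_i)
    for generated image x_i."""
    probs = np.clip(probs, eps, 1.0)
    p_y = probs.mean(axis=0, keepdims=True)                       # marginal class distribution p(y)
    kl = np.sum(probs * (np.log(probs) - np.log(p_y)), axis=1)    # KL(p(y|x_i) || p(y)) per image
    return float(np.exp(kl.mean()))

# Toy usage with random "softmax" outputs for 100 generated images and 10 classes
logits = np.random.randn(100, 10)
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
print(inception_score(probs))
```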
The Inception Score provides a single number that reflects the quality of generated images and the diversity of these objects and it is easy to calculate using pre-trained Inception models that don't require the true data distribution as input. However, the Inception Score has some limitations that make it insufficient to comprehensively understand the model performance: 1. _Inception Model Dependence:_ Relies heavily on the Inception model, which is pre-trained on the ImageNet dataset. This may limit its usefulness in domains that are very different from ImageNet, as the features learned by the Inception model might not be relevant for those domains. 2. _Mismatch with Human Perception:_ It does not always align with human perception of image quality. Images can achieve high IS by simply being diverse and recognizable, even if they don't look real. 3. _Lack of Discrimination Between Modes:_ It can be high for models that cover many modes of the true data distribution but fail to accurately capture the frequency of those modes. 4. _Doesn't Measure Mode Collapse:_ As mentioned before, a problem in training GANs is mode collapse, where the model generates a limited range of images. The Inception Score can't effectively detect mode collapse as long as the few modes that the GAN does generate are diverse enough. 5. _Unreliable for Low Number of Samples:_ When computed over a small number of samples, the Inception Score can become unreliable due to high variance. #### 8.2.4 Frechet Inception Distance (FID) Unlike the Inception Score, the Frechet Inception Distance (FID) [220] considers the statistics of the images generated by the model and compares them to the statistics of real images. It measures the distance between these two distributions in a feature space provided by a specific layer of the Inception network. Mathematically, the FID is defined as follows. Let us denote the generated images as \(X\) and the real images as \(Y\). Each dataset is passed through an Inception network, and the activations at a specific layer are extracted. The resulting activations are denoted as \(\hat{X^{\prime}}\) and \(Y^{\prime}\) respectively. Assuming that \(X^{\prime}\) and \(Y^{\prime}\) are multivariate Gaussian distributions with means \(\mu_{x}\) and \(\mu_{y}\), and covariance matrices \(\Sigma_{x}\) and \(\Sigma_{y}\), respectively, the Frechet distance between these two multivariate Gaussian distributions is defined as \[FID(X^{\prime},Y^{\prime})=||\mu_{x}-\mu_{y}||^{2}+Tr(\Sigma_{x}+\Sigma_{y}-2( \Sigma_{x}\Sigma_{y})^{1/2}), \tag{86}\] where \(Tr\) is the trace of a matrix, which is the sum of its diagonal elements, \((\Sigma_{x}\Sigma_{y})^{1/2}\) is the matrix square root of the product of the covariance matrices, which is well-defined since the covariance matrices are positive semi-definite. FID has the following desirable properties: 1. _Lower value is better:_ The lower the FID score, the closer the generated image distribution is to the real image distribution. 2. _Considers both mean and covariance:_ FID considers both the means and the covariance of the feature representations, giving it an advantage over metrics considering only one of these aspects. 3. _Less susceptible to noise:_ FID is more robust to noise compared to the Inception Score. Regarding the limitations, like the Inception Score, it relies on the Inception network and, therefore, may not perform well on tasks very different from the ImageNet dataset on which the Inception network was trained. 
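To make (86) concrete, the following is a minimal sketch that computes the FID from two sets of feature activations, assuming they have already been extracted from a pretrained Inception network; it relies on NumPy and SciPy's matrix square root, and all names and the toy feature dimension are illustrative.

```python
import numpy as np
from scipy.linalg import sqrtm

def frechet_inception_distance(feats_x, feats_y):
    """FID of Eq. (86) between two sets of feature activations of shape (N, D) and (M, D)."""
    mu_x, mu_y = feats_x.mean(axis=0), feats_y.mean(axis=0)
    cov_x = np.cov(feats_x, rowvar=False)
    cov_y = np.cov(feats_y, rowvar=False)
    covmean = sqrtm(cov_x @ cov_y)
    if np.iscomplexobj(covmean):
        covmean = covmean.real          # discard tiny imaginary parts from numerical noise
    diff = mu_x - mu_y
    return float(diff @ diff + np.trace(cov_x + cov_y - 2.0 * covmean))

# Toy usage with random 64-dimensional "activations" for generated and real images
gen_feats = np.random.randn(500, 64)
real_feats = np.random.randn(500, 64) + 0.1
print(frechet_inception_distance(gen_feats, real_feats))
```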
## 9 Discussion

Throughout this paper, we have reviewed a variety of loss functions and metrics utilized in deep learning, focusing on different tasks, including regression, classification, object detection, image segmentation, and face recognition. Our review underscores the importance of selecting an appropriate loss function and evaluation metric, depending on the specific task at hand and the characteristics of the data. For regression tasks, for instance, Mean Squared Error (MSE) and Mean Absolute Error (MAE) are widely used due to their simplicity and interpretability. However, there are more robust alternatives, such as the Huber loss, as well as application-specific options such as the Quantile or Poisson loss. Additionally, while RMSE and MAE are commonly used for evaluation, they may not adequately capture the performance of models on all types of data, leading to the use of additional metrics such as \(R^{2}\) and Adjusted \(R^{2}\). In classification, while binary cross-entropy loss is standard for binary classification tasks, options such as hinge loss and weighted binary cross-entropy can provide robustness in situations where classes are imbalanced. Furthermore, accuracy alone does not provide a complete picture of a model's performance, especially on imbalanced datasets; this necessitates additional metrics such as precision, recall, F1-score, and AUC-ROC. The complexity increases when we consider object detection, image segmentation, and face recognition tasks. Here, loss functions do more than measure the difference between predicted and true values: for instance, the YOLO loss in object detection and image segmentation, or the Triplet Loss in face recognition, considers the relative positions and distances of multiple instances or entities. Moreover, metrics used in these tasks, such as Average Precision (AP) and Average Recall (AR) for object detection or Panoptic Quality (PQ) for panoptic segmentation, go beyond typical accuracy-based metrics to evaluate the quality of instance identification and segmentation. As discussed, each loss function and metric has pros and cons, and their appropriateness may depend on the specific application or dataset characteristics. Understanding these trade-offs is critical for the design of effective deep learning systems. Looking forward, there are opportunities for developing new loss functions and metrics that are more robust to data anomalies, consider specific practical constraints, or are tailored to new and emerging deep learning tasks. We also see potential in automated methods that intelligently select or combine different loss functions and metrics based on the given task and data. By advancing our understanding of these critical components of deep learning, we can continue to push the boundaries of what these powerful models can achieve.

## 10 Conclusion

The choice of the loss function and metric can profoundly influence the performance of a model; hence, understanding their nature and implications is essential for anyone working in deep learning. From regression to complex tasks such as object detection, face recognition, and generative models, we have highlighted the importance of utilizing appropriate loss functions and evaluation metrics. Furthermore, considering dataset characteristics, specifically class imbalances and outliers, is vital when designing and evaluating models. There is no one-size-fits-all loss function or metric for every task.
This highlights the continued necessity for researchers to develop task-specific or adaptable loss functions and metrics and further refine the performance and applicability of deep learning models. A promising direction for future work is the exploration of automated methods to intelligently select or combine loss functions and metrics, thereby reducing the manual effort and potential bias involved in their selection. In addition, creating robust loss functions that can handle data anomalies and practical constraints effectively is a fertile area for exploration. The ability to accurately model and predict complex phenomena is increasingly critical in our rapidly evolving digital world. By enhancing our comprehension and application of loss functions and metrics in deep learning, we can significantly contribute to advancing this technology, paving the way for future more sophisticated, effective, and reliable models. ## 11 Acknowledgments We thank the National Council for Science and Technology (CONACYT) for its support through the National Research System (SNI) and the project SIP 20232290. ## Declaration of generative AI and AI-assisted technologies in the writing process During the preparation of this work, the authors used GPT-4 for help with wording, formatting, and styling throughout this work. After using this tool, the authors reviewed and edited the content as needed and take full responsibility for the publication's content.
2307.13941
Perceptual Quality Enhancement of Sound Field Synthesis Based on Combination of Pressure and Amplitude Matching
A sound field synthesis method enhancing perceptual quality is proposed. Sound field synthesis using multiple loudspeakers enables spatial audio reproduction with a broad listening area; however, synthesis errors at high frequencies called spatial aliasing artifacts are unavoidable. To minimize these artifacts, we propose a method based on the combination of pressure and amplitude matching. On the basis of the human's auditory properties, synthesizing the amplitude distribution will be sufficient for horizontal sound localization. Furthermore, a flat amplitude response should be synthesized as much as possible to avoid coloration. Therefore, we apply amplitude matching, which is a method to synthesize the desired amplitude distribution with arbitrary phase distribution, for high frequencies and conventional pressure matching for low frequencies. Experimental results of numerical simulations and listening tests using a practical system indicated that the perceptual quality of the sound field synthesized by the proposed method was improved from that synthesized by pressure matching.
Keisuke Kimura, Shoichi Koyama, Hiroshi Saruwatari
2023-07-26T03:39:16Z
http://arxiv.org/abs/2307.13941v1
Perceptual Quality Enhancement of Sound Field Synthesis Based on Combination of Pressure and Amplitude Matching ###### Abstract A sound field synthesis method enhancing perceptual quality is proposed. Sound field synthesis using multiple loudspeakers enables spatial audio reproduction with a broad listening area; however, synthesis errors at high frequencies called spatial aliasing artifacts are unavoidable. To minimize these artifacts, we propose a method based on the combination of pressure and amplitude matching. On the basis of the human's auditory properties, synthesizing the amplitude distribution will be sufficient for horizontal sound localization. Furthermore, a flat amplitude response should be synthesized as much as possible to avoid coloration. Therefore, we apply amplitude matching, which is a method to synthesize the desired amplitude distribution with arbitrary phase distribution, for high frequencies and conventional pressure matching for low frequencies. Experimental results of numerical simulations and listening tests using a practical system indicated that the perceptual quality of the sound field synthesized by the proposed method was improved from that synthesized by pressure matching. Keisuke Kimura,\({}^{1}\) Shoichi Koyama,\({}^{2}\) Hiroshi Saruwatari\({}^{1}\)\({}^{1}\) The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8656, Japan \({}^{2}\) National Institute of Informatics, 2-1-2 Hitotsubashi, Chiyoda-ku, Tokyo 101-8430, Japan [email protected] sound field synthesis, sound field reproduction, pressure matching, amplitude matching, spatial audio ## 1 Introduction Sound field synthesis/reproduction is a technique to recreate a spatial sound using multiple loudspeakers (or secondary sources). One of its prospective applications is spatial audio for virtual/augmented reality, which enables spatial audio reproduction with a broader listening area than in the case of conventional spatial audio techniques, such as multichannel surround sound and binaural synthesis. Sound field synthesis methods based on the minimization of the squared error between synthesized and desired sound fields, such as _pressure matching_ and _mode matching_[1, 2, 3, 4, 5, 6], have practical advantages compared with methods based on analytical representations derived from boundary integral equations such as _wave field synthesis_ and _higher-order ambisonics_[7, 8, 9, 10, 11, 12, 13], since the array geometry of the secondary sources can be arbitrary. In particular, pressure matching is widely used because of its simple implementation. A well-known issue of the sound field synthesis methods is _spatial aliasing artifacts_. That is, depending on the secondary source placement, the synthesis accuracy can significantly decrease at high frequencies, which can lead to the degradation of sound localization for listeners and distortion of timbre, or _coloration_, of source signals. Thus, the perceptual quality of the synthesized sound field can considerably deteriorate. To improve the perceptual quality, we propose a method combining _amplitude matching_, which was originally proposed for multizone sound field control [14, 15], with pressure matching. Amplitude matching is a method to synthesize the desired amplitude (or magnitude) distribution, leaving the phase distribution arbitrary, whereas pressure matching aims to synthesize both amplitude and phase distributions, i.e., pressure distribution. 
We apply amplitude matching to mitigate the spatial aliasing artifacts by reducing the parameters to be controlled at high frequencies, keeping the range of the listening area broad. On the basis of the human's auditory properties, the interaural level difference (ILD) is known to be a dominant cue for horizontal sound localization at high frequencies, compared with the interaural time difference (ITD) [16, 17]. Therefore, synthesizing the amplitude distribution will be sufficient for sound localization. Furthermore, by prioritizing the synthesis of the desired amplitude distribution, a flat amplitude response is reproduced as much as possible, and coloration effects can be alleviated. We formulate a new cost function combining amplitude matching for high frequencies and conventional pressure matching for low frequencies, which can be solved in a similar manner to amplitude matching. We evaluate the proposed method through numerical experiments in the frequency domain and listening experiments in a real environment. ## 2 Sound Field Synthesis Problem Let \(\Omega\subset\mathbb{R}^{3}\) be a target region for synthesizing the desired sound field. As shown in Fig. 1, \(L\) secondary sources are placed at \(\mathbf{r}_{l}\in\mathbb{R}^{3}\backslash\Omega\) (\(l\in\{1,\dots,L\}\)). The driving signal of the \(l\)th secondary source and its transfer function to the position \(\mathbf{r}\in\Omega\) at the angular frequency \(\omega\) are denoted as \(d_{l}(\omega)\) and \(g_{l}(\mathbf{r},\omega)\), respectively. Then, the synthesized pressure distribution \(u_{\mathrm{syn}}(\mathbf{r},\omega)\) (\(\mathbf{r}\in\Omega\)) is represented as \[u_{\mathrm{syn}}(\mathbf{r},\omega) =\sum_{l=1}^{L}d_{l}(\omega)g_{l}(\mathbf{r},\omega)\] \[=\mathbf{g}(\mathbf{r},\omega)^{\mathsf{T}}\mathbf{d}(\omega), \tag{1}\] where \(\mathbf{g}(\mathbf{r},\omega)\in\mathbb{C}^{L}\) and \(\mathbf{d}(\omega)\in\mathbb{C}^{L}\) are the vectors consisting of \(g_{l}(\mathbf{r},\omega)\) and \(d_{l}(\omega)\), respectively. Hereafter, \(\omega\) is omitted for notational simplicity. The objective is to synthesize the desired sound field \(u_{\mathrm{des}}(\mathbf{r})\) over the target region \(\Omega\). We define the optimization problem to obtain \(\mathbf{d}\) as follows. \[\operatorname*{minimize}_{\mathbf{d}\in\mathbb{C}^{L}}Q(\mathbf{d}):=\int_{\mathbf{r}\in \Omega}\left|\mathbf{g}(\mathbf{r})^{\mathsf{T}}\mathbf{d}-u_{\mathrm{des}}(\mathbf{r}) \right|^{2}\mathrm{d}\mathbf{r} \tag{2}\] Since this problem is difficult to solve owing to the regional integration, several methods to approximately solve it, for example, (weighted) pressure and mode matching [1, 2, 3, 4, 5, 6], have been proposed. ### Pressure Matching Pressure matching is one of the widely used sound field synthesis methods to approximately solve (2). First, the region \(\Omega\) is discretized into \(N\) control points whose positions are denoted as \(\{\mathbf{r}_{n}\}_{n=1}^{N}\). We assume that the control points are arranged densely enough over \(\Omega\). 
Then, the cost function for pressure matching is formulated as the minimization problem of the squared error between the synthesized and desired pressures at the control points as \[\underset{\mathbf{d}\in\mathbb{C}^{L}}{\text{minimize}}\ \|\mathbf{G}\mathbf{d}-\mathbf{u}_{ \text{des}}\|_{2}^{2}+\beta\|\mathbf{d}\|_{2}^{2}, \tag{3}\] where \(\mathbf{G}\in\mathbb{C}^{N\times L}\) is the matrix consisting of transfer functions \(\{g_{l}(\mathbf{r}_{n})\}_{n=1}^{N}\), \(\mathbf{u}_{\text{des}}\in\mathbb{C}^{N}\) is the vector consisting of the desired pressures \(\{u_{\text{des}}(\mathbf{r}_{n})\}_{n=1}^{N}\), and \(\beta\) is the regularization parameter. This least-squares problem (3) has a closed-form solution as follows: \[\hat{\mathbf{d}}=\left(\mathbf{G}^{\text{H}}\mathbf{G}+\beta\mathbf{I}\right)^{-1}\mathbf{G}^{ \text{H}}\mathbf{u}_{\text{des}}. \tag{4}\] Pressure matching is extended as weighted pressure matching [6, 18] by combining with sound field interpolation techniques [19, 20]. Another strategy to approximate \(Q(\mathbf{d})\) is to represent the sound field by spherical wavefunction expansion [21, 22], which is referred to as (weighted) mode matching [4, 5]. It is demonstrated that weighted pressure matching is a special case of weighted mode matching in [6]. ### Spatial Aliasing Artifacts On the basis of the single-layer potential [21, 23], the desired sound field can be perfectly synthesized if secondary sources are continuously distributed point sources on a surface of the target region \(\Omega\). However, owing to the discrete placement of the secondary sources, spatial aliasing artifacts are unavoidable for sound field synthesis methods. The synthesis accuracy can decrease particularly at high frequencies, which can lead to the degradation of sound localization and the coloration of source signals. The properties of spatial aliasing in analytical sound field synthesis methods have been extensively investigated [24], but spatial aliasing can also occur in numerical methods, depending on the secondary source placement [25, 26]. One of the strategies to improve the synthesis accuracy at high frequencies is to make the target region smaller [27, 5]. The challenge here is to improve the perceptual quality of the synthesized sound field by mitigating spatial aliasing artifacts in a broad listening area. ## 3 Proposed Method Based on Combination of Pressure and Amplitude Matching Even when it is difficult to synthesize the desired pressure distribution, i.e., amplitude and phase, for high frequencies, it can be considered that synthesizing only the amplitude distribution can be achieved by leaving the phase distribution arbitrary. On the basis of the human's auditory properties, ILD is known to be a dominant cue for horizontal sound localization above \(1500\ \mathrm{Hz}\), and the dependence on ITD is markedly reduced from \(1000\)-\(1500\ \mathrm{Hz}\)[16, 17], which indicates that synthesizing the amplitude distribution is sufficient for sound localization above \(1500\ \mathrm{Hz}\). This perceptual property is also used in a method for binaural rendering [28]. Furthermore, coloration effects can be alleviated by reproducing the flat amplitude response as much as possible. Therefore, we propose a method combining amplitude matching [14, 15], which is aimed to synthesize the desired amplitude at the control points, with pressure matching. 
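As a point of reference for the combined method introduced below, the pressure-matching baseline of (3) and (4) reduces to a single regularized least-squares solve per frequency. A minimal NumPy sketch follows, with illustrative variable names and random complex data standing in for measured or modeled transfer functions.

```python
import numpy as np

def pressure_matching_drive(G, u_des, beta):
    """Regularized least-squares driving signals of Eq. (4).
    G: (N, L) complex transfer functions from L loudspeakers to N control points.
    u_des: (N,) complex desired pressures at the control points.
    beta: regularization parameter."""
    L = G.shape[1]
    A = G.conj().T @ G + beta * np.eye(L)
    return np.linalg.solve(A, G.conj().T @ u_des)

# Toy usage with random data (L = 32 loudspeakers, N = 1152 control points, as in the experiments)
rng = np.random.default_rng(0)
G = rng.standard_normal((1152, 32)) + 1j * rng.standard_normal((1152, 32))
u_des = rng.standard_normal(1152) + 1j * rng.standard_normal(1152)
d = pressure_matching_drive(G, u_des, beta=1e-3)
```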
By applying amplitude matching for high frequencies, the number of parameters to be controlled can be reduced compared with pressure matching, while keeping the range of the target region, i.e., the number of control points. Thus, we can improve the perceptual quality of the synthesized sound field by reproducing a more accurate amplitude distribution over \(\Omega\).

### Proposed Algorithm

We define the optimization problem of the proposed method as a composite of pressure and amplitude matching as \[\underset{\mathbf{d}\in\mathbb{C}^{L}}{\text{minimize}}\ J(\mathbf{d}):=\left(1-\gamma\right)\|\mathbf{G}\mathbf{d}-\mathbf{u}_{\text{des}}\|_{2}^{2}+\gamma\left\||\mathbf{G}\mathbf{d}|-|\mathbf{u}_{\text{des}}|\right\|_{2}^{2}+\beta\|\mathbf{d}\|_{2}^{2}, \tag{5}\] where \(|\cdot|\) represents the element-wise absolute value of vectors, and \(\gamma\in[0,1]\) is the parameter that determines the balance between pressure and amplitude matching. When \(\gamma=0\), (5) corresponds to pressure matching, and when \(\gamma=1\), it corresponds to amplitude matching; therefore, \(\gamma\) should be set close to \(0\) for low frequencies and \(1\) for high frequencies. For example, \(\gamma\) can be defined as a sigmoid function of \(\omega\) with the transition angular frequency \(\omega_{\text{T}}\) and parameter \(\sigma\) as \[\gamma(\omega)=\frac{1}{1+\mathrm{e}^{-\frac{\sigma}{2\pi}(\omega-\omega_{\text{T}})}}. \tag{6}\] Since the cost function \(J\) in (5) is neither convex nor differentiable owing to the squared error term of the amplitude matching, (5) does not have a closed-form solution. However, the alternating direction method of multipliers (ADMM) can be applied in a similar manner to the algorithm for amplitude matching [15]. First, we introduce the auxiliary variables of amplitude \(\mathbf{a}\in\mathbb{R}_{\geq 0}^{N}\) and phase \(\mathbf{\theta}\in\mathbb{R}^{N}\) such that \(\mathbf{G}\mathbf{d}=\mathbf{a}\odot\mathrm{e}^{\mathrm{j}\mathbf{\theta}}\), where \(\odot\) represents the Hadamard product. Then, (5) is reformulated as \[\underset{\mathbf{d},\mathbf{a},\mathbf{\theta}}{\text{minimize}}\ \left(1-\gamma\right)\left\|\mathbf{a}\odot\mathrm{e}^{\mathrm{j}\mathbf{\theta}}-\mathbf{u}_{\text{des}}\right\|_{2}^{2}+\gamma\left\|\mathbf{a}-|\mathbf{u}_{\text{des}}|\right\|_{2}^{2}+\beta\|\mathbf{d}\|_{2}^{2}\quad\mathrm{subject\ to}\ \mathbf{G}\mathbf{d}=\mathbf{a}\odot\mathrm{e}^{\mathrm{j}\mathbf{\theta}}. \tag{7}\] The augmented Lagrangian function \(L_{\rho}\) for (7) is defined as \[L_{\rho}(\mathbf{a},\mathbf{\theta},\mathbf{d},\mathbf{\lambda})=\left(1-\gamma\right)\left\|\mathbf{a}\odot\mathrm{e}^{\mathrm{j}\mathbf{\theta}}-\mathbf{u}_{\text{des}}\right\|_{2}^{2}+\gamma\left\|\mathbf{a}-|\mathbf{u}_{\text{des}}|\right\|_{2}^{2}+\beta\|\mathbf{d}\|_{2}^{2}+\Re\left[\mathbf{\lambda}^{\text{T}}(\mathbf{G}\mathbf{d}-\mathbf{a}\odot\mathrm{e}^{\mathrm{j}\mathbf{\theta}})\right]+\frac{\rho}{2}\|\mathbf{G}\mathbf{d}-\mathbf{a}\odot\mathrm{e}^{\mathrm{j}\mathbf{\theta}}\|_{2}^{2}, \tag{8}\] where \(\mathbf{\lambda}\in\mathbb{C}^{N}\) is the Lagrange multiplier, \(\Re[\cdot]\) represents the real part of a complex value, and \(\rho>0\) is the penalty parameter.
Figure 1: Sound field synthesis over the target region \(\Omega\) with multiple loudspeakers.
Each variable is alternately updated on the basis of ADMM as \[\left(\mathbf{a}^{(i+1)},\mathbf{\theta}^{(i+1)}\right)=\operatorname*{ arg\,min}_{\mathbf{a},\mathbf{\theta}}\;L_{\rho}\left(\mathbf{a},\mathbf{\theta},\mathbf{d}^{(i)}, \mathbf{\lambda}^{(i)}\right) \tag{9}\] \[\mathbf{d}^{(i+1)}=\operatorname*{arg\,min}_{\mathbf{d}}\;L_{\rho}\left( \mathbf{a}^{(i+1)},\mathbf{\theta}^{(i+1)},\mathbf{d},\mathbf{\lambda}^{(i)}\right)\] (10) \[\mathbf{\lambda}^{(i+1)}=\mathbf{\lambda}^{(i)}+\rho\left(\mathbf{G}\mathbf{d}^{ (i+1)}-\mathbf{a}^{(i+1)}\odot\mathrm{e}^{\mathrm{j}\mathbf{\theta}^{(i+1)}}\right), \tag{11}\] where \(i\) denotes the iteration index. (9) is minimized independently for \(\mathbf{\theta}\) and \(\mathbf{a}\) as \[\mathbf{\theta}^{(i+1)}=\arg\left((1-\gamma)\mathbf{u}_{\mathrm{des}}+ \frac{\rho}{2}\left(\mathbf{G}\mathbf{d}^{(i)}+\frac{\mathbf{\lambda}^{(i)}}{\rho}\right) \right), \tag{12}\] \[\mathbf{a}^{(i+1)}=\frac{\left|2(1-\gamma)\mathbf{u}_{\mathrm{des}}+\rho \left(\mathbf{G}\mathbf{d}^{(i)}+\frac{\mathbf{\lambda}^{(i)}}{\rho}\right)\right|+2\gamma |\mathbf{u}_{\mathrm{des}}|}{\rho+2}. \tag{13}\] The update rule for \(\mathbf{d}\) is obtained by solving (10) as \[\mathbf{d}^{(i+1)}\] \[=\left(\mathbf{G}^{\mathrm{H}}\mathbf{G}+\frac{2\beta}{\rho}\mathbf{I}\right) ^{-1}\mathbf{G}^{\mathrm{H}}\left(\mathbf{a}^{(i+1)}\odot\mathrm{e}^{\mathrm{j}\mathbf{ \theta}^{(i+1)}}-\frac{\mathbf{\lambda}^{(i)}}{\rho}\right). \tag{14}\] By iteratively updating \(\mathbf{\theta}^{(i)}\), \(\mathbf{a}^{(i)}\), \(\mathbf{d}^{(i)}\), and \(\mathbf{\lambda}^{(i)}\) by using (12), (13), (14), and (11), respectively, starting with initial values, we can obtain the optimal driving signal \(\mathbf{d}\). ### Time-Domain Filter Design By using the proposed algorithm described in Sect. 3.1, we can obtain the driving signal in the frequency domain. In practice, a finite impulse response (FIR) filter is obtained by computing the inverse Fourier transform of \(\mathbf{d}\) for target frequency bins. However, if the driving signal is independently determined for each frequency, it can have discontinuities between frequency bins particularly for \(\gamma=1\) because the phase of \(\mathbf{d}\) is arbitrary. These discontinuities can lead to an unnecessarily large FIR filter length. To overcome this issue, the differential-norm penalty, which is also used in amplitude matching [15], can be applied. By introducing the subscript of the index of the frequency bin \(f\in\{1,\ldots,F\}\), we define the differential-norm penalty for the \(f\)th frequency bin as \[D(\mathbf{d}_{f})=\|\mathbf{d}_{f}-\mathbf{d}_{f-1}\|_{2}^{2}. \tag{15}\] The optimization problem for the time-domain filter design is represented as \[\operatorname*{minimize}_{\{\mathbf{d}_{f}\}_{f=1}^{F}}\sum_{f=1}^{F} \left[(1-\gamma)\left\|\mathbf{G}_{f}\mathbf{d}_{f}-\mathbf{u}_{\mathrm{des},f}\right\|_{2 }^{2}\right.\] \[\left.+\gamma\left\|\left|\mathbf{G}_{f}\mathbf{d}_{f}\right|-\left|\mathbf{ u}_{\mathrm{des},f}\right|\right\|_{2}^{2}\right]+\alpha\sum_{f=2}^{F}D(\mathbf{d}_{f})+ \beta\sum_{f=1}^{F}\|\mathbf{d}_{f}\|_{2}^{2}, \tag{16}\] where \(\alpha\) is the weight for the differential-norm penalty term. ADMM is similarly applied to solve (16), but its detailed derivation is omitted. The derivation of ADMM for amplitude matching and a technique to reduce computational complexity can be found in [15]. ## 4 Experiments We conducted experiments to evaluate the proposed method (Proposed) compared with pressure matching (PM). 
First, numerical experimental results are shown to evaluate the ILD of a listener and the amplitude response of the synthesized sound field. Second, the results of listening experiments using a practical system are presented. ### Numerical Experiments Numerical experiments in the frequency domain were conducted under the three-dimensional free-field assumption. The target region \(\Omega\) was a cuboid of \(1.0\;\mathrm{m}\times 1.0\;\mathrm{m}\times 0.04\;\mathrm{m}\). As shown in Fig. 2, 16 omnidirectional loudspeakers were placed along the borders of the squares of \(2.0\;\mathrm{m}\times 2.0\;\mathrm{m}\) at the heights of \(z=\pm 0.1\;\mathrm{m}\), and \(24\times 24\times 2\) control points were regularly placed inside \(\Omega\). Therefore, the total number of loudspeakers and control points were \(L=32\) and \(N=1152\), respectively. The desired sound field was a spherical wave from the point source at \((2.0,0.0,0.0)\;\mathrm{m}\). In Proposed, \(\gamma\) was set using (6) with \(\omega_{\mathrm{T}}/2\pi=2000\) and \(\sigma=0.01\) so that the phase distribution becomes arbitrary above \(2000\;\mathrm{Hz}\). The regularization parameter \(\beta\) for Proposed and PM was set as \(\|\mathbf{G}^{\mathrm{H}}\mathbf{G}\|_{2}^{2}\times 10^{-3}\). The penalty parameter \(\rho\) in (8) was \(1.0\). We evaluated the ILDs of the synthesized sound field when a listener's head was in \(\Omega\). The binaural signals at \(\omega\) for the position \(\mathbf{r}_{\mathrm{H}}\in\Omega\) and azimuth direction \(\phi_{\mathrm{H}}\in[0,2\pi)\) of the listener's head were denoted as \(b_{\mathrm{L}}(\mathbf{r}_{\mathrm{H}},\phi_{\mathrm{H}},\omega)\) and \(b_{\mathrm{R}}(\mathbf{r}_{\mathrm{H}},\phi_{\mathrm{H}},\omega)\) for the left and right ears, respectively. \(b_{\mathrm{L}}\) and \(b_{\mathrm{R}}\) were obtained by calculating the transfer function between the loudspeakers and the listener's ears using Mesh2HRTF [29, 30]. The ILD for \(\mathbf{r}_{\mathrm{H}}\) and \(\phi_{\mathrm{H}}\) was calculated in the frequency domain as \[\mathrm{ILD}(\mathbf{r}_{\mathrm{H}},\phi_{\mathrm{H}}):=10\log_{10}\frac{\sum_{ \omega}|b_{\mathrm{L}}(\mathbf{r}_{\mathrm{H}},\phi_{\mathrm{H}},\omega)|^{2}}{\sum_{ \omega}|b_{\mathrm{R}}(\mathbf{r}_{\mathrm{H}},\phi_{\mathrm{H}},\omega)|^{2}}. \tag{17}\] The evaluation measure was the normalized error (\(\mathrm{NE}\)) between the synthesized and desired ILDs (\(\mathrm{ILD}_{\mathrm{syn}}\) and \(\mathrm{ILD}_{\mathrm{true}}\), respectively) at \(\mathbf{r}_{\mathrm{H}}\) defined as \[\mathrm{NE}(\mathbf{r}_{\mathrm{H}})=\frac{\sum_{\phi_{\mathrm{H}}}| \mathrm{ILD}_{\mathrm{syn}}(\mathbf{r}_{\mathrm{H}},\phi_{\mathrm{H}})-\mathrm{ILD}_{ \mathrm{true}}(\mathbf{r}_{\mathrm{H}},\phi_{\mathrm{H}})|}{\sum_{\phi_{\mathrm{H}}}| \mathrm{ILD}_{\mathrm{true}}(\mathbf{r}_{\mathrm{H}},\phi_{\mathrm{H}})|}, \tag{18}\] where the summation for \(\phi_{\mathrm{H}}\) was calculated for \(\{0,\pi/32,\pi/16,\ldots,31\pi/32\}\;\mathrm{rad}\). The distributions of \(\mathrm{NE}(\mathbf{r}_{\mathrm{H}})\) on the \(x\)-\(y\)-plane at \(z=0\) for Proposed and PM are shown in Fig. 3. The evaluated positions were \(5\times 5\) points on the square of Figure 2: Experimental setup. Blue circles and yellow dots represent sources and control points, respectively. \(1.0\ \mathrm{m}\times 1.0\ \mathrm{m}\). The \(\mathrm{NE}\) of Proposed was smaller than that of \(\mathrm{PM}\) over the region. Note that the ITDs were accurately synthesized in both methods below \(2000\ \mathrm{Hz}\). 
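As a minimal illustration of the evaluation measures (17) and (18), the following NumPy sketch computes the ILD and its normalized error, assuming the binaural spectra for each head orientation have already been simulated; all names and the random toy data are illustrative.

```python
import numpy as np

def ild_db(b_left, b_right):
    """Interaural level difference of Eq. (17) from left/right ear spectra (arrays over frequency)."""
    return 10.0 * np.log10(np.sum(np.abs(b_left) ** 2) / np.sum(np.abs(b_right) ** 2))

def ild_normalized_error(ild_syn, ild_true):
    """Normalized ILD error of Eq. (18); both inputs are arrays over head orientations."""
    return np.sum(np.abs(ild_syn - ild_true)) / np.sum(np.abs(ild_true))

# Toy usage: 32 head orientations, 200 frequency bins of hypothetical binaural spectra
rng = np.random.default_rng(1)
ild_true = np.array([ild_db(rng.standard_normal(200), rng.standard_normal(200)) for _ in range(32)])
ild_syn = ild_true + 0.1 * rng.standard_normal(32)
print(ild_normalized_error(ild_syn, ild_true))
```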
Next, the amplitude response of the synthesized sound field was investigated. In Fig. 4, the amplitude responses at the center of \(\Omega\) of the desired sound field and synthesized sound field of Proposed and PM are plotted. Owing to the large variations in the amplitude response of PM above \(2000\ \mathrm{Hz}\), the timbre of the source signal can be highly distorted. In contrast, the almost flat amplitude response was achieved by Proposed at the center of \(\Omega\). ### Listening Experiments Listening experiments were conducted to evaluate the perceptual quality of the synthesized sound field by using a practical loudspeaker array. The numbers and positions of loudspeakers and control points were the same as those in Fig. 2. The reference loudspeaker was placed at \((2.0,0.5,0.0)\ \mathrm{m}\) as a primary sound source of the desired field. The driving signals of Proposed and PM were obtained by assuming the loudspeakers as point sources. The reverberation time \(T_{60}\) of the room was around \(0.19\ \mathrm{s}\). The perceptual quality was evaluated by using multiple stimuli with a hidden reference and anchor (MUSHRA) [31]. Test participants were asked to rate the difference between the reference and test signals of \(10\ \mathrm{s}\) on a scale from 0 to 100. The reference and test signals are summarized as follows: * Reference: The source signal played back through the reference loudspeaker. * C1/Hidden anchor: The lowpass-filtered source signal up to \(3.5\ \mathrm{kHz}\) played back through the reference loudspeaker. * C2/PM: The sound synthesized by PM and played back through the loudspeaker array. * C3/Proposed: The sound synthesized by Proposed and played back through the loudspeaker array. * C4/Hidden reference: The same as Reference. The participants' head center was approximately positioned at the center of the target region by adjusting the chair, but they were able to rotate and move their heads on the chair freely. The participants were able to listen to each test signal repeatedly. Fourteen male subjects in their 20s and 30s were included, and those who scored more than 60 on the hidden anchor, which was one participant in this test, were excluded from the evaluation. Two source signals, **Vocals** and **Instrumental**, taken from track 10 of MUSDB18-HQ [32] were investigated. Fig. 5 shows the box-and-whisker plots of the scores of each test signal. The median score of Proposed was significantly higher than that of PM for both **Vocals** and **Instrumental**. After the validation of the normality of data for C2 and C3 by the Shapiro-Wilk test, Welch's \(t\)-test was conducted at a significance level of \(0.05\). The \(p\) values for **Vocals** and **Instrumental** were \(9.1\times 10^{-4}\) and \(2.0\times 10^{-3}\), respectively; therefore, there were significant differences in mean scores between C2 and C3 in the both cases. ## 5 Conclusion We proposed a sound field synthesis method based on the combination of pressure and amplitude matching to improve perceptual quality. The cost function is defined as square errors of pressure distribution for low frequencies and amplitude distribution for high frequencies to alleviate the effects of spatial aliasing artifacts. The ADMM-based algorithm to solve this cost function is also derived. In the numerical experiments and listening experiments, it was validated that the perceptual quality of the proposed method can be improved from that of PM. 
Future work includes extended listening experiments to evaluate perceptual quality in more detail. ## 6 Acknowledgment This work was supported by JSPS KAKENHI Grant Number 22H03608 and JST FOREST Program Grant Number JPM-MJFR216M, Japan.
2302.00885
AOP-Net: All-in-One Perception Network for Joint LiDAR-based 3D Object Detection and Panoptic Segmentation
LiDAR-based 3D object detection and panoptic segmentation are two crucial tasks in the perception systems of autonomous vehicles and robots. In this paper, we propose All-in-One Perception Network (AOP-Net), a LiDAR-based multi-task framework that combines 3D object detection and panoptic segmentation. In this method, a dual-task 3D backbone is developed to extract both panoptic- and detection-level features from the input LiDAR point cloud. Also, a new 2D backbone that intertwines Multi-Layer Perceptron (MLP) and convolution layers is designed to further improve the detection task performance. Finally, a novel module is proposed to guide the detection head by recovering useful features discarded during down-sampling operations in the 3D backbone. This module leverages estimated instance segmentation masks to recover detailed information from each candidate object. The AOP-Net achieves state-of-the-art performance for published works on the nuScenes benchmark for both 3D object detection and panoptic segmentation tasks. Also, experiments show that our method easily adapts to and significantly improves the performance of any BEV-based 3D object detection method.
Yixuan Xu, Hamidreza Fazlali, Yuan Ren, Bingbing Liu
2023-02-02T05:31:53Z
http://arxiv.org/abs/2302.00885v1
AOP-Net: All-in-One Perception Network for Joint LiDAR-based 3D Object Detection and Panoptic Segmentation ###### Abstract LiDAR-based 3D object detection and panoptic segmentation are two crucial tasks in the perception systems of autonomous vehicles and robots. In this paper, we propose All-in-One Perception Network (AOP-Net), a LiDAR-based multi-task framework that combines 3D object detection and panoptic segmentation. In this method, a dual-task 3D backbone is developed to extract both panoptic- and detection-level features from the input LiDAR point cloud. Also, a new 2D backbone that intertwines Multi-Layer Perceptron (MLP) and convolution layers is designed to further improve the detection task performance. Finally, a novel module is proposed to guide the detection head by recovering useful features discarded during down-sampling operations in the 3D backbone. This module leverages estimated instance segmentation masks to recover detailed information from each candidate object. The AOP-Net achieves state-of-the-art performance for published works on the nuScenes benchmark for both 3D object detection and panoptic segmentation tasks. Also, experiments show that our method easily adapts to and significantly improves the performance of any BEV-based 3D object detection method. ## I Introduction Understanding the surrounding 3D environment is an essential component in autonomous driving and robotics to ensure safety and reliability. LiDAR-based 3D object detection and panoptic segmentation are two common tasks performed by the perception systems. For 3D object detection, foreground objects such as cars, pedestrians, etc., are classified and localized by 3D bounding boxes. For 3D panoptic segmentation, each point in the scene is categorized with a semantic label and points for the same foreground object are assigned a unique instance ID. For efficiency, most detection methods [1, 2, 3] attempt to extract features from a summarized representation of the scene. Some quantize LiDAR points into volumetric grids, known as voxels, and then process the voxels with a 3D Convolutional Neural Network (CNN). Others project the point cloud or 3D voxels into 2D grids in Bird's-Eye-View (BEV) or Range-View (RV) and process the grids by a 2D CNN. Furthermore, the CNNs deployed typically perform down-sampling steps to enlarge the receptive fields of convolution kernels and extract features efficiently. However, while quantization, projection, and down-sampling reduce computational cost, they result in considerable information loss about the scene. Likewise, LiDAR-based 3D panoptic segmentation methods [5, 6, 7, 8] follow similar point cloud data representation strategies. While recent 3D object detection methods mostly operate in the scale-invariant BEV plane [2, 10, 11], many 3D panoptic segmentation methods rely on the denser and more detailed object representations in RV [5, 6, 9]. Considering the strengths of each projection view and complementary goals of each perception task, [30] demonstrates that information extracted by the backbone of RV-based panoptic segmentation model can also be helpful for object detection. This approach presents a question: can object detection and panoptic segmentation networks be more integrated, so that both tasks benefit from one another? To this end, we propose the All-in-One Perception Network (AOP-Net) for LiDAR-based joint 3D object detection and panoptic segmentation. 
In this multi-task framework, 3D object detection and panoptic segmentation are jointly trained and take advantage of one another for performance gains. More specifically, a dual-task 3D backbone is developed to extract both detection- and panoptic-level features from the voxelized 3D space. A new 2D backbone for 3D object detection is proposed that extensively fuses Multi-Layer Perceptron (MLP) layers into CNN, enabling a larger receptive field and deeper pixel-wise feature extraction while exhibiting a similar model complexity compared to traditional 2D backbones used for detection [2, 10]. Finally, to recover lost useful features due to down-sampling, a novel Instance-based Feature Retrieval (IFR) module is proposed, which leverages the instance-level estimation from panoptic segmentation to recover object-specific features and highlight corresponding locations to guide object detection. Our contributions can be summarized into four-fold: 1) A multi-task framework is proposed for joint LiDAR-based 3D object detection and panoptic segmentation. In this method, both tasks achieve performance gains as they mutually benefit from one another. 2) A deep and efficient 2D backbone that mixes MLPs and convolution layers for 3D object detection. 3) The IFR module that augments the detection head and recovers useful discarded multi-scale features based on panoptic segmentation estimations. 4) Through experiments, we show that each new component provides effective performance gain, and that the proposed framework easily adapts to and improves the performance of any BEV-based 3D object detection method. ## II Related Work ### _3D Object Detection_ Efficient 3D object detection methods quantize the 3D space using small voxel grids and operate on the BEV plane. Then, features are extracted to encode each voxel. VoxelNet [1] designs a learnable Voxel Feature Encoder (VFE) layer to encode points inside each voxel and then exploits a 3D CNN to extract features across voxel grids. SECOND [12] proposes 3D Sparse convolution layers to reduce the computations of 3D convolution by leveraging the sparsity of voxel grids. PointPillars [2] further improves the inference speed by reducing the voxel number along the height dimension to one and using a 2D CNN to process the generated pseudo image. CenterPoint [10] is an anchor-free object detection method that addresses the challenge caused by anchor-based methods. CenterPoint designed a center-based detection head for detecting the center of 3D boxes in BEV plane. This approach significantly improves the detection accuracy as it does not need to fit axis-aligned boxes to rotated objects. ### _3D Panoptic Segmentation_ 3D panoptic segmentation methods usually extend from an RV-based semantic segmentation network, with an additional mechanism that groups foreground points into clusters, each representing a segmented instance. LPSAD [7] uses a shared encoder with two decoders, where the first decoder predicts semantic tags and the second predicts the center offset for each foreground point, and subsequently it uses an external algorithm such as BFS and HDBSCAN [14] to group nearby shifted points into the same cluster. Panoster [13] uses a learnable clustering method to assign instance labels to each point. CPSeg [6] is a cluster-free panoptic segmentation method that segments objects by pillarizing points according to their learned embeddings and finding connected pillars through a pairwise embedding comparison. 
### _3D Multi-task Perception_ Few attempts have been made to leverage the complementary nature of segmentation and detection tasks. PointPainting [27] and FusionPainting [28] append semantic class scores from pretrained segmentation networks to the point cloud before feeding to a 3D object detection model. A similar method [30] to our framework was introduced recently, in which a panoptic segmentation model and an object detection model are jointly trained. Its Cascade Feature Fusion Module fuses BEV and RV features from detection and panoptic segmentation backbone, respectively. Its class-wise foreground attention module embeds predicted foreground semantic scores in detection features. In [30], although panoptic segmentation is leveraged to bring improvement to object detection, the two tasks fail to mutually benefit. ## III Method ### _Overview_ We propose a framework that jointly performs 3D object detection and panoptic segmentation as shown in Figure 1. In this multi-task method, a BEV-based 3D object detection model and an RV-based 3D panoptic segmentation model are deeply integrated, so that the performance of both tasks can improve substantially. We exploit a simplified version of CPSeg [6], a U-Net architecture with two task-specific decoders, for panoptic segmentation due to its real-time performance and high accuracy. For object detection, we rely on the detection head from the CenterPoint [10] for its superior performance. To integrate the two tasks into one unified framework, we propose a dual-task 3D backbone to extract multi-scale features from voxelized point cloud. These features are compressed and projected to the RV plane, fused with the set of features extracted directly from the RV-projected point cloud via three Convolutional Bottleneck Attention Modules (CBAM) [22], and fed to the panoptic head. This lightweight operation effectively augments the panoptic head with detection-level features. To introduce panoptic-level features to object detection, we exploit the cascade feature fusion and class-wise foreground attention modules in [30], shown as Multi-view Feature Fusion in Figure 1. The lowest resolution voxel features from the dual-task 3D backbone are projected to BEV for the object detection task. These features encode the instance- and semantic-level information besides the detection-level information. Also, inspired by [15], we propose a more effective 2D backbone that mixes MLPs with convolutional layers to process the features for the detection head. Moreover, a novel IFR module augments the detection head by leveraging the predicted instance masks to recover relevant object features that are otherwise lost during down-sampling operations in the dual-task 3D backbone. Details of the proposed modules are described below. ### _Dual-task 3D Backbone_ Shown in Figure 2, the 3D backbone exploited in our method is responsible for extracting features from 3D voxels. To efficiently transfer features from 3D backbone for the object detection task, we follow [1, 12, 10] and map 3D features in the coarsest resolution \((\frac{Z}{16}\times\frac{H}{8}\times\frac{W}{8})\) to BEV and feed them to the 2D backbone. However, in contrast to former methods, detailed object information embedded in two sets of higher resolution voxel features will be recovered later in the IFR module. 
Moreover, three sets of higher resolution voxel features are projected to RV, fused with features extracted directly from the RV-projected point cloud via corresponding CBAMs, and processed by CPSeg's RV encoding blocks. These multi-scale voxel-based features augment the RV-based panoptic head. Meanwhile, this augmentation also forces the 3D backbone to develop a richer set of semantic- and instance-level features.

### _Simplified ConvMLP (SC) Backbone_

Recently, MLP-based vision backbones have been receiving more attention [17, 18, 19, 16, 15] for their ability to compete with, or even outperform, fully convolution-based backbones in dense vision prediction tasks. Inspired by the ConvMLP [15] used in image domains, we propose a simplified version of this architecture to process the BEV-projected features from the 3D backbone before feeding them to the detection head. The simplified ConvMLP (SC) block and the overall proposed 2D backbone architecture are shown in Figure 3. Compared to the original ConvMLP block, we remove the last MLP layer and add a skip connection over the convolution layer to further ease the gradient flow. In this architecture, the MLP block enables the interaction of features within each spatial location, while the subsequent depth-wise convolution enables efficient interaction across spatial locations. In the backbone, consecutive Conv blocks (each consisting of a convolution layer followed by batch normalization and ReLU) are first applied to enhance spatial feature interactions. Then, the resulting features are sent through the first set of SC blocks, down-sampled, and fed to another set of SC blocks. The outputs of these two sets of SC blocks are then matched and concatenated as the final set of 2D features, which is sent to the detection head. Compared to the regular 2D backbone in [2, 10], the proposed 2D backbone boosts the detection performance without a steep increase in the model complexity. More specifically, compared to a regular 3x3 convolution layer, an SC block requires 54.6% less memory and 54.8% fewer FLOPs. Thus, by replacing regular convolutions with the lighter SC block, we can afford to stack more consecutive convolutions at a single resolution, achieving a larger receptive field without the need for further down-sampling. In addition, unlike other CNNs that employ a single 1x1 convolution layer for channel depth adjustment, this architecture employs MLP blocks extensively to emphasize feature extraction within each BEV plane location.

Fig. 1: Overall framework of the proposed joint 3D object detection and panoptic segmentation. The proposed modules are shown with blue color. Best viewed in color.
Fig. 2: Architecture of the dual-task 3D backbone in the proposed multi-task framework. Best viewed in color.
Fig. 3: The proposed 2D backbone for the detection task.

### _Instance-based Feature Retrieval (IFR)_

To augment the coarse-scale features extracted by the SC backbone, discarded features during down-sampling operations in the dual-task 3D backbone can be effectively leveraged. For this aim, the IFR module is proposed, shown in Figure 4. This module recovers multi-scale detailed features for each candidate object from the \((\frac{Z}{2}\times\frac{H}{2}\times\frac{W}{2})\) and \((\frac{Z}{4}\times\frac{H}{4}\times\frac{W}{4})\) resolution feature maps in the dual-task 3D backbone. Then, it constructs a new set of features to augment the detection head.
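Referring back to the SC block described in the previous subsection, the following is a minimal PyTorch sketch of one plausible reading of it: a per-location channel MLP wrapped in a residual connection, followed by a residually-wrapped 3x3 depth-wise convolution. The layer widths, normalization placement, and expansion ratio are assumptions made for illustration and are not taken from the authors' implementation.

```python
import torch
import torch.nn as nn

class SCBlock(nn.Module):
    """Hypothetical Simplified ConvMLP (SC) block: channel MLP + skip, depth-wise conv + skip."""

    def __init__(self, channels, expansion=2):
        super().__init__()
        self.mlp = nn.Sequential(                      # per-location (1x1) MLP
            nn.Conv2d(channels, channels * expansion, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels * expansion, channels, kernel_size=1),
        )
        self.dwconv = nn.Sequential(                   # spatial mixing across BEV locations
            nn.Conv2d(channels, channels, kernel_size=3, padding=1, groups=channels),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        x = x + self.mlp(x)        # feature interaction within each BEV location
        x = x + self.dwconv(x)     # skip connection over the depth-wise convolution
        return x

# Toy usage on a hypothetical BEV feature map (batch 2, 64 channels, 128x128 grid)
feat = torch.randn(2, 64, 128, 128)
out = SCBlock(64)(feat)
```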
First, to reduce computational complexity, on all BEV plane locations, voxel features along the height dimension are averaged to form averaged-voxels features. Then, a selection strategy is proposed to select averaged-voxels based on instance masks estimated by the panoptic head. Specifically, given the \(l\)th scale \(s_{l}\) averaged-voxels features and instance masks of the same scale on the BEV plane, the mean \(X\) and \(Y\) coordinates of each instance are calculated. This gives the mass center location for each instance. Then, from all the BEV locations that represent each instance, the \(K_{s_{l}}\) nearest averaged-voxels to each instance mass center are selected. After sampling \(K_{s_{l}}\) averaged-voxels for each instance, the relative coordinates of each sampled averaged-voxel to its instance mass center on both \(x-\) and \(y-\)axis are computed and concatenated to the corresponding feature vector as relative position embedding. This allows the IFR module to be aware of the geometry of sampled averaged-voxels for each instance. These feature vectors go through a VFE [1] and an MLP layer consecutively. Then, the resulting feature vectors for each instance are pooled using max- and average-pooling layers and concatenated. This is illustrated in the following equations: \[v_{j,s_{l}}^{i}=MLP(VFE(Concat(f_{j,s_{l}}^{i},p_{j,s_{l}}^{i}))) \tag{1}\] \[v_{s_{l}}^{i}=Concat(AvgPool(v_{j,s_{l}}^{i}),MaxPool(v_{j,s_{l}}^{i})) \tag{2}\] where \(f_{j,s_{l}}^{i}\) and \(p_{j,s_{l}}^{i}\) denote the feature vector and position embedding vector for the \(j\)th averaged-voxel belonging to \(i\)th instance in \(l\)th scale, respectively. Each resulting single feature vector \(v_{s_{l}}^{i}\) encodes and summarizes the sampled averaged-voxels features of the \(i\)th instance that it corresponds to. The extracted features of an instance in the higher resolution \(s_{l}\) are concatenated to every sampled averaged-voxel feature vector of that instance in the lower resolution \(s_{l+1}\) using a cascade connection prior to feeding to the VFE layer. This enables the lower resolution averaged-voxels of an instance to leverage the higher resolution encoded features of the same instance. Finally, the resulting encoded feature vectors of each instance in different resolutions are concatenated and distributed to all the BEV locations that correspond to the instance according to the coarse-scale instance masks. This new set of feature maps is then concatenated to the output features from the 2D backbone and fed to the detection head. By doing so, we effectively augment the detection head by recovering and processing multi-scale information that is unique for each instance and commonly lost prior to the 2D backbone. ## IV Experiments ### _Implementation Details_ The proposed framework is implemented using the PyTorch [23] and OpenPCDet [24] libraries. AOP-Net is based on the single-stage CenterPoint detection method. For panoptic segmentation, we received the original CPSeg source code [6] from the authors. The network was trained from scratch for 140 epochs with Adam optimizer on 8 Tesla V100 GPUs. The One Cycle policy was used for learning rate scheduling with an initial rate of \(10^{-3}\). Also, the weight decay was set to \(10^{-2}\). In IFR module, we used 2 mid- and high-resolution feature maps from the dual-task 3D backbone and set the \(K_{s_{1}}\) to 16 and \(K_{s_{2}}\) to 25. \(c_{1}\), \(c_{2}\), \(H\), \(W\), and \(Z\) are set to be 32, 64, 1024, 1024, and 32, respectively. 
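To make the sampling and pooling of Eqs. (1)-(2) concrete, the following is a minimal single-scale sketch of the per-instance aggregation; the VFE is replaced by a plain MLP, the cascade connection across scales is omitted, and the function and variable names are illustrative assumptions rather than the released implementation.

```python
import torch
import torch.nn as nn

def aggregate_instance(bev_feats: torch.Tensor, mask: torch.Tensor,
                       k: int, encoder: nn.Module) -> torch.Tensor:
    """Summarize one instance from averaged-voxel BEV features.

    bev_feats: (C, H, W) voxel features averaged over the height dimension.
    mask:      (H, W) boolean instance mask predicted by the panoptic head.
    k:         number of averaged-voxels sampled nearest to the mass center.
    encoder:   stand-in for the VFE + MLP applied to each sampled voxel.
    Returns a single vector concatenating average- and max-pooled encodings.
    """
    ys, xs = torch.nonzero(mask, as_tuple=True)            # instance BEV locations
    coords = torch.stack([xs, ys], dim=1).float()          # (N, 2)
    center = coords.mean(dim=0, keepdim=True)              # instance mass center
    idx = (coords - center).norm(dim=1).topk(
        min(k, coords.shape[0]), largest=False).indices    # K nearest to center
    sel = coords[idx]                                      # (K, 2)
    feats = bev_feats[:, sel[:, 1].long(), sel[:, 0].long()].T  # (K, C)
    rel = sel - center                                      # relative position embedding
    encoded = encoder(torch.cat([feats, rel], dim=1))       # cf. Eq. (1)
    return torch.cat([encoded.mean(dim=0),                  # cf. Eq. (2)
                      encoded.max(dim=0).values])

# Toy usage with an MLP standing in for the VFE + MLP pair.
C, H, W, K = 64, 128, 128, 16
encoder = nn.Sequential(nn.Linear(C + 2, 128), nn.ReLU(), nn.Linear(128, 128))
bev = torch.randn(C, H, W)
mask = torch.zeros(H, W, dtype=torch.bool)
mask[60:70, 40:55] = True
print(aggregate_instance(bev, mask, K, encoder).shape)  # torch.Size([256])
```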
The hidden ratio for MLP in the SC block, IFR's VFE, and IFR's MLP are set to be 2, 4, and 4, respectively.

### _Dataset_

nuScenes [20] is a large-scale dataset for autonomous driving that includes both 3D object detection and panoptic segmentation labels. For 3D object detection, mean Average Precision (mAP) is a metric that is used for evaluation on this benchmark. Moreover, nuScenes Detection Score (NDS) is another metric used, which is a weighted sum of mAP and box estimation quality metrics that account for translation, scale, orientation, attributes, and velocity. For 3D panoptic segmentation, we use the mean Panoptic Quality (PQ), which considers both mean Recognition Quality (RQ) and mean Segmentation Quality (SQ), to evaluate the performance. Waymo Open Dataset [21] is a large-scale 3D object detection dataset. As it lacks panoptic segmentation labels, we prepared the instance and foreground semantic labels using ground-truth 3D bounding boxes, and assigned a single background class to all points outside bounding boxes. We report the mAP and the mean Average Precision weighted by Heading (mAPH) for the 3D object detection task. For Waymo, we trained the proposed model on \(20\%\) of the training data and evaluated on the whole validation set.

### _Results_

#### Iv-C1 3D Object Detection

In Tables I and II, we compare the evaluation results between the proposed method and CenterPoint on the nuScenes and Waymo validation sets. The AOP-Net is based on the CenterPoint first stage. As shown, the proposed method outperforms CenterPoint significantly in both mAP and NDS on nuScenes, and considerably in mAP and mAPH on Waymo.

Fig. 4: The proposed Instance-based Feature Retrieval (IFR) module. Best viewed in color.

As elaborated in the ablations, improvements in the detection of large and small objects can be attributed to the SC Backbone and the IFR module, respectively. The comparison between AOP-Net and other published state-of-the-art 3D object detection methods on the nuScenes test set is shown in Table III. It can be seen that the proposed method outperforms all other methods in terms of NDS and all five error metrics that represent the box estimation quality, including the mean average errors in translation (mATE), scale (mASE), orientation (mAOE), velocity (mAVE), and attribute (mAAE). This improvement can be attributed to the guidance received from the panoptic segmentation module, both direct (exploitation of panoptic segmentation predictions in IFR) and indirect (back-propagation of the panoptic loss in the backbones).

#### Iv-C2 3D Panoptic Segmentation

In Table IV, comparing AOP-Net with other state-of-the-art published methods on the nuScenes test set, we validate that the AOP-Net obtains a higher mean PQ. Compared to the second row, which is a standalone simplified version of CPSeg originally incorporated in AOP-Net, the AOP-Net receives the additional injection of multi-scale detection-level features, which leads to significantly better panoptic performance. In Figure 5, the benefits of the unified multi-task framework towards panoptic segmentation are visible. In example (a), the standalone CPSeg struggles to predict the semantics of distant points, leading to three false positives and one false negative. In (b), CPSeg under-segments on the left and over-segments near the top as it is less confident about regions that are less visible behind a large body of points.
In both cases, the dual-task 3D backbone in the AOP-Net provides effective multi-scale 3D features to prevent these errors.

### _Ablation Studies_

#### Iv-D1 Effect of each proposed component

The contributions of the AOP-Net modules are shown in Table V. It can be seen that each of these modules, as well as their combinations, adapts well to the baseline and provides strong performance gains. Specifically, in Table VI, it can be seen that incorporating the dual-task 3D backbone significantly boosts performance for both tasks. In particular, the improvement of AOP-Net in panoptic segmentation is mainly attributed to this module. As the 3D backbone is conditioned on both tasks, the learned features are enriched and provide additional clues regarding foreground objects. Moreover, the 3D backbone captures features without the occlusion or scale-variation issues common for feature extraction in the RV plane. When projected to RV and fused with the already extracted RV-based features, this set of features is more reliable and helpful in segmenting occluded and distant objects. These factors lead to a significant improvement in both mIOU and PQ. In Table VII, we demonstrate that improvements in the detection of large object classes can be attributed to the enlarged receptive fields and more extensive channel-wise feature extraction from the SC Backbone. In Table VIII, it can be seen that IFR plays a strong role in better detecting small isolated objects. This is because IFR influences the detection head to pay more attention to multi-scale features that are relevant to foreground objects. By reintroducing this information that is otherwise lost in the down-sampling process in the 3D backbone, the detection head improves both precision (by refining possible candidates) and recall (by retrieving missed objects that are better detected in RV panoptic segmentation).

#### Iv-D2 Variations of ConvMLP Backbones

In Table IX, a similarly sized network (in terms of # parameters) that uses the original ConvMLP blocks has fewer consecutive layers and lower performance. Also, comparing rows 2-4, having \(5\) and \(10\) SC blocks gives the best trade-off in terms of performance and complexity.

Fig. 5: Comparison of instance segmentation results between CPSeg and AOP-Net. Best viewed in color.

Fig. 6: Comparison of qualitative results between PointPillars and AOP-Net (PointPillars) for 3D object detection. The red and blue colors show the ground-truth and the predicted boxes, respectively. Best viewed in color.

#### Iv-D3 Other BEV-based 3D object detectors in the proposed framework

To show that AOP-Net can also work with anchor-based detection methods, we performed experiments by adapting the AOP-Net to PointPillars [2] and SECOND [12]. The results of these experiments are shown in Table X. Also, we increased the model complexity of PointPillars and SECOND and named them Complex PointPillars and Complex SECOND. It can be seen that by simply increasing the model complexity, the performance boost is either nonexistent or limited. However, under the proposed framework, the mAP and NDS are improved remarkably. The effects of the proposed framework are evident in Figure 6. It can be seen that in both examples (a) and (b), due to the loss of fine-scale features during down-sampling, PointPillars fails to detect small objects. On the other hand, in the proposed method, these objects are recognized by the RV-based segmentation module and their fine-scale features are recovered by the IFR module, allowing for their detection.
Moreover, in example (b), PointPillars produces two false positives from afar, while the AOP-Net is properly guided by panoptic-level information and circumvents these mistakes. ## V Conclusions We propose AOP-Net, an all-in-one perception framework for LiDAR-based joint 3D object detection and panoptic segmentation. In this framework, we design the dual-task 3D backbone to consider both semantic- and instance-level information of the scene, thereby augmenting both the BEV-based detection head and RV-based panoptic head. Also, the multi-scale 3D voxel features resulted from this backbone are used to augment the single-scale RV feature maps in the panoptic segmentation task. Moreover, a deep and efficient 2D backbone based on the simplified ConvMLP (SC) block is proposed, which results in detection improvement. Finally, to recover features lost during down-sampling operations in the dual-task 3D backbone, a novel instance-based feature retrieval (IFR) module is proposed that relies on predicted instance masks and recovers features to augment the detection backbone. Experimental results on nuScenes and Waymo datasets show strong improvements in both 3D panoptic segmentation and object detection tasks under the proposed framework, while demonstrating that the detection accuracy of any BEV-based 3D object detection can be improved using the proposed strategy.
2310.11541
MUST&P-SRL: Multi-lingual and Unified Syllabification in Text and Phonetic Domains for Speech Representation Learning
In this paper, we present a methodology for linguistic feature extraction, focusing particularly on automatically syllabifying words in multiple languages, with a design to be compatible with a forced-alignment tool, the Montreal Forced Aligner (MFA). In both the textual and phonetic domains, our method focuses on the extraction of phonetic transcriptions from text, stress marks, and a unified automatic syllabification (in text and phonetic domains). The system was built with open-source components and resources. Through an ablation study, we demonstrate the efficacy of our approach in automatically syllabifying words from several languages (English, French and Spanish). Additionally, we apply the technique to the transcriptions of the CMU ARCTIC dataset, generating valuable annotations available online\footnote{\url{https://github.com/noetits/MUST_P-SRL}} that are ideal for speech representation learning, speech unit discovery, and disentanglement of speech factors in several speech-related fields.
Noé Tits
2023-10-17T19:27:23Z
http://arxiv.org/abs/2310.11541v1
# MUST&P-SRL: Multi-lingual and Unified Syllabification in Text and Phonetic Domains for Speech Representation Learning

###### Abstract

In this paper, we present a methodology for linguistic feature extraction, focusing particularly on automatically syllabifying words in multiple languages, with a design to be compatible with a forced-alignment tool, the Montreal Forced Aligner (MFA). In both the textual and phonetic domains, our method focuses on the extraction of phonetic transcriptions from text, stress marks, and a unified automatic syllabification (in text and phonetic domains). The system was built with open-source components and resources. Through an ablation study, we demonstrate the efficacy of our approach in automatically syllabifying words from several languages (English, French and Spanish). Additionally, we apply the technique to the transcriptions of the CMU ARCTIC dataset, generating valuable annotations available online1 that are ideal for speech representation learning, speech unit discovery, and disentanglement of speech factors in several speech-related fields.

Footnote 1: [https://github.com/noetits/MUST_P-SRL](https://github.com/noetits/MUST_P-SRL)

## 1 Introduction

Modern speech technologies have moved towards end-to-end models that constitute black-box systems and do not allow for explainability of their predictions or decisions. This lack of explainability has raised concerns in industry because of the need to identify the causes or reasons behind decisions. This led to the advent of the concept of Explainable AI (XAI), whose goal is to discover ways to explain why a certain prediction was made by a system. One avenue for this is the field of representation learning, which incorporates unsupervised/self-supervised learning and aims to discover robust and meaningful representations for various tasks and to analyze their relationship with expert knowledge (e.g. Tits et al. (2019, 2021)). It is well known that in Deep Learning, knowledge learned on one task can be transferred to another, and Self-Supervised Learning is probably the most versatile Transfer Learning technique today. Transfer Learning Tan et al. (2018) is a widely used technique in Deep Learning for leveraging models trained on related tasks, for which abundant datasets exist, towards tasks for which few labels exist. This principle has been applied successfully to speech technology applications Wang and Zheng (2015) with little available data, such as speech recognition for low-resource languages, emotion recognition in speech Tits et al. (2018), emotional or expressive speech synthesis Tits et al. (2020, 2019) or voice conversion Zhou et al. (2022). Self-supervised learning is thus a specific form of Transfer Learning where a model is trained to learn representations of input data without the need for explicit supervision. These representations are the projection of the input data into a multidimensional space, called the latent space, that captures information that is important for the prediction of characteristics. There is, however, still a lot of work to do to understand how these latent spaces are structured, what characteristics can be predicted, how they can be disentangled, etc. In this paper, we are particularly interested in providing fine-grained expert annotations that can be aligned with a speech signal, allowing for exploration of the relationships between speech representations and expert knowledge.
To this end, our rich phonetic annotations, augmented with syllable and stress information, serve as strong supervisory signals. Moreover, these phonetic transcriptions, tied to their written form, provide an explicit correspondence between the discrete symbols and their variably pronounced forms encountered in natural language. This could facilitate the discovery of speech units directly from the data. Hence, this research can provide valuable insights and push the boundaries of current methods in automatic speech recognition, synthesis, and analysis. Conducting linguistic feature extraction, such as phonetic transcriptions, syllable separations, and word stress, plays an essential role in a multitude of fields, such as speech representation learning, speech synthesis [13, 12], speech recognition, and speaker identification. The ability to accurately mark syllable boundaries in words is fundamental for understanding language structure and its phonetic variations, which in turn aids in efficient decoding and analysis of speech data. Among its potential use-cases, applications in the realms of second language learning and more specifically computer-assisted pronunciation training (CAPT) [14] can greatly benefit from the reliable extraction, ensuring the development of effective learning materials that enhance pronunciation and overall language proficiency in learners. Nevertheless, the extraction of linguistic features poses challenges due to the inherent complexity and variability observed in natural languages. Dialectal variations, phonetic ambiguities, and inconsistencies in syllable boundaries are contributing factors that hinder the development of a reliable and consistent system for extracting linguistic features. Moreover, there is a lack of resources that offer consistent phonetic transcriptions encompassing stress marks, phone boundaries, and syllable boundaries across both pronunciation and spelling domains. In this work, our goal is to define a methodology for linguistic feature extraction (phonetic transcriptions, stress marks, automatic syllabification in text and phonetic transcription domains) that is multilingual and compatible with forced-alignment tools. We have developed a process based on existing open-source building blocks that includes different steps and checks, as well as a consensus mechanism to extract the best possible linguistic features from text. The Montreal Forced Aligner (MFA) [15] is an essential tool in our analysis for its function in phonetic alignment, providing detailed pronunciation transcriptions. It is important to note that, while MFA is commonly used to align audio signals with corresponding text transcriptions, we consider that task to be already efficiently handled by MFA's acoustic models. Our work aims to enrich this process: we focus on aligning phonetic syllabifications with graphemic representations of the corresponding words, essentially extracting and aligning units of sounds for precise syllabification across languages. We consciously designed our system to be fully compatible with the MFA, providing a complementary solution to the existing forced-alignment process. By aligning phonetic syllabifications with their corresponding graphemic representations and creating a multimodal mapping, our methodology opens up new avenues of exploration in the field of speech representation learning. ## 2 Related Work Automatic syllabification is a challenging task for natural language processing due to the ambiguity of syllable boundaries. 
Different techniques have been developed to address this problem, including rule-based and data-driven approaches. In this section, we review some relevant studies on automatic syllabification in English, Spanish, Italian, and Portuguese. For English, the study presented in [13] compares five different algorithms, including two rule-based approaches and three data-driven techniques. The study finds that data-driven methods outperform rule-based systems in terms of word and juncture accuracy. Furthermore, syllabification in the pronunciation domain is easier than in the spelling domain. The study also highlights the challenge of establishing a gold standard corpus for evaluation due to the lack of consensus in the entries of multiple lexical databases. However, in their experiment, they apply the two rule-based algorithms in the spelling domain without Figure 1: Block diagram of the linguistic feature extraction system described in Section 3 any adaptation, and they do not consider the use of the Sonority Sequencing Principle. The Sonority Sequencing Principle (SSP) [25] is a widely used rule for syllabification, which states that syllables are formed by increasing then decreasing sonority. It is based on the sonority hierarchy, which assigns a relative sonority value to each phone. Vowels have the highest sonority, followed by approximants (such as /r/ and /w/), fricatives, nasals, and finally stops, which have the lowest sonority. The linguistic literature identified exceptions to this principle, the main one being probably the sibilant-stop consonant cluster [1, 13, 23, 24]. Implementations of the principle with processing of these exceptions been successfully applied for automatic syllabification in several languages in the pronunciation domain with very high word accuracies [1, 15, 16]. But it has also been applied in the spelling domain with some success for some languages. In Spanish, [1] points out that syllabification follows basic rules but may deviate due to various factors, such as diphthongs or hiatuses. Some variations in syllabification are also related to geographical and dialectal criteria. Therefore, automatic syllabification in Spanish requires taking into account these variations. For Italian, [1] presents a rule-based method that uses the Sonority Sequencing Principle (SSP) and additional rules specific to Italian. The study evaluates their method on a dataset of sentences that were manually syllabified and reports an accuracy of 0.98-1 for some of the subjects. We could not find an application of SSP in the spelling domain in English. The reason is maybe because a naive application of SSP in the spelling domain would not perform very well. Many data-driven syllabification methods using different levels of complexities of machine/deep learning models, that have the potential to be applied to several languages, have been developed but mainly for the phonetic domain only [1, 13, 14, 15]. In this literature review, we did not find any method that is capable of syllabification in both pronunciation and spelling domains and study the consistency between them. In this work, we thus propose a methodology for a unified automatic syllabification and experiment it in several languages. ## 3 System The proposed methodology for linguistic feature extraction is illustrated in Figure 1. It includes several steps: text normalization, grapheme-to-phoneme (G2P) conversion, syllabification in the phonetic domain, and syllabification in the text domain. 
Lastly, a consistency analysis is conducted to identify words with inconsistent syllable counts, facilitating manual correction of the remaining exceptional cases. The system is designed to be multilingual and compatible with forced-alignment tools, namely _Montreal forced aligner_ (MFA). ### Text normalization The initial stage of the process involves normalizing the text, which includes handling non-standard notations that differ from actual words. The system assumes that most punctuation symbols in English are attached to words, either at the end (commas, different kinds of dots, etc.) or at the start (double quotes can be at the start and end). For acronyms, the system assumes that they are written as a sequence of capital letters without dots between them. Numerals are translated to words using a rule-based algorithm with the Python library num2words2. Footnote 2: [https://pypi.org/project/num2words/](https://pypi.org/project/num2words/) ### Grapheme-to-phoneme (g2p) conversion After normalizing the text, the system utilizes various methods to perform grapheme-to-phoneme (g2p) conversion. Phonetics is the study of the physical properties and production of speech sounds, while phonemics is concerned with the abstract and meaningful distinctions of sounds within a particular language, known as phonemes. Phonetics focus on the sounds themselves, while phonemics focus on the functional and linguistic aspects of those sounds. There exist different phonetic symbol sets categorizing speech sounds production (IPA, X-SAMPA, ARPabet) There is a language abuse in the state of the art of G2P models, as they are in fact performing the transformation of written language (graphemes) into a sequence of phonetic symbols (phones) and not phonemes. These terminologies are often used interchangeably in internet resources. In this paper we only work with phonetic transcriptions (sequence of phones). First, it looks up the word in a pronunciation dictionary. If the word is not found, the system estimates its pronunciation using a machine learning model. This two-step methodology allows the system to use high-quality transcriptions from available dictionaries while handling the problem of out-of-vocabulary words with a machine learning model. However, this method is limited in that it cannot model dependencies of pronunciation on context. The system relies on manual human correction to handle this problem. The system uses open-source resources as pronunciation dictionaries and fallback machine learning models, including the CMU pronunciation dictionary and an open-source CMU g2p model3, as well as the MFA pronunciation dictionaries and their g2p models using a carefully described IPA phone set4. Footnote 3: [https://github.com/Kyubyong/g2p](https://github.com/Kyubyong/g2p) Footnote 4: [https://mfa-models.readthedocs.io/en/latest/mfa_phone_set.html](https://mfa-models.readthedocs.io/en/latest/mfa_phone_set.html) ### Syllabification in pronunciation domain (phonetic transcriptions) Syllabification in the phonetic domain is carried out by the system, employing the Sonority Sequencing Principle (SSP). The SSP is a well-accepted principle that states that syllables are formed by organizing sounds according to their sonority, which is a measure of the relative loudness or intensity of a sound. We based our implementation on SyllabiPy5 github repository. We defined the sonority hierarchies for the different symbol sets used in this paper (CMU phone set6, MFA's IPA set, letters). 
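To make the procedure concrete, the following is a minimal sketch of SSP-based syllabification over a phone sequence, assuming a toy sonority hierarchy and phone inventory; the actual system uses the full CMU/MFA symbol sets together with the handling of hiatuses, diphthongs and sC clusters described below.

```python
SONORITY = {  # toy hierarchy: higher values are more sonorous
    "vowel": 4, "glide": 3, "liquid": 3, "nasal": 2, "fricative": 1, "stop": 0,
}
PHONE_CLASS = {  # tiny illustrative subset of an ARPAbet-like phone set
    "AH": "vowel", "EH": "vowel", "IH": "vowel", "ER": "vowel",
    "W": "glide", "L": "liquid", "N": "nasal", "JH": "fricative",
    "T": "stop", "D": "stop",
}

def ssp_syllabify(phones):
    """Split a phone sequence at sonority local minima that are preceded by a
    vowel, never creating a syllable without a vowel."""
    son = [SONORITY[PHONE_CLASS[p]] for p in phones]
    is_vowel = [PHONE_CLASS[p] == "vowel" for p in phones]
    breaks, seen_vowel = [], False
    for i in range(1, len(phones) - 1):
        seen_vowel = seen_vowel or is_vowel[i - 1]
        local_min = son[i] <= son[i - 1] and son[i] < son[i + 1]
        if local_min and seen_vowel and any(is_vowel[i:]):
            breaks.append(i)
            seen_vowel = False
    bounds = [0] + breaks + [len(phones)]
    return [phones[a:b] for a, b in zip(bounds, bounds[1:])]

print(ssp_syllabify(["W", "IH", "N", "T", "ER"]))
# [['W', 'IH', 'N'], ['T', 'ER']]            ("win.ter")
print(ssp_syllabify(["AH", "JH", "EH", "N", "D", "AH"]))
# [['AH'], ['JH', 'EH', 'N'], ['D', 'AH']]   ("a.gen.da")
```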
Figure 2 shows sonority curve examples for three words. The top curves are in the phonetic domain, while the bottom curves are in the spelling domain (see next section for explanations about the mapping between them). Footnote 5: [https://github.com/henchc/syllabip](https://github.com/henchc/syllabip) Footnote 6: Based on the ARPABET phonetic symbol set: [https://en.wikipedia.org/wiki/ARPABET](https://en.wikipedia.org/wiki/ARPABET) The syllable breaks are determined by the local minima that have a vowel (sonority of value 5) located before themselves and after the last syllable break (or start of the word for the first syllable break). An additional rule is that a new syllable break cannot create a syllable that does not contain a vowel. In the resources used as a basis, diphthongs are annotated as single phones, where hiatuses are annotate as two separate vowels. Therefore to correctly segment hiatuses, we represent all vowels by a sequence of two sonorities: 5, then 4. This allows us to generate a syllable breaks in case of hiatuses, without influencing the rest of the segmentation. In this case, the syllable breaks position will be placed after the vowel containing the local minimum. On the contrary the syllable breaks determined by consonant local minima will be placed before them. The system handles sibilant-stop consonant clusters such as /skr/ and /spl/ thanks to the rule that a new syllable break cannot create a syllable that does not contain a vowel (mentioned earlier). As stress marks are not provided in MFA dictionaries and g2p models, we use eSpeak as an extra resource for retrieving this information. We compute a syllabified version of eSpeak transcription and extract stressed syllable index to augment the MFA transcription. ### Syllabification in spelling domain (text) In the literature, it is commonly assumed that syllabification in the text domain results in a single, definitive number of syllables. However, pronunciation dictionaries, such as the CMU or MFA pronunciation dictionaries, provide variations of pronunciation, including variations in the number of vowels and, therefore, in the number of syllables. To ensure consensus across datasets, we propose matching the number of syllables in text with the number of syllables in the pronunciation dictionary. This is consistent with the use of consensus as a valid mechanism for gathering data from manual annotators and was also used to combine datasets in (Marchand et al., 2009). We assume that the number of syllables is the same across variants of English. We proceed with syllabification in several steps. First, we detect if the word has only one vowel based on its phonetic transcription using the G2P section. This step increases accuracy and avoids imprecisions that may arise in the following steps. The second step involves looking up the word in a publicly available corpus of manually syllabified words. For English, we use a dataset of manually syllabified words7 from the Gutenberg Project. For French, we use the _Lexique3838_. We apply a systematic correction to group consonants alone in a word with the next syllable. This correction addresses the issue of the sC cluster mentioned in Section 3.3. For Spanish, we do not use any dataset and redirect everything to SSP. The third step involves processing words with more than one vowel that are _out of vocabulary_ (OOV). One could try applying SSP on the letters of the words, assuming the sonority of the letters. The performance of this method depends on the language. 
Specifically, this works well when the words follow a predictable letter-to-sound mapping. To mitigate the limitation of this technique, it is also possible to add language-specific rules. However, SSP on text will struggle with hiatuses, diphthongs, silent letters, and other cases for which the letter-to-sound mapping assumption is violated. To overcome this difficulty, we propose an approach that aligns sonority sequences in the pronunciation domain and the spelling domain using Dynamic Time Warping (DTW) [11]. This approach allows us to benefit from the accurate prediction of syllable starts in the pronunciation domain and map them into the spelling domain. An illustration of this procedure is shown in Figure 2 with three example English words containing cases where the letter-to-sound mapping is not respected: (1) _rhythm_ contains a silent \(h\), a _schwa_ sound (symbol _AH0_ in the CMU set) that does not correspond to a written letter, and a consonant sound written with two letters (_th_); (2) _leaves_ contains the grapheme _ea_ as a single vowel, and a silent \(e\); (3) _oceanic_ contains the grapheme _ea_ as a hiatus.

## 4 Experiments

To evaluate the quality of an Automatic Syllabification algorithm, two measures are typically used: word accuracy and juncture accuracy. Word accuracy measures the proportion of words for which the number of syllables is exactly the same as a gold standard. Juncture accuracy measures the proportion of junctures that are the same as a gold standard. In this study, we propose to measure word accuracy between the syllabified text of our methodology and the result of the application of the Sonority Sequencing Principle in the pronunciation domain. This is backed by the literature, as the number of syllables extracted in the phonetic domain is highly reliable. This measure allows for reproducibility and avoids comparison with a gold standard annotated by humans, which is also imperfect and inconsistent. Our consensus mechanism allowed us to detect errors that can complement syllabified text corpora or start corpora of edge cases for new languages.

### Distribution of number of syllables in words in natural language corpus and in a lexicon

The word accuracy applied to sentences is not directly comparable to that of a lexicon of existing words in English. The reason for this is that the distribution of the number of syllables in a lexicon and in a set of sentences is very different. To illustrate this, Figure 3 shows the proportion (in %) of words for each possible number of syllables in the CMU ARCTIC sentences and the MFA pronunciation dictionary (en_US variation). The large proportion (\(>70\%\)) of single-vowel words in sentences explains why the lexicon benchmarks are more challenging than a set of sentences.

Figure 3: Proportions of words (in %) in CMU ARCTIC sentences and MFA pronunciation dictionary per number of syllables in the word, according to the sonority principle applied in the pronunciation domain

Figure 2: Illustration of the application of DTW on sonority sequences in the pronunciation and spelling domain. The blue curves are the sonority sequences, the red and green lines are the mapping links extracted from the DTW alignments. The green lines correspond to the local minima selected as syllable breaks in the phonetic domain and identify the corresponding locations in the spelling domain. The syllable break locations are indicated with the vertical green pipe characters in both phonetic and spelling domains.
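In code, the mapping illustrated in Figure 2 can be sketched as follows; the sonority values, the example word and the letter sonorities are toy assumptions, and the released system additionally handles stress marks and hiatuses as described in Section 3.

```python
import numpy as np

def dtw_path(a, b):
    """Classic DTW between two 1-D sonority sequences; returns the warping path."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    path, (i, j) = [], (n, m)
    while (i, j) != (1, 1):
        path.append((i - 1, j - 1))
        steps = {(i - 1, j): cost[i - 1, j], (i, j - 1): cost[i, j - 1],
                 (i - 1, j - 1): cost[i - 1, j - 1]}
        i, j = min(steps, key=steps.get)
    path.append((0, 0))
    return path[::-1]

def map_breaks(phone_sonority, letter_sonority, phone_breaks):
    """Map syllable-break indices found in the phonetic domain onto the letters."""
    align = dtw_path(phone_sonority, letter_sonority)
    return [min(j for i, j in align if i == pb) for pb in phone_breaks]

# Toy example for "winter": phones W IH N | T ER  ->  letters w i n | t e r
phone_sonority = [3, 4, 2, 0, 4]       # W IH N T ER
letter_sonority = [3, 4, 2, 0, 4, 3]   # w  i  n  t  e  r
print(map_breaks(phone_sonority, letter_sonority, [3]))  # [3] -> break before 't': win|ter
```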
We therefore provide the results for both scenarios in Section 4.2 and Section 4.3. ### Ablation Study on words An essential step in our work involves the use of SSP for direct syllabification - a method we refer to as _SSP_. It is pertinent to note that our implementation of this approach mirrors the implementation provided in the documentation of the Natural Language Toolkit (NLTK)9, a popular platform employed for multiple language processing tasks. NLTK's syllabification implementation also relies on SSP and supports various languages. This established baseline bears significance in our ablation study, where we gauge the additional contributions made by the other components of our methodology. The reader can directly spot the limitations of this method applied to text by consulting the given example in the link of the footnote with the word _sentence_. Indeed, it is syllabified in 3 syllables (_senlentce_), while it should be in 2 syllables (_senlence_). Footnote 9: [https://www.nltk.org/api/nltk.tokenize.sonority_sequencing.html](https://www.nltk.org/api/nltk.tokenize.sonority_sequencing.html) To measure the difference in performance between different languages, we performed an ablation study on English (variations GB and US based on MFA pronunciation dictionaries, as well as US with CMU pronunciation dictionary), Spanish, and French. We used a set of randomly selected 1000 words in the corresponding pronunciation dictionaries to report word accuracies in the different versions. The first step of all the versions is the same and consists of single vowel checking through a look-up in the pronunciation dictionary. Then, to be able to quantify the contributions of the technique of DTW between sonority sequencies of text and phonetics, and the contribution of using look-up in a dataset of syllabified words (when available), we compute word accuracies on 4 alternatives of the methodology, consisting in the possible component combinations: * _SSP_: we directly use SSP on the letters, we use neither the DTW technique, neither look-up in the dictionary * _lkp-SSP_: we first perform lookup in the syllabified words dataset to check if the word exist, and fallback on SSP on the letters * _SSP-DTW_: extract sonority sequences and apply DTW to associate letters to phones and use SSP to extract starts of syllables * _lkp-SSP-DTW_: we first perform lookup in the syllabified words dataset to check if the word exist, else we use SSP-DTW We report word accuracies for different versions of our methodology. The results are shown in Table 1. From the results, we can observe that the look-up in the syllabified words dataset has a positive effect over SSP (text only) for both French and English (all variations). We can also see that the SSP-DTW methodology performs better than the naive application of SSP on text, for all languages in our experiments. For English, the highest accuracy is achieved by the _lkp-SSP-DTW_ version, indicating that the use of syllable corpus lookup in conjunction with DTW methodology can significantly improve the accuracy of automatic syllabification. This is however not true for experiments in French. This might indicate that the _SSP-DTW_ methodology is more reliable in itself than the human annotations collected in the dataset used for the experiment. ### CMU ARCTIC sentences The CMU ARCTIC dataset [15] is a multi-speaker database consisting of 1132 phonetically balanced English utterances, recorded under studio conditions. 
The set of speakers includes several accents of English. The dataset was then generated by selecting a compact subset of utterances containing at least one occurrence of every diphone (phone pairs). It was originally created to support speech synthesis research but it has been widely used in various applications since its release, including speech synthesis, voice conversion, speaker adaptation, prosody modeling, speech recognition, and linguistic studies. We therefore release the result of our unified phonetization and syllabification in the text and phonetic domains to support future studies in these domains. We also think that these annotations are useful information for speech representation learning, as they could serve as data to analyze the contribution of different factors (speaker identity, accent, stress, rhythm), and potentially help in the disentanglement of these different factors.

\begin{table} \begin{tabular}{|c|c|c|c|c|} \hline & SSP & lkp-SSP & SSP-DTW & lkp-SSP-DTW \\ \hline es\_ES & 87.6 & - & **94.0** & - \\ fr\_FR & 82.3 & 85.9 & **90.1** & 89.1 \\ en\_GB & 88.5 & 94.4 & 92.6 & **95.5** \\ en\_US & 88.5 & 93.7 & 92.3 & **94.2** \\ CMU & 89.5 & 93.6 & 93.4 & **94.7** \\ \hline \end{tabular} \end{table} Table 1: Word accuracies for different language/variations and methods

Furthermore, other datasets, including L2-ARCTIC [22] and EmoV-DB [1], use the same transcriptions. L2-ARCTIC is a speech corpus of non-native English that is intended for research in voice conversion, accent conversion, and mispronunciation detection. The initial release of their dataset includes recordings from ten non-native speakers of English whose first languages are Hindi, Korean, Mandarin, Spanish, and Arabic, each L1 containing recordings from one male and one female speaker. Each speaker recorded approximately one hour of read speech from the CMU ARCTIC sentences. EmoV-DB consists of recordings of several speakers with different emotional categories in a parallel setup using CMU ARCTIC sentences. These sentences do not convey particular emotions in the text, which helps to disentangle emotional expressiveness in speech from the textual content. The phonetization and unified syllabification described in Section 3 were applied to the 1132 CMU ARCTIC sentences. The word accuracy obtained on all the words is \(>99.8\%\).

## 5 Conclusions

This study introduced a novel, multilingual methodology for linguistic feature extraction, designed to be compatible with forced-alignment tools. Our approach effectively extracted essential linguistic features, including phonetic transcriptions, stress marks, and automatic syllabification in both text and phonetic domains. The methodology integrated various techniques, such as text normalization, grapheme-to-phoneme conversion, syllabification in the phonetic and text domains, and a consensus analysis to identify inconsistencies. Our ablation study demonstrated the efficacy of the proposed methodology in automatically syllabifying words across multiple languages. The optimal performance was achieved by combining corpus lookup and Dynamic Time Warping (DTW) on sonority sequences. This approach can be further enhanced by progressively incorporating edge cases into the training dataset. By applying our methodology to the CMU ARCTIC dataset, we generated valuable data that can benefit various speech-related research domains, available online10.
Our unified phonetization and syllabification annotations have the potential to advance speech representation learning and disentangle different factors in speech technologies, such as speech synthesis and speech analysis tasks. Footnote 10: [https://github.com/moetits/MUST_P-SRL](https://github.com/moetits/MUST_P-SRL) ### Limitations This paper concentrates on the intersection of phonetics and syllabification, aiming to align phonetic transcriptions with corresponding graphemes. While we mention the term _alignment_, the context in this paper refers to the alignment of phonetic transcriptions with their corresponding graphemes, a pivotal step in our methodology for accurate multilingual syllabification. Highlighting this nuance provides a correct understanding of the terminologies and approaches used in this study, and sheds light on the specific challenges and contributions of our work. Future research directions include extending the proposed methodology to additional languages and investigating the impact of our linguistic feature extraction on specific speech technology applications. Furthermore, refining the methodology by incorporating language-specific rules or addressing limitations in the consensus analysis could lead to even more accurate and robust results. While our methodology presents improvements in linguistic feature extraction and automatic syllabification, some limitations should be noted. Firstly, while we aimed to create a multilingual system, our current implementation and evaluations were focused mainly on English, French, and Spanish. Extending and evaluating our methodology across other languages, especially those with vastly different phonetic structures, remains a future challenge. Secondly, the system heavily relies on the availability and quality of pronunciation dictionaries for its grapheme-to-phoneme conversion process. As such, issues like handling out-of-vocabulary words or modeling pronunciation dependencies based on context heavily depend on manual correction, limiting the scalability of the system. Note however that the choice of MFA tools was done among other things because of the large list of languages it supports (see the pronunciation dictionaries11 and g2p models12). Footnote 11: [https://mfa-models.readthedocs.io/en/latest/dictionary/index.html](https://mfa-models.readthedocs.io/en/latest/dictionary/index.html) Footnote 12: [https://mfa-models.readthedocs.io/en/latest/](https://mfa-models.readthedocs.io/en/latest/) Thirdly, our approach to identifying and addressing inconsistencies between different syllabification resources uses a consensus mechanism which, while effective, may still retain inaccuracies inherent in these resources. Acknowledging these limitations provides valuable directions for potential future enhancements and research towards fully automated and accurate linguistic feature extraction. ## Acknowledgements This work is part of the project _REDCALL_ that is partially funded by a FIRST Entreprise Docteur program from SPW Recherche13 Footnote 13: [https://recherche.wallonie.be/](https://recherche.wallonie.be/) This project is a collaboration between Flowchase SRL and the Information, Signal and Artificial Intelligence Lab (ISIA Lab) of University of Mons in Belgium.
2303.16316
Holography of information in de Sitter space
We study the natural norm on the space of solutions to the Wheeler-DeWitt equation in an asymptotically de Sitter spacetime. We propose that the norm is obtained by integrating the squared wavefunctional over field configurations and dividing by the volume of the diff-and-Weyl group. We impose appropriate gauge conditions to fix the diff-and-Weyl redundancy and obtain a finite expression for the norm using the Faddeev-Popov procedure. This leads to a ghost action that has zero modes corresponding to a residual conformal subgroup of the diff-and-Weyl group. By keeping track of these zero modes, we show that Higuchi's norm for group-averaged states emerges from our prescription in the nongravitational limit. We apply our formalism to cosmological correlators and propose that they should be understood as gauge-fixed observables. We identify the symmetries of these observables. In a nongravitational theory, it is necessary to specify such correlators everywhere on a Cauchy slice to identify a state in the Hilbert space. In a theory of quantum gravity, we demonstrate a version of the principle of holography of information: cosmological correlators in an arbitrarily small region suffice to completely specify the state.
Tuneer Chakraborty, Joydeep Chakravarty, Victor Godet, Priyadarshi Paul, Suvrat Raju
2023-03-28T21:27:06Z
http://arxiv.org/abs/2303.16316v2
# Holography of information in de Sitter space ###### Abstract We study the natural norm on the space of solutions to the Wheeler-DeWitt equation in an asymptotically de Sitter spacetime. We propose that the norm is obtained by integrating the squared wavefunctional over field configurations and dividing by the volume of the diff-and-Weyl group. We impose appropriate gauge conditions to fix the diff-and-Weyl redundancy and obtain a finite expression for the norm using the Faddeev-Popov procedure. This leads to a ghost action that has zero modes corresponding to a residual conformal subgroup of the diff-and-Weyl group. By keeping track of these zero modes, we show that Higuchi's norm for group-averaged states emerges from our prescription in the nongravitational limit. We apply our formalism to cosmological correlators and propose that they should be understood as gauge-fixed observables. We identify the symmetries of these observables. In a nongravitational theory, it is necessary to specify such correlators everywhere on a Cauchy slice to identify a state in the Hilbert space. In a theory of quantum gravity, we demonstrate a version of the principle of holography of information: cosmological correlators in an arbitrarily small region suffice to completely specify the state. ## 1 Introduction It is known that both in AdS and in flat space, quantum gravity localizes information very differently from nongravitational quantum field theories and manifests the principle of holography of information [1; 2; 3; 4; 5; 6; 7]. In AdS, all information on a Cauchy slice is available near its boundary, as is well known from AdS/CFT but can also be shown directly from the gravitational theory. In flat space, it was shown in [1] that all information that can be obtained on future null infinity can also be obtained on its past boundary. Given this context, we seek to address the following question in this paper: how does the holography of information work in de Sitter space, where spatial slices have no boundaries? With a view to addressing this question, we study expectation values of observables that act on the space of solutions of the Wheeler-DeWitt (WDW) equation recently found in [8]. To begin with, this requires defining a norm on this space. We propose a natural norm, obtained by integrating the square of the magnitude of the wavefunctional over field configurations and dividing by the volume of the group of diffeomorphisms and Weyl transformations. We show how this redundancy can be gauge-fixed using the Faddeev-Popov procedure [9; 10]. Previously we showed [8] that, in the nongravitational limit, the space of solutions to the WDW equation reduces to the space of dS invariant states defined by Higuchi using group averaging [11; 12; 13; 14]. Higuchi defined a norm on this space by dividing the QFT norm of the states by the volume of the dS isometry group, resulting in a finite answer. Here, we show that the norm on the space of WDW solutions described above reduces to Higuchi's norm in the nongravitational limit. Our prescription also provides a systematic set of gravitational corrections to Higuchi's proposal. Using our formalism, we turn to a specific set of observables called "cosmological correlators". These observables are physically significant and have attracted significant attention in the literature [15; 16; 17; 18]. They are usually expressed in terms of a product of local operators on the late-time slice of de Sitter space. 
While such a product is a well-defined observable in a quantum field theory, it does not commute with the gravitational constraints. Hence, this description is not gauge invariant. We propose that cosmological correlators should be understood as _gauge-fixed_ observables. We provide a prescription to compute the matrix elements of such observables between any two states of the theory. This set of matrix elements defines a gauge-invariant operator corresponding to every cosmological correlator. We show that our gauge-fixed observables are invariant under translations and rotations, and have simple transformation properties under scaling. Crucially, this property holds in all states of the theory, and not just in the Euclidean vacuum. Consequently, the specification of these observables in any open set \(\mathcal{R}\) suffices to specify them everywhere. But the full set of cosmological correlators forms an overcomplete basis for all observables. Therefore, cosmological correlators in any arbitrary small region of the Cauchy slice are sufficient to uniquely identify the state of the theory. Cosmological correlators can also be defined in quantum field theory. But in the absence of gravity, it is possible to construct states where they coincide inside a small region but differ outside it. So the result above marks a sharp difference between the properties of gravitational and nongravitational theories. This provides the necessary generalization of the notion of holography of information to asymptotically de Sitter space. Heuristically, this result can be put on the same footing as the results on the holography of information in AdS and in flat space. There, the principle of holography of information implies that whenever a region \(\mathcal{R}\) is surrounded by its complement \(\overline{\mathcal{R}}\) then \(\overline{\mathcal{R}}\) contains all information about \(\mathcal{R}\). This is simply because when spatial slices are noncompact, \(\overline{\mathcal{R}}\) extends to infinity and so it contains all information about the state. In the present case, the spatial slices have the topology of \(S^{d}\). Therefore every region \(\mathcal{R}\) both surrounds and is surrounded by its complement. So it is natural for cosmological correlators in every region \(\mathcal{R}\) to have information about the entire state. We present the holography of information in terms of a precise mathematical result. However this does _not_ imply that a physical observer with access only to a small patch of the late-time slice can glean all information about the state using local measurements. Cosmological correlators are gauge-fixed observables that are merely labelled by a set of points. Since there are no local gauge-invariant observables in the theory, cosmological correlators also secretly correspond to nonlocal operators that cannot be measured through any strictly local process. Moreover, in dS, it is not fruitful to think in terms of external observers and so the question of what is physically observable requires us to construct a model of an observer who is part of the system. Although, we do not seek to construct such a model in this paper, it is reasonable to envisage a model in which a physical observer can access low-point gauge-fixed observables of the kind we describe. 
But, as in AdS and in flat space, the identification of a sufficiently complicated state, \(\mathcal{R}\) from a small region requires very high-point cosmological correlators and presumably, in any reasonable physical model, such high-point correlators are effectively inaccessible. An overview of this paper is as follows. In section 2, we provide a summary of our results, including its key technical aspects. In section 3, we discuss norms and expectation values in the space of solutions to the WDW equation. In section 4, we define cosmological correlators and study their properties. In section 5, we prove the principle of holography of information and discuss its implications. We conclude by discussing open questions in section 6. ## 2 Summary of results In a separate paper [8], we have shown that the space of solutions to the WDW equation with a positive cosmological constant, \(\Lambda\), where the spatial slices have the topology of \(S^{d}\) take on the asymptotic form \[\Psi[g,\chi]=e^{iS[g,\chi]}\sum_{n,m}\kappa^{n}\delta\mathcal{G}_{n,m}Z_{0}[g,\chi]. \tag{1}\] This result involves several pieces of notation that we explain in turn. 1. Here \(g\) is the metric on a spatial slice and \(\chi\) is a generic scalar matter field with scaling dimension \(\Delta\). The solution is valid in the limit where the cosmological constant dominates the spatial curvature scalar \(R\) (distinct from the spacetime curvature scalar), and other terms in the local energy density, everywhere on the slice. This requires the volume of the spatial slices to become asymptotically large compared to the cosmological scale. Physically, this corresponds to the late-time limit of an asymptotically de Sitter spacetime. 2. The exponent \(S[g,\chi]\) is a universal phase factor that comprises local functionals of \(g\) and \(\chi\) that diverge in the infinite volume limit, and was determined explicitly in [8]. \(e^{iS[g,\chi]}Z_{0}[g,\chi]\) is the wavefunctional corresponding to the Euclidean vacuum, or the Hartle Hawking state. 3. \(Z_{0}[g,\chi]\) is invariant under diffeomorphisms and has the Weyl transformation property of a CFT partition function \[\left(2g_{ij}\frac{\delta}{\delta g_{ij}}-\Delta\chi\frac{\delta}{\delta\chi }\right)Z_{0}[g,\chi]=\mathcal{A}_{d}Z_{0}[g,\chi]\,\] (2.2) where \(\mathcal{A}_{d}\) is an imaginary anomaly polynomial that is nonzero only in even \(d\) and is determined explicitly in [8]. 4. The property above implies that, at the cost of a phase, it is possible to make a Weyl transformation to study \(Z_{0}[g,\chi]\) in the vicinity of the flat metric, \(g_{ij}=\delta_{ij}+\kappa h_{ij}\).1 In this Weyl frame, we can expand \[Z_{0}=\exp[\sum_{n,m}\kappa^{n}\mathcal{G}_{n,m}]\.\] (2.3) Here \(\mathcal{G}_{n,m}\) is a multilinear functional of the metric fluctuation \(h_{ij}\) and the matter fluctuation \(\chi\), Footnote 1: As explained in section 4.2 of [8], this Weyl transformation is made for convenience. To obtain the wavefunctional in the physical frame, where the metric describes a deformed sphere with large volume, one must use (2.2) and obtain the correct phase. We never need to do this in what follows since the phase factor will not appear in subsequent calculations. 
\[\mathcal{G}_{n,m}\equiv\frac{1}{n!m!}\int d\vec{y}d\vec{z}\,h_{i_{1}j_{1}}(y_{ 1})\dots h_{i_{n}j_{n}}(y_{n})\chi(z_{1})\dots\chi(z_{m})G^{\vec{ij}}_{n,m}( \vec{x})\.\] (2.4) As in [8], we vectorize the collective indices to condense the notation: \(\vec{y}=(y_{1},\dots y_{n}),\vec{i}=i_{1}\dots i_{n},\vec{j}=j_{1}\dots j_{n},\vec{z}=z_{1}\dots z_{m}\) and \(x\) is a generic coordinate, \(\vec{x}=(\vec{y},\vec{z})\). The wavefunction coefficients \(G^{\vec{ij}}_{n,m}\) in (2.4) must obey a specific set of Ward identities analogous to those obeyed by correlators of a conformal field theory. 5. Finally, \(\delta\mathcal{G}_{n,m}\) is the _difference_ of two distinct functionals, both of the form (2.4), \(\delta\mathcal{G}_{n,m}=\mathcal{G}_{n,m}-\widetilde{\mathcal{G}}_{n,m}\). In the nongravitational limit the sum over \(n,m\) in (2.1) can be restricted to a single term. Away from this limit, the Ward identities link terms with different values of \(n\). In this paper, we will propose that the natural norm on this space of wavefunctionals is obtained by simply squaring the asymptotic wavefunctionals, integrating over all field configurations and finally dividing by the volume of the diff \(\times\) Weyl group. More generally, the expectation value of a gauge-invariant operator \(A\) is given by \[(\Psi,A\Psi)=\frac{\mathcal{N}_{1}}{\text{vol(diff$\times$Weyl)}}\int DgD\chi\, \sum_{n,m,n^{\prime},m^{\prime}}\kappa^{n+n^{\prime}}\delta\mathcal{G}^{*}_{n,m }\delta\mathcal{G}_{n^{\prime},m^{\prime}}|Z_{0}[g,\chi]|^{2}A[g,\chi]\, \tag{5}\] where \(\mathcal{N}_{1}\) is a physically unimportant normalization constant. To parse this norm, we use a gauge-fixing condition which fixes the diff \(\times\) Weyl invariance. The gauge-fixing condition we choose is \[\partial_{i}g_{ij}=0;\qquad\delta^{ij}g_{ij}=d. \tag{6}\] The corresponding ghost action has zero modes that correspond to residual global symmetries that are not fixed by the gauge choice above. The zero modes correspond precisely to the generators of the conformal group in \(d\)-dimensions: translations, rotations, dilatations and special conformal transformations. For \(d>2\), the usual form of the special conformal transformations is corrected by a metric-dependent diffeomorphism. The integrated operators (inside \(\delta\mathcal{G}_{n,m}\)) that appear in the correlator (5) can be utilized to fix these residual symmetries. We fix three of the operators to \[x_{1}=0;\qquad x_{2}=1;\qquad x_{3}=\infty. \tag{7}\] This choice, which is familiar from perturbative string theory, is enough to fix the residual conformal symmetry in all dimensions up to a residual \(\text{SO}(d-1)\) invariance that is compact and can simply be excluded by hand or integrated over. The notation \(\overline{\delta\mathcal{G}_{n,m}}\) represents the operator obtained by fixing three of the points in an integrated product of operators like (4) using (7) with the appropriate measure factor. (See (3.22) for details.) This leads to the gauge-fixed expression for the expectation value of an operator \(A\) \[(\Psi_{1},A\Psi_{2})=\sum_{n,m,n^{\prime},m^{\prime}}\kappa^{n+n^{\prime}} \langle\!\langle\,\overline{\delta\mathcal{G}^{*}_{n,m}A[g,\chi]\delta \mathcal{G}_{n^{\prime},m^{\prime}}}\,\rangle\!\rangle\, \tag{8}\] where the symbol \(\langle\!\langle\,\cdot\,\rangle\!\rangle\) stands for \[\langle\!\langle Q\rangle\!\rangle\equiv\mathcal{N}_{1}\mathcal{N}_{2}\int DgD \chi\,\delta(g_{ii}-d)\delta(\partial_{i}g_{ij})\Delta^{\prime}_{\text{FP}}\,| Z_{0}[\![g,\chi]\!]^{2}Q. 
\tag{9}\] Here, \(\mathcal{N}_{2}\) is another physically irrelevant constant and \(\Delta^{\prime}_{\text{FP}}\) is a restricted Faddeev-Popov determinant obtained by integrating out the ghosts except for the zero modes. At nonzero coupling the ghost determinant involves nontrivial factors of the metric. However, as \(\kappa\to 0\) these factors vanish. In the nongravitational limit, the residual group can then also be handled by simply dropping the condition (7), and instead dividing by the volume of the conformal group. The norm of a nongravitational state then becomes \[(\Psi_{\text{ng}},\Psi_{\text{ng}})=\frac{\text{vol(SO}(d-1))}{\text{vol(SO}(1,d+1))}\lim_{\kappa\to 0}\langle\!\langle\,\delta\mathcal{G}^{*}_{n,m}\delta\mathcal{G}_{n,m}\,\rangle\!\rangle. \tag{10}\] This is precisely Higuchi's prescription for the norm: the RHS is the QFT norm divided by the infinite volume of the conformal group. The factor of \(\text{vol}(\text{SO}(d-1))\) in the numerator arises due to a choice of normalization and is unimportant. Therefore our prescription leads to a derivation of Higuchi's proposal and also provides a precise prescription for how the norm should be generalized beyond \(\kappa=0\). Next, we turn to cosmological correlators. Cosmological correlators are labelled by points on the late-time slice of de Sitter space. While this makes sense in a quantum field theory, there are no local gauge-invariant observables in quantum gravity. We therefore propose that a cosmological correlator that is labelled by a product of \(p\) insertions of the metric and \(q\) insertions of the matter field, \(\mathcal{C}^{p,q}_{\overline{i}\overline{j}}(\bar{x})\) (see (4.2) for notation), corresponds to a _gauge-fixed_ observable: \[\langle\!\langle\Psi|\mathcal{C}^{p,q}_{\overline{i}\overline{j}}(\bar{x})|\Psi\rangle\!\rangle_{\text{CC}}\equiv\sum_{n,m,n^{\prime},m^{\prime}}\kappa^{n+n^{\prime}}\langle\!\langle\delta\mathcal{G}^{*}_{n,m}\delta\mathcal{G}_{n,m}\mathcal{C}^{p,q}_{\overline{i}\overline{j}}(\bar{x})\rangle\!\rangle. \tag{11}\] Note that the right hand side depends on the choice of gauge in (6) and also that the points in (11) have not been fixed by inserting delta functions for the residual gauge transformations and the corresponding zero-mode determinant but are simply fixed by hand. The residual gauge transformations above turn into symmetries of cosmological correlators. Since special conformal transformations involve the metric fluctuation, they relate lower-point cosmological correlators to higher-point correlators. But we show that cosmological correlators are covariant under rotations, translations and dilatations in any state. Under translations and dilatations \[\langle\!\langle\Psi|\mathcal{C}^{p,q}_{\overline{i}\overline{j}}(\lambda\bar{x}+\zeta)|\Psi\rangle\!\rangle_{\text{CC}}=\lambda^{-q\Delta}\langle\!\langle\Psi|\mathcal{C}^{p,q}_{\overline{i}\overline{j}}(\bar{x})|\Psi\rangle\!\rangle_{\text{CC}}. \tag{12}\]

Figure 1: _The residual gauge group is the Euclidean conformal group in \(d\) dimensions \(\text{SO}(1,d+1)\). Up to a compact subgroup, it can be fixed by fixing three points._

This leads us to a remarkable result: if one is given the cosmological correlators (11) in an arbitrarily small region, then this is sufficient to determine the correlators everywhere. This means that knowledge of cosmological correlators in an arbitrarily small region is sufficient to completely specify any pure state of the theory.
For any region \(\mathcal{R}\) \[\langle\!\langle\Psi_{1}|\mathcal{C}^{p,q}_{\bar{i}\bar{j}}(\bar{x})|\Psi_{1} \rangle\!\rangle_{\rm CC}=\langle\!\langle\Psi_{2}|\mathcal{C}^{p,q}_{\bar{i} \bar{j}}(\bar{x})|\Psi_{2}\rangle\!\rangle_{\rm CC},\ \ \ \forall\bar{x}\in\mathcal{R}\ {\rm and}\ \forall p,q\implies|\Psi_{1} \rangle=|\Psi_{2}\rangle. \tag{13}\] This result provides the necessary generalization of the principle of holography of information to de Sitter space. While this result marks a clear mathematical difference between quantum field theories and quantum gravity, it should be interpreted with caution. Cosmological correlators are secretly nonlocal observables. So the result above does not imply that a physical observer can determine the entire state of the universe through local measurements. ## 3 Inner product and expectation values In this section we discuss the problem of defining a norm on the space of solutions to the WDW equation that take the form (1). We also show that in the nongravitational limit, this norm reduces to the norm defined by Higuchi. The definition of a norm also tells us how to compute expectation values of observables. ### The general problem We have determined the form of the wavefunctional in equation (1) only in the limit of large volume i.e. in the regime where the cosmological constant dominates the Ricci scalar of the spatial slice and the matter potential. Nevertheless, we expect that this information is sufficient to define a norm on the Hilbert space. The intuition is that the large-volume limit is equivalent to the late-time limit in the physical spacetime. In quantum mechanics, the norm of the state can be defined at any instant of time and does not require knowledge of the full time-evolution of the state. Therefore, we expect that the norm can be defined on the space of wavefunctionals in the large-volume limit and should not require details of the wavefunctional everywhere in the configuration space. Once the question has been reduced to that of finding the norm on states of the form (1), we find another simplification. Although the wavefunctional \(\Psi\) itself has a phase factor that is not Weyl invariant, and \(Z[g,\chi]\) might have a Weyl anomaly, \(|\Psi|^{2}\) is diff \(\times\) Weyl invariant since the phase factor cancels and the anomaly is pure imaginary. So it makes sense to study \(|\Psi|^{2}\) beyond the domain of large-volume metrics where the form (1) was originally derived. (This point is discussed in some more detail in section 4.2 of [8].) We propose that the norm of a wavefunctional \(\Psi\) is given by considering the integral of \(|\Psi|^{2}\) over all field configurations and dividing by the volume of the group of diffeomorphisms and Weyl transformations. \[(\Psi,\Psi)\equiv\frac{\mathcal{N}_{1}}{\text{vol(diff$\times$Weyl)}}\int DgD \chi\sum_{n,m,n^{\prime},m^{\prime}}\kappa^{n+n^{\prime}}\delta\mathcal{G}^{* }_{n,m}\delta\mathcal{G}_{n^{\prime},m^{\prime}}|Z_{0}[g,\chi]|^{2}. \tag{14}\] Here \(\mathcal{N}_{1}\) is an overall state-independent normalization constant that we will choose below for convenience. Now consider a diff \(\times\) Weyl invariant operator \(A[g,\chi]\) that maps states of the form (1) back to the state space. 
We propose that the expectation value of the operator is given by \[(\Psi,A\Psi)=\frac{\mathcal{N}_{1}}{\text{vol}(\text{diff}\times\text{Weyl})} \int DgD\chi\sum_{n,m,n^{\prime},m^{\prime}}\kappa^{n+n^{\prime}}\delta\mathcal{ G}_{n,m}^{*}\delta\mathcal{G}_{n^{\prime},m^{\prime}}|Z_{0}[g,\chi]|^{2}A[g,\chi]. \tag{20}\] Note that the knowledge of the norm for the state \((a|\Psi_{1})+b|\Psi_{2}))\), and the expectation value of \(A\) in this state, for all \(a\) and \(b\) is sufficient to determine the overlap \((\Psi_{1},\Psi_{2})\) and the matrix elements \((\Psi_{1},A\Psi_{2})\) including their phase. The proposal for the norm and expectation value, (21) and (20), is not unique but we adopt it because it is natural and simple. It might be of interest to explore alternative norms, as we briefly discuss in section 3.5. We also postpone a discussion of some subtle aspects of the proposal to section 3.5. For now, we proceed to examine the technical problem of gauge fixing the diff \(\times\) Weyl redundancy to obtain a practical method of computing the norm. In the section below, we use the Faddeev-Popov formalism to obtain a gauge-fixed expression. In Appendix C, we show that the gauge-fixed functional integral is invariant under a BRST transformation. ### Gauge-fixing conditions In order to implement the Faddeev-Popov procedure to gauge fix the functional integral, we use the following gauge-fixing conditions \[\partial_{i}g_{ij}=0;\qquad g_{ii}=d. \tag{21}\] We use the standard summation convention, so that repeated indices are summed over. The derivative that appears in (21) is an _ordinary_ partial derivative and so the gauge-fixing condition explicitly breaks both diffeomorphism invariance and Weyl invariance. With \(g_{ij}=\delta_{ij}+\kappa h_{ij}\), our choice requires \(h_{ij}\) to be traceless and transverse. In \(d=2\), the conditions (21) are equivalent to fixing \(g_{ij}\) to \(\delta_{ij}\). However, for \(d>2\) it is, in general, not possible to fix the metric to a "fiducial metric" using only diffeomorphisms and Weyl transformations. We adopt the gauge choice (21) for simplicity. In Appendix A, we discuss alternate choices of gauge that lead to the same physical results. The infinitesimal variation due to a diffeomorphism \(x^{i}\to x^{i}+\xi^{i}\) and a Weyl transformation \(g_{ij}\to e^{2\varphi}g_{ij}\) of the metric is given by \[\delta_{(\xi,\varphi)}g_{ij}=\nabla_{i}\xi_{j}+\nabla_{j}\xi_{i}+2\varphi g_{ ij}\, \tag{22}\] where \(\xi_{i}=g_{ik}\xi^{k}\). It will be convenient below to change the parameter of the Weyl transformation to implement the shift \(\varphi\to\varphi-\frac{1}{d}\nabla_{k}\xi_{k}\). The infinitesimal transformation now takes the form \[\delta_{(\xi,\varphi)}g_{ij}=(P\xi)_{ij}+2\varphi g_{ij}\, \tag{23}\] where we have defined \[\begin{split}(P\xi)_{ij}&\equiv g_{jk}\nabla_{i}\xi^{k}+ g_{ik}\nabla_{j}\xi^{k}-\frac{2}{d}g_{ij}\nabla_{k}\xi_{k}\\ &=\xi^{\ell}\partial_{\ell}g_{ij}+g_{jk}\partial_{i}\xi^{k}+g_{ik }\partial_{j}\xi^{k}-\frac{2}{d}g_{ij}g_{k\ell}\partial_{\ell}\xi^{k}\.\end{split} \tag{10}\] The shift is chosen so that the \((P\xi)_{ij}\) is traceless provided \(g_{ii}=d\). #### 3.2.1 Residual gauge transformations The gauge fixing conditions (11) do not completely fix the gauge. Since \((P\xi)_{ij}\) is traceless provided \(g_{ii}=d\), the residual symmetry corresponds to solutions of the equation \[\left(\mathcal{D}\xi\right)_{j}\equiv\partial_{i}(P\xi)_{ij}=0. 
\tag{11}\] Solutions of this equation are in one-to-one correspondence with the generators of \(\mathrm{SO}(1,d+1)\). However, the nature of the solutions is slightly different for \(d>2\) and for \(d=2\). It is shown in Appendix A that, for a general metric, in \(d>2\), there are \(\frac{(d+1)(d+2)}{2}\) solutions of (11). These are given by \[\begin{split}\text{translations}:&\quad\xi^{i}= \alpha^{i};\\ \text{rotations}:&\quad\xi^{i}=M^{ij}x^{j}\\ \text{dilatations}:&\quad\xi^{i}=\lambda x^{i}\\ \text{SCTs}:&\quad\xi^{i}=\left(2(\beta\cdot x)x^{i}- x^{2}\beta^{i}\right)+\beta^{j}v_{j}^{i}\end{split} \tag{12}\] where \(\lambda\), and \(M^{ij}\) denote, respectively, a number and an antisymmetric matrix and \(\alpha^{i}\) and \(\beta^{i}\) are vectors. The notable aspect of (12) is that the usual special conformal transformations are corrected as noted in [19, 20]. The matrix \(v_{j}^{i}\) depends nontrivially on the metric and vanishes when \(g_{ij}=\delta_{ij}\). In Appendix A, we present an algorithm to find \(v_{j}^{i}\) in perturbation theory. It is also shown there that although the SCT itself is modified, the algebraic structure of the residual transformations (12) remains that of \(\mathrm{SO}(1,d+1)\). Appendix A also discusses residual gauge transformations for other choices of gauge. In \(d=2\), since the conditions (11) fix \(g_{ij}=\delta_{ij}\), the correction term in the SCT always vanishes. \[v_{j}^{i}=0,\qquad\text{for }d=2. \tag{13}\] Appropriate linear combinations of the two allowed SCTs in \(d=2\) correspond to the two independent special conformal transformations that are usually described in terms of "holomorphic" and "anti-holomorphic" transformations in the discussion of string perturbation theory. #### 3.2.2 Fixing the residual symmetry To fix the residual gauge symmetry, we will take advantage of the presence of insertions in (15). We will assume that the state under consideration has at least two insertions, which implies the presence of at least four insertions in (15). In all dimensions, the residual gauge symmetry can then be fixed by setting the position of three insertions as follows: \[x_{1}=0;\qquad x_{2}=1;\qquad x_{3}=\infty. \tag{23}\] The choice of a point at the origin and another point at infinity fixes the translations and special conformal transformations. Fixing \(x_{2}\) to \(1\equiv(1,0,\ldots,0)\) fixes the dilatations and also part of the rotations. This choice does not fix the \(\mathrm{SO}(d-1)\) group of rotations of the hyperplane orthogonal to the \(0-1\) axis. But since this group is compact, it can simply be integrated over and does not lead to any divergence in the functional integral. It is convenient to impose the last condition using the coordinates \(\tilde{x}^{i}_{3}=\frac{x^{i}_{3}}{|x_{3}|^{2}}\) so that it can be written as \(\tilde{x}_{3}=0\). ### Faddeev-Popov procedure To gauge fix the functional integral for the expectation value of an operator in (15), we insert the following expression for the identity, \[1=\Delta_{\mathrm{FP}}\int\,D\xi D\varphi\,\delta\big{(}g^{(\xi,\varphi)}_{ii} -d\big{)}\delta\big{(}\partial_{i}g^{(\xi,\varphi)}_{ij}\big{)}\delta(x_{1}) \delta(x_{2}-1)\delta(\tilde{x}_{3})\, \tag{24}\] where the notation \(g^{(\xi,\varphi)}\) indicates the metric obtained upon acting on \(g_{ij}\) with the diffeomorphism parameterized by \(\xi\) and the Weyl transformation \(\varphi\). \(\Delta_{\mathrm{FP}}\) is the standard Faddeev Popov determinant that we will evaluate below. 
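As a quick consistency check, on the flat background \(g_{ij}=\delta_{ij}\) (where \(v^{i}_{j}=0\)), the candidate residual transformations (12) can be verified to solve \((\mathcal{D}\xi)_{j}=0\) symbolically. The following minimal sympy sketch (illustrative code, not part of the construction above) performs this check in \(d=3\):

```python
import sympy as sp

d = 3  # spatial dimension of the late-time slice (d = 3 for asymptotically dS4)
x = sp.symbols(f'x0:{d}', real=True)
alpha = sp.symbols(f'a0:{d}')                    # translation parameters
beta = sp.symbols(f'b0:{d}')                     # SCT parameters
lam = sp.Symbol('lam')                           # dilatation parameter
M = sp.Matrix(d, d, lambda i, j: sp.Symbol(f'M{i}{j}'))
M = (M - M.T) / 2                                # antisymmetric rotation parameters

def D_of_xi(xi):
    """(D xi)_j = d_i (P xi)_{ij}, evaluated on the flat background g_ij = delta_ij."""
    div = sum(sp.diff(xi[k], x[k]) for k in range(d))
    P = [[sp.diff(xi[i], x[j]) + sp.diff(xi[j], x[i])
          - sp.Rational(2, d) * div * (1 if i == j else 0)
          for j in range(d)] for i in range(d)]
    return [sp.expand(sum(sp.diff(P[i][j], x[i]) for i in range(d))) for j in range(d)]

x2 = sum(xi * xi for xi in x)
bdotx = sum(beta[i] * x[i] for i in range(d))
candidates = {
    'translations': [alpha[i] for i in range(d)],
    'rotations': [sum(M[i, j] * x[j] for j in range(d)) for i in range(d)],
    'dilatations': [lam * x[i] for i in range(d)],
    'SCTs': [2 * bdotx * x[i] - x2 * beta[i] for i in range(d)],
}
for name, xi in candidates.items():
    assert all(component == 0 for component in D_of_xi(xi)), name
    print(name, 'solve (D xi)_j = 0 on the flat background')
```

Counting parameters also confirms the residual compact group quoted above: \(\dim\text{SO}(1,d+1)-3d=\tfrac{(d+1)(d+2)}{2}-3d=\tfrac{(d-1)(d-2)}{2}=\dim\text{SO}(d-1)\), which is precisely what is left unfixed by the three conditions (23).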
Substituting the infinitesimal transformations in (24), we can write \[\Delta_{\mathrm{FP}}^{-1}=\int\,D\xi D\varphi\,\delta\big{(}2d\varphi\big{)}\,\delta\big{(}(\mathcal{D}\xi)_{i}\big{)}\,\delta\big{(}\xi^{j}(0)\big{)}\,\delta\big{(}\xi^{j}(1)\big{)}\delta\big{(}\tilde{\xi}^{j}(\infty)\big{)}\, \tag{25}\] where, at infinity, we use the diffeomorphism in the inverted chart \[\tilde{\xi}^{i}(x)=\frac{1}{|x|^{2}}\Big{(}\xi^{i}(x)-2\big{(}x\cdot\xi\big{)}\frac{x^{i}}{|x|^{2}}\Big{)}\, \tag{26}\] which is inserted at \(x=\infty\) corresponding to \(\tilde{x}=0\). The delta function for \(\varphi\) is trivial, and one can simply integrate it out. The Faddeev-Popov determinant may be evaluated using the standard procedure of first writing the delta functions as integrals over auxiliary parameters, and then simply replacing the bosonic parameters by Grassmann numbers. This leads to an expression for \(\Delta_{\mathrm{FP}}\) in terms of a \(c\)-\(\bar{c}\) ghost action: \[\Delta_{\mathrm{FP}}=\mathcal{N}_{2}\int\,DcD\bar{c}\,e^{-S_{\mathrm{gh}}}\big{(}\prod_{j}c^{j}(0)c^{j}(1)\tilde{c}^{j}(\infty)\big{)}\, \tag{27}\] where the \(c\)-ghost insertions correspond to \(\xi\) insertions in (3.12) and the ghost action (derived in Appendix C) \(S_{\rm gh}\) is given by \[S_{\rm gh}=\int d^{d}x\,\bar{c}^{j}(\mathcal{D}c)_{j}. \tag{3.15}\] The ghost action (3.14) has zero modes corresponding to the residual gauge transformations discussed previously. Some of these are soaked up by the insertion of the \(3d\) \(c\)-ghosts in the denominator. But in the ghost functional integral (3.14), we _exclude_ the zero modes that correspond to rotations that leave the point \(x_{2}=1\) invariant. (All rotations leave the origin and the point at \(\infty\) invariant.) These zero modes correspond to the unfixed compact part of the residual symmetries and if we were to integrate over them we would obtain zero since there is nothing to soak them up. But there is no difficulty in excluding them in the functional integral since they are orthogonal to all other modes. These unfixed residual transformations also contribute a factor of \(\text{vol}(\text{SO}(d-1))^{-1}\) in \(\Delta_{\rm FP}\) but this can be absorbed in the overall normalization constant \(\mathcal{N}_{2}\). We do not keep track of the overall constant \(\mathcal{N}_{2}\). This factor always drops out of any physical computation since the same constant appears in both the norm and the expectation value and so \((\Psi,\Psi)^{-1}(\Psi,A\Psi)\) does not depend on this constant. Combining everything together, the gauge-fixed expression for the expectation value of \(A\) can be written in the following form. \[(\Psi,A\Psi)=\mathcal{N}_{1}\mathcal{N}_{2}\int DgD\chi\,DcD\bar{c}\,\sum_{n,m,n^{\prime},m^{\prime}}\kappa^{n+n^{\prime}}\delta\mathcal{G}^{*}_{n,m}A[g,\chi]\delta\mathcal{G}_{n^{\prime},m^{\prime}}|Z_{0}[g,\chi]|^{2}e^{-S_{\rm gh}} \tag{3.16}\] \[\times\delta(g_{ii}-d)\delta(\partial_{i}g_{ij})\delta(x_{1})\delta(x_{2}-1)\delta(\tilde{x}_{3})\big{(}\prod_{i}c^{i}(0)c^{i}(1)\tilde{c}^{i}(\infty)\big{)}\.\] It is understood that the points \(x_{1},x_{2},x_{3}\) correspond to operators that are part of \(A\) or \(\delta\mathcal{G}_{n,m}\). In Appendix C we show that the gauge-fixed integral (3.16) remains invariant under a BRST symmetry when the delta functions are implemented using auxiliary fields. Ghost determinant. The expression (3.16) can be simplified by evaluating the ghost determinant.
First we expand the \(c\)-ghosts using a basis of orthonormal vector fields. The correct inner product between vector fields is the one on the sphere. (See Appendix A for more discussion.) We then divide the space of vector fields into the subspace of zero modes and the subspace of nonzero modes. Since we have excluded modes corresponding to rotations that leave \((1,0,\ldots,0)\) invariant, the remaining subspace of zero modes is exactly \(3d\)-dimensional. Using the index \(z\) to run over zero modes and the index \(n\) to run over the non-zero modes, we can write \[c^{j}=\sum_{z}c_{(z)}\zeta^{j}_{(z)}+\sum_{n}c_{(n)}\zeta^{j}_{(n)}. \tag{3.17}\] First, consider the contribution of the non-zero modes. This can be evaluated by neglecting any \(c\) insertions outside the ghost action. This is because in the ghost action, the nonzero modes of \(c\) are always paired with a mode of \(\bar{c}\). Upon series expanding the action, further \(c\) insertions simply give zero either in the integral over the \(c\) modes or the \(\bar{c}\) modes (for further details, see [21]). Then, to obtain the non-zero mode contribution, we simply perform the integral over the ghost action to obtain a restricted FP determinant \[\int D\bar{c}Dc^{\prime}\,e^{-S_{\rm gh}}=\Delta^{\prime}_{\rm FP}\, \tag{3.18}\] where the prime label indicates that the zero modes have been excluded from the measure. Note that the above notation is somewhat deceptively compact since this restricted determinant depends on the metric fluctuation. We now turn to the zero mode contribution. The zero-mode fields are proportional to those given in (3.8) but we will fix the normalization below for convenience. We can choose \(d\) modes to correspond to translations in the \(d\) possible directions; one mode corresponds to dilatations; \(d\) modes correspond to special conformal transformations; and \((d-1)\) modes correspond to rotations with \(M^{ij}\propto\delta^{i}_{i_{0}}\delta^{j}_{1}-\delta^{i}_{1}\delta^{j}_{i_{0}}\) with \(i_{0}\neq 1\). The index \(z\) runs over all these \(d+1+d+(d-1)=3d\) fields and we can therefore construct the \(3d\times 3d\) matrix \[M=\begin{pmatrix}\zeta^{1}_{(1)}(0)&\ldots&\zeta^{d}_{(1)}(0)&\zeta^{1}_{(1)}(1)&\ldots&\zeta^{d}_{(1)}(1)&\tilde{\zeta}^{1}_{(1)}(\infty)&\ldots&\tilde{\zeta}^{d}_{(1)}(\infty)\\ \zeta^{1}_{(2)}(0)&\ldots&\zeta^{d}_{(2)}(0)&\zeta^{1}_{(2)}(1)&\ldots&\zeta^{d}_{(2)}(1)&\tilde{\zeta}^{1}_{(2)}(\infty)&\ldots&\tilde{\zeta}^{d}_{(2)}(\infty)\\ \vdots&\ldots&\vdots&\vdots&\ldots&\vdots&\vdots&\ldots&\vdots\\ \zeta^{1}_{(3d)}(0)&\ldots&\zeta^{d}_{(3d)}(0)&\zeta^{1}_{(3d)}(1)&\ldots&\zeta^{d}_{(3d)}(1)&\tilde{\zeta}^{1}_{(3d)}(\infty)&\ldots&\tilde{\zeta}^{d}_{(3d)}(\infty)\end{pmatrix}\,. \tag{3.19}\] The zero-mode determinant is \[\Delta^{0}_{\rm FP}=\det(M). \tag{3.20}\] We now find that our gauge choice leads to a simplification. The special conformal transformations depend on the metric through \(v^{i}_{j}\) as shown in (3.8). However, this dependence vanishes at infinity. Moreover, while the special conformal transformations become a constant at infinity, all other zero-mode fields vanish at infinity. Therefore \(\det(M)\) does not depend on the special conformal transformations at the points \(0\) or \(1\) and thus \(\det(M)\) is independent of the metric. By normalizing the zero-mode fields appropriately, we can simply set \[\Delta^{0}_{\rm FP}=1. \tag{3.21}\] Final answer. We now introduce some notation and present our final answer in a compact form.
When three points within an integrated product of operators are fixed using the delta functions that fix the residual transformations, we denote this using a overline. For instance, \[\overline{\delta{\cal G}_{n,m}}\equiv\int\,d\bar{x}\,\delta(x_{1})\delta(x_{2 }-1)\delta(\tilde{x}_{3})G^{\bar{i}\bar{j}}_{n,m}(\bar{x})h_{i_{1},j_{1}}(x_{ 1})h_{i_{2},j_{2}}(x_{2})\ldots\chi(x_{n+1})\ldots\chi(x_{n+m}). \tag{3.22}\] The notation \(\overline{\delta{\cal G}_{n,m}A[g,\chi]\delta{\cal G}_{n,m}^{*}}\) allows for the position of any three operators in the product to be fixed. Next, in the expression for the functional integral (3.16), we choose \[{\cal N}_{1}=\frac{1}{{\cal N}_{2}}\left[\int\,DgD\chi\,\delta(g_{ii}-d)\delta (\partial_{i}g_{ij})\Delta^{\prime}_{\rm FP}|Z_{0}[g,\chi]|^{2}\right]^{-1}. \tag{3.23}\] This choice makes the product \(\mathcal{N}_{1}\mathcal{N}_{2}\) equal to the inverse of the functional integral over the wavefunctional of the Euclidean vacuum. Hence we should think of physical observables as the _ratio_ of a functional integral with operator insertions, and the functional integral over the Euclidean vacuum. Given a general product of the metric and other matter fields, \(Q\), we also define the notation \[\langle\!\langle Q\rangle\!\rangle=\mathcal{N}_{1}\mathcal{N}_{2}\int DgD \chi\,\delta(g_{ii}-d)\delta(\partial_{i}g_{ij})\Delta^{\prime}_{\text{FP}}|Z _{0}[g,\chi]|^{2}Q. \tag{3.24}\] Intuitively, the notation can be thought of as the expectation value of \(Q\) in the Euclidean vacuum although this intuition should be used with care since (see section 3.5) the vacuum itself might not be normalizable. Using this notation, we can then rewrite the gauge-fixed path integral (3.16) as \[(\Psi,A\Psi)=\sum_{n,m,n^{\prime},m^{\prime}}\kappa^{n+n^{\prime}}\!\left\langle \!\langle\overline{\delta\mathcal{G}^{*}_{n,m}A[g,\chi]\delta\mathcal{G}_{n^{ \prime},m^{\prime}}}\rangle\!\right\rangle\,. \tag{3.25}\] Note that setting \(A=1\) yields the norm. \[(\Psi,\Psi)=\sum_{n,m,n^{\prime},m^{\prime}}\kappa^{n+n^{\prime}}\!\left\langle \!\langle\overline{\delta\mathcal{G}^{*}_{n,m}\,\delta\mathcal{G}_{n^{\prime },m^{\prime}}}\rangle\!\right\rangle\,. \tag{3.26}\] The relations (3.26) and (3.25) represent our final answers in compact form. ### Nongravitational limit We now show that our expression for the norm coincides precisely with the norm proposed by Higuchi [11, 12] in the nongravitational limit. It was explained in [8] that the form of the allowed states simplifies in the nongravitational limit. More specifically, in the nongravitational limit, with the state corresponding to the Euclidean vacuum denoted by \(|0\rangle\) the allowed states take the form \[|\Psi_{\text{ng}}\rangle=\int d\bar{x}\,\delta G^{i\overline{j}}_{n,m}(\bar{y},\bar{z})h_{i_{1}j_{1}}(y_{1})\ldots h_{i_{n}j_{n}}(y_{n})\chi(z_{1})\ldots \chi(z_{m})|0\rangle\,\,. \tag{3.27}\] The simplification above is that we do not have a sum over multiple values of \(n\) that is necessary when \(\kappa\neq 0\) by the Ward identities. Now consider the nongravitational limit of the expectation value defined in (3.24), \[\langle 0|Q|0\rangle_{\text{QFT}}\equiv\lim_{\kappa\to 0}\langle\!\langle Q \rangle\!\rangle\,\,\,. \tag{3.28}\] In the nongravitational limit, the ghosts decouple from the metric. Since \(\Delta^{\prime}_{\text{FP}}\) has no dependence on the metric fluctuation in the limit \(\kappa\to 0\), it trivializes to a numerical factor. The gauge conditions still ensure that \(h_{ij}\) is transverse and traceless. 
Therefore, this expression instructs us to integrate the product of insertions over the matter fields and over the transverse traceless fluctuations of the metric using the \(\kappa\to 0\) limit of the wavefunctional for the Euclidean vacuum. This is precisely how one would have computed the expectation value of the product of operators in the quantum field theory, including the fluctuations of free transverse-traceless gravitons. This explains the choice of notation in (3.28). Now consider the norm of two states of the form (3.27). Our final answer for the norm in the nongravitational limit can be written as \[\begin{split}&(\Psi_{\text{ng}},\Psi_{\text{ng}})=\int d\bar{x}d \bar{x}^{\prime}\,\delta G^{\bar{1}\bar{j}}_{n,m}(\bar{x})\,(\delta G^{\bar{1 }\bar{j}}_{n,m})^{*}(\bar{x}^{\prime})\delta(x_{1})\delta(x_{2}-1)\delta(\tilde {x}_{3})\\ &\times\langle 0|h_{i^{\prime}_{1}j^{\prime}_{1}}(y^{\prime}_{1}) \ldots h_{i^{\prime}_{n}j^{\prime}_{n}}(y^{\prime}_{n})\chi(z^{\prime}_{1}) \ldots\chi(z^{\prime}_{m})h_{i_{1}j_{1}}(y_{1})\ldots h_{i_{n}j_{n}}(y_{n}) \chi(z_{1})\ldots\chi(z_{m})|0\rangle_{\text{QFT}}\end{split} \tag{3.29}\] where \(x_{1},x_{2},x_{3}\) can be any three coordinates from the \(\bar{x}=(\bar{y},\bar{z})\) or \(\bar{x}^{\prime}=(\bar{y}^{\prime},\bar{z}^{\prime})\) that appear above. We recognize that this is just the gauge-fixed version of Higuchi's proposal as we can undo the residual gauge-fixing and write this as a group average which becomes \[\begin{split}&(\Psi_{\text{ng}},\Psi_{\text{ng}})=\frac{\text{ vol}(\text{SO}(d-1))}{\text{vol}(\text{SO}(1,d+1))}\int d\bar{x}d\bar{x}^{\prime}\, \delta G^{\bar{1}\bar{j}}_{n,m}(\bar{x})\,(\delta G^{\bar{1}\bar{j}}_{n,m})^{* }(\bar{x}^{\prime})\\ &\times\langle 0|h_{i^{\prime}_{1}j^{\prime}_{1}}(y^{\prime}_{1}) \ldots h_{i^{\prime}_{n}j^{\prime}_{n}}(y^{\prime}_{n})\chi(z^{\prime}_{1}) \ldots\chi(z^{\prime}_{m})h_{i_{1}j_{1}}(y_{1})\ldots h_{i_{n}j_{n}}(y_{n}) \chi(z_{1})\ldots\chi(z_{m})|0\rangle_{\text{QFT}}\.\end{split} \tag{3.30}\] This can be derived repeating the steps in section 5.3 of [8]. We can also write this as \[(\Psi_{\text{ng}},\Psi_{\text{ng}})=\frac{\text{vol}(\text{SO}(d-1))}{\text{ vol}(\text{SO}(1,d+1))}(\Psi_{\text{ng}}|\Psi_{\text{ng}})_{\text{QFT}} \tag{3.31}\] and we recognize Higuchi's inner product. In this expression the infinite volume of the conformal group in the denominator cancels the infinite QFT norm. The additional finite factor of \(\text{vol}(\text{SO}(d-1))\) emerges because an \(\text{SO}(d-1)\) subgroup of \(\text{SO}(1,d+1)\) leaves three points invariant. Since this is an overall finite normalization constant in the norm, it is physically irrelevant. Note that it would _not_ be correct to equate (3.29) with (3.30) away from the nongravitational limit even after replacing the QFT expectation values with a gravitational expectation value of the form (3.24). First, away from this limit the form of the states shown in (3.27) is corrected. More importantly the action of a special conformal transformation on the operators that appear there is corrected due to the correction term in (3.8). Consequently, special conformal transformations relate an expectation value to another expectation values with additional metric insertions. Therefore, away from the gravitational limit the gauge-fixed integrand that appears in (3.29) cannot simply be equated with a group average. 
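The way the two infinite factors compensate in (3.31) can be illustrated with a finite-dimensional toy model (an illustrative sketch with made-up functions, not part of the construction above): when the integrand is constant along a noncompact orbit, the naive integral grows linearly with the regulator, while its ratio to the regulated "group volume" stays finite and agrees with the gauge-fixed answer.

```python
import numpy as np

# Toy analogue of (3.31): exp(-x^2) is independent of y, so translations in y
# play the role of the noncompact symmetry group whose volume is divided out.
x = np.linspace(-10.0, 10.0, 2001)
dx = x[1] - x[0]
gauge_fixed = (np.exp(-x**2) * dx).sum()     # "gauge-fixed" answer: set y = 0

for L in (5.0, 20.0, 80.0):                  # regulate the orbit direction, |y| < L
    y = np.linspace(-L, L, 401)
    dy = y[1] - y[0]
    naive = (np.exp(-x[:, None]**2) * np.ones_like(y)[None, :]).sum() * dx * dy
    print(L, naive, naive / (2 * L))         # naive integral grows, the ratio does not

print(gauge_fixed, np.sqrt(np.pi))           # ratio and gauge-fixed answer, both ~1.77
```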
Therefore our proposal (3.26) reduces to Higuchi's proposal in the nongravitational limit but also provides a systematic method of correcting it at nonzero \(\kappa\). If, in addition to the nongravitational limit, we consider the free-field limit for matter fields in the principal series then the space of "conformal blocks" naturally provides an orthonormal basis for the Hilbert space under the norm (3.30). This interesting point is discussed further in Appendix B.

### Subtleties

In the technical discussion of the norm, we have glossed over some subtleties that we now list.

1. In even dimensions, the transformation of the measure might introduce a Weyl anomaly in (3.25) and (3.26) [22]. Relatedly, in string theory, where a similar functional integral appears, the critical dimension is fixed by demanding that the Weyl anomaly vanishes. So, in even dimensions the expression for the norm might need to be improved by adding auxiliary fields to preserve diff \(\times\) Weyl invariance. However, we also note that we have some more freedom because \(|Z_{0}[g,\chi]|^{2}\) is not a local functional and therefore it might be reasonable to study nonlocal measures. We leave a deeper study of the measure to future work. For odd \(d\), which includes the case \(d=3\) of physical interest since it corresponds to an asymptotically dS\({}_{4}\) spacetime, we do not expect these issues to arise.
2. The question of the normalizability of the Hartle-Hawking wavefunctional and its relation to the nonperturbative instability of de Sitter space has been discussed in [23; 24; 25]. This is related to the question of the measure. We will not address this issue in this paper. We note that physical quantities are always related to the _ratio_ of an expression of the form (3.25) and an expression of the form (3.26), which might be better behaved.
3. The formula (3.25) requires the presence of at least three operators in the product \(\delta\mathcal{G}_{n,m}^{*}A[g,\chi]\delta\mathcal{G}_{n^{\prime},m^{\prime}}\). Therefore it cannot be used to compute the norm of the original Euclidean vacuum. (This problem is separate from the one discussed in point 2.) This suggests that the vacuum state itself is not part of the Hilbert space at all and only excitations above the vacuum are normalizable states. This issue was noted earlier by Higuchi [12] and also, recently, in [26]. It is similar to the one that arises in string perturbation theory if one attempts to define the sphere partition function with fewer than three vertex operator insertions. It would be nice to understand this better, perhaps using the techniques of [27; 28; 29; 30; 31].
4. Consider a term with a given value of \(n,m\) in the expression (3.25). This involves the "expectation value" of a product of operators integrated with coefficient functions that are conformally covariant. (Recall the definition of \(\mathcal{G}_{n,m}\) in (2.4).) It will be shown in section 4 that the expectation value also transforms in a simple fashion under the conformal group. When combined with the coefficient function this produces an integrand that is invariant under rotations, dilatations and translations and in the \(\kappa\to 0\) limit under SCTs. Fixing three points in such an integral suffices to remove an obvious divergence that comes from the volume of the conformal group.
5. Nevertheless, there might be additional divergences in (3.16) that arise due to the "collision" of operators. This issue again parallels an issue that appears in string perturbation theory.
We hope that the ideas developed to deal with these divergences in that setting, including a suitable \(i\epsilon\) prescription [32], the use of string field theory techniques [33] and off-shell methods [31; 34; 35], will be effective in this setting as well. We leave further study of this issue to future work. ## 4 Cosmological correlators Cosmological correlators are of interest since they provide a leading-order approximation to the fluctuations generated during the inflationary epoch, when the universe could be approximated by a de Sitter spacetime. In this section, we will define these quantities within our framework and discuss some of their properties. ### Definition of cosmological correlators In the literature, cosmological correlators are usually computed as QFT-expectation values of the form \(\langle\chi(x_{1})\ldots\chi(x_{n})\rangle_{\rm QFT}\), where \(x_{i}\) are points on the late-time boundary of de Sitter. In a quantum field theory, the meaning of such correlators is clear. However, in a theory of quantum gravity, the product of local operators on the late-time slice does not commute with the constraints and so is not gauge invariant. For instance, under a diffeomorphism \(x^{i}\to x^{i}+\xi^{i}\), an operator insertion \(\chi(x)\) transforms as \[\chi(x)\to\chi(x)+\xi^{i}\partial_{i}\chi(x)\, \tag{4.1}\] and thus does not remain invariant. Since diffeomorphisms on the late-time slice are generated by the momentum constraint, this means that the operator \(\chi(x)\) does not commute with the momentum constraint. Likewise, it may be checked that the operator does not commute with the Hamiltonian constraint. This is expected from the well-known result [36] that gauge-invariant observables in gravity cannot be local. Nevertheless, it is possible to make sense of such operators by fixing the gauge. We propose the following definition of cosmological correlators. Let \[\mathcal{C}^{p,q}_{i\bar{j}}(\bar{x})=h_{i_{1}j_{1}}(z_{1})\ldots h_{i_{p}j_{ p}}(z_{p})\chi(y_{1})\ldots\chi(y_{q})\, \tag{4.2}\] denote a product of \(p\) metric fluctuations and \(q\) matter fluctuations. Now consider a state of the form (2.1). We propose that the cosmological correlator corresponding to the product (4.2) in the state (2.1) be defined as \[\langle\!\langle\Psi|\mathcal{C}^{p,q}_{i\bar{j}}(\bar{x})|\Psi\rangle\!\rangle _{\rm CC}=\!\!\sum_{n,m,n^{\prime},m^{\prime}}\kappa^{n+n^{\prime}}\langle\! \langle\delta\mathcal{G}^{*}_{n,m}\delta\mathcal{G}_{n,m}\mathcal{C}^{p,q}_{i \bar{j}}(\bar{x})\rangle\!\rangle\, \tag{4.3}\] using the expectation value (3.24). This can be written more explicitly as \[\langle\!\langle\Psi|\mathcal{C}^{p,q}_{i\bar{j}}(\bar{x})|\Psi\rangle\! \rangle_{\rm CC}\equiv\mathcal{N}_{1}\mathcal{N}_{2}\,\int\,DgD\chi D\bar{c}Dc ^{\prime}e^{-\mathcal{S}}\mathcal{C}^{p,q}_{i\bar{j}}(\bar{x})\, \tag{4.4}\] where, for convenience in the discussion below, we have introduced an "action" \(\mathcal{S}\) \[e^{-\mathcal{S}}\equiv e^{-S_{\rm gh}}\delta(g_{ii}-d)\delta(\partial_{i}g_{ ij})|Z_{0}[g,\chi]|^{2}\sum_{n,m,n^{\prime},m^{\prime}}\kappa^{n+n^{\prime}} \delta\mathcal{G}^{*}_{n,m}\delta\mathcal{G}_{n,m}. \tag{4.5}\] Let us discuss some features of our proposed correlator. 1. Recalling point 4 in section 3.5, our prescription for the correlator makes sense provided that the product (4.2) has at least three points. \(\delta\mathcal{G}_{n,m}\) also contains products of the form (4.2) integrated with conformally covariant functions. 
Therefore, if we study a \(k=p+q\)-point cosmological correlator, each term in the sum in (4.4) is an expectation value of a product of \((n+m)+(n^{\prime}+m^{\prime})+k\) operators where \((n+m)+(n^{\prime}+m^{\prime})\) operators are integrated with a conformally covariant function. It will be shown below that the expectation value is conformally covariant. (See 4.2 for a precise discussion.) Therefore a value of \(k\geq 3\) is sufficient to remove a potential divergence from the volume of the conformal group. It would be nice to understand two-point correlators, perhaps, by generalizing the methods of [27]. 2. The prescription (4.4) continues to make sense if we remove the insertions of \(\delta\mathcal{G}_{n,m}\) and consider only the vacuum state. In the vacuum state, the restriction \(k\geq 3\) does not apply. Although the vacuum is the state that is most commonly used to compute cosmological correlators, especially in the literature that makes contact with AdS/CFT [15, 37], we remind the reader that it is not normalizable when the norm is given by (3.16). #### 4.1.1 Dependence on the gauge choice The prescription (4.4) defines the expectation value of a _gauge-fixed_ operator. Since the product of operators \(\mathcal{C}^{p,q}_{\bar{i}\bar{j}}\) is not diff\(\times\)Weyl invariant, if one were to choose a different gauge (as opposed to the transverse-traceless gauge chosen above), one would obtain a different answer for the cosmological correlator. In fact, it is perfectly reasonable to make a different gauge choice, and alternative gauges are discussed in Appendix A. The transverse-traceless gauge is convenient for us since it will be shown below that the symmetries of cosmological correlators take on a simple form. In other gauges, these symmetries might be realized nonlinearly although different gauges might be suitable for different physical applications. Given a gauge choice, the prescription (4.4) defines an unambiguous conjugate-bilinear functional on two states. Therefore, there necessarily exists _some_ gauge invariant operator on the Hilbert space whose matrix elements are defined by (4.4). More precisely, with \(|\Psi\rangle=a|\Psi_{1}\rangle+b|\Psi_{2}\rangle\), we can simply define a gauge-invariant operator \(\hat{\mathcal{C}}^{p,q}_{\bar{i}\bar{j}\bar{x}}\) with matrix elements as follows \[(\Psi_{1},\hat{\mathcal{C}}^{p,q}_{\bar{i}\bar{j}\bar{x}}\,\Psi_{2})\equiv \frac{\partial}{\partial a^{*}}\frac{\partial}{\partial b}\langle\!\langle \Psi|\mathcal{C}^{p,q}_{\bar{i}\bar{j}}(\bar{x})|\Psi\rangle\!\rangle_{\rm CC }\, \tag{4.6}\] where the right hand side is defined by (4.4). The operator \(\hat{\mathcal{C}}^{p,q}_{\bar{i}\bar{j}\bar{x}}\) is not a local functional of \(\chi\) and \(h_{ij}\) and the \(\bar{x}\) are simply labels for this operator. The map between the product \(\mathcal{C}_{p,q}(\bar{x})\) and the gauge invariant operator depends on the gauge choice. However, the difference between different gauge choices manifests itself only at \(\mathrm{O}(\kappa)\). In the nongravitational limit, there is a simple gauge-invariant operator whose expectation value yields (4.4). This is given by simply taking the group average of (4.4). To see this more precisely, let \(U\) be the operator in nongravitational quantum-field theory that implements the action of the conformal group on the late-time metric and matter fluctuations. 
Then we have \[\hat{\mathcal{O}}^{p,q}_{\bar{i}\bar{j}\bar{x}}=\frac{1}{\text{vol}(\text{SO}(d-1) )}\int\,dU\,U^{\dagger}\mathcal{C}^{p,q}_{\bar{i}\bar{j}}(\bar{x})U,\qquad( \kappa\to 0)\, \tag{110}\] where \(dU\) is the associated Haar measure. The right hand side makes sense provided \(p+q\geq 3\). We see that \(\hat{\mathcal{O}}^{p,q}_{\bar{i}\bar{j}\bar{x}}\) is an average of an infinitely delocalized operator. In the nongravitational limit, it may be checked using (10) that the expectation value of (110) in a state of the form (10) is the same as (10). We find that \[\begin{split}(\,\Psi_{\text{ng}},\hat{\mathcal{C}}^{p,q}_{\bar{i} \bar{j}\bar{x}}\,\Psi_{\text{ng}})&=\frac{1}{\text{vol}(\text{SO} (1,d+1))}\int\,dU(0|\delta\mathcal{G}_{n,m}U^{\dagger}\mathcal{C}^{p,q}_{\bar{i },\bar{j}}(\bar{x})U\delta\mathcal{G}^{*}_{n,m}|0)_{\text{QFT}}\\ &=\langle 0|\delta\mathcal{G}_{n,m}\mathcal{C}^{p,q}_{\bar{i},\bar{j}}( \bar{x})\delta\mathcal{G}^{*}_{n,m}|0\rangle_{\text{QFT}}\\ &=\langle\Psi_{\text{ng}}|\mathcal{C}^{p,q}_{\bar{i},\bar{j}}( \bar{x})|\Psi_{\text{ng}}\rangle_{\text{QFT}}\,\end{split} \tag{111}\] where, in the second line, we use the invariance of \(|\Psi_{\text{ng}}\rangle\) under conformal transformations. At nonzero \(\kappa\) we do not know of any simple analogue of (110) that gives an explicit expression for the gauge-invariant operator whose matrix elements coincide with the gauge-fixed operator. Note also that, at nonzero \(\kappa\), one must take a linear combination of an infinite set of terms (10) with increasing values of \(p\) to construct a gauge-invariant operator of the form given in (4). ### Symmetries of cosmological correlators Cosmological correlators are defined in (10) by inserting a product of operators in the path integral weighted with a specific action. This action is invariant under the residual gauge transformation that are left unfixed in (10). 
We will utilize the finite action of translations, rotations and dilatations on the matter fields and the ghosts, which is given by \[\begin{split}\text{translations:}\quad& h_{ij}(x)\to h_{ij}(x+\zeta);\quad\chi(x)\to\chi(x+\zeta);\\ & c^{i}(x)\to c^{i}(x+\zeta);\quad\bar{c}^{i}(x)\to\bar{c}^{i}(x+\zeta);\\ \text{rotations:}\quad& h_{ij}(x)\to R_{i}^{\;k}R_{j}^{\;l}\,h_{kl}(R\cdot x);\quad\chi(x)\to\chi(R\cdot x);\\ & c^{i}(x)\to R^{i}_{\;k}\,c^{k}(R\cdot x);\quad\bar{c}^{i}(x)\to R^{i}_{\;k}\,\bar{c}^{k}(R\cdot x);\\ \text{dilatations:}\quad& h_{ij}(x)\to h_{ij}(\lambda x);\quad\chi(x)\to\lambda^{\Delta}\chi(\lambda x)\,\end{split} \tag{4.9}\] with the ghosts rescaling under dilatations with the weights that leave the ghost action \(S_{\rm gh}\) invariant. Since the "action" \(\mathcal{S}\) is invariant under the transformations above, cosmological correlators transform covariantly under these transformations. Under a combined dilatation and translation we find that in any physical state \(|\Psi\rangle\) \[\langle\!\langle\Psi|\mathcal{C}^{p,q}_{\bar{i}\bar{j}}(\lambda\bar{x}+\zeta)|\Psi\rangle\!\rangle_{\rm CC}=\lambda^{-q\Delta}\langle\!\langle\Psi|\mathcal{C}^{p,q}_{\bar{i}\bar{j}}(\bar{x})|\Psi\rangle\!\rangle_{\rm CC}. \tag{4.10}\] Under a rotation we find that \[\langle\!\langle\Psi|\mathcal{C}^{p,q}_{\bar{i}\bar{j}}(R\cdot\bar{x})|\Psi\rangle\!\rangle_{\rm CC}=R^{i^{\prime}_{1}}_{i_{1}}R^{j^{\prime}_{1}}_{j_{1}}\ldots R^{i^{\prime}_{p}}_{i_{p}}R^{j^{\prime}_{p}}_{j_{p}}\langle\!\langle\Psi|\mathcal{C}^{p,q}_{\bar{i}^{\prime}\bar{j}^{\prime}}(\bar{x})|\Psi\rangle\!\rangle_{\rm CC}. \tag{4.11}\] The symmetries of cosmological correlators should be distinguished from the symmetries of the coefficient functions (2.4) that appear in the wavefunctional.
Those coefficient functions are constrained by the full conformal group, even away from \(\kappa\to 0\), as a consequence of the WDW equation. Cosmological correlators are obtained by squaring and integrating the wavefunctional with a choice of gauge. Dilatations. The inclusion of dilatations in the group of symmetries requires explanation since, in a quantum field theory, scale invariance is often broken by loop effects. So the reader might worry that UV effects might force us to use a regulator that is inconsistent with scale invariance. However, here, the residual group of symmetries involving dilatations is a subgroup of the diff \(\times\) Weyl group. The latter symmetry is a gauge symmetry of the path integral used to compute expectation values. So we expect that even if counterterms need to be added to the expression for the wavefunctional to regulate UV divergences, \(|\Psi[g,\chi]|^{2}\) will still remain diff \(\times\) Weyl invariant. Moreover, the form of the ghost action is protected by BRST symmetry. Therefore we expect that the reduced Faddeev-Popov determinant that appears in (4.4) remains invariant under these symmetries even when loop effects are included. Special conformal transformations.For \(d>2\) and away from \(\kappa\to 0\) the action of special conformal transformations is corrected as shown in (3.8). Such a transformation acts on an insertion in (4.4) via \[\delta_{\xi}\chi=\xi^{i}\partial_{i}\chi;\qquad\delta_{\xi}g_{ij}=(P\xi)_{ij}\, \tag{4.12}\] where \(P\) is defined in (3.6). But since \(\xi^{i}\) contains factors of the metric, this transformation acts nonlinearly on the fields. In Appendix A, it is shown how \(\xi^{i}\) corresponding to SCTs can be found perturbatively in terms of the metric fluctuation. Keeping this structure in mind, we see that the action of (4.12) converts a single insertion of the metric or a matter field to an infinite series that involves powers of the metric. Therefore special conformal transformations relate low-point cosmological correlators to higher-point cosmological correlators [19, 38]. Although this is an important and useful constraint on cosmological correlators, it will not be required for our purposes. In the nongravitational limit and in \(d=2\), the metric-dependent term in SCTs goes away. So, in that setting, cosmological correlators with a fixed value of \(p,q\) transform covariantly under SCTs. We note that rotations, dilatations and translations act in a simple manner on cosmological correlators because they correspond to metric-independent residual gauge-transformations left unfixed by the transverse-traceless gauge. In some choices of gauge, such as the alternative gauge discussed in Appendix A, all the residual gauge transformations are metric dependent. In such a gauge, all the symmetry transformations of cosmological correlators will change the value of \(p\). Physically, cosmological correlators are still constrained by these symmetries in such gauges. But the constraints are more complicated than (4.11) and (4.10). ### Symmetries and initial conditions Our analysis of symmetries does _not_ assume that the state in (4.4) is the Euclidean vacuum, as obtained from the Hartle-Hawking proposal. Cosmological correlators in all states have the same symmetries; and vacuum cosmological correlators do not display an enhanced symmetry group. 
This also means that, contrary to what is sometimes claimed, the observed approximate symmetries of cosmological correlators including scale invariance do _not_ provide evidence that our universe was in the Hartle-Hawking state during the inflationary period. If there was a period of inflation, and if the universe was well described by an excited state of the form (2.1), one would obtain cosmological correlators with the same symmetries. This strengthens the argument made in [39] that the symmetries of correlators -- which completely fix specific low-point functions -- provide a sharp test of inflation, since it removes the need for assuming a particular initial state. On the other hand, to make contact with empirical observations, it is often interesting to consider departures from the slow-roll approximation. These must be present in the real world since inflation cannot go on forever but must end before the local curvature becomes arbitrarily small. To analyze these corrections in our language would require knowledge of the state away from the large-volume limit. We are unable to make any statements about these corrections since we have not considered these subleading terms in this paper or in [8].

## 5 Holography of information

The symmetries of cosmological correlators immediately lead to a remarkable result. Let \(\mathcal{R}\) be any open subset of \(\mathbb{R}^{d}\) and \(\tilde{x}^{\prime}=(x_{1}^{\prime},\ldots,x_{p+q}^{\prime})\) be an arbitrary set of \(p+q\) points in \(\mathbb{R}^{d}\). Then we can find a set \(\bar{x}\) of \(p+q\) points in \(\mathcal{R}\) such that \[x_{k}^{\prime}=\lambda x_{k}+\zeta,\qquad x_{k}\in\mathcal{R},\qquad k=1,\ldots,p+q\, \tag{5.1}\] for some choice of \(\lambda>0\) and vector \(\zeta\). (For instance, if \(\mathcal{R}\) contains a ball of radius \(r\) around \(x_{0}\), one may pick any reference point \(c\), take \(\lambda>\max_{k}|x_{k}^{\prime}-c|/r\) and \(\zeta=c-\lambda x_{0}\), so that \(x_{k}=x_{0}+(x_{k}^{\prime}-c)/\lambda\in\mathcal{R}\).) In other words, an arbitrary configuration of points can always be mapped to lie in the region \(\mathcal{R}\) with a suitable dilatation and translation. Therefore, if we are given the set of all cosmological correlators \[\{\langle\!\langle\Psi|\mathcal{C}^{p,q}_{\bar{i}\bar{j}}(\tilde{x})|\Psi\rangle\!\rangle_{\rm CC}\} \tag{108}\] for all values of \(p,q\) and all configurations of points \(x_{i}\in\mathcal{R}\), the symmetry (4.10) implies that this information is sufficient to determine all cosmological correlators in the state \(\Psi\) everywhere on the spatial slice. But the set of all cosmological correlators everywhere on the spatial slice is evidently enough to reconstruct all observables on the slice. This immediately leads us to the following result.

**Result**: _The set of all cosmological correlators in any open region \(\mathcal{R}\) in a state \(\Psi\) is sufficient to uniquely identify the state._

If the theory is in a mixed state, the set of cosmological correlators in \(\mathcal{R}\) is sufficient to determine the density matrix of the theory. Our result relies on the relation (4.10). In other gauge choices, such as the alternative gauge discussed in Appendix A.5, translations and dilatations will act on cosmological correlators by changing the value of \(p\) as the residual symmetry generators have metric-dependent corrections. Nevertheless, they still relate the set of all cosmological correlators in a region \(\mathcal{R}\) (_i.e._ cosmological correlators with all possible values of \(p,q\)) to cosmological correlators outside that region.
Therefore we expect that the result above should also hold for cosmological correlators in such gauges although it is calculationally harder to obtain the value of a cosmological correlator outside \(\mathcal{R}\) using information inside \(\mathcal{R}\).

### Nongravitational limit

Somewhat surprisingly, the result above remains true even as we take \(\kappa\to 0\). It was shown in [8] that the states \(\Psi_{\rm ng}\) (displayed in (3.27)) have the property that they are invariant under the de Sitter isometries, \[U|\Psi_{\rm ng}\rangle=|\Psi_{\rm ng}\rangle. \tag{109}\] Following the steps in subsection 3.4, we see that the expression for the cosmological correlator is simply \[\lim_{\kappa\to 0}\langle\!\langle\Psi_{\rm ng}|\mathcal{C}^{p,q}_{\bar{i}\bar{j}}(\tilde{x})|\Psi_{\rm ng}\rangle\!\rangle_{\rm CC}=\langle\Psi_{\rm ng}|\mathcal{C}^{p,q}_{\bar{i}\bar{j}}(\tilde{x})|\Psi_{\rm ng}\rangle_{\rm QFT}\, \tag{110}\] where, on the right hand side, we now find simply the QFT expectation value of \(\mathcal{C}^{p,q}_{\bar{i}\bar{j}}(\tilde{x})\) in the state \(|\Psi_{\rm ng}\rangle\). Using the invariance of the state under the de Sitter isometries we see that \[\langle\!\langle\Psi_{\rm ng}|\mathcal{C}^{p,q}_{\bar{i}\bar{j}}(\tilde{x})|\Psi_{\rm ng}\rangle\!\rangle_{\rm CC}=\langle\!\langle\Psi_{\rm ng}|U^{\dagger}\mathcal{C}^{p,q}_{\bar{i}\bar{j}}(\tilde{x})U|\Psi_{\rm ng}\rangle\!\rangle_{\rm CC}. \tag{111}\] So, in the nongravitational limit cosmological correlators are invariant under the entire conformal group. This includes the action of special conformal transformations that do not appear in the group of symmetries at finite \(\kappa\) shown in (4.9). The result on the holography of information follows immediately. Physically, this analysis tells us that holography of information does not rely on the measurement of "small gravitational tails" but rather on an imposition of the gravitational Gauss law. The constraints implied by the Gauss law restrict the form of the allowed states in the theory, which is why it is possible to uniquely identify states from cosmological correlators in any open set.

### Difference between quantum field theories and quantum gravity

We have shown that holography of information persists if one takes the nongravitational limit of a theory of gravity while preserving the gravitational Gauss law. We now explain why nongravitational quantum field theories do not display this property. Starting with the Euclidean vacuum, which is still obtained by the Hartle-Hawking prescription, states in a QFT take the form \[|\psi\rangle=\int d\vec{y}d\vec{z}\,\psi^{\vec{i}\vec{j}}(\vec{y},\vec{z})h_{i_{1}j_{1}}(y_{1})\ldots h_{i_{n}j_{n}}(y_{n})\chi(z_{1})\ldots\chi(z_{m})|0\rangle\, \tag{100}\] where \(h_{ij}\) are transverse traceless graviton fluctuations. Here \(\psi^{\vec{i}\vec{j}}\) is an arbitrary smearing function and the only constraint is that \(|\psi\rangle\) should be normalizable under the usual QFT norm, \[\begin{split}\langle\psi|\psi\rangle_{\rm QFT}=\int d\vec{x}d\vec{x}^{\prime}\,\psi^{\vec{i}\vec{j}}(\vec{y},\vec{z})^{*}\psi^{\vec{i}^{\prime}\vec{j}^{\prime}}(\vec{y}^{\prime},\vec{z}^{\prime})\\ \times\langle 0|h_{i^{\prime}_{1}j^{\prime}_{1}}(y^{\prime}_{1})\ldots\chi(z^{\prime}_{m})h_{i_{1}j_{1}}(y_{1})\ldots\chi(z_{m})|0\rangle_{\rm QFT}\.\end{split} \tag{101}\] We emphasize the difference with the Hilbert space obtained in the nongravitational limit of a gravitational theory where the states take the form (3.27).
In (3.27) the smearing function is constrained by conformal symmetry, whereas in (100) it is not. Moreover, the smearing functions that appear in (3.27) are _disallowed_ by normalizability in (101). This is simply the statement that, apart from the vacuum, there are no states that are invariant under the de Sitter isometry group in the usual QFT Hilbert space. This can also be seen directly from the expression for the norm (101). The correlator in the Euclidean vacuum is conformally covariant because the Euclidean vacuum itself is invariant. But if this correlator were to be integrated with the smearing function that appears in (3.27) the entire integrand would be invariant under the action of the conformal group. Therefore, the norm would pick up a divergence proportional to the volume of the conformal group. When we consider states in the nongravitational limit of a gravitational theory and use the correct norm, this divergence is cancelled by dividing by the volume of the conformal group but there is no such factor in the ordinary QFT norm. Therefore, for generic values of \(\lambda\) and \(\zeta\) and for any QFT state except for the vacuum, \[\langle\psi|{\cal C}^{p,q}_{\vec{i}\vec{j}}(\lambda\vec{x}+\zeta)|\psi\rangle_{\rm QFT}\neq\lambda^{-q\Delta}\langle\psi|{\cal C}^{p,q}_{\vec{i}\vec{j}}(\vec{x})|\psi\rangle_{\rm QFT}. \tag{102}\] So the argument leading to the holography of information breaks down in the QFT Hilbert space. As usual, in a QFT, it is possible to prepare "split states" [40] where correlators coincide inside a region but differ outside that region. This means the following. Let \(\overline{\mathcal{R}}_{\epsilon}=\overline{\mathcal{R}\cup\epsilon}\) be the complement of the union of the region \(\mathcal{R}\) and a small "collar region", \(\epsilon\). Then given any two states of the form (100) one can find a split state with the property that when \(x_{i}\in\mathcal{R}\) and \(x^{\prime}_{i}\in\overline{\mathcal{R}}_{\epsilon}\) \[\langle\psi^{\rm split}|\mathcal{C}^{p,q}_{\bar{i}\bar{j}}(\bar{x})\mathcal{C}^{p^{\prime},q^{\prime}}_{\bar{i}^{\prime}\bar{j}^{\prime}}(\bar{x}^{\prime})|\psi^{\rm split}\rangle_{\rm QFT}=\langle\psi_{1}|\mathcal{C}^{p,q}_{\bar{i}\bar{j}}(\bar{x})|\psi_{1}\rangle_{\rm QFT}\langle\psi_{2}|\mathcal{C}^{p^{\prime},q^{\prime}}_{\bar{i}^{\prime}\bar{j}^{\prime}}(\bar{x}^{\prime})|\psi_{2}\rangle_{\rm QFT} \tag{111}\] for any choice of the operators \(\mathcal{C}^{p,q}_{\bar{i}\bar{j}}(\bar{x})\) and \(\mathcal{C}^{p^{\prime},q^{\prime}}_{\bar{i}^{\prime}\bar{j}^{\prime}}(\bar{x}^{\prime})\). In such a split state, not only are observations in \(\overline{\mathcal{R}}_{\epsilon}\) not determined by observations in \(\mathcal{R}\), they are not even correlated. Clearly this means that the full state cannot be identified by observations in \(\mathcal{R}\). We conclude that the result on holography of information marks a clear mathematical difference between the properties of quantum field theory and quantum gravity, in terms of how such theories localize information. This difference persists in the nongravitational limit of a gravitational theory provided one consistently imposes the Gauss law while taking this limit.

### Comparison to flat space and AdS

The result above can be placed in the context of similar results proved in AdS and in asymptotically flat space. There, the principle of holography of information is usually framed as follows: "the information in the bulk of a Cauchy slice is also available near its boundary."
More precisely, in asymptotically flat space, it was shown in [1] that all information that is available on all of \(\mathcal{I}^{+}\) is also available on its past boundary \(\mathcal{I}^{+}_{-}\); and, in a spacetime that is asymptotically AdS, all information that is available on the timelike boundary is also available in an infinitesimal time band. In the form above, it is unclear how the principle should be generalized to dS, where a Cauchy slice has no boundary. But, consider the following alternative phrasing of this principle: "in all pure states of the theory, whenever a region, \(\mathcal{R}\), is completely surrounded by its complement, \(\overline{\mathcal{R}}\), then all the information inside \(\mathcal{R}\) is accessible in \(\overline{\mathcal{R}}\)."2 In flat space and AdS, this is trivially equivalent to the usual statement; when \(\overline{\mathcal{R}}\) surrounds \(\mathcal{R}\) then it also includes the asymptotic region near infinity. Footnote 2: We restrict to pure states to avoid situations where entanglement with an auxiliary system has produced an “island” inside \(\mathcal{R}\). The second form of the slogan generalizes naturally to dS. Since the Cauchy slices in dS are compact, every region \(\mathcal{R}\) both surrounds its complement and is surrounded by its complement. (See Figure 2.) So it is natural that cosmological correlators in every region \(\mathcal{R}\) contain all the information that is available on the Cauchy slice in a pure state. ### Higher-spin matter fields and stringy corrections In the analysis above, we have studied a massive scalar field in the matter sector. This choice was made for simplicity. It seems clear that the proof of the principle of holography of information will go through in the presence of higher-spin matter. Our results in [8] and in this paper rely only on an _asymptotic analysis_. The assumption is that the formalism of quantum field theory makes sense at asymptotic infinity. This assumption is usually taken to be valid even in the presence of stringy corrections.3 Footnote 3: Here, we do not enter into the recent debates on whether de Sitter solutions can be found within string theory [41; 42; 43]. However, there is an important difference in dS compared to AdS and flat space. In the latter setting, it is reasonable to assume that the asymptotic structure of the spacetime is not modified even nonperturbatively. Therefore the results of [1] are expected to hold even nonperturbatively. But the asymptotic structure of dS is not expected to be nonperturbatively stable [44]. Therefore, nonperturbatively, our analysis might require modifications. ### Cautionary physical remarks The principle of holography of information provides an interesting mathematical difference between quantum field theories and quantum gravity, but the result should be interpreted with care. First, as we have emphasized above, there are no _local_ gauge invariant operators in the theory. Therefore, the measurement of a cosmological correlator is secretly a nonlocal process. Cosmological correlators are labelled by a set of points in \(\mathcal{R}\); but they do not correspond to any physical observable that is strictly localized in \(\mathcal{R}\). Second, in both AdS and flat space, if one considers heavy, nonperturbative states in the bulk, then it is usually necessary to study nonperturbative correlators at infinity to identify the state. This point was already noted in [1; 4] and recently re-emphasized in [45; 46]. 
So, in a typical heavy classical state, mundane notions of locality are preserved at all orders in perturbation theory. This is important since it explains why we do not "see" the holography of information all around us. This does not mean that the unusual localization of information in gravity is unimportant. In its nonperturbative avatar, it is important for understanding the information paradox [3]. Figure 2: _In flat space and in AdS (left), when a region on a spatial slice, \(\mathcal{R}\) is surrounded by its complement then \(\overline{\mathcal{R}}\) extends to infinity. But in dS (right), every region \(\mathcal{R}\) surrounds and is surrounded by its complement on a sphere._ Moreover, if one studies simple states like low-energy excitations about empty AdS then the holography of information can be seen even within perturbation theory. We expect the same features to hold in dS. In a "little Hilbert space" comprising simple excitations about the Hartle-Hawking state, we expect it should be possible to identify states uniquely using only perturbative cosmological correlators. On the other hand, to identify sufficiently complicated states might require very high-point cosmological correlators. It would be interesting to work this out in more detail. ## 6 Discussion In this paper, we started by studying the norm on the space of solutions to the WDW equation obtained in [8]. The magnitude-squared of these wavefunctionals leads to a diff \(\times\) Weyl invariant functional. We defined the norm by averaging this functional over field configurations and dividing by the volume of the diff \(\times\) Weyl group. We used the Faddeev-Popov trick to make sense of this expression, leading to the final gauge-fixed expression (3.16). In the nongravitational limit, our norm reduces to the one proposed by Higuchi on the space of group-averaged states. Therefore, our procedure provides a derivation of Higuchi's prescription in the nongravitational limit and a means of understanding gravitational corrections to this prescription. In section 5, we explored the meaning of cosmological correlators. We proposed that these commonly-discussed quantities correspond to gauge-fixed observables. These observables are labelled by a set of local coordinates although their gauge-invariant description is necessarily nonlocal. We showed that, in any state of the theory, these observables are invariant under rotations, translations and dilatations of their coordinate labels. This marks a sharp difference from nongravitational quantum field theories, where cosmological correlators manifest this symmetry in the vacuum but not in other states. As a consequence of this symmetry, we showed that, in a theory of gravity, cosmological correlators in an arbitrarily small region, \(\mathcal{R}\), suffice to uniquely identify any state in the theory. These results open up several interesting questions that we now describe. Holography in de Sitter.Strictly speaking, our result on the holography of information does not allow us to obtain information about a higher-dimensional space from a lower-dimensional space since \(\mathcal{R}\) still has codimension 0. This is similar to the situation in AdS -- where arguments based on the gravitational constraints are sufficient to show that information in the bulk is available in an infinitesimal time band in the boundary, but are not sufficient to squeeze the time band to a time slice. Moreover, our results pertain to information but do not address the issue of bulk dynamics. 
So the natural question is whether there is some way of understanding bulk dynamics in all of de Sitter space from a lower-dimensional subregion on the late-time slice. Similar ideas were recently explored in [47]. Such a holographic duality, if it exists, should account for all states in the bulk theory. In the literature, the study of dS/CFT has often been restricted to understanding the Euclidean vacuum, obtained from the Hartle-Hawking proposal. But as we have shown the bulk theory has many other interesting states. It would also be interesting to understand the relationship of such a holographic dual to the proposal of static patch holography [48; 49]. There, it is suggested, using very different arguments, that all information about the state can be obtained from the bifurcation sphere that lies between two static patches. This sphere lies in the "bulk" of dS whereas our results have to do with the asymptotic late-time slice. So our results do not contradict this proposal, but nor do they obviously lend it support. **Observers in quantum cosmology.** An interesting conceptual question is the following. Gauge-invariant operators in gravity must be nonlocal but this is in apparent contradiction with our physical intuition that measurements are made locally. Fixing the gauge, as we did to study cosmological correlators, provides a mathematically convenient method of obtaining observables that are labelled by a set of coordinates. But it is important to develop a deeper understanding of the meaning of measurements in a cosmological setting. The usual theory of measurement [50] involves an external apparatus that is entangled with the system by the experimenter who turns on an interaction Hamiltonian. Clearly this cannot correctly describe measurements in a theory of gravity, where bulk evolution is generated by the constraints, that cannot be altered at will. Presumably, the correct framework is to study an observer who is already part of the system and where measurement happens through the _autonomous evolution_ of the system. We do not know the correct formalism to analyze this process. A simple model of an observer was recently discussed in [26] where it was argued that the algebra of observables dressed to the observer's worldline is of type II\({}_{1}\). Since we have presented the full Hilbert space and a formalism for understanding observables, it should be possible to embed the model of [26] into our analysis and make it precise. It would be interesting to work out these details. **Technical questions about the norm.** From a technical perspective, we would like to better understand the functional integral that was used to define the norm. Some subtleties, including the question of the measure, the requirement of a minimum of three operator insertions, and potential divergences due to the "collision" of operators are listed in subsection 3.5. Similar problems have been studied extensively in string perturbation theory and we hope that the techniques developed there can be applied to the functional integrals that appear in our context. **Implications for cosmology.** Our result implies that when gravitational constraints are taken into account, every physical state has the same symmetries as the vacuum. This would not be true in quantum field theory where the vacuum is singled out by its symmetries. This suggests that the approximate scale invariance observed in the early universe cannot be used to justify the Hartle-Hawking proposal. 
It is in fact a general consequence of the constraints in any asymptotically de Sitter spacetime, such as the early universe as predicted by inflation. ## Acknowledgments We are grateful to Simon Caron-Huot, Abhijit Gadde, Rifath Khan, Alex Maloney, Ashoke Sen and Sandip Trivedi for helpful discussions. We also acknowledge several discussions with the string theory group at ICTS-TIFR. S.R. would like to acknowledge the hospitality of the 12th Joburg Workshop on string theory, the Abu Dhabi meeting in theoretical physics and the workshop on observables in quantum gravity (IISER Mohali) where preliminary versions of these results were presented. S.R. is partially supported by a Swarnajayanti fellowship, DST/SJF/PSA-02/2016-17, of the Department of Science and Technology. J.C. is supported by the Simons Collaboration on Nonperturbative Bootstrap. Research at ICTS-TIFR is supported by the Department of Atomic Energy, Government of India, under Project Identification Nos. RTI4001. ## Appendix A Residual gauge symmetry In this Appendix, we study the residual gauge symmetry after fixing the diffeomorphism and Weyl gauge symmetries using \[\partial_{i}g_{ij}=0,\qquad\delta^{ij}g_{ij}=d. \tag{108}\] After solving for the traceless condition, the variation of the metric is a combination of a diffeomorphism and a Weyl transformation \[\delta_{\xi}g_{ij}=(P\xi)_{ij}\equiv\mathcal{L}_{\xi}g_{ij}-\frac{1}{d}g_{ij} \delta^{k\ell}\mathcal{L}_{\xi}g_{k\ell} \tag{109}\] in terms of the Lie derivative \[\mathcal{L}_{\xi}g_{ij}=\xi^{k}\partial_{k}g_{ij}+g_{ik}\partial_{j}\xi^{k}+g _{kj}\partial_{i}\xi^{k}=\nabla_{i}\xi_{j}+\nabla_{j}\xi_{i}. \tag{110}\] The residual gauge symmetry algebra corresponds to solutions of \[\partial_{i}(P\xi)_{ij}=0. \tag{111}\] The metric is written as \[g_{ij}=\delta_{ij}+\kappa h_{ij}\, \tag{112}\] which leads to the expansion \[(P\xi)_{ij}=(P_{0}\xi)_{ij}+\kappa(P_{1}\xi)_{ij}+\kappa^{2}(P_{2}\xi)_{ij}\, \tag{113}\] that is exact since no higher orders of \(\kappa\) appear. Firstly, note that in the limit \(\kappa\to 0\), the residual symmetry is \(\mathrm{SO}(1,d+1)\) because we then have \[(P_{0}\zeta)_{ij}=\partial_{i}\zeta_{j}+\partial_{j}\zeta_{i}-\frac{2}{d} \delta_{ij}\delta^{k\ell}\partial_{k}\zeta_{\ell}=0\, \tag{114}\] for any conformal Killing vector \(\zeta\). In other words, conformal Killing vectors preserve the background metric and hence trivially preserve any gauge-fixing condition. ### Translations, rotations and dilatations We will see that translations, rotations and dilatations remain residual symmetries at finite \(\kappa\). To show this, we write the explicit form \[\partial_{i}(P\xi)_{ij}=\left[\left(\partial_{k}g_{ij}+\partial_{j}g_{ik}-\frac{ 2}{d}g_{i\ell}\partial_{\ell}g_{jk}\right)\partial_{k}+\left(g_{ij}\delta_{k \ell}+g_{jk}\delta_{i\ell}-\frac{2}{d}g_{ik}g_{j\ell}\right)\partial_{k} \partial_{\ell}\right]\xi^{j}. \tag{100}\] We see that translations \(\xi^{j}=\text{const}\) are always a residual symmetry since at least one derivative acts on \(\xi^{j}\) in (100). For rotations and dilatations, the term with two derivatives vanishes so we get \[\partial_{i}(P\xi)_{ij}=\left(\partial_{k}g_{ij}+\partial_{j}g_{ik}-\frac{2}{ d}g_{i\ell}\partial_{\ell}g_{jk}\right)\partial_{k}\xi^{j}. \tag{101}\] Rotations are of the form \(\xi^{j}=M^{jk}x^{k}\) where \(M^{jk}\) is antisymmetric. So we see that \(\partial_{k}\xi^{j}=M^{jk}\) and the above expression vanishes by symmetry. 
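The translation and rotation cases can also be checked symbolically. Below is a minimal sketch, assuming \(d=3\) and a single illustrative transverse-traceless Fourier mode \(h_{ij}=\epsilon_{ij}\cos(x_{3})\); it evaluates \(\partial_{i}(P\xi)_{ij}\) at finite \(\kappa\) for a translation and a rotation, and the dilatation case discussed next can be checked in the same way.

```python
import sympy as sp

# A single transverse-traceless mode as an illustrative (assumed) background:
# h_ij = eps_ij * cos(x3), with eps traceless and transverse to the e_3 direction,
# so g = delta + kappa*h obeys the gauge conditions d_i g_ij = 0 and g_ii = d.
x1, x2, x3, kappa = sp.symbols('x1 x2 x3 kappa', real=True)
X, d = [x1, x2, x3], 3
eps = sp.Matrix([[1, 0, 0], [0, -1, 0], [0, 0, 0]])
g = sp.eye(d) + kappa * eps * sp.cos(x3)

def div_P(xi):
    """d_i (P xi)_ij with (P xi)_ij = Lie_xi g_ij - (1/d) g_ij delta^{kl} Lie_xi g_kl."""
    lie = sp.Matrix(d, d, lambda i, j: sum(
        xi[k] * sp.diff(g[i, j], X[k])
        + g[i, k] * sp.diff(xi[k], X[j])
        + g[k, j] * sp.diff(xi[k], X[i]) for k in range(d)))
    P = lie - sp.Rational(1, d) * g * sum(lie[k, k] for k in range(d))
    return [sp.simplify(sum(sp.diff(P[i, j], X[i]) for i in range(d))) for j in range(d)]

translation = [sp.Integer(0), sp.Integer(0), sp.Integer(1)]
rotation = [x2, -x1, sp.Integer(0)]          # xi^j = M^{jk} x^k with antisymmetric M
print(div_P(translation), div_P(rotation))   # -> [0, 0, 0] [0, 0, 0]
```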
For dilatations, we have \(\xi^{j}=x^{j}\) so \(\partial_{k}\xi^{j}=\delta^{j}_{k}\) and the expression vanishes using the transverse and trace conditions. As a result, we see that translations, dilatations and rotations are residual symmetries. ### Modified special conformal transformation The usual special conformal transformation takes the form \[v_{0}^{i}=2(\beta\cdot x)x^{i}-x^{2}\beta^{i}. \tag{102}\] We can check that this is not a residual symmetry as we have \[\partial_{i}(Pv_{0})_{ij}=-2\kappa d\beta^{j}h_{ij}\, \tag{103}\] which does not vanish. However, using a standard perturbative procedure, the SCT can be systematically corrected [20] to give a residual symmetry, \(\xi\), \[\xi=v_{0}+\kappa v_{1}+\kappa^{2}v_{2}+\ldots. \tag{104}\] Define the operator \[(\mathcal{D}_{0}\xi)^{j}\equiv\partial_{i}(P_{0}\xi)_{ij}=\partial^{2}\xi^{j}+ \left(1-\frac{2}{d}\right)\partial_{i}\partial_{j}\xi^{i}. \tag{105}\] The corrections \(v_{n}\) are obtained by solving the equation (100) \[(\mathcal{D}_{0}v_{n})^{j}=s_{n}^{j},\qquad n=1,2,\ldots \tag{106}\] where \[s_{1}^{j}=2d\beta^{i}h_{ij}\, \tag{107}\] and the higher order sources are determined iteratively using \[s_{n}^{j}=-\partial_{i}(P_{1}v_{n-1})_{ij}-\partial_{i}(P_{2}v_{n-2})\,\qquad n \geq 2. \tag{108}\] We can show that, provided one places physical boundary conditions on the metric fluctuation, the equation (107) always has a smooth solution \(v^{j}\) that is smooth on the sphere so that it is always possible to correct the SCT in this way. As detailed in the next section these boundary conditions constrain the metric around \(x=\infty\) to be \[h_{ij}=W_{ikj\ell}\frac{x_{k}x_{\ell}}{|x|^{4}}+O(|x|^{-3})\, \tag{108}\] where \(W_{ijk\ell}\) is a constant tensor with the symmetries and tracelessness of a Weyl tensor. As a result the first source has the fall-off \[s_{1}^{i}=2d\,\beta^{i}W_{ikj\ell}\frac{x_{k}x_{\ell}}{|x|^{4}}+O(|x|^{-3})\, \tag{109}\] and higher order sources are more suppressed as they contain additional factors of the metric. The decay of the sources at infinity guarantees that solutions for \(v^{j}\) always exist. For the leading fall-off, we obtain the solution \[v_{1}^{j}=\beta^{i}W_{ikj\ell}\frac{x_{k}x_{\ell}}{|x|^{2}}+O(|x|^{-1})=|x|^{2 }\beta^{j}h_{ij}+O(|x|^{-1})\, \tag{110}\] which is just proportional to the source at this order. For the subleading fall-offs, the solution can be written in Fourier space \[v^{j}(x)=\int\frac{d^{d}p}{(2\pi)^{d}}\,\frac{1}{p^{2}}\left(-\delta_{ij}+ \frac{2(d-2)}{d-1}\frac{p_{i}p_{j}}{k^{2}}\right)e^{ipx}\hat{s}^{i}(p)\, \tag{111}\] which is well-defined as the sources are \(s^{i}=O(|x|^{-3})\) at infinity so their Fourier transforms are \(\hat{s}^{i}(p)=O(|p|^{3-d})\) around \(p=0\). ### Boundary condition for the metric Although the physical metric is the round metric on \(S^{d}\), we have performed a Weyl transformation so that the background metric becomes flat. The Weyl factor is singular at \(x=\infty\) so we must impose an appropriate boundary condition at infinity. The physical metric takes the form \[ds^{2}=\frac{4}{(1+|x|^{2})^{2}}(\delta_{ij}+\kappa h_{ij})dx_{i}dx_{j} \tag{112}\] and we should demand that this metric be regular. In addition we impose the gauge-fixing conditions \[\partial_{i}h_{ij}=0,\qquad\delta_{ij}h_{ij}=0. \tag{113}\] The Ricci scalar of the physical metric must be a regular function on the sphere. At first order in \(\kappa\), it is a linear combination of \(\partial_{i}\partial_{j}h_{ij},x_{j}\partial_{i}h_{ij}\) and \(x_{i}x_{j}h_{ij}\). 
The first two terms vanish due to the transverse condition and we obtain \[R=d(d-1)(1+\kappa h_{ij}x_{i}x_{j})+O(\kappa^{2}). \tag{114}\] This must be a smooth function on the sphere which implies that \(h_{ij}x_{i}x_{j}\) must tend to a constant \(C\) at infinity. Our gauge-fixing conditions also imply that \[\partial_{\rm i}(x_{j}h_{ij})=\delta_{ij}h_{ij}+x_{j}\partial_{\rm i}h_{ij}=0\, \tag{104}\] which after integration over a ball of radius \(r\) gives by Stokes' theorem \[0=\int_{B_{r}}d^{d}x\,\partial_{\rm i}(x_{j}h_{ij})=\int_{S_{r}}d^{d-1}\Omega\, r^{d-2}x_{i}x_{j}h_{ij}={\rm vol}(S^{d-1})r^{d-2}C\qquad r\to+\infty. \tag{105}\] This implies that \(C=0\) so we find that \[\lim_{x\to\infty}h_{ij}x_{i}x_{j}=0. \tag{106}\] Additional constraints come from demanding that the metric be smooth near infinity. Expansion around infinity.The metric around \(x=\infty\) can be expanded using the inverted coordinates defined as \[\tilde{x}_{i}=\frac{x_{i}}{|x|^{2}}. \tag{107}\] The inverted metric \(\tilde{h}_{ij}\) is defined as \[ds^{2}=\frac{4}{(1+|\tilde{x}|^{2})^{2}}(\delta_{ij}+\kappa\tilde{h}_{ij})d \tilde{x}_{i}d\tilde{x}_{j}\, \tag{108}\] and is related to the original metric using \[h_{ij}=\frac{1}{|x|^{4}}(\delta_{ik}|x|^{2}-2x_{i}x_{k})(\delta_{j\ell}|x|^{2} -2x_{j}x_{\ell})\tilde{h}_{k\ell}. \tag{109}\] The expansion around \(x=\infty\) is an expansion around \(\tilde{x}=0\). The boundary condition (106) in inverted coordinates gives \[\lim_{\tilde{x}\to 0}\frac{\tilde{x}_{i}\tilde{x}_{j}\tilde{h}_{ij}}{|\tilde{x}|^{ 4}}=0. \tag{110}\] To analyze this condition, we demand that the metric be smooth and at least twice differentiable near \(\tilde{x}=0\) so that it is possible to perform a series expansion \[\tilde{h}_{ij}(\tilde{x})=H^{(0)}_{ij}+H^{(1)}_{ijk}\tilde{x}_{k}+H^{(2)}_{ ijk\ell}\tilde{x}_{k}\tilde{x}_{\ell}+\dots\, \tag{111}\] where \(H^{(n)}_{ijk\dots}\) are constant tensors. For the leading orders, the limit implies that we identically have \[H^{(0)}_{ij}\tilde{x}_{i}\tilde{x}_{j}=0,\qquad H^{(1)}_{ijk}\tilde{x}_{i} \tilde{x}_{j}\tilde{x}_{k}=0,\qquad H^{(2)}_{ijk\ell}\tilde{x}_{i}\tilde{x}_{j }\tilde{x}_{k}\tilde{x}_{\ell}=0. \tag{112}\] Taking derivatives, we obtain that \(H^{(0)}_{ij}=0\). For the linear term, we obtain constraints on \(H^{(1)}_{ijk}\) which allows us to write the transverse equation as \[0=\partial_{\rm i}h^{(1)}_{ij}=\frac{(d-1)}{2|x|^{4}}H^{(1)}_{k\ell j}x_{k}x_{ \ell}, \tag{113}\] which implies that \(H^{(1)}_{ijk}=0\). This means that \(\lim_{\tilde{x}\to 0}\tilde{\partial}_{k}\tilde{h}_{ij}=0\) so that \(\tilde{x}_{i}\) are the Riemann normal coordinates around \(\tilde{x}=0\). Thus we have \[\tilde{g}_{ij}=\tilde{g}_{ij}(0)+\frac{1}{3}\tilde{R}_{ikj\ell}(0)\tilde{x}_{k} \tilde{x}_{\ell}+O\big{(}|\tilde{x}|^{3}\big{)}\, \tag{100}\] and this fixes the term quadratic to \[H^{(2)}_{ijk\ell}=\frac{1}{3}\tilde{R}_{ikj\ell}(0). \tag{101}\] The tracelessness condition also implies that \(\tilde{R}_{ij}(0)=0\) so this is really a Weyl tensor. As a result \(H^{(2)}_{ijk\ell}\) can be any constant tensor with the same symmetries of a Weyl tensor. Conversely, we can verify that this gives a valid metric. As a result, we obtain the leading behavior of the metric at infinity \[h_{ij}=W_{ikj\ell}\frac{x_{k}x_{\ell}}{|x|^{4}}+O\big{(}|x|^{-3}\big{)},\qquad x \rightarrow+\infty\, \tag{102}\] where \(W_{ijk\ell}\) is a constant tensor with the symmetries and tracelessness of a Weyl tensor. 
Note that for dS\({}_{4}\), we have \(W_{ijk\ell}=0\) as there is no non-trivial Weyl tensor in \(d=3\); so in this case we have \(h_{ij}=O(|x|^{-3})\). ### Residual symmetry algebra The residual symmetry algebra is generated by vector fields \(\xi[g]\) which in general have metric-dependent corrections. The Lie bracket between two generators must be modified as \[[\xi_{1}[g],\xi_{2}[g]]_{\rm M}=[\xi_{1}[g],\xi_{2}[g]]-\delta_{\xi_{1}[g]}\xi_{2}[g]+\delta_{\xi_{2}[g]}\xi_{1}[g]\, \tag{103}\] where we have added the action of the transformation on the metric-dependent terms obtained from the transformation of the metric. For example, let \(\xi_{1}\) be a translation, rotation or dilatation and \(\xi_{2}\) be a modified SCT. We can write \[\xi_{2}=\zeta+v[h]\, \tag{104}\] where \(\zeta\) is the unmodified SCT and \(v[h]\) contains the metric-dependent corrections. We then have \[[\xi_{1},\xi_{2}]_{\rm M}=[\xi_{1},\zeta]+[\xi_{1},v[h]]-\delta_{\xi_{1}}v[h]=[\xi_{1},\zeta]\, \tag{105}\] which gives the standard Lie bracket, as if the SCT were unmodified. As a result, the modification at finite \(\kappa\) does not affect the algebra, which is always the conformal algebra. The residual symmetry group is then always SO\((1,d+1)\). However, the finite \(\kappa\) corrections to the SCT modify the way this group acts on the fields. ### Alternative gauge-fixing conditions In this paper, we have presented our analysis in a Weyl gauge where the background metric for the sphere is flat. We can also consider a similar gauge-fixing procedure where we keep the round metric. In this case, we write the metric as \[g_{ij}=\gamma_{ij}+\kappa h_{ij},\qquad\gamma_{ij}=\frac{4\delta_{ij}}{(1+|x|^{2})^{2}}\, \tag{111}\] where \(\gamma_{ij}\) is the round metric on \(S^{d}\). The gauge fixing conditions can be taken to be \[\gamma^{jk}D_{k}g_{ij}=0,\qquad\gamma^{ij}g_{ij}=d\, \tag{112}\] where we use \(D_{i}\) for the background covariant derivative with respect to \(\gamma_{ij}\). After solving for the trace condition, the variation of the metric is \[\delta_{\xi}g_{ij}=(P\xi)_{ij}=(P_{0}\xi)_{ij}+\kappa(P_{1}\xi)_{ij}+\kappa^{2}(P_{2}\xi)_{ij}\, \tag{113}\] and the residual symmetry is generated by solutions of \[\gamma^{jk}D_{k}(P\xi)_{ij}=0. \tag{114}\] Again we see that at \(\kappa\to 0\), the residual symmetry is generated by the CKVs, since \[(P_{0}\xi)_{ij}=D_{i}\xi_{j}+D_{j}\xi_{i}-\frac{2}{d}\gamma_{ij}\gamma^{k\ell}D_{k}\xi_{\ell} \tag{115}\] is the conformal Killing equation on \(S^{d}\). At finite \(\kappa\), we can write \[\xi^{i}=v_{0}^{i}+\kappa v_{1}^{i}+\kappa^{2}v_{2}^{i}+\dots. \tag{116}\] Taking \(v_{0}\) to be any CKV, we can make \(\xi\) into a residual symmetry by choosing the corrections \(v_{n}\) to be solutions of \[(\widetilde{\cal D}_{0}v_{n})^{i}=s_{n}^{i},\qquad n=1,2,\dots,\qquad(\widetilde{\cal D}_{0}v)^{i}\equiv\gamma^{jk}\gamma^{i\ell}D_{j}(P_{0}v)_{k\ell}\, \tag{117}\] where the sources are given as \[s_{1}^{i}=-\gamma^{jk}\gamma^{i\ell}D_{j}(P_{1}v_{0})_{k\ell},\qquad s_{n}^{i}=-\gamma^{jk}\gamma^{i\ell}\left(D_{j}(P_{1}v_{n-1})_{k\ell}+D_{j}(P_{2}v_{n-2})_{k\ell}\right),\quad n\geq 2. \tag{118}\] The operator \(-\widetilde{\cal D}_{0}\) is Hermitian and non-negative since \[-\int\,d^{d}x\sqrt{\gamma}\,\gamma_{ij}v^{i}(\widetilde{\cal D}_{0}v)^{j}=\frac{1}{2}\int\,d^{d}x\sqrt{\gamma}\,\gamma^{ik}\gamma^{j\ell}(P_{0}v)_{ij}(P_{0}v)_{k\ell}\geq 0, \tag{119}\] using integration by parts. This can only vanish when \(P_{0}v=0\), so that \(v\) is a CKV. 
This shows that the only zero modes of \(\widetilde{\cal D}_{0}\) are the CKVs. A similar argument shows that for any vector field \(v\), \(\widetilde{\mathcal{D}}_{0}v\) is always orthogonal to the CKVs. Note that this was used by York in [51] to prove the existence of his decomposition. As a result, the operator \(-\widetilde{\mathcal{D}}_{0}\) preserves the space of vector fields orthogonal to the CKVs and is strictly positive on that space. We can see that the sources \(s_{n}^{i}\) belong to that space. Indeed for any CKV \(\zeta\), we have \[\int\,d^{d}x\sqrt{\gamma}\,\gamma_{ij}\zeta^{i}s_{n}^{j} = \int\,d^{d}x\sqrt{\gamma}\,\gamma^{ik}\gamma^{j\ell}D_{i}\zeta_{j }\left((P_{1}v_{n-1})_{k\ell}+(P_{2}v_{n-2})_{k\ell}\right)\] \[= \frac{1}{2}\int\,d^{d}x\sqrt{\gamma}\,\gamma^{ik}\gamma^{j\ell}(P _{0}\zeta)_{ij}\left((P_{1}v_{n-1})_{k\ell}+(P_{2}v_{n-2})_{k\ell}\right)\] \[= 0\,\] using integration by parts, tracelessness and symmetry of \((Pv)_{ij}\), and the fact that \(P_{0}\zeta=0\). As the operator \(\widetilde{\mathcal{D}}_{0}\) is invertible on the space of vector fields orthogonal to the CKVs, the corrections \(v_{n}\) in (101) always exist and are unique. An explicit representation can be written by decomposing the sources in eigenvectors \(\{u_{k}\}\) of \(\widetilde{\mathcal{D}}_{0}\): \[s_{n}^{i}=\sum_{k}c_{k}u_{k}^{i}, \tag{102}\] where \(\widetilde{\mathcal{D}}_{0}u_{k}=-\lambda_{k}u_{k}\) with \(\lambda_{k}>0\). This is well-defined because \(\widetilde{\mathcal{D}}_{0}\) is an elliptic operator on a compact manifold and hence has a discrete spectrum. The solution can then be written as \[v_{n}^{i}=-\sum_{k}\frac{c_{k}}{\lambda_{k}}u_{k}^{i}. \tag{103}\] We can check that the \(\mathrm{SO}(1,d+1)\) algebra is satisfied after using the modified Lie bracket (100) which takes into account the transformation of the metric-dependent corrections. More generally, we expect that for a large class of gauge-fixing conditions, \(\mathrm{SO}(1,d+1)\) should always be the residual symmetry group. This is because the CKVs preserve the background metric and it should be always possible to correct them so that they preserve the gauge conditions. The advantage of the transverse-traceless gauge used in the main text is that translations and dilatations are realized linearly. This results in simple symmetries for cosmological correlators and simplifies the proof of the holography of information. In a different gauge, the symmetries of cosmological correlators relate correlators of different orders. We expect that the holography of information will still hold in alternative gauges, since given the set of all-order cosmological correlators in a region, the residual symmetries can be used to obtain correlators outside that region. ## Appendix B Orthonormal basis of conformal blocks In this Appendix, we explain that for free fields in the nongravitational limit, the quantum gravity Hilbert space admits a basis in terms of conformal blocks or conformal partial waves. Moreover we will see that the Higuchi inner product is the natural inner product studied in the CFT literature. We consider a set of free massive scalar fields \(\chi_{k}\) in the principal series so that they have dimensions \[\Delta_{k}=\frac{d}{2}+i\nu_{k}\,\qquad k=1,2\ldots\, \tag{114}\] with \(\nu_{k}\) is real. 
We can define a basis of dS invariant states following section 3.4 as \[|\psi\rangle=\int d^{d}x_{1}\ldots d^{d}x_{n}\,\psi(x_{1},\ldots,x_{n}):\chi_{ 1}(x_{1})\ldots\chi_{n}(x_{n}):|0\rangle\, \tag{115}\] where, as in the main text, \(|0\rangle\) is the Hartle-Hawking state. Note that we have redefined the basis by replacing the product of operators by its normal-ordered product which simply corresponds to taking a specific linear combination of the basis elements (101). We must take \(\psi(x_{1},\ldots,x_{n})\) to transform appropriately under the conformal symmetry so that \(|\psi\rangle\) is dS invariant. This corresponds to taking \(\psi(x_{1},\ldots,x_{n})\) to have the symmetries of a CFT correlator \[\psi(x_{1},\ldots,x_{n})\sim\langle O_{1}(x_{1})\ldots O_{n}(x_{n})\rangle_{ \rm CFT}\, \tag{116}\] where \(O_{k}(x)\) is a local operator of dimension \(d-\Delta_{k}\) in a CFT\({}_{d}\). This implies that \(\psi\) can be decomposed as a sum of conformal blocks or conformal partial waves. In the example of \(n=4\), we have the decomposition \[\psi(x_{1},\ldots,x_{4})=\sum_{\Delta,J}c_{\Delta,J}\Psi^{\Delta_{1},\ldots, \Delta_{4}}_{\Delta,J}(x_{1},\ldots,x_{4})\, \tag{117}\] where the conformal partial waves \(\Psi^{\Delta_{i}}_{\Delta,J}\) are linear combinations of conformal blocks. (See [52] for details.) In the principal series, the complex conjugate operator \(\chi_{k}^{*}\) has the conjugate dimension \(\Delta_{k}^{*}=d-\Delta_{k}\) and conformal symmetry implies that we have [53, 54] \[\langle 0|\chi_{k}(x)^{*}\chi_{k}(x^{\prime})|0\rangle_{\rm QFT}=\delta^{(d )}(x-x^{\prime}). \tag{118}\] This can be derived for example from the asymptotic limit of de Sitter Green's functions. The Higuchi inner product then takes the form \[\langle\psi|\psi\rangle=\frac{\text{vol}(\text{SO}(d-1))}{\text{vol}(\text{SO }(1,d+1))}\int d^{d}x_{1}\ldots d^{d}x_{n}\,|\psi(x_{1},\ldots,x_{n})|^{2}. \tag{119}\] This is actually the natural inner product on conformal partial waves. In the example of \(n=4\), we have the orthogonality relation \[\langle\Psi^{\Delta_{i}}_{\Delta,J},\Psi^{\bar{\Delta}_{i}}_{\bar{\Delta},J} \rangle=\frac{\text{vol}(\text{SO}(d-1))}{\text{vol}(\text{SO}(1,d+1))}\int d ^{d}x_{1}\ldots d^{d}x_{4}\Psi^{\Delta_{i}}_{\Delta,J}(x_{i})\Psi^{\bar{\Delta }_{i}}_{\bar{\Delta}^{\prime},J^{\prime}}(x_{i})=n_{\Delta,J}2\pi\delta_{J,J^ {\prime}}\delta(\nu-\nu^{\prime})\, \tag{120}\] where we have written \(\Delta=\frac{d}{2}+i\nu,\bar{\Delta}^{\prime}=\frac{d}{2}-i\nu^{\prime}\) with \(\nu,\nu^{\prime}\geq 0\) and the normalization constant \(n_{\Delta,J}\) is the one given in [52] multiplied with an additional factor of \(\text{vol}(\text{SO}(d-1))\) to match our convention. This appeared recently in [52; 55; 56; 57] following earlier work [58; 59]. The case \(n=4\) has been most studied in the CFT literature but we expect that similar results exist for all \(n\). This implies that conformal partial waves provide an orthonormal basis for the quantum gravity Hilbert space of free fields in dS\({}_{d+1}\). Semi-classical dS\({}_{3}\) gravity can be formulated as a Chern-Simons theory [60]. So it would be interesting to understand the connection of the construction above to the construction of the Hilbert space of Chern-Simons theory in terms of two-dimensional conformal blocks [61]. 
## Appendix C BRST invariance of inner product In this section we demonstrate that the correlator \[(\Psi,A\Psi)=\int DgD\chi\,DcD\bar{c}\,\delta(g_{ii}-d)\delta( \partial_{i}g_{ij})|\Psi[g,\chi]|^{2}A[g,\chi]e^{-S_{\text{gh}}}\, \tag{104}\] where \(|\Psi[g,\chi]|^{2}\) and \(A[g,\chi]\) are diffeomorphism and Weyl invariant, enjoys BRST symmetry as is expected of gauge fixed path integrals. In order to show this, we introduce BRST transformation for matter, metric and ghost fields. The BRST operator \(\delta_{\text{B}}\) that we define below should be distinguished from the BRST operator that would arise if we attempted to implement the gravitational constraints using the BRST formalism. Rather, it arises when we gauge fix functional integrals like (103) and (104) in order to define norms and correlators. For this reason, it does not appear that the cohomology of the BRST operator discussed in this section has any particular significance. In this Appendix, for simplicity, we do not consider the fixing of residual gauge. We will proceed in two steps. First we show BRST invariance of the ghost action containing both diffeomorphism and Weyl ghosts. In the next step, we integrate out the Weyl ghost to obtain the effective ghost action (3.15) and show that the inner product path integral (104) with this action is also BRST invariant. (See [62] for a similar procedure in the context of string theory.) ### BRST formulation We remind the reader that the gauge transformation of the fields under diff \(\times\) Weyl group is given by \[\delta_{(\xi,\varphi)}\chi =\delta_{\xi}^{\text{D}}\chi+\delta_{\varphi}^{\text{W}}\chi= \xi\cdot\partial\chi-\Delta\varphi\chi, \tag{105}\] \[\delta_{(\xi,\varphi)}g_{ij} =\delta_{\xi}^{\text{D}}g_{ij}+\delta_{\varphi}^{\text{W}}g_{ij} =\nabla_{i}\xi_{j}+\nabla_{j}\xi_{i}+2\varphi g_{ij}\, \tag{106}\] where \(\delta^{\text{D}}\) and \(\delta^{\text{W}}\) represent an infinitesimal diffeomorphism and a Weyl transformation respectively. The change in gauge fixing functions under this flow is \[\delta_{(\xi,\varphi)}(g_{ii}-d) =2\nabla_{k}\xi_{k}+2g_{ii}\varphi, \tag{107}\] \[\delta_{(\xi,\varphi)}(\partial_{j}g_{ij}) =\partial_{j}\left(\nabla_{i}\xi_{j}+\nabla_{j}\xi_{i}+2\varphi g _{ij}\right). \tag{108}\] From here we can read off the full ghost action as, \[S^{\text{full}}_{\text{gh}}=\int\,d^{d}x\,\left(2g_{ii}\bar{b}b+2\bar{b}\nabla_{k} c_{k}+2\bar{c}^{i}\partial_{j}(g_{ij}b)+\bar{c}^{i}\partial_{j}\left(\nabla_{i}c_{j}+ \nabla_{j}c_{i}\right)\right)\,\] (C.6) where \(b,\bar{b},c^{i},\bar{c}^{i}\) are the Weyl and diffeomorphism ghost anti-ghost pairs. Structure constants.Commutators of the gauge group algebra can be given through their action on \(\chi\), \[[\delta^{\text{D}}_{\zeta},\delta^{\text{D}}_{\xi}]\chi=\delta^{\text{D}}_{[ \zeta,\xi]}\chi,\qquad[\delta^{\text{W}}_{\varphi},\delta^{\text{D}}_{\xi}] \chi=-\delta^{\text{W}}_{\xi\partial\varphi}\chi,\qquad[\delta^{\text{W}}_{ \varphi},\delta^{\text{W}}_{\varpi}]\chi=0\.\] (C.7) It's easy to check that the same commutation relations hold for the action on \(g_{ij}\). 
\[[\delta^{\text{D}}_{\zeta},\delta^{\text{D}}_{\xi}]g_{ij}=\delta^{\text{D}}_{[\zeta,\xi]}g_{ij},\qquad[\delta^{\text{W}}_{\varphi},\delta^{\text{D}}_{\xi}]g_{ij}=-\delta^{\text{W}}_{\xi\partial\varphi}g_{ij},\qquad[\delta^{\text{W}}_{\varphi},\delta^{\text{W}}_{\varpi}]g_{ij}=0\.\] (C.8) Consider the diffeomorphism and Weyl basis \(\{\hat{\delta}^{\text{D}}_{x^{i}},\hat{\delta}^{\text{W}}_{x}\}\) defined by \[\delta^{\text{D}}_{\xi}=\int\,d^{d}x\,\xi^{i}(x)\hat{\delta}^{\text{D}}_{x^{i}},\qquad\delta^{\text{W}}_{\varphi}=\int\,d^{d}x\,\varphi(x)\hat{\delta}^{\text{W}}_{x}\.\] (C.9) Define the structure constants \(f(\cdot|\cdot,\cdot)\) through \[[\hat{\delta}^{\text{D}}_{y^{i}},\hat{\delta}^{\text{D}}_{z^{j}}] =\int\,d^{d}w\,f(w^{k}|y^{i},z^{j})\hat{\delta}^{\text{D}}_{w^{k}},\] (C.10) \[[\hat{\delta}^{\text{W}}_{y},\hat{\delta}^{\text{D}}_{z^{i}}] =\int\,d^{d}w\,f(w|y,z^{i})\hat{\delta}^{\text{W}}_{w}\,,\] (C.11) \[[\hat{\delta}^{\text{W}}_{z},\hat{\delta}^{\text{W}}_{y}] =\int\,d^{d}w\,f(w|z,y)\hat{\delta}^{\text{W}}_{w}\.\] (C.12) The notation \(f(\cdot|\cdot,\cdot)\) has been overloaded so that its indexed and un-indexed arguments indicate diffeomorphism and Weyl basis indices respectively. From the commutation relations (C.7) or (C.8) we can read off the structure constants, \[f(w^{k}|y^{i},z^{j}) =\partial_{w^{i}}\delta(w-z)\delta(w-y)\delta^{k}_{j}-\partial_{w^{j}}\delta(w-y)\delta(w-z)\delta^{k}_{i},\] (C.13) \[f(w|y,z^{i}) =-\delta(w-z)\partial_{w^{i}}\delta(w-y),\] (C.14) \[f(w|y,z) =0\.\] (C.15) BRST transformation. Let us rewrite the path integral (C.1) as \[(\Psi,A\Psi)=\int\,DgD\chi\,DcD\bar{c}\,DbD\bar{b}\,DBDB^{i}\,e^{-S_{\text{g.i}}-S_{\text{g.f}}-S_{\text{gh}}^{\text{full}}},\] (C.16) where we have implemented the gauge fixing delta functions through the Nakanishi-Lautrup fields \(B,B^{i}\) and the gauge fixing action \[S_{\text{g.f}}=i\int\,d^{d}x\,\left(B(g_{ii}-d)+B^{i}\partial_{j}g_{ij}\right)\,\] (C.17) and indicated the rest of the gauge invariant integrand using \(e^{-S_{\text{g.i}}}\). The BRST transformation is \[\delta^{\theta}_{\rm B}\chi =\theta\left(c^{i}\partial_{i}\chi-b\Delta\chi\right), \delta^{\theta}_{\rm B}g_{ij} =\theta\left(\nabla_{i}c_{j}+\nabla_{j}c_{i}+2bg_{ij}\right), \tag{108}\] \[\delta^{\theta}_{\rm B}c^{i} =\theta c^{k}\partial_{k}c^{i}, \delta^{\theta}_{\rm B}\bar{c}^{i} =-i\theta B^{i},\] \[\delta^{\theta}_{\rm B}b =\theta c^{k}\partial_{k}b, \delta^{\theta}_{\rm B}\bar{b} =-i\theta B,\] \[\delta^{\theta}_{\rm B}B^{i} =0, \delta^{\theta}_{\rm B}B =0\.\] We have used the following formulae to obtain the ghost field transformations \[\delta^{\theta}_{\rm B}c^{k}(w) =\frac{\theta}{2}\int\,d^{d}y\,d^{d}z\,f(w^{k}|y^{i},z^{j})c^{i}(y)c^{j}(z), \tag{109}\] \[\delta^{\theta}_{\rm B}b(w) =\theta\int\,d^{d}y\,d^{d}z\,f(w|y,z^{i})b(y)c^{i}(z). \tag{110}\] BRST invariance. We shall show that the amplitude (C.16) is invariant under the above transformation. Let us define the operation \(\delta_{\rm B}\) via \[\delta^{\theta}_{\rm B}\equiv\theta\delta_{\rm B}.\] Firstly note that the operator \(\delta_{\rm B}\) is nilpotent on _all_ variables. That is, \[\delta_{\rm B}\delta_{\rm B}g_{ij}=\delta_{\rm B}\delta_{\rm B}\chi=\delta_{\rm B}\delta_{\rm B}(\bar{c}^{i},c^{i},\bar{b},b,B^{i},B)=0. \tag{111}\] This is a group theoretic result which can be easily checked (see for instance [63]). This further means \[\delta_{\rm B}\delta_{\rm B}\left(\text{any polynomial in field variables}\right)=0. 
\tag{112}\] The ghost and gauge fixing actions can be rewritten as \[S^{\rm full}_{\rm gh} =\int\,d^{d}x\,\left(\bar{b}\delta_{\rm B}(g_{ii}-d)+\bar{c}^{i} \delta_{\rm B}(\partial_{j}g_{ij})\right), \tag{113}\] \[S_{\rm g.f} =\int\,d^{d}x\,\left(-\delta_{\rm B}\bar{b}\left(g_{ii}-d\right)- \delta_{\rm B}\bar{c}^{i}\,\partial_{j}g_{ij}\right). \tag{114}\] Adding these up, \[S^{\rm full}_{\rm gh}+S_{\rm g.f}=-\delta_{\rm B}\int\,d^{d}x\,\left(\bar{b}( g_{ii}-d)+\bar{c}^{i}(\partial_{j}g_{ij})\right)\,. \tag{115}\] Since the sum is BRST exact, we have \[\delta_{\rm B}(S^{\rm full}_{\rm gh}+S_{\rm g.f})=0. \tag{116}\] ### Eliminating the Weyl ghost Now, since the \(b,\bar{b}\) ghosts in the action (106) are non-dynamical, we can simply integrate them out to get the effective ghost action \[\begin{split} e^{-\bar{S}_{\rm gh}}&=\int\,DbD\bar{ b}\,e^{-S^{\rm full}_{\rm gh}}\\ &=\int\,DbD\bar{b}\,e^{-\int\,d^{d}x\left\{\bar{b}(2g_{ii}b+2 \nabla_{k}c_{k})+2\bar{c}^{i}\partial_{j}(g_{ij}b)+\bar{c}^{i}\partial_{j}( \nabla_{i}c_{j}+\nabla_{j}c_{i})\right\}}\\ &=\int\,Db\,\delta\left(-2(g_{ii}b+\nabla_{k}c_{k})\right)e^{- \int\,d^{d}x\left\{2\bar{c}^{i}\partial_{j}(g_{ij}b)+\bar{c}^{i}\partial_{j}( \nabla_{i}c_{j}+\nabla_{j}c_{i})\right\}}\\ &=\mathcal{N}_{3}\exp\left\{-\int\,d^{d}x\,\bar{c}^{i}\partial_{ j}\left(\nabla_{i}c_{j}+\nabla_{j}c_{i}-\frac{2}{g_{ii}}g_{ij}\nabla_{k}c_{k} \right)\right\}.\end{split} \tag{117}\] Here \({\cal N}_{3}=\det(-2g_{ii})\). As we will see shortly, this is invariant under our new BRST transformation and so \({\cal N}_{3}\) reduces to an unimportant numerical constant. For these reasons we can drop it from our effective ghost action, and quote \[S_{\rm gh}=\int d^{d}x\,\bar{c}^{i}\partial_{j}\left(\nabla_{i}c_{j}+\nabla_{j}c _{i}-\frac{2}{g_{ii}}g_{ij}\nabla_{k}c_{k}\right). \tag{102}\] The gauge fixing part \(S_{\rm g.f}\) remains the same, \[S_{\rm g.f}=i\int d^{d}x\,\left(B(g_{ii}-d)+B^{i}\partial_{j}g_{ij}\right). \tag{103}\] The new BRST transformation is obtained by replacing \(b\to-\frac{1}{g_{ii}}\nabla_{k}c_{k}\) in (100). \[\delta^{\theta}_{\rm B}\chi =\theta\left(c^{i}\partial_{i}\chi+\frac{1}{g_{ii}}\nabla_{k}c_{ k}\Delta\chi\right), \delta^{\theta}_{\rm B}g_{ij} =\theta\left(\nabla_{i}c_{j}+\nabla_{j}c_{i}-\frac{2}{g_{\ell \ell}}\nabla_{k}c_{k}g_{ij}\right), \tag{104}\] \[\delta^{\theta}_{\rm B}c^{i} =\theta c^{k}\partial_{k}c^{i}, \delta^{\theta}_{\rm B}\bar{c}^{i} =-i\theta B^{i},\] \[\delta^{\theta}_{\rm B}B^{i} =0, \delta^{\theta}_{\rm B}B =0.\] Nilpotence of \(\delta_{\rm B}\).Since the transformations of \(c^{i},\bar{c}^{i},B,B^{i}\) are unchanged, their nilpotence is trivially maintained. So we only need to show the nilpotence of the transformations of \(\chi\) and \(g_{ij}\). Firstly we note that \[\delta_{\rm B}g_{ii}=0. \tag{105}\] This gives \(\delta_{\rm B}{\cal N}_{3}=0\). The modified BRST transformation can thus be interpreted as a diffeomorphism followed by a compensating Weyl transformation which preserves \(g_{ii}\). Written explicitly in terms of the ghost field, the transformation is \[\delta_{\rm B}g_{ij}=(P_{g}c)_{ij}\, \tag{106}\] where we define \[(P_{g}c)_{ij}\equiv c^{k}\partial_{k}g_{ij}+2g_{k(i}\partial_{j)}c^{k}-\frac{ 2}{g_{mm}}g_{ij}\left(g_{k\ell}\partial_{k}c^{\ell}+\frac{1}{2}c\cdot\partial g _{kk}\right)\, \tag{107}\] whose gauge-fixed version is (10). The \(b\)-ghost transformation in the full analysis is compatible with the substitution \(b\to-\frac{1}{g_{ii}}\nabla_{k}c_{k}\) in the new transformation. 
That is, \[\delta_{\rm B}\left(-\frac{1}{g_{ii}}\nabla_{k}c_{k}\right)=c^{i}\partial_{i} \left(-\frac{1}{g_{ii}}\nabla_{k}c_{k}\right). \tag{108}\] Now for the matter field, \[\begin{split}\delta_{\rm B}\delta_{\rm B}\chi&=\delta_{ \rm B}\left(c^{i}\partial_{i}\chi+\frac{1}{g_{ii}}\nabla_{k}c_{k}\Delta\chi \right)\\ &=\delta_{\rm B}c^{i}\partial_{i}\chi-c^{i}\partial_{i}\delta_{ \rm B}\chi+\Delta\delta_{\rm B}\left(\frac{1}{g_{ii}}\nabla_{k}c_{k}\right) \chi-\frac{\Delta}{g_{ii}}\left(\nabla_{k}c_{k}\right)\delta_{\rm B}\chi\\ &=c^{j}\partial_{j}c^{i}\partial_{i}\chi-c^{j}\partial_{j}c^{i} \partial_{i}\chi-c^{i}c^{j}\partial_{i}\partial_{j}\chi-c^{j}\partial_{j} \left(\frac{\Delta}{g_{ii}}\nabla_{k}c_{k}\right)\chi+\frac{\Delta}{g_{ii}} \nabla_{k}c_{k}c^{j}\partial_{j}\chi\\ &\quad+\delta_{\rm B}\left(\frac{\Delta}{g_{ii}}\nabla_{k}c_{k} \right)\chi-\frac{\Delta}{g_{ii}}\nabla_{k}c_{k}c_{j}\partial^{j}\chi-\frac{ \Delta^{2}}{\left(g_{ii}\right)^{2}}\nabla_{k}c_{k}\nabla_{\ell}c_{\ell}\\ &=0.\end{split} \tag{103}\] In the third line, the second and last terms cancel out due to antisymmetry of the ghost field and the fourth and sixth terms cancel out due to the relation (100). After a slightly more tedious computation of the same for the metric we get \[\begin{split}\delta_{\rm B}\delta_{\rm B}g_{ij}& =\delta_{\rm B}\left(c^{k}\partial_{k}g_{ij}+g_{ki}\partial_{j}c^ {k}+g_{kj}\partial_{i}c^{k}-\frac{2}{g_{mm}}\nabla_{k}c_{k}g_{ij}\right)\\ &=-c^{k}\partial_{k}\left(g_{\ell i}\partial_{j}c^{\ell}+g_{\ell j }\partial_{i}c^{\ell}\right)+\left(c^{m}\partial_{m}g_{ki}+g_{\ell i}\partial_{ k}c^{\ell}\right)\partial_{j}c^{k}\\ &\quad+\left(c^{m}\partial_{m}g_{kj}+g_{\ell j}\partial_{k}c^{ \ell}\right)\partial_{i}c^{k}+g_{ki}\partial_{j}\left(c^{\ell}\partial_{\ell}c ^{k}\right)+g_{kj}\partial_{i}\left(c^{\ell}\partial_{\ell}c^{k}\right)\\ &=0.\end{split} \tag{104}\] Hence, we can once again make the following assertion for the new \(\delta_{\rm B}\): \[\delta_{\rm B}\delta_{\rm B}\left(\text{any polynomial in field variables}\right)=0. \tag{105}\] Also once again, \[S_{\rm gh} =\int d^{d}x\,\bar{c}^{i}\delta_{\rm B}(\partial_{j}g_{ij}), \tag{106}\] \[S_{\rm g.f} =\int d^{d}x\left(iB(g_{ii}-d)-\delta_{\rm B}\bar{c}^{i}\, \partial_{j}g_{ij}\right)\, \tag{107}\] giving \[S_{\rm gh}+S_{\rm g.f}=-\delta_{\rm B}\int d^{d}x\,\left(\bar{c}^{i}\partial_ {j}g_{ij}\right)+i\int d^{d}x\,B(g_{ii}-d). \tag{108}\] The first part is BRST exact, and the other parts depend on \(g_{ii}\) and \(B\), both of which are BRST closed, thereby yielding \[\delta_{\rm B}\left(S_{\rm gh}+S_{\rm g.f}\right)=0. \tag{109}\] Since \(S_{\rm g.i}\) is by definition diffeomorphism and Weyl invariant, this concludes the proof of BRST invariance of the correlator (100).
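A small consistency check of the modified transformation is the statement \(\delta_{\rm B}g_{ii}=0\) used above: since it is linear in the ghost field, it can be verified with ordinary commuting symbols. Below is a minimal sketch, assuming \(d=2\) for brevity and a completely generic symmetric metric and ghost field; it confirms that the trace of \((P_{g}c)_{ij}\) vanishes identically.

```python
import sympy as sp

# Check that the trace of (P_g c)_ij vanishes for an arbitrary symmetric metric g_ij(x)
# and arbitrary c^i(x); the identity is linear in c, so commuting symbols suffice.
d = 2
X = sp.symbols('x1:3', real=True)
g = sp.Matrix(d, d, lambda i, j: sp.Function('g_%d%d' % (min(i, j), max(i, j)))(*X))
c = [sp.Function('c%d' % i)(*X) for i in range(d)]

tr_g = sum(g[i, i] for i in range(d))
# the "div" combination appearing in (P_g c)_ij: g_{kl} d_k c^l + (1/2) c . d g_{kk}
div_c = sum(g[k, l] * sp.diff(c[l], X[k]) for k in range(d) for l in range(d)) \
        + sp.Rational(1, 2) * sum(c[m] * sp.diff(tr_g, X[m]) for m in range(d))

def P_g(i, j):
    # (P_g c)_ij = c^k d_k g_ij + g_ki d_j c^k + g_kj d_i c^k - (2/g_mm) g_ij * div_c
    lie = sum(c[k] * sp.diff(g[i, j], X[k])
              + g[k, i] * sp.diff(c[k], X[j])
              + g[k, j] * sp.diff(c[k], X[i]) for k in range(d))
    return lie - 2 * g[i, j] * div_c / tr_g

print(sp.simplify(sum(P_g(i, i) for i in range(d))))   # -> 0, i.e. delta_B g_ii = 0
```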
2308.14673
New polarization rotation and exact TEM wave solutions in topological insulators
In the context of $\theta$ electrodynamics we find transverse electromagnetic wave solutions forbidden in Maxwell electrodynamics. Our results attest to new evidence of the topological magnetoelectric effect in topological insulators, resulting from a polarization rotation of an external electromagnetic field. Unlike Faraday and Kerr rotations, the effect does not rely on a longitudinal magnetic field, the reflected field, or birefringence. The rotation occurs due to transversal discontinuities of the topological magnetoelectric parameter in cylindrical geometries. The dispersion relation is linear, and birefringence is absent. One solution behaves as an optical fiber confining exact transverse electromagnetic fields with omnidirectional reflectivity. These results may open new possibilities in optics and photonics by utilizing topological insulators to manipulate light.
Sebastián Filipini, Mauro Cambiaso
2023-08-28T16:00:50Z
http://arxiv.org/abs/2308.14673v1
# New polarization rotation and exact TEM wave solutions in topological insulators ###### Abstract In the context of \(\theta\) electrodynamics we find transverse electromagnetic wave solutions forbidden in Maxwell electrodynamics. Our results attest to new evidence of the topological magnetoelectric effect in topological insulators, resulting from a polarization rotation of an external electromagnetic field. Unlike Faraday and Kerr rotations, the effect does not rely on a longitudinal magnetic field, the reflected field, or birefringence. The rotation occurs due to transversal discontinuities of the topological magnetoelectric parameter in cylindrical geometries The dispersion relation is linear, and birefringence is absent. One solution behaves as an optical fiber confining exact transverse electromagnetic fields with omnidirectional reflectivity. These results may open new possibilities in optics and photonics by utilizing topological insulators to manipulate light. ## I Introduction The topological magnetoelectric effect (TME) has been intensely sought after in recent decades as a definitive signal of quantum states of matter possessing topological order [1; 2; 3; 4; 5; 6; 7; 8; 9]. Topological insulators (TIs) are among the most well-known and studied cases presenting TME. These new quantum states can be found in heterostructures of elements like Bi, Se, Te, Sb, and others [10; 11; 12; 13; 14]. They exhibit conducting edge/surface states protected against disorder by time-reversal symmetry, with properties differing from those in the bulk of the material, which is gapped like conventional insulators [15; 16]. Due to their microscopic structure, 3D TIs have unique electromagnetic (EM) responses that can be described macroscopically by the axionic \(\theta\)-term \(\mathcal{L}_{\theta}=(\theta/4\pi)\mathbf{E}\cdot\mathbf{B}\)[17]. In the context of TIs, \(\theta=\frac{\alpha}{\pi}\theta_{\text{TI}}\), where \(\alpha\) is the fine-structure constant and \(\theta_{\text{TI}}\) is called the topological magnetoelectric polarizability (TMEP). Its origin is quantum-mechanical and it encodes the microscopic properties that characterize TIs. This provides a correct description of the system if an appropriate time-reversal symmetry breaking perturbation is introduced to gap the surface states, which results in the material (in its bulk and at the surface) becoming an insulator. The surface, however, is a quantum Hall insulator rather than a normal one. The latter can be achieved by adding a magnetic perturbation (applied field and/or film coating) [18; 19], or by using commensurate out- and in-plane antiferromagnetic or ferrimagnetic insulating thin films [20]. As a result, \(\theta_{\text{TI}}\) becomes quantized in odd-integer values of \(\pi\) i.e., \(\theta_{\text{TI}}=\pm(2n+1)\pi\), where \(n\in\mathbb{Z}\) and the sign is determined by the time-reversal symmetry breaking perturbation. Trivial insulators have \(\theta_{\text{TI}}=0\). In this work, \(\theta_{\text{TI}}\) will be taken as a constant parameter characteristic of each medium. For brevity we will simply write \(\theta\) and we shall refer to this theory as \(\theta\)-electrodynamics (\(\theta\)-ED) rather than axion electrodynamics. 
This model can also describe: general magnetoelectric media [21; 22; 23]; metamaterials when \(\theta\) is a purely complex function [24]; and Weyl semimetals when \(\theta(\mathbf{x},t)=2\left(\mathbf{b}\cdot\mathbf{x}-b_{0}t\right)\), where \(\mathbf{b}\) is the separation in momentum space between the Weyl nodes and \(b_{0}\) their separation in energy [25; 26]. In this work, we will focus on TME signals stemming from the EM response of TIs, following closely the methodology of [27; 28; 29] and also similar to what has been done, for example, to study Faraday rotation [30; 31; 32; 33; 34], induced magnetic-monopole-like fields [35], and topologically induced effects in cavities and slab waveguides [36; 37]. On the other hand, whenever a no-go theorem can be circumvented, a door into new theoretical and/or experimental possibilities is opened. In [28] it was shown that \(\theta\)-boundary value problems can evade Earnshaw's theorem, which implies that transverse electromagnetic (\(\mathcal{TEM}\)) fields cannot propagate in media with fewer than two conductors [38]. Hence, as one of the most striking effects of \(\theta\)-ED is to modify the boundary conditions (BCs) that the fields must satisfy, in this letter we pursue this idea in systems that are heavily reliant on BCs, to find novel \(\mathcal{TEM}\) wave solutions that are impossible with topologically trivial materials and that, at the same time, provide observable signatures of the elusive TME different from those previously reported in the literature. Our findings pave the way to new means of harnessing light, with possible applications in photonics that are yet to be discovered. This manuscript is organized as follows. In Section II we review the basics of \(\theta\)-electrodynamics, that is to say, the field equations for Maxwell's Lagrangian appended with the axion term commented above, emphasizing how the \(\theta\)-term modifies the boundary conditions that the fields must satisfy at spatial surfaces where \(\theta\) is discontinuous. The field equations are decomposed into longitudinal and transverse components, as is customary for the study of field propagation in waveguides and/or optical fibers. In Section III we present the properties that the \(\mathcal{TEM}\) field possesses, namely, the relation defining the transversality condition, the general dispersion relation and the phase velocity. In this section we also introduce a rotation of the plane of polarization of the EM field propagating transversely to \(\mathbf{\nabla}\theta\) that is different from Faraday or Kerr rotations. In Section IV we present explicit solutions for the \(\mathcal{TEM}\) fields inside and outside a single cylindrical TI with constant \(\theta\), impinged upon by an external background EM field that serves as an asymptotic boundary condition. Most of the physics of this solution is analyzed through the field distribution depicted in Fig. (2). In Section IV.1 we comment on the role that different polarizations of the background EM field would have on the rotating effect of the TI and on the resulting spatial distribution of the EM field. Section V introduces the idea of considering several \(\theta\)-interfaces and the possible configurations depending on \(\mathbf{\nabla}\theta\) at each surface. More specifically, in Section V.1 we analyze the case of two \(\theta\)-interfaces. 
This divides the whole space into three cylindrical regions: (a) \((0,R_{1})\); (b) \((R_{1},R_{2})\); and (c) \((R_{2},\infty)\). For the TMEP of each region we will choose the "antiparallel" configuration (see Fig. (1.b)), that is, when the gradients of \(\theta\) at both layers (and in the same angular direction) are antiparallel; to simplify the analysis, we will furthermore choose the inner and outer regions as topologically trivial, such that the geometry is basically that of a cylindrical TI shell of finite width. In Section V.2 we analyze the power transmitted in the different cylindrical regions defined by the \(\theta\)-interfaces and compare it with the power that would be transmitted through the same regions without the TI. In Section V.3 we elaborate criteria that allow us to speak of the confining capacity of the cylindrical TI shell on exact \(\mathcal{TEM}\) fields that propagate in an omnidirectional manner, acting as an optical fiber. Finally, in Section VI we summarize our conclusions, provide some context as to the importance and relevance of finding \(\mathcal{TEM}\) solutions, besides providing an alternative electromagnetic response of TIs as evidence of the topological magnetoelectric effect, and elaborate on possible extensions and applications of these ideas. Throughout the paper, the equations of \(\theta\)-ED will be written in Gaussian units. The coordinates \((\rho,\phi,z)\) are the cylindrical coordinates, with \(z\) in the direction of the wave propagation and of the cylindrical surfaces. \(\rho\) and \(\phi\) are the usual ones related to the Cartesian directions in Fig. (1.a), i.e., \(\mathbf{\nabla}\theta\) points in the radial direction \(\hat{\mathbf{\rho}}\), and \(\hat{\mathbf{\phi}}\) is perpendicular to the latter, in the anti-clockwise direction. ## II Nondynamical \(\theta\)-electrodynamics In \(\theta\)-ED, the source-free equations do not change, but the Gauss and Ampère-Maxwell laws do. The \(\theta\)-ED equations are: \[\mathbf{\nabla}\cdot(\epsilon\mathbf{E}) =4\pi\rho-\mathbf{\nabla}\theta\cdot\mathbf{B}, \tag{1}\] \[\mathbf{\nabla}\cdot\mathbf{B} =0,\] (2) \[\mathbf{\nabla}\times\mathbf{E}+\frac{1}{c}\frac{\partial\mathbf{B}}{\partial t} =0,\] (3) \[\mathbf{\nabla}\times(\mathbf{B}/\mu)-\frac{1}{c}\frac{\partial(\epsilon\mathbf{E})}{\partial t} =\frac{4\pi}{c}\mathbf{J}+\mathbf{\nabla}\theta\times\mathbf{E}+\frac{1}{c}\dot{\theta}\,\mathbf{B}. \tag{4}\] The \(\theta\)-ED equations can be interpreted as if the field equations were not modified, but rather the constitutive relations were changed to \(\mathbf{D}\!=\!\epsilon\mathbf{E}+\theta\mathbf{B}\) and \(\mathbf{H}\!=\!\mu^{-1}\mathbf{B}-\theta\mathbf{E}\). This makes manifest the role of \(\theta\) as the culprit of the magnetoelectric effect; nevertheless, we will work directly with Eqs. (1)-(4) [39]. We consider \(\theta(\mathbf{x},t)\) to be constant in time and throughout each medium, with constant and finite discontinuities at the interfaces between them, namely \(\mathbf{\nabla}\theta=\tilde{\theta}_{i}\delta(f_{\Sigma_{i}}(\mathbf{x}))\hat{\mathbf{n}}_{i}\), where \(\tilde{\theta}_{i}\equiv\theta_{i+1}-\theta_{i}\), with \(\theta_{i}\) the value of the TMEP in medium \(i\). The interface is defined by \(f_{\Sigma_{i}}(\mathbf{x})=0\) and \(\hat{\mathbf{n}}_{i}\) is perpendicular to \(\Sigma_{i}\), going from medium \(i\) to medium \(i+1\). 
The \(\theta\) term does not modify the field equations in the bulk, but modifies the BCs as: \[\Delta[\epsilon\mathbf{E}_{\perp}]|_{\Sigma}=-\tilde{\theta}\mathbf{B}_{\perp}|_{\Sigma}\quad\text{and}\quad\Delta\left[\mu^{-1}\mathbf{B}_{\parallel}\right]|_{\Sigma}=\tilde{\theta}\mathbf{E}_{\parallel}|_{\Sigma} \tag{5}\] where \(\Delta[A]\equiv A_{i+1}-A_{i}\). Eqs. (5) lead to different solutions both at the interfaces and in the bulk. In this work we will focus on cylindrical geometries with coaxial symmetry, an example of which is shown in Fig. (1.a), and consider media separated by coaxial cylindrical surfaces \(\Sigma\); we seek monochromatic harmonic wave solutions for the EM fields with wave vector \(\mathbf{k}=k\hat{\mathbf{z}}\), such that \(\mathbf{E}(\mathbf{r},t)=\mathbf{E}(\mathbf{r}_{\perp})e^{i(kz-\omega t)}\) and similarly for \(\mathbf{B}(\mathbf{r},t)\). The axes are oriented such that \(OZ\) coincides with the axis of the cylindrical surfaces. As is common [40], we decompose vectors into directions longitudinal and transverse to the direction of propagation, and the vacuum field equations in each medium read: \[\epsilon\mathbf{\nabla}_{\perp}\cdot\mathbf{E}_{\perp}+[\tilde{\theta}\mathbf{B}_{\perp}]_{\rho} = -ik\epsilon E_{z}\,,\qquad\qquad\mathbf{\nabla}_{\perp}\cdot\mathbf{B}_{\perp}=-ikB_{z}, \tag{6}\] \[ik\mathbf{E}_{\perp}+ik_{0}\,\hat{\mathbf{z}}\times\mathbf{B}_{\perp} = \mathbf{\nabla}_{\perp}E_{z}\,,\qquad\qquad\hat{\mathbf{z}}\cdot(\mathbf{\nabla}_{\perp}\times\mathbf{E}_{\perp})=ik_{0}B_{z}\] (7) \[ik\mathbf{B}_{\perp}-i\epsilon\mu k_{0}\,\hat{\mathbf{z}}\times\mathbf{E}_{\perp} = \mathbf{\nabla}_{\perp}B_{z}-\mu[\tilde{\theta}E_{z}\hat{\mathbf{\rho}}]\] (8) \[\hat{\mathbf{z}}\cdot(\mathbf{\nabla}_{\perp}\times\mathbf{B}_{\perp})-\mu[\tilde{\theta}\mathbf{E}_{\perp}]_{\phi} = -i\epsilon\mu k_{0}E_{z} \tag{9}\] where \(k_{0}\equiv\omega/c\) and, in our case, the \(\partial_{z}\theta\) and \(\dot{\theta}\) terms vanish. Since \(\mathbf{\nabla}\theta\) has support at the given interfaces only, we have put, e.g., \(\mathbf{\nabla}_{\perp}\theta\cdot\mathbf{B}_{\perp}=(\tilde{\theta}\mathbf{B}_{\perp})_{\rho}|_{\Sigma}\equiv[\tilde{\theta}\mathbf{B}_{\perp}]_{\rho}\). ## III TEM wave solutions and rotation of the plane of polarization as a topological magnetoelectric signature In [28] it was reported for the first time that, in the context of TIs, \(\mathbf{\nabla}\theta\) could evade the restrictions imposed by Earnshaw's theorem. In this work we present explicit examples of field solutions made available to us by the modifications introduced by \(\theta\)-ED that, at the same time, interact with the TIs in a way that produces a novel signature of the TME. The field equations (6)-(9) admit self-consistent non-trivial solutions for the transverse components of the electric and magnetic fields, i.e., \(\mathbf{E}_{\perp}\neq 0\) and \(\mathbf{B}_{\perp}\neq 0\) with \(E_{z}=B_{z}=0\) simultaneously, provided: (a) \(\mathbf{B}_{\perp}\) is transverse to \(\mathbf{E}_{\perp}\): \[\mathbf{B}_{\perp}\,=\sqrt{\epsilon\mu}\,\hat{\mathbf{z}}\times\mathbf{E}_{\perp}. \tag{10}\] (b) The dispersion relation is: \[c^{2}\,k^{2}=\omega^{2}\,\mu\epsilon, \tag{11}\] where the \(\partial_{z}\theta\) and \(\dot{\theta}\) terms, which are present in the general case, vanish, since we are working with the restricted \(\theta\) for which these terms are taken to be zero. 
(c) correspondingly, the phase velocity \(v_{p}(k)\equiv\omega(k)/k\) is:

\[v_{p}(k)=\frac{c}{\sqrt{\mu\epsilon}}, \tag{12}\]

(d) and also, the optical properties across adjacent media satisfy [41] \(\epsilon_{i+1}\mu_{i+1}=\epsilon_{i}\mu_{i}\).

Given Eqs. (10) and (11) and our restriction for the TME, that is, \(\partial_{z}\theta=0=\dot{\theta}\), we observe that the fields propagate with continuous wavenumber, without birefringence, and with a dispersion relation as in a dispersion-free medium [42]: \(k=|\mathbf{k}|=k_{0}\sqrt{\mu\epsilon}\), i.e., as free \(\mathcal{TEM}\) waves in an \((\epsilon,\mu,\theta)\)-medium. In what follows we will drop the subscript \(\perp\) of the fields.

The BCs imposed by the \(\theta\)-interface produce a discontinuity of the \(\mathbf{E}\) field across the interface that results in a rotation of the polarization of the field that is solely due to \(\mathbf{\nabla}\theta\) across the interface. This situation is depicted in Figs. (1.b, c). Note the importance of the sign of \(\tilde{\theta}\) and that only the radial component of \(\mathbf{E}\) is discontinuous. At any given point of the \(\theta\)-interface, the directions of \(\mathbf{E}\) satisfy:

\[\tan\gamma_{i+1}=\tan\gamma_{i}\frac{1}{(1+2Z_{\theta}\tan\gamma_{i})}, \tag{13}\]

where \(Z=\sqrt{\mu/\epsilon}\) is the impedance, \(Z_{\theta}\equiv\tilde{\theta}Z/2\), and \(\gamma_{i}\) and \(\gamma_{i+1}\) are the angles between the normal to the \(i\)-th interface and \(\mathbf{E}\) on either side of it. Faraday and Kerr rotation effects have indeed been predicted in the context of TIs as signals of the TME [30; 33; 34; 35; 36; 37; 38; 39; 43; 44; 45; 46; 47; 48]. The rotation we find here is also an interesting signature of the TME, but owing to the \(\mathcal{TEM}\) nature of our solution, it differs radically from the latter. Contrary to Faraday rotation, this one is not generated by a component of the \(\mathbf{B}\)-field along the direction of propagation (because the fields are transverse). Neither is it due to birefringence, as we have \(ck=\omega\sqrt{\mu\epsilon}\), nor is it a property of the polarization of the reflected field. This is a novel prediction of the EM response of TIs leading to a new way to observe the TME, and it is a consequence of exact \(\mathcal{TEM}\) wave solutions that had not been exploited up until now and are certainly very intensively sought after [49; 50; 51; 52; 53]. If one considers a geometry with several coaxial cylindrical layers, the cumulative effect is different for the parallel and antiparallel configurations of Fig. (1.b,c), because the effect is sensitive to \(\tilde{\theta}\). We now turn to the analysis of particular cylindrical configurations. For the sake of separating the \(\theta\)-effect from other possible optical effects, in the remainder we will consider \(Z=1\).

## IV One cylindrical \(\theta\)-interface

Consider an infinitely long TI cylinder of radius \(R\), characterized by \(\theta\), in a homogeneous medium, both with the same \(Z\). We seek solutions such that asymptotically away from the TI cylinder, the EM field is a plane wave with linear polarization (LP), say, in the \(\hat{\mathbf{y}}\)-direction, i.e., \(\lim_{\rho\rightarrow\infty}\mathbf{E}(\rho,\phi,z,t)=E_{0}\,e^{i(kz-\omega t)}\,\hat{\mathbf{y}}\). The \(\mathcal{TEM}\) fields that solve Eqs. (6)-(9) and satisfy the BCs of Eq.
(5), in each medium \(\mathcal{M}_{i}\), for \(i=1,2\), are \(\mathbf{E}_{i}=E_{0}\hat{\mathbf{y}}+E_{0}\mathbf{E}_{i}^{\theta}\), where:

\[\mathbf{E}_{1}^{\theta} = -\kappa(\hat{\mathbf{x}}+Z_{\theta}\hat{\mathbf{y}})\,, \tag{14}\]
\[\mathbf{E}_{2}^{\theta} = \kappa\ell^{2}[(Z_{\theta}\sin\phi+\cos\phi)\hat{\boldsymbol{\rho}}+(\sin\phi-Z_{\theta}\cos\phi)\hat{\boldsymbol{\phi}}], \tag{15}\]

with \(\kappa=Z_{\theta}/(1+Z_{\theta}^{2})\) and \(\ell=R/\rho\). As \(\mathbf{B}\) is given by Eq. (10), in the sequel we will mostly refer to the electric field. If \(\tilde{\theta}=0\) there is no interface whatsoever and the interior and exterior solutions are identical to \(E_{0}\,e^{i(kz-\omega t)}\,\hat{\mathbf{y}}\), as they must be. If \(\tilde{\theta}\neq 0\) and \(E_{0}\neq 0\) the total field is non-trivial, but if \(E_{0}=0\) there is no solution at all. Therefore, our solution relies on the "background" field (we prefer to call it background rather than external so as not to confuse it with the field outside or exterior to the TI). The claim is not that this \(\mathcal{TEM}\) solution exists solely due to the \(\theta\)-interface, but rather that, due to it, a solution exists in all space that cannot otherwise be obtained with, for example, all-dielectric materials, and that it acquires new and non-trivial features, attributable to \(\theta\) alone, that can lead to new observable signatures of the TMEP.

In Fig. (2) we show streamlines of the \(\mathbf{E}\)-field. The density plots represent the spatial distribution of the temporal average of the total Poynting vector, relative to that of the background field, \(\langle S_{z0}\rangle=cE_{0}^{2}/8\pi Z\). By total we mean the contribution to the Poynting vector coming from the total EM field, that is, the superposition of the background EM field and the \(\theta\)-induced one. Observe that, due to the \(\mathcal{TEM}\) nature of the EM field, the Poynting vector only has a longitudinal \(z\) component. Inside the cylindrical TI, the electric field is uniform and, due to the rotation of Eq. (13), the plane of polarization rotates by a fixed amount. As expected, the magnitude of the effect is minute; however, for feasible values of the TMEP the rotation of the polarization plane is measurable with present-day techniques. This rotation is given by \(\cos\varphi_{\text{int}}=\hat{\mathbf{E}}_{1}\cdot\hat{\mathbf{y}}=(1+Z_{\theta}^{2})^{-1/2}\). For example, for \(\theta_{\text{TI}}=3\pi,11\pi,19\pi\) and \(27\pi\), this effect results in a rotation of the polarization plane by \(0.63,2.30,3.97\) and \(5.63\) degrees, respectively. This rotation is entirely due to the TMEP of the cylindrical TI, it is of a completely different nature than Faraday or Kerr rotation, and thus, it is yet another application of \(\theta\)-ED that provides an alternative method to measure a signal of the TME. The temporal averages of the total relative Poynting vector in each region are given by:

\[\langle S_{z1}^{\theta}\rangle/\langle S_{z0}\rangle = 1-\kappa Z_{\theta}, \tag{16}\]
\[\langle S_{z2}^{\theta}\rangle/\langle S_{z0}\rangle = 1+\kappa\big{[}Z_{\theta}\ell^{4}+2\ell^{2}(\sin 2\phi-Z_{\theta}\cos 2\phi)\big{]}. \tag{17}\]

Away from the TI's surface, the power per unit area varies through: (a) an anisotropic term that goes as \(\rho^{-2}\), and (b) an isotropic term that goes as \(\rho^{-4}\).
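The rotation angles quoted above are easy to reproduce numerically. The following minimal sketch assumes that the jump entering the boundary conditions is \(\tilde{\theta}=\alpha\,\theta_{\text{TI}}/\pi\), with \(\alpha\) the fine-structure constant (an assumption on the convention for \(\theta\), stated here explicitly); with that convention, \(\cos\varphi_{\text{int}}=(1+Z_{\theta}^{2})^{-1/2}\) yields the values listed in the text:

```python
import numpy as np

ALPHA = 1 / 137.036  # fine-structure constant

def rotation_angle_deg(theta_TI, Z=1.0):
    """Internal polarization rotation across one cylindrical theta-interface.
    Assumes the effective jump is theta_tilde = alpha*theta_TI/pi, so that
    Z_theta = theta_tilde*Z/2 and cos(phi_int) = (1 + Z_theta**2)**-0.5."""
    z_theta = (ALPHA * theta_TI / np.pi) * Z / 2
    return np.degrees(np.arccos(1.0 / np.sqrt(1.0 + z_theta**2)))

for n in (3, 11, 19, 27):
    print(f"theta_TI = {n}*pi -> {rotation_angle_deg(n * np.pi):.2f} deg")
# -> 0.63, 2.30, 3.97, 5.63 degrees, as quoted above
```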
The patterns of the relative Poynting vector (\(\mathcal{S}_{z}^{\theta}(\rho,\phi)\equiv\langle S_{z\theta}\rangle/\langle S_{z0}\rangle\)) in Fig. (2) reveal other interesting features. It appears that \(\mathcal{S}_{z}^{\theta}(\rho,\phi)\) is distributed with a sort of quadrupolar structure in the \(XY\)-plane, being slightly amplified/diminished alternately in each quadrant. Also, its value at the \(y=0\) and \(x=0\) planes seems to be equal to 1. This, however, is only an illusion due to the smallness of the \(\theta\)-effect. The contour plots in Fig. (2) show lines of constant \(\mathcal{S}_{z}^{\theta}(\rho,\phi)\) and illustrate this. The physics is neither left-right nor up-down symmetric, and this is all due to the interplay between the BCs and the fact that the background EM wave is comprised of \(\mathbf{E}\)- and \(\mathbf{B}\)-field vectors in the \(+\hat{\mathbf{y}}\) and \(-\hat{\mathbf{x}}\) directions, respectively. A correct interpretation of the patterns of \(\mathcal{S}_{z}^{\theta}(\rho,\phi)\) relies on the symmetries of Eq. (17). One immediately sees that \(\mathcal{S}_{z}^{\theta}(\rho,\phi)=\mathcal{S}_{z}^{\theta}(\rho,\phi+n\pi)\) for \(n=1,2,\dots\) and also \(\mathcal{S}_{z}^{\theta}(\rho,\phi)-\mathcal{S}_{z}^{\theta}(\rho,-\phi)=4\kappa\ell^{2}\sin 2\phi=\mathcal{S}_{z}^{\theta}(\rho,\phi)-\mathcal{S}_{z}^{\theta}(\rho,\pi-\phi)\). Therefore, though for fixed \(\rho\) the relative Poynting is equal at antipodal points in \(\phi\), it is neither left-right nor up-down symmetric. Furthermore, the relative Poynting, as a function of \(\phi\), has maxima and minima defined by the directions \(\phi_{\pm}=\arctan(Z_{\theta}\pm\sqrt{1+Z_{\theta}^{2}})\), respectively. These directions correspond to the lines (not drawn) in Fig. (2) of extremal intensities. Given the asymmetries above, evaluated at the boundary, \(|\mathcal{S}_{z}^{\theta}(R,\phi_{+})-1|>|\mathcal{S}_{z}^{\theta}(R,\phi_{-})-1|\); however, with respect to the directions of minimum and maximum value, the relative Poynting is indeed symmetric, namely, \(\mathcal{S}_{z}^{\theta}(\rho,\phi_{\pm}+\alpha)=\mathcal{S}_{z}^{\theta}(\rho,\phi_{\pm}-\alpha)\).

This asymmetric field distribution can be understood self-consistently, order-by-order in \(\theta\), in terms of the induced topological surface charge densities. The jump in \(\theta\) across the boundary times the \(\mathbf{B}\)-field normal to the cylinder generates a discontinuity of the \(\mathbf{E}\)-field that acts as a topological surface charge density \(\sigma_{\theta}(\Sigma)=-\frac{1}{4\pi}(\tilde{\theta}\mathbf{B}\cdot\hat{\boldsymbol{\rho}})|_{\Sigma}\). Along with it, there is an induced (topological) surface current density \(\mathbf{K}_{\theta}(\Sigma)=\frac{c}{4\pi}(\tilde{\theta}\hat{\boldsymbol{\rho}}\times\mathbf{E})|_{\Sigma}\) [54]. The total electric field \(\mathbf{E}_{1,2}^{\theta}\) (and the corresponding \(\mathbf{B}_{1,2}^{\theta}\)) can be understood as an infinite superposition of the fields induced by these topological surface charge densities and currents. The infinite sum in fact converges and leads precisely to the fields in Eqs. (14, 15). Further details are given in [55].

### The role of the polarization of the background field

If the background field has right-handed or left-handed circular polarization (RCP/LCP), the amount of polarization rotation inside the TI is the same, and in each case it is in the same sense in which the background field rotates.
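These symmetry properties of Eq. (17) can be checked numerically. A minimal sketch, using the same \(\tilde{\theta}=\alpha\theta_{\text{TI}}/\pi\) convention assumed above, evaluates the relative Poynting vector outside the cylinder and verifies the antipodal symmetry and the symmetry about the extremal directions \(\phi_{\pm}\):

```python
import numpy as np

ALPHA = 1 / 137.036

def s_rel_outside(rho, phi, R=1.0, theta_TI=27 * np.pi, Z=1.0):
    """Eq. (17): <S_z2^theta>/<S_z0> for rho >= R (single theta-interface).
    Assumes theta_tilde = alpha*theta_TI/pi enters as Z_theta = theta_tilde*Z/2."""
    z_t = (ALPHA * theta_TI / np.pi) * Z / 2
    kappa = z_t / (1 + z_t**2)
    ell2 = (R / rho) ** 2
    return 1 + kappa * (z_t * ell2**2 + 2 * ell2 * (np.sin(2 * phi) - z_t * np.cos(2 * phi)))

z_t = (ALPHA * 27) / 2                              # theta_TI = 27*pi, Z = 1
phi_plus = np.arctan(z_t + np.sqrt(1 + z_t**2))     # direction of maximum intensity
phi_minus = np.arctan(z_t - np.sqrt(1 + z_t**2))    # direction of minimum intensity

rho, phi = 1.3, 0.7
# antipodal symmetry S(rho, phi) = S(rho, phi + pi)
assert np.isclose(s_rel_outside(rho, phi), s_rel_outside(rho, phi + np.pi))
# symmetry about the extremal direction phi_+
assert np.isclose(s_rel_outside(rho, phi_plus + 0.2), s_rel_outside(rho, phi_plus - 0.2))
print(phi_plus, phi_minus)
```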
At a given time and for appropriately chosen initial conditions (or phase), the total field structure for the CP background field and the patterns of the Poynting vector are the same as for the LP case. The temporal averages differ considerably, though. For the CP background field, the pattern of the external Poynting vector is isotropic, with only a \(\sim\widetilde{\theta}^{2}\rho^{-4}\) dependence [56]. To understand this, we realize that the Poynting vector has an interaction term \(2E_{0}\mathbf{\hat{y}}\cdot E_{0}\mathbf{E}^{\theta*}\) in both the regions interior and exterior to the TI. Going back to our discussion of the induced topological polarization charges, for a CP background field these \(\sigma_{\theta}\) will also tend to redistribute following the direction of the \(\mathbf{E}\) field, which is itself rotating, so there is no misalignment between the background field and the field produced by the induced charges; thus they remain orthogonal to each other at all times.

Figure 2: The color map is a density plot of the temporal average of the total Poynting vector in the interior and exterior regions relative to that of the background EM field, \(\langle S_{z0}\rangle=cE_{0}^{2}/8\pi Z\). The streamlines are the total \(\mathbf{E}\) field-lines. The contour plots show contours of constant relative Poynting vector. Panels (a-d) correspond to \(Z=1\) and \(\theta=3\pi,11\pi,19\pi,27\pi\), respectively.

## V Several coaxial cylindrical \(\theta\)-interfaces

The precise form of the repeated effects with several coaxial cylindrical \(\theta\)-interfaces depends on the different radii at which the \(\theta\)-interfaces lie, on \(\mathbf{\nabla}\theta\) at each layer, and possibly on the polarization of the background EM field. With respect to the directions of \(\mathbf{\nabla}\theta\) there are several possible configurations. For simplicity we analyze the case of two \(\theta\)-interfaces and focus on the antiparallel configuration, as depicted in Fig. (1.b). The study of more \(\theta\)-interfaces is deferred to [55].

### Two coaxial \(\theta\)-interfaces in antiparallel configuration

Consider now two coaxial cylindrical \(\theta\)-interfaces in antiparallel configuration with the same background EM field as above. The geometry is as in Fig. (1.a), with \(\theta=\theta_{2}\neq 0\) for \(R_{1}\leq\rho<R_{2}\) and zero elsewhere. Region 1, defined by \(0\leq\rho<R_{1}\), is the internal vacuum; region 2, defined by \(R_{1}\leq\rho<R_{2}\), is the TI's bulk; and region 3, defined by \(R_{2}\leq\rho\), is the external vacuum. The ratio \(\chi=R_{1}/R_{2}\) will be useful. In regions \(i=1,2,3\) the total electric field is actually \(\mathcal{TEM}\) and can be written as \(\mathbf{E}_{i}=E_{0}\mathbf{\hat{y}}+E_{0}\mathbf{E}_{i}^{\theta}\), and the \(\theta\) contributions are:

\[\mathbf{E}_{1}^{\theta} =-\Theta_{\chi}Y\theta_{2}\,\mathbf{\hat{y}}\,, \tag{18}\]
\[\mathbf{E}_{2}^{\theta} =\Theta_{\chi}\Big{[}2\frac{R_{1}^{2}}{\rho^{2}}(\cos\phi\hat{\boldsymbol{\rho}}+\sin\phi\hat{\boldsymbol{\phi}})+2\hat{\mathbf{x}}-Y\theta_{2}\hat{\mathbf{y}}\Big{]}\,, \tag{19}\]
\[\mathbf{E}_{3}^{\theta} =\Theta_{\chi}Y\frac{R_{2}^{2}}{\rho^{2}}\Big{[}\sin\phi(\theta_{2}\hat{\boldsymbol{\rho}}-2\hat{\boldsymbol{\phi}})-\cos\phi(\hat{\boldsymbol{\rho}}+\theta_{2}\hat{\boldsymbol{\phi}})\Big{]}, \tag{20}\]

where \(Y=Z(1-\chi^{2})\) and \(\Theta_{\chi}=Z\theta_{2}/(4+YZ\theta_{2}^{2})\).
Regardless of the value of \(\theta_{2}\), the field in region 1 is uniform, with the same polarization as the asymptotic background field, and \(\chi\) determines how much the intensity is diminished in region 1. For \(\chi=1\) (i.e., \(R_{1}=R_{2}\)) there is no additional \(\theta\)-contribution to the total field in region 1, as it should be. In Figs. (3 a, b) we show the density plots of the temporal average of the Poynting vector relative to the background, and \(\mathbf{E}\)-field streamlines, corresponding to Eqs. (18, 19, 20), for \(Z=1\) and \(\theta_{2}=27\pi\). For smaller \(\theta\) the same effects arise, but fainter. In (a) \(\chi_{a}=0.45\) and in (b) \(\chi_{b}=0.82\). In either case, in the TI's bulk the field shows a similar asymmetric quadrupolar-like distribution as for one \(\theta\)-layer, but it is inverted with respect to the exterior region. The successive application of the BCs and the geometry give rise to a distribution of induced topological charges that generates the corresponding total electric field.

### Transmitted power in each region as a function of \(\theta\) and the geometry of the system

In Fig. (3.c), for different values of \(\theta_{2}\), we compare the power transmitted in region 1:

\[P_{1}^{\theta_{2}}(\chi)=\int_{0}^{R_{1}}\langle S_{z\theta}\rangle ds\,, \tag{21}\]

to the power transmitted in that same region by the background field, \(P_{1}^{\theta_{2}=0}\) (shown as a solid black line). For fixed \(\chi\), we see that \(P_{1}\) is smaller for higher values of \(\theta\) and, for a given \(\theta\), \(P_{1}\) scales with \(R_{1}\) (which is rather trivial, as the bigger/smaller \(R_{1}\), the bigger/smaller the area pierced by the Poynting vector). The differences \(\Delta P_{i}(\chi)\equiv P_{i}^{\theta_{2}}-P_{i}^{\theta_{2}=0}\) for \(i=1,2\) serve as a means to quantify the "confining" properties of the system. The quantities \(P_{2}^{\theta,0}(\chi)\) correspond to the power transmitted by the EM field through the region \(R_{1}\leq\rho\leq R_{2}\) with the TI (\(\theta\)) and without it (0), respectively, namely:

\[P_{2}^{\theta_{2}}(\chi)=\int_{R_{1}}^{R_{2}}\langle S_{z\theta}\rangle ds\,. \tag{22}\]

Their \(\chi\) dependence allows us to find optimal configurations. For example, for any given \(\theta\), there is a critical \(\chi=\chi_{1M}\) that minimizes \(\Delta P_{1}\). The inset to Fig. (3.c) shows this difference and the values \(\chi_{a}\) and \(\chi_{1M}\) for \(\theta_{2}=27\pi\). Rather surprisingly, in the \(R_{1}\to R_{2}\) limit, an anisotropic radial electric field residing only at \(\rho=R_{2}\) remains, while in regions 1 and 3 the total electric field is exactly equal to the background field.

### Geometry optimization and confinement of the \(\mathcal{TEM}\) field inside the TI

In Fig. (3.d), for different values of \(\theta_{2}\), we compare the power transmitted in region 2, \(P_{2}^{\theta_{2}}(\chi)\), to that transmitted in the same region by the background field, \(P_{2}^{\theta_{2}=0}\) (shown as a solid black line). Now, we observe that for a given \(\theta_{2}\), there exists a \(\chi=\chi_{2}^{*}(\theta_{2})\) for which the power transmitted in the TI's bulk begins to be greater than the power that would be transmitted in region 2 (\(R_{1}\leq\rho\leq R_{2}\)) if the TI were not there, namely: \(\Delta P_{2}(\chi_{2}^{*})>0\). This occurs when the \(\theta_{2}\neq 0\) curves cross the solid black line (\(\theta_{2}=0\)).
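This crossing and the subsequent maximization can be illustrated with a small numerical sketch. It assumes (i) that \(\theta_{2}\) enters Eqs. (18)-(20) as the effective jump \(\alpha\theta_{\text{TI}}/\pi\), as above, and (ii) that the relative Poynting vector is \(|\hat{\mathbf{y}}+\mathbf{E}^{\theta}|^{2}\), which is consistent with Eqs. (16)-(17); under those assumptions, a scan over \(\chi\) locates the crossing \(\chi_{2}^{*}\) and the maximizing ratio \(\chi_{2M}\) together with the relative gain discussed next:

```python
import numpy as np

ALPHA = 1 / 137.036

def s_rel_region2(rho, phi, chi, theta_TI=27 * np.pi, Z=1.0, R2=1.0):
    """Relative Poynting <S_z>/<S_z0> in the TI bulk (R1 <= rho <= R2),
    built from Eq. (19) as |y_hat + E_2^theta|^2 (consistent with Eqs. (16)-(17)).
    Assumes the effective jump is theta_2 = alpha*theta_TI/pi."""
    t2 = ALPHA * theta_TI / np.pi
    R1 = chi * R2
    Y = Z * (1 - chi**2)
    Th = Z * t2 / (4 + Y * Z * t2**2)
    l2 = (R1 / rho) ** 2
    # Eq. (19) in cylindrical components, using x_hat = cos(phi) rho_hat - sin(phi) phi_hat
    e_rho = Th * (2 * l2 * np.cos(phi) + 2 * np.cos(phi) - Y * t2 * np.sin(phi))
    e_phi = Th * (2 * l2 * np.sin(phi) - 2 * np.sin(phi) - Y * t2 * np.cos(phi))
    return (np.sin(phi) + e_rho) ** 2 + (np.cos(phi) + e_phi) ** 2

def delta_P2(chi, R2=1.0, **kw):
    """Delta P_2(chi) = P_2^theta - P_2^{theta=0}, Eqs. (21)-(22), in units of <S_z0>."""
    rho = np.linspace(chi * R2, R2, 400)
    phi = np.linspace(0.0, 2 * np.pi, 400)
    P, F = np.meshgrid(rho, phi, indexing="ij")
    integrand = (s_rel_region2(P, F, chi, R2=R2, **kw) - 1.0) * P
    return np.trapz(np.trapz(integrand, phi, axis=1), rho)

chis = np.linspace(0.30, 0.99, 70)
gains = np.array([delta_P2(c) for c in chis])
chi_2M = chis[np.argmax(gains)]
rel_gain = gains.max() / (np.pi * (1 - chi_2M**2))  # relative to P_2^{theta_2=0}
print(chi_2M, rel_gain)  # ~0.82 and ~0.01 for theta_2 = 27*pi, under these assumptions
```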
Regardless of the value of \(\theta_{2}\), such an intersection always occurs, but it is more evident for larger values of \(\theta_{2}\) (compare the red (dashed) curve to the blue (dotted-dashed) or green (smaller dotted-dashed) curves). Furthermore, this gain can also be maximized, i.e., for that given \(\theta_{2}\), there exists a \(\chi=\chi_{2M}(\theta_{2})\) such that for \(\chi_{2}^{*}<\chi_{2M}<1\), the difference \(\Delta P_{2}\) is maximized. In Fig. (3.d), we have chosen \(\chi=\chi_{2M}(27\pi)\), precisely to show the maximum gain in region 2, for which:

\[\text{max}(P_{2}^{\theta_{2}=27\pi}(\chi))=P_{2}^{\theta_{2}=27\pi}(\chi_{2M})=1.01\times P_{2}^{\theta_{2}=0}. \tag{23}\]

This means a 1% power gain in region 2 with the TI compared to the case if the TI were not there. The values \(\chi_{b}=\chi_{2M}\), \(\chi_{2}^{*}\) and \(\Delta P_{2}\) are shown as an inset to Fig. (3.d). Also, the bigger \(\theta_{2}\), the larger the gain; however, the closer \(\chi_{2M}\) must be to 1, i.e., higher yields occur for higher \(\theta_{2}\) and through thinner TI sheaths.

A priori, one could have expected that, for a given \(\theta_{2}\), the configuration that minimizes the power transmitted through region 1 (the inner vacuum core) is the same configuration that maximizes the power transmitted through region 2 (inside the TI). Namely, the expectation that the power gain through the TI is at the expense of the loss of power in the inner vacuum core, as if the TI sucked power from the inner shells only. Rather surprisingly, this is not the case. In fact, for \(\theta_{1}=0\) we can show that there is no \(\chi\) that minimizes \(P_{1}^{\theta_{2}}\) and simultaneously maximizes \(P_{2}^{\theta_{2}}\), implying that both geometry optimization procedures described are in fact independent. In contrast with the expectation above, the physical reason would then be that the TI confines the EM field in its bulk not only by depleting the EM field in the inner vacuum, but by doing so with the field exterior to the TI too. This is why the density plot of the Poynting distribution, for a fixed angular direction and fixed external radius \(R<\rho\), is fainter in Fig. (3.b) than in Fig. (3.a).

Figure 3: In all cases \(Z=1\). In (a) and (b), \(\theta_{1}=0=\theta_{3}\) and \(\theta_{2}=27\pi\) inside the TI. In (a) \(\chi=0.45\) and in (b) \(\chi=0.82\). In (c,d) we show the power transmitted through regions 1 and 2, \(P_{1}^{\theta_{2}}\) and \(P_{2}^{\theta_{2}}\), respectively, for \(R_{2}=10\,\mu\)m. For \(\theta_{2}=0\), the corresponding powers, \(P_{1,2}^{\theta_{2}=0}\), are shown as a solid black line. For \(\theta_{2}=27\pi\), in the inset of (c) we show \(\Delta P_{1}\). The vertical lines are \(\chi_{a}=0.45\), which defines the geometry of the configuration in (a), and \(\chi_{1M}\), which maximizes the difference. The inset of (d) shows \(\Delta P_{2}\) with the values \(\chi_{2}^{\star}\) and \(\chi_{2M}\) indicated.

## VI Summary and outlook

In this study, we find purely transverse electromagnetic (\(\mathcal{TEM}\)) fields propagating parallel to the axis of cylindrical topological insulating (TI) media, which are not possible with topologically trivial materials alone. The media considered were a single TI or several coaxial cylindrical TI layers. These \(\mathcal{TEM}\) fields propagate both outside the cylindrical TIs as asymptotic free solutions and inside each of the geometries, with a linear dispersion relation as in a free medium, without cut-off frequencies and without birefringence. Finite discontinuities of the topological magnetoelectric parameter (TMEP) at the interface between each layer impose boundary conditions that result in a rotation of the polarization plane of the EM field. This rotation is different from previously reported Faraday or Kerr rotations for TIs, attesting to a new observable signature of the topological magnetoelectric effect (TME). In the case of a single \(\theta\)-layer, the field exhibits a peculiar asymmetric quadrupolar distribution in the plane perpendicular to the TI. The magnitude of the rotation in the TI core ranges from \(\approx 0.63\) degrees (11 mrad) to \(\approx 5.63\) degrees (98 mrad) for \(\theta_{\text{TI}}=3\pi\) and \(27\pi\), respectively. This is well within experimental sensitivity, and as a polarimetric signal of the TME it is actually competitive with respect to other topological magnetoelectric rotations of the plane of polarization of light in TIs [34], even coming close to the enhanced rotation effects reported in [33]. These, however, as we have emphasized, are Faraday and/or Kerr rotations, and ours, though of similar magnitude, are of a different nature. For the case of two \(\theta\)-layers, the field propagates along the TI sheath as in an optical fiber, but in an omnidirectional way. Its confinement can be controlled by varying the geometry (\(\chi\)) and the value of \(\theta\).

We mentioned that in \(\theta\)-ED, the space of solutions enlarges given that Earnshaw's theorem no longer applies [57]. Here we find a non-trivial \(\mathcal{TEM}\) wave solution confined in a topological insulator sheath. This solution is possible due to the modification of the BCs by the topological magnetoelectric properties of TIs. In ordinary Maxwell theory, such a solution is impossible. Hosting \(\mathcal{TEM}\) wave solutions in optical fibers is highly desirable in optics and photonics. Our transverse EM field solutions are dispersion-free and have a linear dispersion relation. This implies that wave packets do not spread during propagation and that there is no cut-off frequency. Additionally, since no conductors are involved at all, Ohmic losses are reduced. Furthermore, \(\mathcal{TEM}\) waves propagate in an omnidirectional manner, i.e., the EM field propagates inside the TI sheath without undergoing total internal reflection at the TI's walls. This contrasts with TE or TM propagation, in which the field undergoes successive internal reflections and the incident angle cannot exceed the critical one above which the EM field no longer reflects but rather gets refracted outside the fiber. This attribute is highly appealing for miniaturized devices, as it allows the TI optical fiber to be bent at any angle. Our results point towards new directions for light manipulation purposes and for studying new manifestations of the TME.

Looking ahead, these results could be made more appealing by dispensing with the background asymptotic EM field. By adapting the methodology of [58], one could analyze the possibility of confining the \(\mathcal{TEM}\) fields in a finite region. To disentangle the TME from other optical effects, we have kept \(Z=\sqrt{\mu/\epsilon}=1\). However, some of the observables we have found are proportional to \(Z\tilde{\theta}\) or \(Z^{2}\tilde{\theta}^{2}\); therefore one could explore the conditions under which a certain TI could acquire epsilon-near-zero (ENZ) behavior to enhance the effects. Lastly, analytical solutions with several TI cylinders are cumbersome.
Preliminary numerical calculations indicate that an ad hoc array of several parallel TI cylinders would result in a considerable gain in observable signatures of the TME, due to an enhancement of the Poynting vector by means of superposition. These and other open questions will be dealt with in [55].

## VII Acknowledgements

We thank L. F. Urrutia and A. Martin-Ruiz for useful comments, and the Instituto de Ciencias Nucleares at UNAM for the hospitality during early stages of this work. Both authors also acknowledge support from the project CONACyT CF/2019/428214. S. F. has been funded by Scholarship Program/BECAS DOCTORADO UNAB. M. C. has been funded by DGI-UNAB Project DI-16-20/REG.
2305.14857
BUFFET: Benchmarking Large Language Models for Few-shot Cross-lingual Transfer
Despite remarkable advancements in few-shot generalization in natural language processing, most models are developed and evaluated primarily in English. To facilitate research on few-shot cross-lingual transfer, we introduce a new benchmark, called BUFFET, which unifies 15 diverse tasks across 54 languages in a sequence-to-sequence format and provides a fixed set of few-shot examples and instructions. BUFFET is designed to establish a rigorous and equitable evaluation framework for few-shot cross-lingual transfer across a broad range of tasks and languages. Using BUFFET, we perform thorough evaluations of state-of-the-art multilingual large language models with different transfer methods, namely in-context learning and fine-tuning. Our findings reveal significant room for improvement in few-shot in-context cross-lingual transfer. In particular, ChatGPT with in-context learning often performs worse than much smaller mT5-base models fine-tuned on English task data and few-shot in-language examples. Our analysis suggests various avenues for future research in few-shot cross-lingual transfer, such as improved pretraining, understanding, and future evaluations.
Akari Asai, Sneha Kudugunta, Xinyan Velocity Yu, Terra Blevins, Hila Gonen, Machel Reid, Yulia Tsvetkov, Sebastian Ruder, Hannaneh Hajishirzi
2023-05-24T08:06:33Z
http://arxiv.org/abs/2305.14857v1
# BUFFET: Benchmarking Large Language Models for Few-shot Cross-lingual Transfer

###### Abstract

Despite remarkable advancements in few-shot generalization in natural language processing, most models are developed and evaluated primarily in English. To facilitate research on few-shot cross-lingual transfer, we introduce a new benchmark, called BUFFET, which unifies 15 diverse tasks across 54 languages in a sequence-to-sequence format and provides a fixed set of few-shot examples and instructions. BUFFET is designed to establish a rigorous and equitable evaluation framework for few-shot cross-lingual transfer across a broad range of tasks and languages. Using BUFFET, we perform thorough evaluations of state-of-the-art multilingual large language models with different transfer methods, namely in-context learning and fine-tuning. Our findings reveal significant room for improvement in few-shot in-context cross-lingual transfer. In particular, ChatGPT with in-context learning often performs worse than much smaller mT5-base models fine-tuned on English task data and few-shot in-language examples. Our analysis suggests various avenues for future research in few-shot cross-lingual transfer, such as improved pre-training, understanding, and future evaluations.

## 1 Introduction

Recent advances in NLP primarily focus on the English language Blasi et al. (2022). Due to the lack of sufficient training data in most of the world's languages Yu et al. (2022), prior work explores direct transfer of pretrained language models to new languages after fine-tuning on resource-rich languages (_zero-shot cross-lingual transfer_, Hu et al. 2020). Transferring after training a model on a few examples (_few-shot cross-lingual transfer_) often boosts performance, especially in languages that are distant from the source language Lauscher et al. (2020); Hedderich et al. (2020). In English, zero- or few-shot learning via in-context learning is an active area of research Beltagy et al. (2022); Schick and Schutze (2021); Shin et al. (2020). In this learning paradigm, one prompts a large language model (LLM) with few-shot demonstrations or natural language instructions to adapt to a new task, without any parameter updates. Yet, few-shot transfer across languages is still under-explored Lin et al. (2021) in a wide range of tasks and languages. Moreover, it is unclear how effectively in-context learning performs in comparison to widely-used fine-tuning-based transfer methods under a comparable setup.

Figure 1: BUFFET includes unified diverse tasks in the same format, covering many typologically diverse languages. It enables a fair comparison across models, transfer methods, and languages and facilitates large-scale analysis across different learning setups.

This work introduces a new benchmark called BUFFET: **B**enchmark of **U**nified **F**ormat **FE**w-shot **T**ransfer Evaluation (Figure 1) to enable rigorous evaluations and advance research on few-shot cross-lingual transfer. Similar to a rich buffet, BUFFET curates a diverse mix of tasks: 15 different tasks--including classification, structured prediction, and natural language generation--across 54 languages. BUFFET has several unique characteristics that are not present in prior multi-task multilingual benchmarks (summarized in Table 1):

* BUFFET provides a fixed set of few-shot examples for training and validation, allowing for fair comparisons across LMs and transfer methods.
* BUFFET includes datasets annotated in each language or covering under-represented languages, which are often not included in existing multi-task benchmarks. * BUFFET combines diverse tasks into a unified text-to-text format and provides a set of English and machine-translated instructions for each task, removing the burdens of task-specific architecture changes or prompt engineering. Using this new benchmark, we extensively evaluate the current state-of-the-art multilingual large language models (LLMs), including mT5 (Xue et al., 2021), mT0 (Muennighoff et al., 2022), BLOOMZ (Muennighoff et al., 2022), and ChatGPT (Ouyang et al., 2022), using both fine-tuning and in-context learning approaches. In particular, BUFFET enables us to investigate the following research questions: **(RQ1) Is in-context learning competitive with fine-tuning in few-shot cross-lingual transfer?** Notably, given the same small numbers of examples in the target languages, in-context learning on LLMs (including ChatGPT, the most powerful model we evaluate in this work) often under-performs much smaller specialized mT5-base models, as shown in Figure 1 (bottom left). **(RQ2) How well do different transfer methods perform across tasks and languages?** The performance gap between in-context learning-based baselines and fine-tuning-based baselines is more significant in under-represented languages (Figure 1 bottom center). On NLI in indigenous languages of the Americas, ChatGPT or mT0-11B using in-context learning performs barely above random, while 580 million parameter mt5-base fine-tuned models retain strong performance. On the contrary, these LLMs perform well on generative tasks where a smaller task-specific model struggles, demonstrating their superiority in generating fluent text for diverse languages without abundant training data. **(RQ3) How does the choice of transfer setup affect different transfer strategies?** BUFFET also enables us to perform an in-depth and extensive analysis of the effects of diverse demonstrations and instructions on the downstream transfer quality. Our observations indicate that the choice of few-shot training examples has a substantial influence on a model's performance, particularly, with greater variability in in-context learning, compared to fine-tuning. We note that optimal transfer settings may differ across models. For example, instruction-tuned models often face challenges in effectively utilizing few-shot samples and their performance deteriorates as the number of demonstrations increases, possibly because they are optimized for the zero-shot instruction-tuned training scheme. This highlights the need for a standardized benchmark to facilitate fair comparisons and further studies to assess such transfer dynamics in non-English data. Grounded in our analysis, we suggest avenues for future research in few-shot cross-lingual transfer for both dataset creation and model development. Our data and code are available online.1 Footnote 1: [https://buffetfs.github.io/](https://buffetfs.github.io/) ## 2 Background and Related Work ### Problem Formulation Due to the lack of annotated training data in many languages (Blasi et al., 2022; Yu et al., 2022; Joshi et al., 2020), transferring models trained on resource-rich languages (e.g., English) to other languages has been actively studied in multilingual NLP. 
In this paper, our main focus is on **few-shot cross-lingual transfer** (Lauscher et al., 2020), where a model is adapted using only a limited number of training or validation examples in the target language \(L\). Another popular paradigm is **zero-shot cross-lingual transfer** (Artetxe et al., 2020; Hu et al., 2020) from English, where a model has access to training sets or instructions in English but not in the target language. Various transfer methods have been investigated in the field, including the in-context learning methods (Section 2.3). Yet, limited research explores different transfer methods _under comparable conditions_. With our new benchmark, BUFFET, we facilitate fair comparisons between models and learning methods, establishing a basis for studying the dynamics of few-shot transfer across various languages (Section 2.2).

\begin{table} \begin{tabular}{l|c c c c} \hline \hline & Multi-ling. & Few-S & Gen. & Low-R \\ \hline XTREME & ✓ & & & \\ XTREME-R & ✓ & & & \\ XGLUE & ✓ & & ✓ & \\ CrossFit & & ✓ & ✓ & \\ MEGA* & ✓ & ✓ & & \\ BUFFET & ✓ & ✓ & ✓ & ✓ \\ \hline \hline \end{tabular} \end{table} Table 1: Comparison of the existing benchmarks based on their multilinguality (Multi-ling.), few-shot task formulation (Few-S), availability of generative tasks (Gen.), and coverage of low-resource languages (Low-R). \({}^{*}\) indicates concurrent work.

### 2.2 Benchmarks for Cross-lingual Transfer

To enable a scalable and rigorous evaluation across multiple tasks, prior work has proposed multi-task benchmarks that unify diverse existing datasets. XTREME (Hu et al., 2020), XTREME-R (Ruder et al., 2021) and XGLUE (Liang et al., 2020) focus on zero-shot transfer of models fine-tuned on English datasets. Despite English-based few-shot evaluation benchmarks, such as CrossFit (Ye et al., 2021), in few-shot cross-lingual transfer we lack a standardized evaluation benchmark to facilitate the comparison of models and learning methods at scale. BUFFET provides the first large-scale few-shot cross-lingual transfer suite to address this gap. Importantly, to mitigate the effects of the high performance variance in few-shot cross-lingual transfer (Zhao et al., 2021), we curate and aggregate results from multiple fixed \(k\)-shot training instances for each task and language. Concurrent with our work, MEGA (Ahuja et al., 2023) conducts experiments on few-shot cross-lingual transfer with a focus on classification and question answering tasks. BUFFET unifies diverse tasks including both discriminative and generative tasks. We also include datasets covering languages under-represented in prior work (e.g., African and indigenous languages). Table 1 summarizes the key differences between BUFFET and prior benchmarks.

### 2.3 Methods for Cross-lingual Transfer

Fine-tuning-based approaches.Multilingual pre-trained models (Devlin et al., 2019; Xue et al., 2021; Conneau et al., 2020) have the ability to adapt to new languages with no or few training instances in a target language (Conneau et al., 2020; Hu et al., 2020; Wu and Dredze, 2019). Lauscher et al. (2020) and Hedderich et al. (2020) report that, particularly in languages that are distant from the source language, further fine-tuning the model on few-shot samples greatly improves performance. Cross-lingual in-context learning.In-context learning (Brown et al., 2020) aims at making an LM learn a new task by conditioning on a task description (instruction) and training examples (demonstrations).
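For concreteness, the following minimal sketch shows how such an in-context prompt is typically assembled from an instruction and \(k\) demonstrations; the template and label strings below are illustrative placeholders, not the exact prompt format used by any particular system:

```python
def build_icl_prompt(instruction, demonstrations, test_input):
    """Assemble an in-context learning prompt: a task instruction followed by
    k (input, output) demonstrations and the unlabeled test input.
    The "Input:/Output:" template is illustrative, not a released format."""
    parts = [instruction.strip(), ""]
    for demo_in, demo_out in demonstrations:
        parts += [f"Input: {demo_in}", f"Output: {demo_out}", ""]
    parts += [f"Input: {test_input}", "Output:"]
    return "\n".join(parts)

prompt = build_icl_prompt(
    "Given a premise and a hypothesis, answer entailment, contradiction, or neutral.",
    [("Premise: A man plays guitar. Hypothesis: A person makes music.", "entailment")],
    "Premise: A dog sleeps. Hypothesis: The dog is running.",
)
print(prompt)
```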
Despite active research on context learning (Schick and Schutze, 2021; Min et al., 2022), most prior work focuses only on English. Recent work (Lin et al., 2021; Muennighoff et al., 2022) introduces pre-trained LMs trained on more multilingual pre-trained corpora or translated datasets and shows improved results. While prior evaluations often focus on classification or translation tasks (Zhu et al., 2023; Vilar et al., 2022), more recently Shi et al. (2023), evaluate the use of instructions, demonstrations, and rationales in different languages across multiple reasoning tasks. However, how much LLMs with respect to in-context learning compete with the aforementioned fine-tuned approaches in a _comparable_ setup and at scale has yet to be investigated, as they often use a large number of training examples in target languages (Bang et al., 2023). We demonstrate even with a small number of training examples, fine-tuning methods are competitive with in-context learning for cross-lingual transfer. ## 3 Benchmark: BUFFET We introduce a new standardized few-shot cross-lingual evaluation benchmark: BUFFET (**B**enchmark of **U**nified **F**ormat **F**ew-shot **T**ransfer Evaluation). BUFFET unifies diverse NLP tasks and provides fixed sets of few-shot samples per task to facilitate consistent comparisons (Table 2). ### Design Principles We create the BUFFET benchmark to establish a rigorous and equitable evaluation framework for few-shot cross-lingual transfer across a broad range of tasks and languages. We adhere to the following design principles with our benchmark. Standardized few-shot samples.BUFFET provides three different training and validation sets of \(k\)-shots (e.g., \(k=32\)) per task for a non-classification task, or per class for a classification task, for each language. Task diversity.Existing cross-lingual benchmarks often focus on classification or retrieval (Hu et al., 2020; Ruder et al., 2021; Liang et al., 2020). BUFFET encompasses a broad range of task types, such as classification, generation, extraction, and structured prediction tasks. By converting all tasks into the same text-to-text format, we eliminate the need for task-specific model modifications or template conversions. Language diversity.BUFFET covers 54 typologically diverse languages, spanning 24 language families, including under-represented languages (e.g., indigenous languages of the Americas, African languages). The 36 out of 54 languages are not Indo-European languages. A full list of languages is available in Appendix Table 5. Beyond evaluations on translated data.Prior few- or zero-shot evaluations were often conducted on widely-used datasets translated from English (e.g., XNLI; Conneau et al.2018, XCOPA; Ponti et al.2020). Those datasets might exhibit undesired biases, such as translation artifacts or unnatural topic distributions Clark et al. (2020); Artetxe et al. (2020). We collect both translation-based datasets and datasets that are annotated directly in each language (Table 2, Data curation). ### BUFFET Construction Process Following Ye et al. (2021), we unify all tasks into the same text-to-text format, where a model is expected to directly generate the desired outputs given diverse inputs Raffel et al. (2020). For each dataset in BUFFET, we unify instance representations of _instruction_, \(k\)-shot _instances_ for training and validation. Each training instance consists of an input and output. Figure 2 shows an overview. 
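To make this unified representation concrete, a minimal sketch of one few-shot episode is given below; the field names are illustrative and are not the released BUFFET schema:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Example:
    input: str   # text-to-text input (e.g., a premise-hypothesis pair)
    output: str  # target string the model should generate (e.g., "entailment")

@dataclass
class FewShotEpisode:
    """One episode for a (dataset, language, seed) triple, in the spirit of
    BUFFET's unified format. Field names are placeholders, not the released schema."""
    dataset: str                 # e.g., "xnli"
    language: str                # e.g., "sw"
    instruction_en: str          # English instruction
    instruction_target: str      # machine-translated instruction
    train: List[Example] = field(default_factory=list)  # fixed k-shot training set
    valid: List[Example] = field(default_factory=list)  # fixed k-shot validation set

episode = FewShotEpisode(
    dataset="xnli", language="sw",
    instruction_en="Decide whether the hypothesis is entailed by the premise.",
    instruction_target="(machine-translated instruction)",
    train=[Example("Premise: ... Hypothesis: ...", "entailment")],
)
```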
Section 3.2.1 provides the outline of the unification, and Section 3.2.2 provides a task-specific process. #### 3.2.1 Unification Process Few-shot instance selection.By default, we use all of the languages included in the original datasets. For automatically aligned datasets with many test languages, such as XLSUM or WikiANN, we filter out languages that are not included in any other BUFFET datasets following suggestions by Yu et al. (2022).2 For each language in each dataset, we randomly sample \(k\)-shot instances (or _demonstrations_) for training and validation sets using the same random seeds.3 With large-scale automatically aligned datasets, we randomly sample 1,000 test instances in XLSUM and WikiANN and 2,000 test instances for Amazon Review, to reduce inference time costs across many languages and multiple sets of demonstrations. Footnote 2: On XLSUM, we further reduce the number of languages to reduce the inference costs while maintaining language diversities. Footnote 3: We use 100, 13, and 21 as seed numbers, following Ye et al. (2021). Once we sample the instances, we fix the training and validation sets. Instruction selection.We use English instructions from SuperNaturalInstructions Wang et al. (2022) and PromptSource Bach et al. (2022). \begin{table} \begin{tabular}{l|l l l l l l l} \hline \hline Tasks & Dataset & Output & \(|L|\) & \(k\) & Metric & Domain & Data curation \\ \hline NLI & XNLI & 3-way class & 14 & 16 & acc. & misc. & translation \\ & Americas NLI & 3-way class & 10 & 16 & acc. & misc. & translation \\ & Parsi NLU & 3-way class & 1 & 16 & acc. & misc. & in-language \\ & OCNLI & 3-way class & 1 & 16 & acc. & misc. & in-language \\ & KLUE-NLI & 3-way class & 1 & 16 & acc. & misc. & in-language \\ & PAW-SX & 2-way class & 6 & 7 & acc. & Wikipedia & aligned \\ Sentiment & Indic-NLU-sent. & 2-way class & 14 & 16 & acc. & e-commerce & translation \\ Analysis & Amazon Review & 2-way class & 5 & 16 & acc. & e-commerce & in-language \\ Commonsense & XCOPA & multi-choice & 11 & 16 & acc. & misc. & translation \\ Reasoning & XWinograd & multi-choice & 4 & 8 & acc. & misc. & translation \\ QA & TyDIAQ & span & 8 & 8 & F1 & Wikipedia & in-language \\ Named Entity & WikiANN & names \& tags & 33 & 32 & F1 & Wikipedia & aligned \\ Recognition & MasakhAnNER & names \& tags & 9 & 32 & F1 & News & in-language \\ \hline Summarization & XLSUM & summary & 12 & 1 & ROUGE & News & aligned \\ Question Generation & TyDi QA-QG & question & 8 & 8 & BLEU & Wikipedia & in-language \\ \hline \hline \end{tabular} \end{table} Table 2: **The eight target tasks built upon 15 existing datasets in BUFFET. \(|L|\) indicates the number of languages, and \(k\) indicates the total number of training instances. We include datasets that are diverse in terms of output format, tasks, and domains. We also include datasets that are curated by translation, in-language annotation (in-language) and automatically aligned (aligned) following Yu et al. (2022).** Figure 2: BUFFET includes 15 datasets, which are unified into the same single text-to-text format. Among multiple annotated instructions, we sample the first instruction for a similar task that suits our text-to-text scheme. For some tasks, we modify the original instruction to make labels consistent with the names used in BUFFET4 or to remove task-specific dependencies in the input data field. See Appendix Table 6 for the full list of instructions. 
Footnote 4: For example, an instruction for PAWS-X says the class names are “repeated/not repeated” while in BUFFET we use “duplicated/not_duplicated” as labels, so we change the labels in the original instruction. Instruction translation.Despite rapid progress of instruction-tuning in English LLMs (Wei et al., 2022; Sanh et al., 2022; Mishra et al., 2022; Wang et al., 2022), cross-lingual setups still lag behind due to a lack of instructions in the target languages. Prior work often translates instructions for the target tasks (Lin et al., 2021; Shi et al., 2023). We provide translated instructions for 15 datasets in 54 target languages, translated by NLLB (Costa-jussa et al., 2022), and manually translate the instructions into five languages.5 Footnote 5: Manual translations are performed by bilingual volunteers. #### 3.2.2 Tasks and Dataset Curation We first select eight popular NLP tasks and, for each task, we identify available datasets using a careful survey of multilingual datasets by Yu et al. (2022). Appendix Table 6 shows examples. Natural language inference.Natural Language Inference (NLI) involves determining the logical relationship (i.e., entailment, contradiction, neutral) between two text fragments, i.e., a premise and a hypothesis. In addition to the widely used XNLI (Conneau et al., 2018), we gather NLI datasets that are annotated in each language or designed to cover extremely under-represented languages: AmericansNLI (Ebrahimi et al., 2022), ParsiNLU-Entailment(Khashabi et al., 2021), KLUELNI (Park et al., 2021), and OCNLI (Hu et al., 2020). We use 16 examples for each class. Paraphrase detection.Paraphrase detection is the task of identifying whether two sentences have/do not have the same meaning (duplicate or not duplicated). We adopt PAWS-X (Yang et al., 2019) and include 16 shots for each class as few-shot training and validation data. Sentiment analysis.Binary sentiment analysis identifies whether a text (e.g., a product review from Amazon) expresses positive or negative sentiment towards a topic. We use the Multilingual Amazon Review dataset(Keung et al., 2020) and IndicNLU-Sentiment(Aggarwal et al., 2022). For the former, we discard the neutral class (the reviews with a score of 3) and assign reviews with scores of 4 and 5 to the positive class and reviews with scores of 1 and 2 to the negative class. For both datasets, we sample 16 demonstrations per class. Commonsense reasoning.We use two common-sense reasoning datasets, XCOPA (Ponti et al., 2020) and XWinograd(Muennighoff et al., 2022). Given a sentence and two options, a model selects one of the option labels, (A) or (B), based on which is better suited to the given context. Due to the smaller scale of the datasets, we sample 16 and 8 training instances in total for XCOPA and XWinograd, respectively. Question answering.Question Answering (QA) is the task of answering a question given a paragraph, where the answer is a sub-span of the paragraph. We use TyDiQA-GoldP(Clark et al., 2020), which we refer to as TyDiQA for simplicity. Due to the longer average input length, we limit the number of exemplars to 8. Named entity recognition.Named Entity Recognition (NER) is a representative sequence labeling task, where a system detects and classifies named entities in an input sentence. We adopt WikiANN (Pan et al., 2017) and MasakhaNER(Adelani et al., 2021). Though WikiANN covers 216 languages, we exclude languages that are covered only by WikiANN or XLSUM due to the aforementioned issues. 
We convert the task into a text-to-text format, where given an input sentence, a model extracts all named entities with named entity tags:6\(<\)location\(>\), \(<\)person\(>\), and \(<\)organization\(>\).7 We use 32 instances overall for few-shot transfer. Footnote 6: This is more challenging than the standard sequence labeling setup since the model must reproduce the entity spans and generate appropriate tags. For example, the output for “Obama served as the 44th president of the United States.” would be “Obama \(<\)person\(>\) United States \(<\)location\(>\). Summarization.We use the XLSumHasan et al. (2021) dataset to benchmark models' ability to generate a summary given a news article. Due to the context window limit, we use only 1 shot for training in this task. Question generation.Question generation generates a question according to a given input passage and a corresponding answer Xiao et al. (2021). We convert the TyDiQA-GoldP dataset into a question generation task, which we refer to TyDiQA-QG. Given the gold paragraph and an answer, the system generates the original question. We use 8 examples for few-shot training. ### BUFFET Evaluation #### 3.3.1 Evaluation Metrics Table 2 (Metric) lists task-specific metrics. To mitigate the variance from different few-shot samples, for each language included in each task, we take the average of a model's performance given three different sets of \(k\)-shot instances. Subsequently, each dataset score is calculated as a macro-average of the per-language score Clark et al. (2020). Finally, following Liang et al. (2020), we take two separate average scores: (a) **Avg. class** score of all classification and QA tasks, and (b) **Avg. generation** score of all generation tasks. #### 3.3.2 BUFFET-Light Conducting a comprehensive evaluation covering a wide range of languages and tasks in BUFFET, while undoubtedly necessary, can be a time-consuming process. We introduce BUFFET-light, which contains a representative subset of languages and tasks for a rapid assessment even in resource-limited scenarios. We carefully select languages and datasets to ensure that we cover a diverse range of languages and output formats, assuming limited resources. See the overview of BUFFET-light in Appendix Section A.2. ## 4 Benchmarking LMs on BUFFET ### Transfer Methods In this study, we investigate various transfer methods with and without parameter updates. To assess the benefit of \(k\)-shot training examples in the target language, we also conduct experiments on zero-shot transfer methods. We assume that the model can optionally use instructions in the target language or another language, or full training sets in a high-resource language like English. This assumption is reasonable given the abundance of labeled datasets in high-resource languages Yu et al. (2022); Joshi et al. (2020) and the cheaper costs of instruction annotations. Table 3 provides an overview of different approaches, categorized according to the optional inputs they use during training or inference. **Fine-tuning (methods with parameter updates).** We explore several transfer approaches that require parameter updates. * **Target fine-tuning (Target FT)** trains models on few-shot samples for each language. * **English fine-tuning (English FT)** trains models on a source language (i.e., English) only and uses no target language data. * **English+Target fine-tuning (Eng.+TGT. 
FT)** first trains models on large-scale English datasets and then fine-tunes models on few-shot samples of target languages. **In-context learning (methods without updates).** We explore several in-context learning methods. * **English in-context learning (English ICL)** uses English instructions and demonstrations in the target languages. * **Target ICL (Target ICL)** uses both instructions and demonstrations in the target language. * **Zero-shot English In-context learning (Z-EICL)** uses only English instructions without \begin{table} \begin{tabular}{l c c c c} \hline \hline & \multicolumn{2}{c}{Training Demos} & \multicolumn{2}{c}{Instructions} \\ **Transfer** & EN & Target & EN & Target \\ \hline Target FT & & \(k\) & & \\ English FT & \(N\) & & & \\ Eng.+TGT. FT & \(N\) & \(k\) & & \\ \hline English ICL & & \(k\) & ✓ & \\ Target ICL & & \(k\) & ✓ & \\ Z-EICL & & & ✓ & \\ \hline \hline **Transfer** & Pretraining & & LMs \\ \hline Fine-tuning & Unlabeled & & mT5-base \\ In-c. Learning & Unlabeled & & BLOOM, mT5-xxl \\ In-c. Learning & + Instruction & BLOOM-7B, mT5-xxl \\ & Tuning & & ChatGPT \\ \hline \hline \end{tabular} \end{table} Table 3: **Comparison of different few-shot and zero-shot transfer methods, based on the resources they use. The top section requires parameter updates via fine-tuning (FT), and the bottom uses ICL with no updates. \(k\) = k-shot examples; \(N\) = full training data; ✓ = instruction language. The bottom half lists the models evaluated in this work. The blue-colored rows are instruction-tuned models.** demonstrations (neither in English nor in the target language), as in zero-shot transfer. Unlike in English, where abundant instructions and instance annotations are available, for many languages we often lack annotated instructions Wang et al. (2022). We use machine-translated instructions in BUFFET as the main baseline. ### Language Models A key aspect of language models is their pretraining strategies. In addition to conventional pretraining using unlabeled corpora Devlin et al. (2019); Brown et al. (2020), instruction-tuning has been actively studied; this approach trains an LLM on a massive number of tasks with instructions Muennighoff et al. (2022); Ouyang et al. (2022); Wei et al. (2022). In this work, we evaluate six diverse models pretrained with different strategies (Table 3). Models for fine-tuning.Due to the high costs of fine-tuning for every \(k\)-shot setting, we experiment with an efficient yet competitive mT5-base with 580 million parameters Xue et al. (2021). Models for in-context learning.We experiment with BLOOM-7B (7 billion parameters; Scao et al. (2022) and mT5-xxl (13 billion parameters; Xue et al. 2021). We also experiment with their instruction-tuned variants: BLOOMZ-7B and mT0-xxl Muennighoff et al. (2022), as well as the current state-of-the-art ChatGPT (gpt-3.5-turbo; Ouyang et al. 2022). Note that these models are trained on some of the datasets included in BUFFET. We do not exclude such overlapping datasets, but we indicate such seen tasks with * in the main result table.8 Footnote 8: It is unclear which datasets ChatGPT is trained on. ### Experiment Details Fine-tuning.In all settings, we fine-tune models on few-shot samples for 300 epochs for Target FT and 200 epochs for Eng.+Tgt. FT. When fine-tuning LMs on large-scale English datasets (for both Eng.+Tgt. FT and English FT), we train for three epochs. We use representative English datasets following Hu et al. (2020): SQuAD Rajpurkar et al. (2016) for QA, MNLI Williams et al. 
(2017) for NLI, PAWS Zhang et al. (2019) for paraphrase detection, XLSUM Hasan et al. (2021) for summarization, COPA Arun and Balakrishnan (2018) for XCOPA, Winograd for XWinograd, the Amazon Multilingual Review English set for sentiment analysis, and the TyDiQA-QG English set for question generation. In-context learning.We prompt LLMs with instructions and \(k\)-shot demonstrations available in BUFFET. Different models have different maximum context window sizes: mT0 only accepts up to 1024 tokens, while BLOOMZ and ChatGPT accept up to 2048 and 4096, respectively. We add training instances up to the maximum token length for each model and discard instances that do not fit the context window. We found that mT0 often performs well-given zero or smaller numbers of few-shot samples. We use 4-shots for mT0 English ICL and Target ICL by default. We use greedy decoding for predictions. For tasks with a fixed set of pre-specified answer candidates, we compute the probability of option tokens by iterating all options except for ChatGPT without access to token probabilities. Due to the high inference costs, we evaluate ChatGPT only on BUFFET-Light, ## 5 Results and Analysis ### Main Results Table 4 shows aggregated results of fine-tuned and in-context learning-based LMs on BUFFET. We show full experiment results on each task in the Appendix. Below, we summarize the key findings. LLMs with in-context learning often lag behind much smaller fine-tuned models.While in-context learning has shown remarkable performance in English, our comparison shows that few-shot cross-lingual transfer via in-context learning remains challenging; English ICL using BLOOM, BLOOMZ (7 billion) and mT0 (13 billion) often under-perform mt5-base (580 million) fine-tuned on English datasets (English FT or Eng.+Tgt. FT). However, when abundant English task data is not available, mT5-based fine-tuning methods (Target FT, or Eng.+Tgt. FT on XCOPA and XWinograd) often perform poorly and are outperformed by English ICL or Target ICL baselines. This implies that when lacking task-specific training data, prompting LLMs can be more effective. Instruction-tuning helps in zero-shot but may not generalize for few-shot settings.Table 10 demonstrates that the zero-shot performance of instruction-tuned models is significantly higher than the zero-shot performance of non-instruction-tuned models: On average, both mT0-xxl and BLOOMZ-7B Z-EICL, demonstrate significantly better performance compared to their non-instruction tuned counterparts, namely mT5-xxl and BLOOM-7B Z-EICL, with margins of 12.7 and 23.9 points in Avg. class, respectively. It is worth noting that while the performance improvements on seen tasks contribute to these gains (indicated by *), mT0-xxl Z-EICL exhibits substantial advancements on unfamiliar tasks. This further confirms the effectiveness of instruction-tuning in zero-shot transfer, as discussed in prior studies (Muennighoff et al., 2022; Wei et al., 2022; Mishra et al., 2022). However, our study also highlights a surprising performance deterioration when moving from zero-shot to few-shot settings for instruction-tuned models: across tasks, mT0 performs worse in few-shot settings than in zero-shot settings (English ICL v.s. Z EICL). BLOOMZ shows performance gains from few-shot demonstrations; BLOOMZ E ICL achieves 44.3, outperforming BLOOMZ Z EICL by 5 points in Avg. class score. Yet, it also exhibits large performance declines on the tasks that are used during their instruction-tuning (TyDiQA, PAWS-X). 
Our hypothesis is that such instruction-tuned models are optimized to execute a new task solely based on an instruction, with no prior demonstrations (Muennighoff et al., 2022), and may struggle to learn in context from few-shot demonstrations. We conduct controlled experiments in Section 5.2 for further analysis. Zero- or few-shot transfer remains challenging in under-represented languages.Figure 3 illustrates the performance of models on NER (WikiANN and MasakhaNER), NLI (XNLI, AmericansNLI), and QA (TyDiQA) tasks across different languages. The languages are sorted based on the token availability in the mC4 corpus,9 with high-resource languages positioned on the left side. Our results indicate that the zero- or few-shot \begin{table} \begin{tabular}{l|c c c c c c c c c c c} \hline \hline & \multicolumn{2}{c}{Output} & \multicolumn{2}{c}{Classification} & \multicolumn{2}{c}{Multiple Choice} & \multicolumn{2}{c}{Span} & Str. & \multicolumn{2}{c}{Generation} & \multicolumn{2}{c}{Avg.} \\ & Tasks & NLI & Sent. & PWX & XCPA & XWGD & TyDi & NER & QG & Summ. & class & gen \\ \hline Random & & 33.3 & 50.0 & 50.0 & 50.0 & 50.0 & – & – & – & – & – & \\ \hline Tgt. FT & mT5 & 35.0 & 67.2 & 47.7 & 44.1 & 48.8 & 5.2 & 33.4 & 3.2 & 2.5 & 40.7 & 2.9 \\ Eng. FT & mT5 & 49.9 & 89.8 & 77.5 & 0.0 & 0.0 & 66.8 & 39.0 & 3.8 & 6.2 & 55.5 & 5.0 \\ Eng.+Tot. & mT5 & **51.8** & **91.0** & **77.8** & 49.5 & 48.5 & **69.5** & **47.8** & 12.5 & **11.8** & **61.2** & **12.2** \\ \hline Eng. ICL & BLOOM & 32.1 & 81.7 & 42.2 & 50.2 & 51.0 & 54.7 & 24.2 & 9.3 & 3.4 & 45.0 & 6.4 \\ & mT5 & 35.7 & 50.0 & 42.2 & 50.4 & 47.5 & 0.2 & 0.0 & 0.0 & 0.4 & 31.7 & 0.2 \\ \hline & BLOOMZ & 31.5 & 86.3* & 48.5* & 50.4 & 54.2 & 65.8* & 25.5 & 13.5 & 8.3* & 47.5 & 10.9 \\ & mT0 & 36.2 & 72.1* & 60.6* & 50.5 & 60.3 & 73.6* & 7.9 & 16.1 & 3.4* & 46.3 & 9.7 \\ & ChatGPT\(\dagger\) & **54.5** & 91.1 & 68.6 & **76.7** & 73.3 & 68.1 & 45.4 & **21.2** & 5.4 & **64.6** & 13.3 \\ \hline Tör. ICL & BLOOM & 27.9 & 80.5 & 46.5 & 49.9 & 51.8 & 11.8 & 23.4 & 11.2 & 3.6 & 40.4 & 7.4 \\ & mT5 & 35.7 & 50.0 & 42.2 & 49.8 & 45.2 & 0.2 & 0.0 & 0.0 & 0.4 & 31.5 & 0.2 \\ & BLOOMZ & 32.0 & 61.7* & 52.5* & 49.7 & 55.5 & 63.1* & 23.4 & 9.1 & 8.0* & 43.4 & 8.5 \\ & mT0 & 36.2 & 72.1* & 60.6* & 50.5 & 60.3 & 73.6* & 7.9 & **16.1** & 3.4* & 46.3 & 9.7 \\ & ChatGPT\(\dagger\) & 48.2 & **91.5** & 68.2 & 74.3 & **73.4** & 68.0 & 44.8 & 21.1 & 11.4 & 62.7 & **16.3** \\ \hline Z-EICL & BLOOM & 33.3 & 37.2 & 42.3 & 50.0 & 47.1 & 4.3 & 0.0 & 14.0 & 6.3 & 29.2 & 10.1 \\ & mT5 & 35.1 & 49.8 & 42.2 & 50.7 & 55.5 & 2.2 & 0.0 & 0.1 & 4.8 & 32.5 & 0.6 \\ \hline & BLOOMZ & 33.5 & 51.6* & 57.8* & 51.8 & 51.0 & 83.2* & 11.2 & 9.5 & 4.3* & 41.9 & 6.9 \\ & mT0 & 48.5 & 90.0* & 90.6* & **63.8** & **61.0** & 80.1* & 0.0 & 10.2 & 12.0* & 56.4 & 11.1 \\ \hline \hline \end{tabular} \end{table} Table 4: **Overall experiment results in BUFFET**. Note that to enable comparison between ChatGPT (only tested on BUFFET-Light) and other methods, we present BUFFET-Light results, and the overall results on BUFFET are available in Table 10. The blue-colored rows are instruction-tuned models, and we added \({}^{*}\) symbols next to the scores for the tasks on which the models have been trained. “Random” shows random baseline performance. **Bold** fonts indicate the best results for each task, among the models that are not directly trained on the task. 
When ChatGPT achieves the best results, we also note the second-best number from the models that are not trained on the task, acknowledging the possibility that ChatGPT may have encountered a similar task during training. transferability of the model is often constrained in understudied languages. In NER and NLI tasks, a noticeable decrease in performance occurs from high-resource to low-resource languages. It is important to note that several languages included in MasakhaNER or AmericasNLI are not part of the pretraining process. Models such as mT5 English FT or ChatGPT English ICL exhibit strong performance in high-resource languages. However, their performance drops significantly in less-represented languages. For instance, in Aymara (aym), ChatGPT achieves only slightly higher performance than a random baseline, and is outperformed by mT5 Eng.+Tgt. FT by 13%. mT5 Eng.+Tgt. FT also significantly outperforms mT5 English FT in lower-resource languages, as indicated by the performance gap between the orange and blue lines in Figure 3. Notably, mT5 Eng.+Tgt. FT outperforms mT5 English FT by 30% in Hausa on MasakhaNER. This indicates that fine-tuning with only \(k\) instances in target languages can still greatly help in less-represented languages. We also observe performance drops in Finnish, Korean, and Russian for BLOOM and BLOOMZ in TyDiQA. Finnish, Korean, and Russian are excluded from BLOOM pretraining,10 to which we attribute these performance drops. Conversely, mT5 fine-tuning-based methods consistently display strong performance across languages. Interestingly, in Bengali, which is often considered less represented, BLOOMZ achieves performance comparable to mT5 fine-tuned models. We also observe the same trends in BLOOMZ. These results suggest that the pretraining setup may strongly affect downstream task performance even after instruction tuning.

Footnote 10: [https://huggingface.co/bigscience/bloom](https://huggingface.co/bigscience/bloom)

**ChatGPT has strong generation capabilities but requires careful instruction design.** As discussed, though ChatGPT significantly outperforms other LLMs with in-context learning, its performance often lags behind fine-tuning-based methods in some discriminative tasks, particularly in less-represented languages. ChatGPT, however, significantly outperforms fine-tuned models on tasks that require generation in the target language (e.g., question generation, QA), with the exception of summarization (XLSUM). On XLSUM, we found that ChatGPT often generates semantically correct summaries in English rather than in the language of the input article, resulting in low ROUGE-2 scores. We do not observe that phenomenon in other LLMs (e.g., BLOOMZ); we show some ChatGPT output examples in the Appendix Table 25. Though more prompt engineering can boost ChatGPT's performance in summarization (Huang et al., 2023), we use the same prompts throughout the evaluations for a fair comparison. We also observe that when instructions are given in the target language, ChatGPT often outputs a summary in that language, as shown in the improved XLSUM performance of ChatGPT Target ICL.

### Analysis

**Performance variance among different \(k\) shots.** Figure 4 shows model performance across the three different sets of \(k\)-shot demonstrations and reveals a significant performance disparity in many of the tasks and languages. We observe significant variance in fine-tuning-based transfer across different \(k\)-shot samples, confirming the findings of Zhao et al. (2021).
Importantly, we show that in-context learning is even _more sensitive_ to different demonstrations than few-shot fine-tuning. For instance, on Amazon Review, the standard deviation for BLOOM English ICL and mT5 Eng.+Tgt. fine-tuning is 2.2 and 0.2, respectively.

Figure 3: **Model performance across three tasks, NLI, NER, and QA, displayed for various languages. The languages are sorted based on token availability in mC4, with the left side representing high-resource languages. All methods show performance deteriorations in lower-resource languages (right side), with larger drops in English-ICL methods. Additional fine-tuning in target languages is more effective in less-represented languages.**

We also analyze whether a demonstration set \(k\) that achieves the best performance with one model also leads to the optimal performance for another model. Specifically, we compare the best \(k\)-shots for each task and language for BLOOM and BLOOMZ English ICL. We found that in 49.7% of the cases, their optimal \(k\)-shot demonstrations differ. These results emphasize the difficulty of comparing model performance in the absence of standardized \(k\)-shot samples. On the bright side, these results provide insights into potential approaches for identifying optimal demonstrations that can enhance few-shot ICL performance.

Figure 4: **Model performance across different \(k\)-shot demonstrations for QA (TyDiQA), NER (WikiANN), and sentiment analysis (IndicSentiment, AmazonReview). Each circle indicates performance given different \(k\)-shot demonstrations. There is a significant performance gap caused by the choice of demonstrations, which is often larger in ICL methods.**

Figure 5: **Demonstration scaling experiments on TyDiQA (Russian), TyDiQA-QG (Arabic), WikiANN (Vietnamese), and Amazon Review (Chinese) for four different models. The \(x\)-axis indicates the number of demonstrations \(k\). While fine-tuning and ICL with pretrained LMs often benefit from additional demonstrations, few-shot ICL with instruction-tuned models can result in performance deterioration.**

**The effects of varying the number of shots \(k\).** Figure 5 demonstrates the impact of increasing the number of few-shot samples for in-context learning and fine-tuning on four tasks: TyDiQA, TyDiQA-QG, WikiANN, and Amazon Review. Full results on the four tasks in a subset of the languages are available in Appendix D.3. Specifically, we vary the number of few-shot demonstrations, including 1, 4, and 8 (for the tasks with more than 8 shots), and assess the performance of BLOOM English ICL, BLOOMZ English ICL, mT0 English ICL and mT5 Eng.+Tgt. FT. Increasing the number of few-shot examples has a notable positive impact on fine-tuning (mT5 fine-tuning) across different tasks. Similarly, non-instruction-tuned BLOOM also benefits from the inclusion of few-shot samples on most of the tasks. However, for instruction-tuned models (mT0 and BLOOMZ), we observe a significant decline in performance when additional demonstrations are added, which aligns with the findings in Table 4. Specifically, on mT0, we observe that the zero-shot performance surpasses the few-shot performance on TyDiQA and Amazon Review. Surprisingly, even on previously unseen tasks such as TyDiQA-QG and WikiANN, the addition of more than four demonstrations leads to a significant decline in performance. It is worth noting that mT0 and BLOOMZ were exclusively trained with instructions and did not utilize demonstrations during training (Muennighoff et al., 2022).
We hypothesize that this training approach may cause the models to overfit the zero-shot instruction-based in-context learning scenario, thereby hindering their ability to effectively learn in-context information through few-shot demonstrations. Wei et al. (2022) also find that while few-shot demonstrations mitigate high variance of the zero-shot inference with instructions only, the optimal zero-shot performance with the best template often outperforms the best few-shot performance. Effects of model scaling on few-shot in-context cross-lingual transfer.Figure 6 shows BLOOM-560 million, 1 billion, and 7 billion performance on a subset of the tasks. The transfer method is English ICL. As the model scales, the overall performance on few-shot in-context learning significantly improves, as found in English Brown et al. (2020), indicating that models' cross-lingual few-shot transfer performance via in-context learning may improve as the model size increases. These findings are consistent with the results reported by Lin et al. (2021) on a set of classification tasks. ## 6 Conclusion and Discussion In this work, we introduce BUFFET, a few-shot cross-lingual transfer benchmark that encompasses a diverse range of discriminative and generative tasks across a variety of typologically distinct languages. Through our comprehensive evaluation, involving six different transfer methods and various LLMs, we offer valuable insights into the strengths and limitations of these transfer methods and LLMs. Our analysis reveals that while LLMs utilizing in-context learning excel in generation tasks, they are often surpassed by smaller fine-tuned models specifically trained for target tasks. Furthermore, our findings highlight significant performance variations dependent on different transfer setups (e.g., demonstrations). Moving forward, our findings suggest the following exciting opportunities for future research in the field of few-shot learning transfer across diverse languages: Improve multilingual instruction tuning.Although instruction tuning can be beneficial for both zero-shot transfer, certain models, such as mT0, may become overly specialized for zero-shot instruction-tuning scenarios, leading to lower average few-shot performance than the optimal zero-shot performance. Although these models demonstrate impressive zero-shot performance, even on tasks they haven't encountered before (such as XCOPA), they face challenges when it comes to tasks that involve generating outputs in less commonly used formats (like structured predictions). We believe that developing multilingual instruction-following models capable of effectively utilizing both instructions and demonstrations is crucial. Recent studies demonstrate that incorporating both instructions and demonstrations during instruction-tuning on English data can enhance the model's performance Chung et al. (2022), allowing it to learn within context Min et al. (2022). This type of training may potentially mitigate the issue of overfitting to specific formats. Hence, it is necessary to explore various instruction-tuning setups to further improve few-shot in-context learning, with a focus on _cross-lingual transfer_. Additionally, while high-quality human-translated instructions are effective, numerous instruction repositories are still dominated by English instructions. Therefore, community efforts to increase the availability of multilingual instructions may assist in the development of more generalizable multilingual large-language models. 
Overcome data scarcity using LLMs.Our research reveals that smaller task-specific fine-tuned models, with intermediate training in English, can still outperform ChatGPT on discriminative tasks that require strict output formats. Conversely, ChatGPT outperforms fine-tuned models on tasks that necessitate more open-ended generations, such as question generation. In recent studies, InstructGPT Ouyang et al. (2022) has exhibited the ability to generate high-quality generations in English, even outperforming humans on some tasks Goyal et al. (2022). This impressive capacity for flexible generations has prompted active investiga Figure 6: **Model scaling experimental results. We conduct experiments on four sub-tasks and use three BLOOM models, BLOOM-560M, 1B, and 7B.** tions into generating training instances from such LLMs, which have predominantly focused on English Wang et al. (2022); Honovich et al. (2022). Some preliminary attempts have been made to explore task-specific data generation in certain target tasks, such as question answering Agrawal et al. (2022). However, there remains limited exploration on how to generate diverse task instructions and outputs for a variety of typologically diverse languages. We believe that using LLMs to generate data offers a promising solution to obtaining more annotated data for under-represented languages. Understand transfer dynamics in cross-lingual in-context learning.The impact of various instructions and demonstrations has been extensively examined in the context of English in-context learning, highlighting critical concerns such as sensitivity to prompt order Lu et al. (2022) and/or motivating methods for identifying optimal demonstrations Su et al. (2022). This research has found that demonstrations or instructions that are optimal for one model may not necessarily result in the best performance for another model. We anticipate that our benchmark will inspire and assist in further research into the relationship between language and instruction/demonstration for cross-lingual in-context learning. Fairness beyond languages: underrepresented variants, dialects, and cross-cultural NLP.Many of the diverse world languages are often excluded in widely used cross-lingual evaluation benchmarks, where recent papers show strong cross-lingual transfer capabilities. However, through our comprehensive analysis, we have discovered that even the most advanced LLMs currently available still face difficulties when dealing with less-represented languages. The most competitive instruction-tuned models, ChatGPT or mT0, show significant performance declines when it comes to indigenous languages, reaching a level akin to a random baseline. We advocate for conducting more studies on diverse local languages, including under-represented languages and their dialects, as emphasized in previous works such as Aji et al. (2022); Kakwani et al. (2020). We note that datasets in such languages are often translated from English Yu et al. (2022), which may introduce translation biases Artetxe et al. (2020) and fail to capture the linguistic nuances and interests of native speakers Clark et al. (2020); Asai et al. (2021). To address these challenges, it is important that further work be done to develop cross-cultural Natural Language Processing Hershcovich et al. (2022). Expand evaluations to complex tasks.Most recent research on multilingual in-context learning predominantly focuses on discriminative tasks Muennighoff et al. (2022); Ahuja et al. 
(2023) or translation tasks Lin et al. (2021). Further exploration can expand these evaluations to more diverse and complex tasks, such as MTOP Li et al. (2021) or MGSM Shi et al. (2023), or knowledge-intensive tasks Asai et al. (2021), as new multilingual benchmarks are developed.

## Limitations

As the first step toward standardized evaluation for few-shot cross-lingual transfer, BUFFET focuses on popular discriminative tasks and some generative tasks. It does not include many datasets that require complex reasoning, as noted above. Since our main focus is to benchmark different LLMs and learning methods in a comparable format, we do not explore sophisticated prompting methods, which can further boost performance. We anticipate that BUFFET will encourage the LLM community to explore new methods to further improve in-context learning beyond English. We use instructions translated by NLLB Costa-jussà et al. (2022) for Target ICL; such machine-translated instructions are prone to errors, especially in less-represented languages, which can affect the final performance.

## Ethics Statement

While there has been significant research on in-context learning with LLMs, most of the focus has been on the English language. This raises questions about the applicability of findings from English few-shot NLP to few-shot cross-lingual transfer scenarios. To address this gap, BUFFET aims to provide a comprehensive and less biased evaluation framework. However, it is important to note that our benchmark dataset currently covers only 57 out of the approximately 6,000 world languages. Moreover, we do not specifically focus on finer-grained language varieties and dialects that are commonly spoken by underrepresented populations. In light of these limitations, we encourage future research to explore the effectiveness and limitations of widely-used transfer methods in a more diverse range of languages. This will help us gain a deeper understanding of the generalizability of transfer learning techniques across different linguistic contexts.

## Acknowledgements

This research was supported by NSF IIS-2044660, ONR N00014-18-1-2826, ONR MURI N00014-18-1-2670, DARPA MCS program through NIWC Pacific (N66001-19-2-4031), and an Allen Distinguished Award. AA is supported by the IBM fellowship. We are grateful to Orevaoghene Ahia for her help with ChatGPT evaluations. We thank our volunteer translators, Joongwon Kim, Usharani Injeti, and Sven Dorkenwald, for their help with translating instructions into different languages. Finally, we extend our appreciation to Jonathan H. Clark, Orevaoghene Ahia, Sandy Kaplan, and UW NLP researchers for their feedback on this draft.
2305.05782
The LOFAR Two-metre Sky Survey Deep Fields Data Release 1: V. Survey description, source classifications and host galaxy properties
Source classifications, stellar masses and star formation rates are presented for 80,000 radio sources from the first data release of the Low Frequency Array Two-metre Sky Survey (LoTSS) Deep Fields, which represents the widest deep radio survey ever undertaken. Using deep multi-wavelength data spanning from the ultraviolet to the far-infrared, spectral energy distribution (SED) fitting is carried out for all of the LoTSS-Deep host galaxies using four different SED codes, two of which include modelling of the contributions from an active galactic nucleus (AGN). Comparing the results of the four codes, galaxies that host a radiative AGN are identified, and an optimised consensus estimate of the stellar mass and star-formation rate for each galaxy is derived. Those galaxies with an excess of radio emission over that expected from star formation are then identified, and the LoTSS-Deep sources are divided into four classes: star-forming galaxies, radio-quiet AGN, and radio-loud high-excitation and low-excitation AGN. Ninety-five per cent of the sources can be reliably classified, of which more than two-thirds are star-forming galaxies, ranging from normal galaxies in the nearby Universe to highly-starbursting systems at z>4. Star-forming galaxies become the dominant population below 150-MHz flux densities of about 1 mJy, accounting for 90 per cent of sources at a 150-MHz flux density of 100 microJy. Radio-quiet AGN comprise around 10 per cent of the overall population. Results are compared against the predictions of the SKADS and T-RECS radio sky simulations, and improvements to the simulations are suggested.
P. N. Best, R. Kondapally, W. L. Williams, R. K. Cochrane, K. J. Duncan, C. L. Hale, P. Haskell, K. Malek, I. McCheyne, D. J. B. Smith, L. Wang, A. Botteon, M. Bonato, M. Bondi, G. Calistro Rivera, F. Gao, G. Gurkan, M. J. Hardcastle, M. J. Jarvis, B. Mingo, H. Miraghaei, L. K. Morabito, D. Nisbet, I. Prandoni, H. J. A. Rottgering, J. Sabater, T. Shimwell, C. Tasse, R. van Weeren
2023-05-09T22:02:22Z
http://arxiv.org/abs/2305.05782v1
# The LOFAR Two-metre Sky Survey: Deep Fields Data Release 1. ###### Abstract Source classifications, stellar masses and star formation rates are presented for \(\approx\)80,000 radio sources from the first data release of the Low Frequency Array Two-metre Sky Survey (LoTSS) Deep Fields, which represents the widest deep radio survey ever undertaken. Using deep multi-wavelength data spanning from the ultraviolet to the far-infrared, spectral energy distribution (SED) fitting is carried out for all of the LoTSS-Deep host galaxies using four different SED codes, two of which include modelling of the contributions from an active galactic nucleus (AGN). Comparing the results of the four codes, galaxies that host a radiative AGN are identified, and an optimised consensus estimate of the stellar mass and star-formation rate for each galaxy is derived. Those galaxies with an excess of radio emission over that expected from star formation are then identified, and the LoTSS-Deep sources are divided into four classes: star-forming galaxies, radio-quiet AGN, and radio-loud high-excitation and low-excitation AGN. Ninety-five per cent of the sources can be reliably classified, of which more than two-thirds are star-forming galaxies, ranging from normal galaxies in the nearby Universe to highly-starbursting systems at \(z>4\). Star-forming galaxies become the dominant population below 150-MHz flux densities of \(\approx\)1 mJy, accounting for 90 per cent of sources at \(S_{\rm 150MHz}\sim 100\mu\)Jy. Radio-quiet AGN comprise \(\approx\)10 per cent of the overall population. Results are compared against the predictions of the SKADS and T-RECS radio sky simulations, and improvements to the simulations are suggested. keywords: radio continuum: galaxies - galaxies: active - galaxies: star formation ## 1 Introduction Understanding the formation and evolution of galaxies requires a detailed knowledge of the baryonic processes that both drive and quench the process of star formation within galaxies across cosmic time. In this regard, the faint radio sky provides one of the most important windows on the Universe, as it offers a direct view onto three critical (and overdapping) populations of objects: star-forming galaxies, 'radio-quiet' active galactic nuclei (AGN), and low luminosity radio galaxies (e.g. Padovani, 2016). Arguably the most important observational test for any model of galaxy formation is measurements of the evolution of the cosmic star-formation rate density across cosmic time, and the distribution of that star formation amongst the galaxy population at each redshift, as a function of stellar mass, galaxy morphology, environment, and other properties. These crucial measurements require large, unbiased samples of star-forming galaxies over a wide range of redshifts. Much progress has been made in understanding the star-forming galaxy population, at least out to cosmic noon at \(z\sim 2\), using a variety of star-formation indicators (e.g. Madau & Dickinson, 2014). The primary uncertainty is the effect of dust: by cosmic noon, around 85 per cent of the total star-formation rate (SFR) density of the Universe is dust-enshrouded (e.g. Dunlop et al., 2017), and a sub-millimetre (sub-mm) or far-infrared (far-IR) view of the Universe paints a very different picture of galaxy properties to that of a population selected at optical (rest-frame ultraviolet) wavelengths (e.g. Cochrane et al., 2021). 
Current far-IR surveys are limited by sensitivity to the more extreme systems, where contamination of the far-IR light by AGN emission is also a concern (e.g. Symeonidis & Page, 2021). Radio emission provides a tool to observe the activity of galaxies in a manner that is independent of dust. For sources without AGN, the low-frequency radio emission arises primarily from recent supernova explosions of massive (young) stars (see reviews by Condon, 1992; Kennicutt, 1998), and thus directly traces the current star-formation rate (unless sufficiently low radio frequencies are reached such that free-free absorption becomes important; e.g. Schober et al., 2017). New generation radio interferometers offer sufficient sensitivity and field-of-view to survey large samples of star-forming galaxies out to high redshifts. Crucially, they can also provide sufficient angular resolution that deep surveys are not generally affected by the source confusion that limits the capabilities of surveys with sub-mm and far-IR telescopes such as the _Herschel_ Space Observatory, for which the vast majority of sources in deep surveys are blends (e.g. Oliver et al., 2012; Scudder et al., 2016). Star formation within massive galaxies is widely believed to be regulated in some manner by AGN, due to the large outflows of energy associated with the growth of supermassive black holes. AGN activity occurs in two fundamental modes (e.g. see reviews by Heckman & Best, 2014; Hardcastle & Croston, 2020). At high accretion rates, accretion of material on to a black hole is understood to occur through a'standard' geometrically-thin, optically-thick accretion disk (Shakura & Sunyaev, 1973), in which around 10 per cent of the rest-mass energy of the accreting material is emitted in the form of radiation ('radiative' or 'quasar-like' AGN). These AGN can drive outflowing winds through thermal or radiation pressure (e.g. Fabian, 2012, and references therein), which may have a substantial effect on the evolution of the host galaxy. Radiatively-efficient AGN sometimes possess powerful twin radio jets ('radio-loud' quasars or their edge-on counterparts, the 'high-excitation radio galaxies'; HERGs), and many recent works also suggest that even those that do not ('radio-quiet' AGN) frequently (or maybe even always) possess weak radio jets (Jarvis et al., 2019; Gurkan et al., 2019; Macfarlane et al., 2021; Morabito et al., 2022, and references therein). These AGN are detectable in deep radio surveys, either due to the weak radio jets or due to the star formation that can accompany the AGN activity. At lower accretion rates, typically below about 1 per cent of the Eddington accretion rate, the nature of the accretion flow on to a supermassive black hole is believed to change: the accretion flow is thought to become geometrically thick and radiatively inefficient (Narayan & Yi, 1994, 1995). A characteristic feature of these advection-dominated or radiatively-inefficient accretion flows is that most of the energy that they release is in the form of two-sided radio jets ('jet-mode' AGN; also referred to as 'low-excitation radio galaxies'). These jet-mode AGN dominate the radio sky at intermediate flux densities (above a few mJy), and the radio waveband is by far the most efficient means of identifying these sources. Jet-mode AGN have been very well-studied in the nearby Universe (e.g. 
Best & Heckman, 2012), where it is now widely accepted that they play a critical role in the evolution of massive galaxies and clusters, providing an energy input that counter-balances the radiative cooling losses of the surrounding hot gas and thus preventing that gas from cooling and forming stars (see reviews by McNamara & Nulsen, 2007; Fabian, 2012; Kormendy & Ho, 2013; Heckman & Best, 2014; Hardcastle & Croston, 2020, and references therein). Deeper radio surveys, probing the faint radio sky, enable these low-luminosity AGN to be detected and studied to higher redshifts (Best et al., 2014; Pracy et al., 2016; Williams et al., 2018; Whittam et al., 2022), and hence their role in the evolution of massive galaxies to be determined across cosmic time. Deep radio surveys can therefore offer a unique insight into many aspects of the galaxy and AGN population. However, to extract the maximum science from deep radio surveys, it is essential that they are carried out in regions of the sky which are extremely well-studied at other wavelengths across the electromagnetic spectrum. The ancillary data are required to identify the radio source host galaxies, to estimate their redshifts, to classify the nature of the radio emission (star formation vs radiatively-efficient AGN vs jet-mode AGN) and to determine the physical properties of the host galaxies (stellar mass, star-formation rate, environment, etc). Until recently, the state-of-the-art in wide-area deep radio surveys was the VLA-COSMOS 3 GHz survey (Smolcic et al., 2017), which used the Very Large Array (VLA) to cover 2 deg\({}^{2}\) of the Cosmic Evolution Survey (COSMOS) field, arguably the best-studied degree-scale extragalactic field in the sky. Smolcic et al. (2017) investigated the multi-wavelength counterparts of the \(\approx\)10,000 radio sources detected, and provided classifications, which then allowed several further investigations of the radio-AGN and star-forming populations (e.g. Smolcic et al., 2017; Novak et al., 2017; Delvecchio et al., 2017; Delbaize et al., 2017). Nevertheless, even the VLA-COSMOS 3 GHz survey does not have sufficient sky area to cover all cosmic environments, and may therefore suffer from cosmic variance effects, as well as having limited source statistics at the highest redshifts. The on-going MeerKAT International GigaHertz Tiered Extragalactic Exploration (MIGHTEE) 1.4 GHz survey aims to extend sky coverage at this depth to 20 deg\({}^{2}\); Heywood et al. (2022) provide an early release, with Whittam et al. (2022) deriving source classifications for 88 per cent of the \(\approx 5,000\) sources with host galaxy identifications over 0.8 deg\({}^{2}\) in the COSMOS field. The Low Frequency Array (LOFAR; van Haarlem et al., 2013) Two-metre Sky Survey (LoTSS) Deep Fields have a similar goal at lower frequency. The first data release (hereafter LoTSS-Deep DR1) was made public in April 2021: the radio data reach rms sensitivity levels \(\approx 4\) times deeper than the wider all-northern-sky LoTSS survey (Shimwell et al., 2017, 2019, 2022), corresponding to approximately the same effective depth as the VLA-COSMOS 3 GHz survey (for a source with typical radio spectral index, \(\alpha\approx 0.7\), where \(S_{\nu}\propto v^{-\alpha}\)) but over an order of magnitude larger sky area (Tasse et al., 2021; Sabater et al., 2021, hereafter Papers I and II respectively). 
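As a concrete illustration of the spectral-index scaling just mentioned (and only as an illustration: the sketch below simply assumes the \(S_{\nu}\propto\nu^{-\alpha}\) power law with \(\alpha=0.7\) quoted above, together with an indicative 150-MHz rms of 20\(\mu\)Jy beam\({}^{-1}\) close to the central LoTSS-Deep DR1 depth), the frequency conversion used in such depth comparisons amounts to a one-line calculation:

```python
# Sketch of the spectral-index scaling S_nu ∝ nu^(-alpha) used when quoting a
# survey depth at a different frequency; alpha = 0.7 is the typical value
# assumed in the text, and the 20 microJy/beam input depth is only indicative.

def scale_flux_density(s_ref, nu_ref_mhz, nu_target_mhz, alpha=0.7):
    """Scale a flux density (or rms depth) from nu_ref to nu_target."""
    return s_ref * (nu_target_mhz / nu_ref_mhz) ** (-alpha)

# ~20 microJy/beam at 150 MHz corresponds to ~4 microJy/beam at 1.4 GHz,
# i.e. comparable to the effective depth of the VLA-COSMOS 3 GHz survey.
print(round(scale_flux_density(20.0, 150.0, 1400.0), 1))  # 4.2
```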
An extensive optical and near-infrared cross-matching process has identified and provided detailed photometry for over 97 per cent of the \(\approx\)80,000 radio sources detected over the central regions of the target fields where the best ancillary data are available (a combined area of 25 deg\({}^{2}\); Kondapally et al., 2021, Paper III). These data have been used to provide high-quality photometric redshifts (Duncan et al., 2021, Paper IV). In this paper, the 5th of the series, these data are combined with far-IR data to carry out detailed spectral energy distribution (SED) fits to the multi-wavelength photometry from ultraviolet (UV) to far-IR wavelengths, using several different SED fitting codes. Using the results of this analysis, the radio sources are classified into their different types, and key physical parameters of the host galaxies, such as their stellar masses and star-formation rates, are determined. The layout of the paper is as follows. In Sec. 2 the LoTSS Deep Fields survey is described: this section outlines the choice of target fields, and places the first data release in to the context of the eventual full scope of the survey. Sec. 3 then describes the data that will be used in the paper and outlines the application of the SED fitting algorithms. Sec. 4 describes how the results are used to identify the (radiative-mode) AGN within the sample. The results of the different SED fitting algorithms are compared in Sec. 5, and used to define consensus measurements for the stellar mass and star-formation rate of each host galaxy. Combining this information with the radio data, Sec. 6 then describes the identification of radio-excess AGN. Sec. 7 summarises the final classifications of the objects in the sample, and investigates the dependence of these on radio flux density, luminosity, stellar mass and redshift. In Sec. 8 the results are compared against the predictions of the most widely-used radio sky simulations, and suggestions made for improvements to those simulations. Finally, conclusions are drawn in Sec. 9. The classifications derived are released in electronic form and are used for detailed science analysis in several further papers (Smith et al., 2021; Bonato et al., 2021; Kondapally et al., 2022; McCheyne et al., 2022; Mingo et al., 2022; Cochrane et al., 2023, and others). Throughout the paper, cosmological parameters are taken to be \(\Omega_{m}=0.3\), \(\Omega_{\Lambda}=0.7\) and \(H_{0}=70\) km s\({}^{-1}\) Mpc\({}^{-1}\), and the Chabrier (2003) initial mass function is adopted. ## 2 The LoTSS Deep Fields ### LoFAR observations of the LoTSS Deep Fields The International LOFAR Telescope (van Haarlem et al., 2013) is a remarkably powerful instrument for carrying out deep and wide radio surveys of the extragalactic sky, owing to its high sensitivity, high angular resolution (6 arcsec at 150 MHz when using only Dutch baselines, improving to 0.3 arcsec with the international stations included), and in particular its wide field-of-view. The primary beam full-width at half-maximum (FWHM) of the Dutch LOFAR stations is 3.8 degrees at 150 MHz, giving a field-of-view of more than 10 deg\({}^{2}\) in a single pointing. International stations have a larger collecting area and a correspondingly smaller beam: 2.5 deg FWHM; 4.8 deg\({}^{2}\) field-of-view. 
The LoTSS survey (Shimwell et al., 2017, 2019, 2022) is exploiting LOFAR's capabilities by observing the entire northern sky, with a target rms depth of below 100\(\mu\)Jy beam\({}^{-1}\) at favourable declinations (the non-steerable nature of the LOFAR antennas means that sensitivity decreases at lower elevations). Nevertheless, LoTSS only scratches the surface of the depth that radio surveys with LOFAR are capable of reaching. LoTSS provides an excellent census of the radio-loud AGN population which dominates the bright and intermediate radio sky, but samples only the brighter end of the radio-quiet AGN and star-forming galaxy populations which become dominant as the LoTSS flux density limit is approached. The LoTSS Deep Fields provide a complementary deeper survey, aiming to reach a noise level of 10-15 \(\mu\)Jy beam\({}^{-1}\) over a sky area of at least 30 deg\({}^{2}\). LoTSS-Deep is designed to have the sensitivity to detect Milky-Way-like galaxies out to \(z>1\), and galaxies with star-formation rates of 100\(M_{\odot}\) yr\({}^{-1}\) to beyond \(z=5\)(e.g. Smith et al., 2016), as well as being able to detect typical radio-quiet quasars right out to redshift 6 (Gloudemans et al., 2021). The sky area makes it possible to: (i) sample the full range of environments at high redshifts - for example, it is expected to include 10 rich proto-clusters at \(z>2\); (ii) include statistically meaningful samples of rarer objects (such as \(z>5\) starbursts); (iii) build large enough samples of AGN and star-forming galaxies (over 100,000 of each expected to be detected) to allow simultaneous division by multiple key properties, such as luminosity, redshift, stellar mass and environment. LoTSS-Deep is being achieved through repeated 8-hr LOFAR observations of the regions of the northern sky with the highest quality degree-scale multi-wavelength data. The four target fields are the European Large Area ISO Survey Northern Field 1 (ELAIS-N1; Oliver et al., 2000), the Bootes field (Jannuzi and Dey, 1999), the Lockman Hole (Lockman et al., 1986) and the North Ecliptic Pole (NEP); these are described in more detail in Section 2.3. Table 1 outlines the anticipated final depths of each field based on awarded observing time. Scaling by depth and area from radio source counts in shallower LoTSS-Deep observations, the final LoTSS Deep Fields are expected to detect more than 250,000 radio sources within the central 35 deg\({}^{2}\), overlapping the best multi-wavelength data. Figure 1 compares the sensitivity, field-of-view, and angular resolution of the LoTSS Deep Fields to other completed and on-going radio surveys. The final LoTSS Deep Fields dataset will be unrivalled in its combination of depth and area. The inclusion of the international stations will also provide an angular resolution which is unmatched by any competitor survey: indeed, at low frequencies, the LoTSS Deep Fields with international baselines will remain unique even in the era of the Square Kilometre Array (SKA). In order to account for the smaller primary beam of the international stations, from LOFAR Observing Cycle 14 onwards the pointing positions for the LoTSS-Deep observations of the Lockman Hole, Bootes and NEP fields have been dithered around a small mosaic. 
The mosaics have been designed to ensure good coverage of the sky area with the best-quality multi-wavelength data, within the primary beam of the international stations, while keeping offsets small enough so that there is negligible loss of sensitivity over this region when imaging with only Dutch stations. ### LoTSS-Deep DR1 This paper considers the radio source catalogues from the first LoTSS Deep Fields data release. LoTSS-Deep DR1 released the reduced LOFAR images and catalogues constructed from data taken before October 2018 (Paper I; Paper II), along with the optical/IR catalogues and host galaxy identifications (Paper III) and photometric redshifts (Paper IV). These LoTSS-Deep DR1 LOFAR observations focused on the ELAIS-N1, Bootes and Lockman Hole fields, due to the earlier availability of the multi-wavelength data in those fields. The LoTSS-Deep DR1 LOFAR images included only the data from the Dutch LOFAR stations, not the international stations, due to the additional complications associated with calibrating the long baselines and the associated computing requirements (see e.g. Morabito et al., 2022; Sweijen et al., 2022, for a description of recent advances towards a pipeline for international stations). The data allow an angular resolution of 6 arcsec to be achieved: higher angular resolution images will be produced in later data releases. As shown in Table 1, the images in LaTSS-Deep DR1 already reach an rms noise level below 20\(\mu\)Jy beam\({}^{-1}\) at 150 MHz at the centre of the deepest field (ELAIS-N1), away from bright sources. Sensitivity decreases with primary beam attenuation towards the outer regions of the field; dynamic range effects are also present around bright sources but only a few percent of the image suffers from significantly increased noise levels due to these calibration issues (Paper I; Paper II). Over 170,000 sources are catalogued, with peak flux densities above 5 times the local rms noise, across the full radio area of the three fields; as with all radio catalogues, incompleteness effects come in as the flux limit is approached (see Kondapally et al., 2022; Cochrane et al., 2023, for an analysis of the completeness for AGN and SFGs, respectively). More than 80,000 sources are catalogued in the central regions with the best multi-wavelength data (Paper III). As can be seen in Figure 1, LaTSS-Deep DR1 broadly matches the depth of the VLA-COSMOS 3GHz survey but over an order of magnitude larger sky area; similarly it matches the recent MeerKAT MIGHTEE Early Release (Heywood et al., 2022) in rms depth (the latter being limited by source confusion owing to its lower angular resolution), but again over larger area. \begin{table} \begin{tabular}{c c c c c c c c c} \hline Field & Coordinates & Area of best & Obs. 
time & central rms & N\({}^{0}\) sources & N\({}^{0}\) sources & Final awarded & Target \\ & (J2000) & ancillary data & in DR1 & noise in DR1 & full DR1 & best ancillary & integration & rms depth \\ & & [deg\({}^{2}\)] & [hrs] & [\(\mu\)Jy/beam] & area & data area & time [hrs] & [\(\mu\)Jy/beam] \\ \hline ELAIS-N1 & 16 11 00 & +54 57 00 & 6.74 & 164 & 19 & 84,862 & 31,610 & 500 & 11 \\ Bootes & 14 32 00 & +34 30 00 & 8.63 & 80 & 32 & 36,767 & 31,162 & 312 & 16 \\ Lockman Hole & 10 47 00 & +58 05 00 & 10.28 & 112 & 22 & 50,112 & 19,179 & 352 & 13 \\ NEP & 17 58 00 & +66 00 00 & 10.0 & – & – & – & – & 400 & 13 \\ \hline \end{tabular} \end{table} Table 1: Status of observations and imaging in LOFAR Deep Fields, including the data released in the LaTSS Deep Fields 1st data release (LoTSS-Deep DR1). The area of best ancillary data is defined in Paper III. Quoted rms noise levels are those at the centre of the field. The marginally lower sensitivity in Bootes compared to the other fields is due to its lower declination, and hence lower average elevation during the observations. The ‘number of sources in DR1 full area’ quoted is over the full catalogues presented in Paper I and Paper II, out to the 30 per cent power point of the primary beam (i.e. over – 25 deg\({}^{2}\) in each field). Figure 1: The survey depth, area and angular resolution of the LoTSS Deep Fields compared to other existing and on-going radio surveys. All survey depths are converted to a 1.4 GHz equivalent rms depth using a spectral index of \(\alpha=0.7\). The black points show published surveys, and the blue points show on-going surveys. The LoTSS Deep Fields are highlighted in red. The size of each symbol indicates the angular resolution of the survey, with the symbol area proportional to the beam FWHM. For the LoTSS Deep Fields final release, the larger symbol indicates the result of including just the Dutch baselines, while the smaller symbol shows what should be achievable after including the international stations (improved angular resolution, additional depth due to the extra collecting area, but smaller areal coverage due to the smaller primary beam of the international stations). Descriptions of the surveys included on the plot (listed from high to low effective rms depth) can be found in the following references: (ELAM (Wayth et al., 2015); WENSS (Rengelink et al., 1997); TGSS (Ittena et al., 2017); SUMSS (Mauch et al., 2003); NVSS (Condon et al., 1998); GLEAM-X (Hurley-Walker et al., 2022); RACS (Hale et al., 2021); FIRST (Becker et al., 1995); XXI-GMRT (Smolčić et al., 2018); VLASS (Lacy et al., 2020); Stripe82 (Hodge et al., 2011); LoTSS-Wide (Shimwell et al., 2019); VLA-COSMOS 1.4 GHz (Schinnerer et al., 2007); EMU (Norris et al., 2011); MIGHTEE (including Early Science – ES; Heywood et al., 2022) SSA-13 (Fomalont et al., 2006); VLA-COSMOS 3 GHz (Smolčić et al., 2017a); VLA-SWIRE (Owen & Morrison, 2008); GOODS-N (Owen, 2018); VLA Frontier (Heywood et al., 2021). ### Multi-wavelength data in the LoTSS Deep Fields ELAIS-N1, Bootes, Lockman Hole and NEP are the premier large-area northern extragalactic fields, with vast amounts of telescope time across the electromagnetic spectrum invested in observing these fields over the last two decades. 
Imaging at optical and near-IR wavelengths reaches 3-4 magnitudes deeper than typical all-sky surveys, allowing host galaxy identifications for over 97 per cent of the hosts of the radio sources in LoTSS-Deep DR1 (Paper III) compared to just 73 per cent using all-sky surveys in the LoTSS DR1 release (Williams et al., 2019). Other datasets, such as deep _Herschel_ and _Spitzer_ data in these fields, are irreplaceable, and add greatly to the scientific potential: _Herschel_ data are a key tool to constrain obscured star-formation rates, while the mid-IR wavelengths covered by _Spitzer_ contain the diagnostic emission from the AGN torus. This range of complementary data makes these excellent fields to study not only the high-redshift AGN and luminous star-forming galaxies detected by LOFAR, but also to understand how this activity sits within the wider cosmological context of the underlying galaxy population. As well as their combined benefit of sky area and sample size, each of the four LoTSS Deep Fields possesses unique characteristics or datasets which further enhance its specific scientific potential, whilst complementing each other. The specific data available in each field are summarised here; a more complete description of the available data in the ELAIS-N1, Lockman Hole and Bootes fields (but not NEP, as it was not included in the LoTSS-Deep DR1) can be found in Paper III, which also provides the coverage maps of each survey and the resulting catalogues. #### 2.3.1 Elais-N1 ELAIS-N1 has an ideal declination (+55 deg) for LOFAR observations, and is also a target field for LOFAR's Epoch of Reionisation studies (Jelic et al., 2014), providing a combined motivation for the observations. ELAIS-N1 benefits from some of the deepest wide-field optical, near-IR and mid-IR imaging. It is one of the Medium Deep Fields from the Panoramic Survey Telescope and Rapid Response Sysytem (Pan-STARRS-1) survey (Chambers et al., 2016), covering a 7 deg\({}^{2}\) field-of-view in the optical \(g\),\(r\),\(i\),\(z\),\(y\) bands. It is a Hyper-Suprime-Cam Subaru Strategic Program (HSC-SSP; Aihara et al., 2018) optical deep field, with deep observations in \(g\),\(r\),\(i\),\(z\),\(y\) and the narrow-band NB921 over 7.7 deg\({}^{2}\). \(u\)-band data over this full field are available from the _Spitzer_ Adaptation of the Red-Sequence Cluster Survey (SpARCS; Muzzin et al., 2009), and UV data were taken by the Galaxy Evolution Explorer (_GALEX_) space telescope as part of the Deep Imaging Survey (Martin et al., 2005). ELAIS-N1 also possesses deep near-IR imaging in \(J\) and \(K\) bands from the United Kingdom Infrared Deep Sky Survey (UKIDSS; Lawrence et al., 2007) Deep Extragalactic Survey (DXS), covering nearly 9 deg\({}^{2}\). Mid-infrared data were acquired by _Spitzer_ through both the _Spitzer_ Wide-area Infra-Red Extragalactic survey (SWIRE; Lonsdale et al., 2003) in IRAC channels 1 to 4 (3.6-8.0\(\mu\)m) over \(\sim 10\) deg\({}^{2}\) and the _Spitzer_ Extragalactic Representative Volume Survey (SERVS; Mauduit et al., 2012), which is around a magnitude deeper at 3.6 and 4.5\(\mu\)m in the central 2.4 deg\({}^{2}\). Longer wavelength data in the field have been taken using both _Spitzer_ (24\(\mu\)m data with the Multi-band Imaging Photometer for Spitzer; MIPS) and the _Herschel_ Space Observatory, the latter as part of the _Herschel_ Multi-tiered Extragalactic Survey (HerMES; Oliver et al., 2012), one of the deepest large-area _Herschel_ surveys. 
HerMES observed ELAIS-N1 at 100\(\mu\)m, 160\(\mu\)m, 250\(\mu\)m, 350\(\mu\)m and 500\(\mu\)m. #### 2.3.2 Bootes The Bootes field is the target of some of the deepest wide-field optical imaging, in the \(B_{W}\), \(R\) and \(I\) filters from the NOAO Deep Wide Field Survey (Jannuzi and Dey, 1999), in the \(z\)-band from the zBootes survey (Cool, 2007), and in the \(U\) and \(Y\) bands from the Large Binocular Telescope (Bian et al., 2013), all covering around 10 deg\({}^{2}\). The same sky region has been observed in the near-IR \(J\), \(H\) and \(K\) bands (Gonzalez et al., 2010) and using _Spitzer_ from 3.6 to 8.0\(\mu\)m as part of the _Spitzer_ Deep Wide Field Survey (SDWFS; Ashby et al., 2009). Catalogues of galaxies in the Bootes field were generated by Brown et al. (2007, 2008). Bootes has also been observed by _Herschel_ as part of HerMES, and by _Spitzer_-MIPS, adding far-infrared measurements to the dataset. In addition to this, Bootes benefits from excellent wide-field X-ray coverage, including a deep Msec _Chandra_ survey over the full 9.3 deg\({}^{2}\) field (Masini et al., 2020). The comparison between deep radio and deep X-ray observations opens many new scientific avenues, such as investigating the relationship between jet power and accretion rate in AGN, and determining the black hole accretion rates of star-forming galaxies to investigate the co-evolution of galaxies and black holes. Bootes also possesses a vastly higher number of spectroscopic redshifts than the other northern deep fields, largely due to the AGN and Galaxy Evolution Survey (AGES; Kochanek et al., 2012): these are also very valuable for training photometric redshifts for the radio source population (e.g. Paper IV). #### 2.3.3 Lockman Hole Located (like ELAIS-N1) at an ideal declination for LOFAR (+58 deg), the Lockman Hole is one of the regions of sky with the lowest Galactic HI column density (Lockman et al., 1986), making it ideal for extragalactic studies, especially at IR wavelengths due to its low IR background. For this reason, the Lockman Hole has been the target of some of the widest deep coverage in the optical to mid-IR bands. Optical data in the Lockman Hole has been taken by SpARCS in \(u\),\(g\),\(r\),\(z\) over 13.3 deg\({}^{2}\), and by the Red Cluster Sequence Lensing Survey (RCSLenS; Hildebrandt et al., 2016) in \(g\),\(r\),\(i\),\(z\) over 16 deg\({}^{2}\) (albeit not contiguous). As with ELAIS-N1, UV data have been obtained by the _GALEX_ Deep Imaging Survey, deep near-IR \(J\) and \(K\) band data are available as part of the UKIDSS-DXS survey (8 deg\({}^{2}\)), mid-IR data are available from both SWIRE (Channels 1-4 over 11 deg\({}^{2}\)) and SERVS (3.6 and 4.5 \(\mu\)m; 5.6 deg\({}^{2}\)) and far-IR data are available over the whole field from both _Spitzer_-MIPS imaging (24\(\mu\)m) and the _Herschel_ HerMES project (100\(\mu\)m, 160\(\mu\)m, 250\(\mu\)m, 350\(\mu\)m and 500\(\mu\)m). The Lockman Hole is arguably the best-studied of the deep fields at other radio frequencies (e.g. Mahony et al., 2016; Prandoni et al., 2018; Morganti et al., 2021). The multi-frequency radio data allow detailed investigations of radio spectral shapes, identifying peaked, remnant and re-started sources, and giving a unique insight into the physics and lifecycles of radio-loud AGN (e.g. Brienza et al., 2017; Jurlin et al., 2020). 
#### 2.3.4 North Ecliptic Pole The North Ecliptic Pole is an interesting field due to its location in the continuous viewing zone (CVZ) of many space telescopes, including the _JWST_, the _eROSITA_ X-ray mission and _Euclid_. Until very recently, the multi-wavelength data quality in the NEP was inferior to the other three LoTSS Deep Fields, but this is rapidly changing. The NEP is the location of the _Euclid_ Deep Field North which will provide deep sub-arcsecond near-IR imaging to depths of \(H=26\) over 10 deg\({}^{2}\) (and slightly shallower over a wider 20 deg\({}^{2}\) region). Such deep data will enable mass-complete samples to be defined down to \(\sim 10^{10}M_{\odot}\) at \(z=3\) and normal star-forming galaxies to be detected out to \(z>6\). The combination of matched sub-arcsecond near-IR and radio continuum imaging (with LOFAR's international baselines) offers a unique opportunity to study the structural evolution of galaxies, for example comparing the spatial distribution of star formation (probed by LOFAR) versus stellar mass (probed by _Euclid_) within galaxies, to cleanly distinguish between different growth scenarios (e.g. 'inside-out' or 'outside-in' growth) over large samples of massive galaxies with \(z<1\). Given these forthcoming datasets, a number of photometric surveys have been recently undertaken to provide matching observations at other wavelengths, including the Hawaii Two-0 survey (McPartland et al., 2023). Additionally, the _Euclid/WFIRST Spitzer_ Legacy Survey has obtained mid-infrared imaging over the central 10 deg\({}^{2}\) of the field using _Spitzer_ that is \(\sim\)0.8mag deeper than the SERVS data available in ELAIS-N1 and Lockman Hole. As shown in Table 1, the NEP is not included in LOFSS-Deep DR1, and hence not included in the analysis of this paper, as the radio data were not available at the time of the optical cross-identification. An image from 72-hrs of data is now available and will be published by Bondi et al. (2023). Furthermore, as LOFAR observes two HBA pointings simultaneously, observations of the NEP field have included a parallel beam centred on the Abell 2255 cluster, which has also produced an ultra-deep low-frequency image of that field (Botteon et al., 2022). ## 3 Characterising the Lotss-deep Host Galaxies ### Optical to mid-IR data For the three fields presented in LoTSS-Deep DR1 (ELAIS-N1, Bootes, Lockman Hole), Paper III presented photometric catalogues from ultraviolet to far-infrared wavelengths. The reader is referred to that paper for a full description of the catalogues; here, a brief overview is provided. For the ELAIS-N1 and Lockman Hole fields, data from UV through to mid-IR wavelengths were assembled and mosaicked on to a common pixel scale. Two combined \(\chi^{2}\) signal-to-noise images were then constructed, one by combining the optical to near-IR bands, and the other from the _Spitzer_ 3.6 and 4.5\(\mu\)m bands; these were treated separately due to the mis-match in angular resolution between the ground-based optical-to-near-IR and the _Spitzer_ images. Forced aperture photometry was then performed across all bands using sources detected in each of these stacked images, and the two catalogues were merged to produce a single consistent photometric catalogue in each field. Aperture corrections were applied band-by-band based on curve-of-growth analysis for typical faint galaxies in order to provide total flux and total magnitude measurements. 
The photometry was corrected for galactic extinction based on the Milky Way E(B-V) extinction map of Schlegel et al. (1998) and the Milky Way dust extinction law of Fitzpatrick (1999). Uncertainties on the photometry were determined using the variations between a large number of apertures randomly placed around the fields. For the Bootes field, forced aperture photometry catalogues already existed (Brown et al., 2007, 2008) using magnitude-limited samples selected in the I-band and the 4.5\(\mu\)m _Spitzer_ band. In this case, these catalogues were used as the starting point, and were merged and corrected in a similar manner to ELAIS-N1 and Lockman Hole. In all three fields, the catalogues were then cleaned of low-significance detections (sources detected in the combined \(\chi^{2}\) image but below 3\(\sigma\) significance in each individual band) and cross-talk artefacts, and those sources in regions around bright stars where either the cataloguing or the photometry might be unreliable were flagged, as indicated by the flag_clean parameter. More details on all of these processes can be found in Paper III. These photometric catalogues were then used as the basis for cross-matching with the LOFAR catalogues. Paper III outlines the selection of the studied area for which the highest-quality multi-wavelength data are available; sources within this region can be identified using the flag_overlap parameter. The cross-matching process also involved source association, such that the cataloged LOFAR sources were combined or deblended into true physical sources, where necessary. Within these defined areas, 81,951 physically distinct radio sources were catalogued over 25.65 deg\({}^{2}\) of sky across the three fields; optical or near-IR host galaxies were identified for over 97 per cent of these (Paper III), very much higher than the 73 per cent found for the wider LoTSS DR1 (Williams et al., 2019). Photometric redshifts for all of the objects in the field have been presented in Paper IV. These were derived from the UV to mid-IR data by combining machine learning and template fitting approaches using a hierarchical Bayesian framework. This method is shown to provide photometric redshifts which are accurate for both galaxy populations (out to \(z\approx 1.5\)) and sources dominated by AGN emission (out to \(z\approx 4\)), which is important for the LOFAR sample. As part of the calibration of the photometric redshifts, small (typically \(<\)5 per cent) offsets in the zero-point magnitudes were found to improve the accuracy of the template-fit photometric redshifts. These offsets are discussed further in Section 3.3. ### Far-infrared data The addition of far-IR photometry is described by McCheyne et al. (2022), and the reader is referred to that paper for details. In summary, the far-IR fluxes were measured using XID+ (Hurley et al., 2017) which is a Bayesian tool to deblend the flux from the low resolution _Herschel_ data into different potential host galaxies selected from optical/near-IR images. Fluxes were initially measured as part of the _Herschel_ Extragalactic Legacy Project (HELP; Shirley et al., 2021). In HELP, an XID+ prior list of potential emitters at 24\(\mu\)m was derived by applying a number of cuts to the optical-IR galaxy catalogue in order to select the sources most likely to be bright at 24\(\mu\)m (those detected both at optical wavelengths and in the _Spitzer_ 3.6-8.0\(\mu\)m bands), and this input list was used to deblend the 24\(\mu\)m data. 
Then, a second prior list was constructed from those sources with significant 24\(\mu\)m emission (above 20\(\mu\)Jy) and this was used to deblend the _Herschel_ data. The posterior distributions for the fluxes derived from XID+ allow the uncertainties to be estimated. For the LoTSS-Deep catalogue, a cross-match was first made between each LoTSS-Deep host galaxy position (or its LOFAR position if there was no host galaxy identification at optical-IR wavelengths) and the HELP catalogue. If a match was found then the HELP far-IR fluxes were assigned to the LOFAR source. If no match was found, then XID+ was re-run following the process above, but with the radio host galaxy position (or radio position in the case of no host galaxy identification) added to the prior list: this ensures that the assignment of zero flux is not simply due to the radio source having been incorrectly excluded from the prior list.

### Final catalogues for spectral energy distribution fitting

In order to ensure consistency and reliability across the different spectral energy distribution (SED) fitting codes used in this paper, it was important to ensure that the input dataset was as robust as possible, and that all photometric errors were uniformly treated. For each field, a catalogue was produced combining the (aperture-corrected and Galactic extinction corrected) fluxes from UV to mid-IR wavelengths with the far-IR fluxes determined by XID+. Next, the small zero-point magnitude corrections determined during the photometric redshift fitting were applied: these are tabulated in Appendix B of Paper IV. Specifically, the corrections derived using the extended Atlas library (referred to as 'Brown' in that paper) were applied; this template set was chosen because it extended out to the longest IRAC wavelength and also incorporated the full range of SED types expected within the LoTSS Deep Fields sample. The photometry catalogue was then filtered to remove photometric measurements deemed to be seriously unreliable. These unreliable measurements were identified as those which were either 2.5 magnitudes lower, or 1 magnitude higher, than the value predicted by interpolating the two adjacent filter measurements. These limits were chosen, following Duncan et al. (2019), to avoid flagging any reasonable spectral emission or absorption features, or genuine breaks, while successfully identifying those measurements that are so discrepant that they could significantly influence the SED fitting. Around 1 per cent of the photometric measurements were identified in this way; these were flagged and not used in the subsequent fitting. Finally, in order to consistently deal with any residual photometric errors due to zero-points, aperture corrections or extinction corrections, 10 per cent of the measured flux was added in quadrature to all flux uncertainties. The resultant SED input catalogues for each field are made available in electronic form through the LOFAR Surveys website (lofar-surveys.org).

### Spectral Energy Distribution fitting

Many different codes exist for fitting SEDs to an array of photometric data points for galaxies and AGN. Each of these has its own advantages and disadvantages. Pacifici et al. (2023) recently carried out a detailed comparison of different codes, finding that they provide broad agreement in stellar masses, but with more discrepancies in the star formation rates and dust attenuations derived.
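Before turning to the individual codes, the catalogue-cleaning rules of Section 3.3 can be illustrated with a minimal sketch. This is not the production pipeline: the array names are hypothetical, and a simple average of the two neighbouring bands stands in for the interpolation used in practice; only the numerical thresholds (2.5 mag low, 1 mag high, 10 per cent error floor) are taken from the text above.

```python
import numpy as np

def clean_photometry(mags, fluxes, flux_errs):
    """Flag unreliable bands and add a 10 per cent error floor (cf. Section 3.3).

    mags, fluxes, flux_errs : 1-D arrays ordered by wavelength for a single
    source (hypothetical inputs; aperture- and extinction-corrected values
    are assumed).  Returns a boolean flag array and the inflated errors.
    """
    flagged = np.zeros(len(mags), dtype=bool)
    for i in range(1, len(mags) - 1):
        # Predicted magnitude from the two adjacent filters (a simple mean is
        # used here as a stand-in for the interpolation described in the text).
        predicted = 0.5 * (mags[i - 1] + mags[i + 1])
        # Flag measurements 2.5 mag lower (brighter) or 1 mag higher (fainter)
        # than predicted, following the limits quoted in Section 3.3.
        if (predicted - mags[i]) > 2.5 or (mags[i] - predicted) > 1.0:
            flagged[i] = True
    # Add 10 per cent of the measured flux in quadrature to all uncertainties,
    # to absorb residual zero-point, aperture and extinction errors.
    total_errs = np.hypot(flux_errs, 0.1 * fluxes)
    return flagged, total_errs
```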
In this paper, four different SED-fitting codes are adopted, and a comparison of the results between these is used both to derive consensus measurements for stellar masses and star-formation rates, and to assist with the classification of the radio source host galaxies. The 'Multi-wavelength Analysis of Galaxy Physical Properties' (magphys; da Cunha et al., 2008) and 'Bayesian Analysis of Galaxies for Physical Inference and Parameter Estimation' (bagpipes; Carnall et al., 2018, 2019) codes each use energy balance approaches to fit photometric points from the UV through to far-IR and sub-mm wavebands. Energy balance implies that the amount of energy absorbed by dust at optical and UV wavelengths is forced to match that emitted (thermally) by the dust through the sub-mm and far-IR. The magphys and bagpipes codes are built on the same fundamental templates for single stellar populations (Bruzual & Charlot, 2003) but differ in their implementation, in particular with regard to the parameterisation of the star-formation histories of the galaxies, the assumed dust models, and the approach to model optimisation. For high signal-to-noise galaxies the two codes generally give broadly consistent results (see Sec. 5), which previous studies have generally shown to be accurate (e.g. Hayward & Smith, 2015). However, neither magphys nor bagpipes includes AGN emission in its model SEDs, nor do they account for AGN heating effects when determining energy balance, and therefore both can give poor fits and unreliable host galaxy parameters for galaxies with significant AGN emission.

'Code Investigating GALaxy Emission' (cigale; Burgarella et al., 2005; Noll et al., 2009; Boquien et al., 2019) is another broad-band SED-fitting code which uses energy conservation between the attenuated UV/optical emission and the re-emitted IR/sub-mm emission; cigale differs from magphys and bagpipes in that it incorporates AGN models which can account for the direct AGN light contributions and the infrared emission arising from AGN heating of the dust (more recent developments also allow for predictions of X-ray emission, cf. Yang et al., 2020). The inclusion of AGN models can give cigale a significant advantage over magphys and bagpipes when fitting the SEDs of galaxies that have a significant AGN contribution, allowing both more robust estimation of host galaxy parameters, and a mechanism to identify and classify AGN within the sample. However, in order to allow the additional complications of AGN fitting, for equivalent (practical) run times cigale is not able to cover the parameter space of host galaxy properties as finely as magphys and bagpipes, leading to potentially less accurate characterisation of galaxies that do not host AGN.

All three of the codes discussed above adopt the principles of energy balance. However, if the distribution of ultraviolet light is spatially disconnected from the dust emission, as is often the case for very infrared luminous galaxies, then energy balance may not be valid; indeed, Buat et al. (2019) find for a sample of 17 well-studied dust-rich galaxies that SED-based UV-optical attenuation estimates account for less than half of the detected dust emission. This issue may be particularly pronounced in the presence of AGN, if the AGN models are not comprehensive enough to properly cover the parameter space of possible AGN SEDs.
To mitigate these issues, the agnfitter code (Calistro Rivera et al., 2016) models the SED by independently fitting four emission components, with each independently normalised (albeit with a prior that the energy radiated in the infrared must be at least equal to the starlight energy absorbed by dust at optical/UV wavelengths): a big blue bump, a stellar population, hot dust emission from an AGN torus, and colder dust emission. agnfitter can provide superior fits for objects where energy balance breaks down, and also for objects with strong AGN components due to its superior modelling of the big blue bump. However, the lack of energy balance and the ability of the four components to vary independently can lead to unphysical solutions, or poorer constraints on the parameters of the stellar populations (although Gao et al., 2021 find broadly good agreement in measured stellar masses and SFRs between codes with and without energy balance, at least for hyperluminous infrared galaxies).

To maximise the advantages of the different techniques, the LoTSS Deep Field host galaxies were all modelled using each of magphys, bagpipes, cigale and agnfitter. Furthermore, for cigale, two different sets of AGN models were considered: those of Fritz et al. (2006) and those of Stalevski et al. (2012, 2016), the latter of which were recently incorporated into cigale by Yang et al. (2020). The following subsections provide details of the fitting methodology in each case. For all SED fitting, the redshift of the source is fixed at the spectroscopic redshift, \(z_{\rm spec}\), for the minority of sources for which this exists (1602, 4039 and 1466 sources in ELAIS-N1, Bootes and Lockman, respectively). For the other sources, the redshift is fixed at the median of the first photometric redshift solution, \(z_{1,\rm median}\). Photometric redshift errors may introduce errors on the inferred parameters, but for most sources these are anticipated to be small since the photometric redshifts are very accurate, with a median scatter of \(\Delta z/(1+z)\leq 0.015\) for host-galaxy dominated sources at \(z<1.5\) (Duncan et al., 2021).

#### 3.4.1 magphys

The application of magphys to the LoTSS Deep Fields sources is described by Smith et al. (2021), and so it is only briefly summarised here. The stellar population modelling adopts single stellar population (SSP) templates from Bruzual & Charlot (2003) and the two-component (birth cloud plus interstellar medium) dust absorption model of Charlot & Fall (2000), combined to produce an optical to near-IR template library of 50,000 SEDs with a range of exponentially-declining star-formation histories with stochastic bursts superposed. The dust emission is modelled using a library of 50,000 dust SEDs constructed from dust grains with a realistic range of sizes and temperatures, including polycyclic aromatic hydrocarbons. The energy balance criterion is used to combine the two sets of templates in a physically-viable manner, to produce a model for the input photometry that stretches from near-UV to sub-mm wavelengths. magphys determines the best-fitting SED for every source, returning the corresponding best-fit physical parameters and their marginalised probability distribution functions (PDFs). The best-fitting stellar mass and best-fitting value of the SFR over the last 100 Myr were adopted as the stellar mass and SFR respectively; the 100 Myr timescale corresponds well to that of the expected radio emission (e.g. Condon et al., 2002).
For most galaxies, very similar results are obtained if a shorter period or the current instantaneous SFR are adopted instead (although results for some individual galaxies can vary significantly). The 16th and 84th percentiles of the PDFs were adopted as the 1\(\sigma\) lower and upper limits respectively. In order to determine whether the calculated parameters are reliable, the \(\chi^{2}\) value of the fit was examined: following Smith et al. (2012), fits for which the determined \(\chi^{2}\) value was above the 99 per cent confidence limit for the relevant number of photometric bands included in the fit were flagged as unreliable. As noted by Smith et al. (2021), many of the objects that fail this test are objects with strong AGN contributions. On average, 17 per cent of sources across the three fields were flagged in this way, with ELAIS-N1 giving a significantly lower fraction (10 per cent), in line with expectations that the deeper radio data in that field should result in a higher fraction of star-forming galaxies.

#### 3.4.2 bagpipes

bagpipes was run on the LoTSS Deep Field sources, making use of the 2016 version of the Bruzual & Charlot (2003) SSP templates for its stellar population emission. Nebular emission is computed using the cloudy photoionization code (Ferland et al., 2017), following Byler et al. (2017). cloudy is run using each SSP template as the input spectrum. Dust grains are included using cloudy's 'ISM' prescription, which implements a grain-size distribution and abundance pattern that reproduces the observed extinction properties for the Interstellar Medium (ISM) of the Milky Way. A Calzetti et al. (2000) dust attenuation curve is adopted. Dust emission includes both a hot dust component from HII regions and a grey body component from the cold, diffuse dust. A wide dust attenuation prior is adopted, \(A_{\rm v}=[0,6]\), which gives the code the option to fit a high degree of attenuation. The absorbed energy is re-emitted at infrared wavelengths; the dust SED is controlled by three key parameters, as described by Draine & Li (2007): \(U_{\rm min}\), the lower limit of the starlight intensity; \(\gamma\), the fraction of stars at \(U_{\rm min}\); and \(q_{\rm PAH}\), the mass fraction of polycyclic aromatic hydrocarbons. The priors adopted on these parameters are broad, to allow the model to fit all types of galaxies, including those that are hot and dusty (Leja et al., 2018): \(U_{\rm min}=[0,25]\), \(\gamma=[0,1]\), and \(q_{\rm PAH}=[0,10]\). \(\eta\), the multiplicative factor on \(A_{V}\) for stars in birth clouds, is also fitted using the prior \(\eta=[1,5]\). Metallicity is allowed to vary in the range \(Z=[0,2.5]Z_{\odot,\rm old}\), where \(Z_{\odot,\rm old}\) denotes solar models prior to Asplund et al. (2009). The star-formation history (SFH) is parameterised using a double power law: \(\rm SFR(t)\propto[(t/\tau)^{\alpha}+(t/\tau)^{-\beta}]^{-1}\), where \(\alpha\) is the slope in the region of falling SFR, and \(\beta\) is the slope in the region of rising SFR. \(\tau\) relates to the time at which the SFR peaks. The code outputs posterior distributions for the fitted parameters \(A_{\rm v}\), \(U_{\rm min}\), \(\gamma\), \(q_{\rm PAH}\), \(\eta\), the metallicity \(Z\), and the SFH parameters \(\alpha\), \(\beta\) and \(\tau\).
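To make the double power-law parameterisation concrete, the sketch below evaluates \(\rm SFR(t)\propto[(t/\tau)^{\alpha}+(t/\tau)^{-\beta}]^{-1}\) for a given set of shape parameters. The normalisation to a total formed mass and the example parameter values are illustrative assumptions, not part of the bagpipes interface.

```python
import numpy as np

def dblplaw_sfh(t_gyr, tau_gyr, alpha, beta, mass_formed=1.0):
    """Double power-law SFH: SFR(t) ~ [(t/tau)^alpha + (t/tau)^-beta]^-1.

    t_gyr       : ages (Gyr) at which to evaluate the SFH (t > 0)
    tau_gyr     : turnover time related to where the SFR peaks
    alpha, beta : slopes of the falling and rising parts respectively
    mass_formed : total mass formed (arbitrary units), used only to normalise
                  the curve here -- an illustrative choice.
    """
    t = np.asarray(t_gyr, dtype=float)
    sfr = 1.0 / ((t / tau_gyr) ** alpha + (t / tau_gyr) ** (-beta))
    sfr *= mass_formed / np.trapz(sfr, t)   # normalise the integral
    return sfr

# Example: a history that rises steeply (beta = 10), peaks near tau = 3 Gyr
# and then declines gently (alpha = 1.5).
ages = np.linspace(0.01, 13.0, 500)
sfh = dblplaw_sfh(ages, tau_gyr=3.0, alpha=1.5, beta=10.0)
```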
Posterior distributions are also derived for the physical properties of stellar mass, star-formation rate, and specific star-formation rate, with the median and the 16th and 84th percentiles being adopted as the best-fit value and the lower and upper \(1\sigma\) errors. The reduced \(\chi^{2}\) of the best-fitting model was also returned. Objects with a reduced \(\chi^{2}\) above 5 were flagged as unreliable; this averaged about 9 per cent of sources across the three fields, again being lowest in ELAIS-N1 and highest in Bootes.

#### 3.4.3 cigale

cigale was run on the LoTSS Deep Fields sources in the manner outlined in Wang et al. (2021) and Malek et al. (2023). The choices for the input components for the modelling of the stellar population largely follow those of Pearson et al. (2018) and Malek et al. (2018). Specifically, the star-formation history was adopted to be a two-component model, with a delayed exponentially-decaying main star-forming component (\(\rm SFR_{delayed}\propto te^{-t/\tau}\)) plus the addition of a recent starburst. The Bruzual & Charlot (2003) SSP templates were adopted for the stellar emission. The Charlot & Fall (2000) dust attenuation model is applied to the derived SEDs, and energy-balance criteria are used to determine the quantity of emission to be re-emitted in the infrared. The dust emission is calculated using the dust emission model of Draine et al. (2014), which is an updated version of the Draine & Li (2007) model and describes the dust as a mixture of carbonaceous and amorphous silicate grains.

A critical difference between cigale and magphys/bagpipes is the inclusion of an AGN component in the cigale models. For the LoTSS Deep Fields, cigale was run twice, using two different AGN models: the Fritz et al. (2006) model and the skirtor model of Stalevski et al. (2012, 2016). Both sets of AGN models assume point-like isotropic emission from a central source, which then intercepts a toroidal dusty structure close to the AGN. Radiative transfer models are used to trace the absorption and scattering of the AGN light by the dust in the torus, and model its re-radiation by the hot dust. The main differences between the two models are that the Fritz models adopt a smooth density distribution for the dust grains and use a 1-D approach, whereas the skirtor models treat the dusty torus as a two-phase medium with higher density clumps sitting within a lower density medium and use 3-D radiative transfer. A clumpy dust distribution was suggested by Krolik & Begelman (1988) to be necessary to stop the dust grains being destroyed by the hot surrounding gas.

cigale returns Bayesian estimates of the stellar mass and various estimates of the recent star-formation rate of the galaxy, along with estimates of the uncertainties on these parameters. In this work, the star-formation rate averaged over the last 100 Myr is adopted, as for magphys. cigale also returns a determination of the AGN fraction for the galaxy (hereafter \(f_{\rm AGN,CG-F}\) or \(f_{\rm AGN,CG-S}\) for the Fritz and skirtor models), defined as the fraction of the total infrared luminosity that is contributed by the AGN dust torus component. An uncertainty on the AGN fraction is also returned; where this is larger than the measured fraction, the 1-sigma lower limit on the AGN fraction is set to zero.
Finally, the reduced \(\chi^{2}\) of the best-fitting model was used to identify unreliable fits, with objects with a reduced \(\chi^{2}\) above 5 being flagged (3 per cent and 2 per cent of sources in the Fritz and skirtor models respectively).

#### 3.4.4 agnfitter

agnfitter provides independent parameterisations for each of the accretion disk emission (big blue bump), the hot dust torus, the stellar component and the cooler dust heated by star formation; details of the parameterisation of these four components are provided by Calistro Rivera et al. (2016). agnfitter accounts for the effects of reddening on these emission components but without energy balance constraints. agnfitter was run on the LoTSS Deep Fields sources broadly following the implementation of Williams et al. (2018) but using an expanded set of input models (agnfitter v2; Calistro Rivera et al., in prep.). The code determines the relative importance of the four components in a few key wavelength regions, as well as broader physical parameters including estimates of the star-formation rate and the stellar mass. In this work, the IR-based estimate of the SFR was the one adopted. Following Williams et al. (2018), an AGN fraction is defined by considering the contribution of the emission components in the 1-30\(\mu\)m wavelength range. Note that this is different to the definition used for cigale which considers the AGN contribution to the total IR luminosity: as the AGN peaks in the mid-IR, the AGN fractions derived by agnfitter will typically be larger than those of cigale. The AGN fraction was defined as:

\[f_{\rm AGN,af}=\frac{L_{\rm Torus,1-30}}{L_{\rm Torus,1-30}+L_{\rm SB,1-30}+L_{\rm Gal,1-30}} \tag{1}\]

where \(L_{\rm Torus,1-30}\), \(L_{\rm SB,1-30}\) and \(L_{\rm Gal,1-30}\) are the luminosities of the hot dust torus, the cooler dust heated by recent star formation, and the stellar component of the galaxy, respectively, all between 1 and 30\(\mu\)m. Note that this differs slightly from the definition of Williams et al. (2018) through the inclusion of the stellar component in the denominator; this avoids a high AGN fraction being determined when the mid-infrared emission is simply dominated by the light of older stars. The uncertainties on these luminosities are used to determine the 1\(\sigma\) upper and lower limits to the AGN fraction. Finally, agnfitter returns a log likelihood for the best-fit model; the \(\approx 3\) per cent of objects whose fits had a log likelihood below \(-30\) were flagged as unreliable (cf. Williams et al., 2018).

## 4 Identification of radiative-mode AGN

A characteristic feature of radiative-mode AGN is a hot accretion disk, which is obscured in certain directions by a dusty structure (the torus). These two structures give rise to a variety of physical features that can be used to identify the radiative-mode AGN. The most widely-used of these, where spectroscopic data are available, is emission line ratios (e.g. Baldwin et al., 1981, the BPT diagram): the ionising radiation from the hot accretion disk is significantly harder than that of a young stellar population, leading to stronger high-excitation forbidden lines. Spectroscopic information is available for only a small subset of the LoTSS-Deep sources (5.1, 21.1 and 4.7 per cent in ELAIS-N1, Bootes and Lockman Hole respectively, with the AGES data in Bootes producing the large difference between the fields), so this method cannot be used for the vast majority of the sources.
This will change in the coming years due to the WEAVE-LOFAR survey (Smith et al., 2016, see also Sec. 9) but alternative methods are needed for AGN identification in the meantime. The hot dusty torus emits characteristic emission that has been widely used to identify radiative-mode AGN using mid-IR colours (e.g. Lacy et al., 2004; Stern et al., 2005). Commonly-used selections consider the four _Spitzer_ channels centred at 3.6\(\mu\)m, 4.5\(\mu\)m, 5.8\(\mu\)m and 8.0\(\mu\)m (Channels 1 to 4 respectively); the selection is based on the premise that the emission from stellar populations generally declines with increasing wavelength through the mid-IR (since the mid-IR probes redward of the rest-frame 1.6\(\mu\)m thermal peak of the dominant sub-solar stellar population) whereas hot AGN dust shows a rising spectrum. An equivalent approach uses the WISE mid-infrared colours (e.g. Wright et al., 2010). The exact colour-space cuts are generally defined using template tracks for galaxies and AGN to select regions of colour-space dominated by AGN. Lacy et al. (2004) and Stern et al. (2005) derived the first colour-cuts based on shallow _Spitzer_ data (hereafter referred to as the Lacy and Stern regions, respectively), and these were effective in separating out AGN from the population of relatively nearby inactive galaxies. However, the broad colour regions selected in these papers are heavily contaminated by higher redshift (\(z>0.5\)) inactive galaxies, that deeper _Spitzer_ surveys (such as those available in the LoTSS Deep Fields) are able to detect. Donley et al. (2012) therefore defined a much tighter region of mid-IR colour space (hereafter, the Donley region) within which AGN samples display much lower contamination, but consequently are also less complete. Even in these deep datasets, however, fainter galaxies often lack measurements in one or more channels, preventing any classification by the Stern, Lacy or Donley criteria. To help overcome this, Messias et al. (2012) derived a series of redshift-dependent colour cuts based on K-band to Channel 2, Channel 2 to Channel 4, or Channel 4 to 24\(\mu\)m flux ratios (hereafter, the Messias regions). These allow classification of a larger fraction of galaxies, but with the same issues regarding completeness and contamination. Furthermore, simple application of colour cuts takes no account of low signal-to-noise measurements which can scatter data across the colour criteria, and can also miss some types of AGN (e.g. Gurkan et al., 2014). The wide array of data available in the LoTSS Deep Fields allows a classification scheme to be developed which uses much more than just the mid-IR colour bands. The SED fitting described in the previous section encodes all of the mid-IR spectral expectations used in the Stern, Lacy, Donley and Messias colour criteria, but combines this with additional near-IR and optical data which allow simultaneous characterisation of the host galaxy properties; the latter allows the contribution of the host galaxy to the mid-IR to be directly predicted, and thus any additional AGN contribution to be more clearly distinguished. As an indication of this, Figure 2 shows the Stern, Lacy and Donley mid-IR colour-colour plots with the LoTSS-Deep sources in Bootes1 colour-coded by their AGN fraction as derived by cigale using the skirtor model. Sources classified as an AGN through optical spectra or X-ray properties are indicated in red. 
It can be seen that the X-ray and spectroscopically selected AGN and the objects with high cigale AGN fractions concentrate primarily in the selected colour-space regions, especially the Donley region, but that a significant fraction of these probable AGN are also found outside of these regions. Furthermore, there are objects within the colour-cuts (especially the broader Lacy and Stern regions) for which cigale predicts very low AGN contributions to the mid-IR.

Figure 2: The location of the LoTSS-Deep sources on the Lacy et al. (2004) and Donley et al. (2012, left) and on the Stern et al. (2005, right) mid-IR colour-colour classification plots (for sources with S/N\(>\)2 in all four bands), in the Boötes field. The blue dashed lines on the left-hand panel show the Lacy et al. selection criteria, and the blue solid lines show those of Donley et al. On the right-hand plot, the Stern wedge is shown by the blue dashed lines. In both plots, the greyscale colour-coding indicates the AGN fraction from the cigale SED fitting using the skirtor AGN model. Objects confirmed to be AGN through optical spectroscopy or X-ray observations are indicated by the red circles.

Figure 3: The cigale skirtor AGN fraction plotted against the ratio of the \(\chi^{2}\) values between SED codes that do not include AGN components (the lower value for the magphys (MP) and bagpipes (BP) fits) and those that do (the lowest of the cigale (CG) and agnfitter (AF) fits), for the LoTSS-Deep sources in Boötes. Points are colour-coded according to whether they are spectroscopic or X-ray AGN (red filled circles), or satisfy the Donley criteria (with S/N\(>\)3 in each band; blue open circles), or satisfy the broader Stern, Lacy or Messias cuts (with S/N\(>\)3; black crosses), or 'non-AGN' that either do not satisfy any cuts or have too low signal-to-noise in the mid-IR for this to be determined (green triangles). The clustering at certain cigale AGN fraction values (e.g. 0.05, 0.7) appears to be a feature of the code, perhaps due to the fairly limited sampling of the grid of AGN model parameters. The plot shows that as the cigale AGN fraction rises above \(\sim\)0.1, objects are more likely to be identified as AGN through spectroscopic or X-ray selection or the Donley mid-IR cuts, and also that the SED fitting begins to deteriorate (higher relative \(\chi^{2}\)) for SED codes that don't include AGN components.

The use of the four SED fitting routines provides two routes to identifying the probable AGN. First, each of cigale and agnfitter provides an estimate of \(f_{\rm AGN}\), the fractional AGN contribution to the mid-IR. Second, objects which have a significant AGN contribution to their SED should be poorly fitted using magphys or bagpipes (and typically better fitted using cigale or agnfitter). Figure 3 demonstrates these effects, by showing the cigale AGN fraction plotted against the ratio of the \(\chi^{2}\) values determined from the SED fits without AGN components compared to those with AGN components, with points colour-coded by evidence for AGN from either spectroscopic or X-ray data, or from mid-IR colour cuts. The spectroscopic and X-ray selected AGN generally show both moderate-to-high AGN fractions and a higher \(\chi^{2}\) using magphys/bagpipes than using cigale/agnfitter. The majority of objects which lie securely within the Donley mid-IR colour-cuts show the same characteristics.
Objects that lie only within the broader Stern, Lacy or Messias colour regions typically show much lower AGN fractions and the \(\chi^{2}\) value from the magphys/bagpipes fits is lower than or comparable to that from cigale/agnfitter; they largely overlap with the 'non-AGN' that either lie outside of these colour cuts or do not have sufficiently high signal-to-noise in their mid-IR measurements for this to be determined. Nevertheless, the SED fits are able to pick out promising AGN candidates within these categories. An examination of the AGN fractions derived by cigale and especially agnfitter shows that many of these have quite large uncertainties, especially for fainter galaxies with fewer securely-measured photometric points. Investigations indicated that the 16\({}^{\rm th}\) percentile of the posterior of the AGN fraction (i.e. the 1-sigma lower limit on the AGN fraction; hereafter P16) provided a more robust indication of the presence of an AGN. The selection of radiative-mode AGN was therefore made by considering three selection criteria (see below for a discussion of how the threshold values were set):

(i) whether the P16 AGN fraction from cigale, using the skirtor AGN models, exceeded a threshold value of 0.06 (ELAIS-N1 and Lockman Hole fields) or 0.10 (Bootes field);

(ii) whether the P16 value for the AGN fraction from agnfitter, as defined in Eq. 1, exceeded a threshold value of 0.15 (ELAIS-N1 and Lockman Hole fields) or 0.25 (Bootes field);

(iii) whether the lower of the reduced \(\chi^{2}\) values arising from the magphys and bagpipes SED fits was both greater than unity and at least a factor \(f\) greater than the lowest of the reduced \(\chi^{2}\) values arising from the two cigale and the agnfitter SED fits. The factor \(f\) was determined to be twice the median value of the \(\chi^{2}\) ratio between the better fit from magphys and bagpipes and the best fit from cigale and agnfitter (cf. Figure 4). This evaluated to \(f=1.36\) for ELAIS-N1, \(f=1.59\) for Lockman Hole and \(f=2.22\) for Bootes.

An object was classified as a radiative-mode AGN if it satisfied at least two of these three criteria. In practice, this means either that it has a determined high AGN fraction from both cigale and agnfitter or it has a high AGN fraction from at least one of the two codes combined with a superior SED fit using methods which include AGN components. The selection cuts for each criterion were set by comparing the derived classifications with the spectroscopic and X-ray samples and considering the locations of the classified AGN and non-AGN on mid-IR colour-colour diagrams. The threshold values selected were different for Bootes than for the other two fields. This is because the AGN fractions calculated in that field were systematically higher than those in ELAIS-N1 or Lockman Hole (e.g. a median AGN fraction of 0.037 in Bootes using the cigale skirtor model, compared to 0.029 in each of ELAIS-N1 and Lockman), which is likely to be due to the different manner in which the photometric catalogues were constructed in Bootes (see Paper III). Setting higher thresholds in Bootes ensured a consistency of classification across the three fields (cf. Sec. 7). Finally, a small proportion of objects did not meet these criteria but had previously been identified to be an AGN based on either optical spectra or X-ray properties; these were added to the radiative-mode AGN sample (and correspond to about 3 per cent of all radiative-mode AGN).
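The two-out-of-three voting logic just described can be summarised in the following sketch; the thresholds and factors \(f\) are those quoted above, while the function signature and variable names are illustrative rather than part of any released code.

```python
def is_radiative_agn(field, p16_cigale, p16_agnfitter,
                     chi2_magphys, chi2_bagpipes,
                     chi2_cigale_fritz, chi2_cigale_skirtor, chi2_agnfitter,
                     spec_or_xray_agn=False):
    """Two-out-of-three selection of radiative-mode AGN (Section 4)."""
    # Field-dependent thresholds on the P16 AGN fractions and chi^2 factor f.
    if field == 'Bootes':
        cigale_cut, agnfitter_cut, f = 0.10, 0.25, 2.22
    elif field == 'Lockman':
        cigale_cut, agnfitter_cut, f = 0.06, 0.15, 1.59
    else:  # ELAIS-N1
        cigale_cut, agnfitter_cut, f = 0.06, 0.15, 1.36

    crit_i = p16_cigale > cigale_cut            # (i) cigale skirtor AGN fraction
    crit_ii = p16_agnfitter > agnfitter_cut     # (ii) agnfitter AGN fraction
    chi2_no_agn = min(chi2_magphys, chi2_bagpipes)
    chi2_agn = min(chi2_cigale_fritz, chi2_cigale_skirtor, chi2_agnfitter)
    crit_iii = chi2_no_agn > 1.0 and chi2_no_agn >= f * chi2_agn  # (iii) fit quality

    # At least two criteria satisfied, or a confirmed spectroscopic/X-ray AGN.
    return (crit_i + crit_ii + crit_iii) >= 2 or spec_or_xray_agn
```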
Fig. 4 shows the LoTSS-Deep sources on different combinations of these selection criteria, with the sources that satisfy at least two criteria, and therefore are selected as radiative-mode AGN, shown in red. It can be seen that there is a broad consistency between the different criteria: most of the selected radiative-mode AGN satisfy all three criteria and therefore are secure classifications. The main addition to this is a population of sources selected as having high AGN fractions by both cigale and agnfitter but with comparable, low \(\chi^{2}\) values from the different fitting methods; these are probably sources where cigale and agnfitter are able to pick out a weak AGN through the mid-IR emission, but there is little-to-no direct AGN light through the optical to near-IR spectrum and so magphys and bagpipes are still able to provide a good fit to the majority of the spectrum.

Fig. 5 shows the selected radiative-mode AGN and non-AGN on a series of mid-IR colour-colour diagrams, compared against the evolving colours of various galaxy template models. The panels are split by redshift ranges, in order to allow a clearer comparison against the template expectations. At each redshift, the panels show the Lacy and Donley colour plots (left), the Stern colour plot (middle), and the appropriate Messias plot (right). Template SED models were drawn from the 'Galaxy SED Atlas' of Brown et al. (2014) combined with the 'AGN SED Atlas' of Brown et al. (2019). SEDs were selected from these libraries for: (i) elliptical galaxies (as expected to be seen for jet-mode AGN); (ii) star-forming galaxies; (iii) AGN (including both quasars and edge-on 'type-II' AGN); and (iv) composite spectra, produced by combining a set of Seyfert AGN spectra with host galaxy spectra, with a range of weights. The template tracks for the different galaxy classes confirm both the motivation for, and the shortcomings of, the colour-colour selection criteria: the Donley region relatively cleanly selects AGN at \(z<2.5\) but is incomplete for composite systems; the Stern and Lacy regions are more complete for composite systems but contaminated, especially at the higher redshifts; the Messias cuts perform relatively well, especially at the highest redshift where the use of the 24\(\mu\)m colour gives a clear advantage, but still have some incompleteness and contamination.

The red points show the objects selected as radiative-mode AGN by the techniques outlined above. At all redshifts these broadly overlap the regions of the AGN and composite templates, extending where appropriate beyond the colour-selection limits. It is clear, however, that in the \(z>2.5\) redshift range there remains a significant population of objects that are not classified as AGN, and yet which lie in similar regions of colour-space to the AGN. At these redshifts, as is evident from Fig. 5, it is only the Channel 4 and 24\(\mu\)m filters that are able to probe rest-frame wavelengths where an AGN template becomes clearly distinct from the galaxy templates, and the composites are even more difficult to distinguish. Especially with the typically low signal-to-noise of the galaxies in this highest redshift bin, the SED fitting techniques may be less reliable: although the classifications are provided for all sources, readers should treat these with caution at \(z>2.5\), where there may well be a degree of incompleteness in the AGN sample.
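For reference, the Donley mid-IR wedge referred to throughout this section can be written as the short function below. The numerical coefficients are quoted from Donley et al. (2012) rather than from this paper, and should be checked against that work before use.

```python
import numpy as np

def donley_wedge(f36, f45, f58, f80):
    """Mid-IR power-law AGN selection of Donley et al. (2012).

    f36..f80 are IRAC Channel 1-4 flux densities (any common unit).
    Returns True for sources falling inside the Donley AGN region.
    """
    x = np.log10(f58 / f36)
    y = np.log10(f80 / f45)
    in_wedge = ((x >= 0.08) & (y >= 0.15) &
                (y >= 1.21 * x - 0.27) & (y <= 1.21 * x + 0.27))
    # Donley et al. additionally require a monotonically rising mid-IR SED.
    rising = (f45 > f36) & (f58 > f45) & (f80 > f58)
    return in_wedge & rising
```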
## 5 Comparison of derived properties and consensus measurements

Two of the most important galaxy properties to determine are the stellar mass and the star-formation rate. Each of the SED fitting codes provides an estimate of these parameters. This section discusses how these values are combined to produce consensus measurements for each source.

In brief summary, for sources which do not host an AGN, the magphys and bagpipes codes ought to provide the best measurements of mass and SFR, because these models offer a significantly broader selection of galaxy templates. Indeed, for these sources, the results from these two codes show excellent agreement in their estimates of both stellar mass (median absolute difference of just 0.09 dex) and SFR (0.14 dex). The consensus values of the stellar mass and SFR for non-AGN were therefore generally derived from the logarithmic mean of the magphys and bagpipes results. For radiative-mode AGN, the magphys and bagpipes results are potentially unreliable as they do not include any AGN component in their SED modelling. The two cigale runs (with the Fritz and skirtor AGN models) should be more reliable, and indeed these two agree with each other well: the median absolute difference is only 0.09 dex in stellar mass and 0.13 dex in SFR. agnfitter is found to provide less consistent results, but is valuable for the small fraction (\(\approx 2\) per cent) of sources which are highly AGN-dominated, and for which agnfitter's superior modelling of the AGN UV emission is required. The consensus values of the stellar mass and SFR for radiative-mode AGN were therefore typically derived from the logarithmic mean of the two cigale results, except where cigale failed to provide an acceptable fit, in which case the agnfitter values were adopted.

Sections 5.1 and 5.2 now provide (for stellar mass and SFR respectively) a much more detailed comparison of the outputs of the different SED fitting codes, along with a full description of how the generalised approach discussed above was adapted in cases where one or more of the SED codes failed to provide an acceptable fit. Readers not interested in these finer details may wish to skip to Section 6.

### Consensus stellar masses

For sources which are not identified to be a radiative-mode AGN, the results from the magphys and bagpipes codes show excellent agreement in their estimates of stellar mass.

Figure 4: The selection criteria used to identify radiative-mode AGN, and the relative distributions of the AGN and non-AGN thus identified. The upper-left panel compares the reduced \(\chi^{2}\) value resulting from SED models including an AGN component (agnfitter, cigale) against those which do not (magphys, bagpipes), with the blue dashed line showing selection criterion (iii). The upper right plot shows the 1-sigma lower limits (16\({}^{th}\) percentile; P16) to the AGN fraction from agnfitter and cigale (with the skirtor AGN model), with the blue dashed lines showing selection criteria (i) and (ii). The lower plots show selection criteria (i) _vs_ (ii) and (i) _vs_ (iii) in the left and right panels respectively. Data shown are for ELAIS-N1. Sources are selected as radiative-mode AGN if they satisfy at least two of the three criteria (or are confirmed AGN from spectroscopic or X-ray observations); these sources are shown in red.
Where both magphys and bagpipes pass the threshold for an acceptable fit (see Section 3.4), the median absolute difference in stellar mass is just 0.09 dex, with over 90 per cent of sources agreeing within 0.25 dex; the outliers are generally the faintest sources, at low masses or high redshifts. cigale also gives very similar values, with a median difference in stellar mass of only 0.11 dex, and over 85 per cent agreeing within 0.25 dex. agnfitter shows much lower agreement, however, with a median difference in stellar mass of 0.27 dex compared to the estimates from the other codes. This inconsistency for agnfitter is likely to be associated with the lack of an energy balance in the fitting process.

Figure 5: Infrared colour-colour plots for the LoTSS-Deep sources in ELAIS-N1, compared with template spectra. The sources classified as (radiative-mode) AGN are plotted in red and the non-AGN in black symbols; sources are only plotted if they have a signal-to-noise of at least 3 in each of the relevant filters. For clarity, sources (and templates) are divided into three redshift ranges: the top row is for \(z<1\), the middle row for \(1<z<2.5\) and the bottom row for \(z>2.5\). For each redshift, the left-hand plot shows the mid-IR IRAC flux ratios used for the Lacy et al. (2004, blue dashed lines) and Donley et al. (2012, blue solid lines) selections. The middle column shows the Stern et al. (2005) colour criteria, with the Stern region indicated by the blue dashed lines. The right-hand column shows the selection criteria proposed by Messias et al. (2012), combining IRAC colours with the K-band flux at the lower redshifts, and with the 24\(\mu\)m flux at the highest redshifts. In each plot the coloured lines indicate the evolution over the specified redshift range of a selection of galaxy and AGN template spectra, from Brown et al. (2014) and Brown et al. (2019), separated into ellipticals (pink), star-forming galaxies (yellow), AGN (purple), and composites (green). As can be seen, the broad colour cuts suffer to various extents from both incompleteness and contamination. The selected AGN broadly align with the regions of colour space covered by the AGN and composite template spectra.

For these non-AGN the consensus stellar mass was derived from the mean of the logarithm of the stellar masses derived using magphys and bagpipes, as long as both codes provided an acceptable fit to the data (\(\approx 86\) per cent of the non-AGN, though rising to nearly 95 per cent in ELAIS-N1). If one of the two codes provided a bad fit and the other a good fit (11 per cent of cases), then the stellar mass estimate from the well-fitting code was adopted as the consensus measurement. If both codes produced fits below the acceptability threshold then the values of the two stellar mass estimates were examined: if they agreed with each other within 0.3 dex (\(\approx 2\) per cent of cases) then it was likely that the unreliability of the SED fits was driven by some outlier points that did not invalidate the stellar mass estimates, and so the logarithmic mean of the two values was adopted as the consensus stellar mass.
If the two values disagreed by more than 0.3 dex, then the stellar mass estimates of the two cigale fits were examined as well: if the full range of all four stellar masses was less than 0.6 dex (\(\approx 0.3\) per cent of cases) then the logarithmic mean of the four measurements was adopted as the consensus measurement; if the range was larger than 0.6 dex (\(\approx 0.6\) per cent of sources) then it was deemed that no reliable stellar mass could be provided. A comparison of the consensus masses derived against the estimates from each code individually is shown by the black points in Fig. 6, confirming visually the good agreement of the magphys and bagpipes codes, broad agreement of cigale, and larger scatter of agnfitter for these sources.

For radiative-mode AGN, the two cigale runs provide stellar mass estimates that agree well with each other: the median absolute difference is only 0.09 dex, with 90 per cent of sources within 0.3 dex. Compared to these values, as expected, the results from magphys and bagpipes show greater scatter (each 0.16 dex median difference) and also a larger fraction of outliers where the codes significantly over-estimate the mass due to AGN light being incorrectly modelled as stellar emission (cf. Fig. 6). Again, agnfitter shows a larger dispersion in stellar mass measurements relative to the other codes, with a median absolute difference of 0.49 dex; this may be due to the stellar component being fitted independently without an energy balance constraint, with some stellar light perhaps being incorrectly modelled as AGN emission or vice versa, although it could also be related to the different approach to modelling the AGN emission. For these reasons, for the radiative-mode AGN, if both cigale runs provided acceptable fits then the logarithmic mean of the stellar masses from these two runs was accepted as the consensus mass (with agnfitter excluded due to its higher proportion of outliers); this was the case for just over 94 per cent of the radiative-mode AGN. Otherwise, if just one of the cigale runs provided an acceptable fit (\(\approx 3\) per cent of cases) then the stellar mass from that run was adopted. If neither cigale run provided a good fit, but agnfitter did, then there was a likelihood that this was a case where either energy balance was breaking down or the superior modelling of the AGN UV emission by agnfitter was helping the fit; in these 2 per cent of cases, the agnfitter stellar mass estimate was used. Otherwise, it was decided that no reliable stellar mass estimate was possible.

Fig. 6 shows a comparison of each mass estimate against the consensus mass derived, and illustrates the trends discussed above. The lower-right panel also compares the consensus masses against those derived in Paper IV using a grid-based SED fitting mechanism (see also Duncan et al. 2019). This comparison is interesting because the stellar masses in Paper IV are derived for all galaxies in the field, not only the radio sources, and therefore allow a comparison between the radio sources and the underlying population. In Paper IV it is argued that the stellar mass estimates are only reliable out to \(z\sim 1.5\), and so this is set as an upper limit for the plotted points. As can be seen, the agreement between the Paper IV stellar masses and the consensus masses derived here is very good for the non-AGN, with no significant systematic offset (\(<0.1\) dex) and a median scatter of 0.11 dex.
The performance for AGN is slightly worse, but still good, with a median scatter of 0.23 dex. These results confirm that the Paper IV masses provide reliable measurements for the broader population that can be used in comparison against the consensus masses for the radio source population.

In this paper, no attempt is made to derive uncertainties on the consensus stellar masses for individual sources. Uncertainties arise both due to statistical errors in the individual fits and systematic effects between different SED codes. Each SED code offers an estimate of its statistical uncertainty for each source, and the difference between the stellar masses from different SED codes can be used to gauge the size of the systematic errors. Another source of error is that during the SED fitting the redshift of the source is fixed at the best photometric redshift (unless a spectroscopic redshift is available): uncertainties in the photometric redshift are likely to be a significant contributor to the mass uncertainty for any given source. Instead of calculating uncertainties for individual sources, therefore, the approach taken here is to derive characteristic uncertainties on stellar mass as a function of the galaxy's mass and redshift. The characteristic uncertainties are evaluated in Appendix A, and are found to be typically around 0.1 dex for higher mass sources at \(z<2\), increasing towards higher redshifts and lower masses.

### Consensus SFRs

Estimation of consensus SFRs follows broadly the same principles as those of the stellar masses, in the preferred use of the magphys and bagpipes results for the non-AGN and with the cigale results generally used for the AGN. As would be expected (cf. Pacifici et al. 2023), the agreement in SFR estimates between the different codes is not quite as good as that of stellar masses, but still strong. For non-AGN, the SFR estimates of magphys and bagpipes show systematic differences of less than 0.1 dex, with a median scatter of only 0.14 dex and over 75 per cent of cases agreeing within 0.3 dex. The cigale measurements agree comparably well at large SFRs, but frequently provide higher SFR estimates than either bagpipes or magphys at lower SFRs. agnfitter suffers from a significant systematic offset of, on average, more than 0.3 dex higher SFRs than the other estimators. For the radiative-mode AGN, the two cigale SFR estimations show good agreement with each other (median difference 0.13 dex). Both magphys and bagpipes systematically over-estimate the SFRs of these radiative-mode AGN, by around 0.15 dex on average. Fig. 7 provides a visual illustration of these effects.

To determine the consensus SFRs, like for stellar masses, the outputs from magphys and bagpipes are primarily considered for the non-AGN. The only significant difference in approach arises because of a small proportion of sources (around 9 per cent of all the non-AGN sources, mostly at lower SFRs) for which bagpipes returns an acceptable fit, but the SFR is dramatically below that of magphys and with an uncertainty that can be several orders of magnitude larger than the estimated value. These very low SFRs arise because of the parametric (exponentially-declining) form of the bagpipes SFR history, which can lead to unrealistically-low best-fit SFRs at large ages where the e-folding time is short, but with considerable uncertainty.
For these sources, the cigale SFR estimates are found to broadly agree with the magphys values, with both often within the 1\(\sigma\) confidence interval of the bagpipes fit. Therefore, sources for which the bagpipes fit is deemed to be good, but the uncertainty on the bagpipes SFR estimate is more than 5 times the estimate itself, are treated differently. In these cases, if magphys provides an acceptable fit then the magphys estimate is adopted as the consensus value; if it does not, but the magphys and cigale estimates agree within 0.5 dex then the logarithmic mean of the magphys and cigale values is taken as the consensus value; otherwise, the results are deemed inconsistent and no consensus SFR is derived. Other than these cases, the approach to derive consensus SFRs for the non-AGN exactly matches that for deriving stellar masses. Similarly, for the radiative-mode AGN, the approach for stellar masses using cigale (or occasionally agnfitter) estimates is replicated for the SFRs.

Fig. 7 compares the consensus SFRs against the estimates from each individual code. The spread in derived values between different codes is comparable to that in the analysis of Pacifici et al. (2023). As with stellar masses, no attempt is made to provide a source-by-source uncertainty on the consensus SFR, but Appendix A discusses the typical errors; except for the few per cent of lowest-SFR objects at each redshift (where the uncertainties increase greatly), these can be broadly approximated as \(\Delta(\mathrm{SFR})\approx 0.1(1+z)^{0.5}\) dex.

Figure 6: A comparison of the masses derived by the different SED fitting codes against the final consensus masses for the LoTSS Deep Field sources in ELAIS-N1. magphys, bagpipes and cigale all give broadly consistent results for non-AGN, but differ for the AGN subset, for which the cigale results should be more reliable. agnfitter masses show a small systematic offset compared to the other codes, and more outliers at high mass. The lower right plot examines the masses produced in Paper IV (only out to \(z<1.5\)); these are seen to give consistent results with only slightly larger scatter. This is of interest because these stellar masses were produced for the entire galaxy population in these deep fields, not just the LoTSS-Deep host galaxies.

Figure 7: A comparison of the star-formation rates derived by the different SED fitting codes against the final consensus value, for the LoTSS Deep Field sources in ELAIS-N1. magphys and bagpipes give broadly consistent results for non-AGN; their performance on objects identified as (radiative-mode) AGN is more mixed, but generally reasonable where the fit is not flagged as a bad fit. cigale's SFR estimations for non-AGN generally perform well at higher SFRs (especially with the skirtor AGN model), but over-predict the SFR in some lower-SFR galaxies. The estimated SFRs of objects selected as AGN show a high degree of consistency between the two different cigale runs. agnfitter SFRs show more scatter and a small systematic offset compared to the other codes.

## 6 Identification of radio AGN

As discussed in the introduction, star-forming galaxies show a tight correlation between their radio luminosity and their SFR (see Footnote 2). This relation allows the identification of sources which possess significant radio emission associated with AGN activity, as they will appear offset to larger radio luminosities than would be predicted from their SFR (cf. Delvecchio et al., 2017; Williams et al., 2018; Whittam et al., 2022).
Relationships between SFR and low frequency radio luminosity have been previously derived at relatively low redshifts by Calistro Rivera et al. (2017), Brown et al. (2017), Gurkan et al. (2018) and Wang et al. (2019), and most recently by Smith et al. (2021) using the LoTSS-Deep data in ELAIS-N1. As discussed by Smith et al., in order to determine an accurate relation it is essential to properly account for non-detections, otherwise there is a risk that the derived relation will be dependent on the depth of the radio imaging, with the bias decreasing as the depth of the radio imaging increases. Smith et al. derive their relationship out to \(z\approx 1\) using a near-IR magnitude selected sample, finding \(\log_{10}L_{150\mathrm{MHz}}=22.22+1.06\log_{10}(\mathrm{SFR})\) for the sample as a whole (where \(L_{150\mathrm{MHz}}\) is in units of \(\mathrm{W\,Hz^{-1}}\) and SFR in units of \(M_{\odot}\,\mathrm{yr^{-1}}\)), based on SFRs derived using magphys.

Footnote 2: Note that this assumes that effects such as free-free absorption at low radio frequencies are not important. Schober et al. (2017) estimate that for star-forming galaxies like the Milky Way, free-free absorption is only important below a critical frequency of a few MHz, which is well below the LOFAR observing frequency. For starburst galaxies like Arp 220, however, they estimate a critical frequency of a few hundred MHz; this is potentially relevant, since the LOFAR-detected sources at \(z\sim 2\) have SFRs approaching those of starburst systems, and are observed at rest-frame frequencies of \(\sim 500\,\mathrm{MHz}\). Nevertheless, Calistro Rivera et al. (2017) studied the radio spectral shapes of LOFAR-selected star-forming galaxies, and An et al. (2023) recently extended this analysis to the LoTSS Deep Fields: in both cases, a slight flattening of the median radio spectra was found at the lowest frequencies, from \(\alpha\approx 0.8\) at high frequencies to \(\alpha\approx 0.6\) at LOFAR frequencies. Although this might be evidence for free-free absorption, this change in spectral index only affects the radio luminosity (and hence estimated SFR) by \(\approx 0.1\,\mathrm{dex}\) for an average source. It works in the direction of reducing any radio excess, and thus more securely classifying a source as not having a radio AGN. Therefore, the possible effects of free-free absorption are ignored in this paper.

In this paper, the use of the consensus SFRs, and the extension to higher redshifts, may be expected to lead to small changes in the best-fit relation. A suitable relation is therefore derived using a 'ridgeline' approach. In this approach, the sources are binned into different (narrow) bins in SFR, and within each bin the distribution of radio luminosities of the detected sources is examined. The peak of the distribution is identified as the ridgeline point. Provided that the radio survey is sufficiently deep, then, especially in the presence of a distorted distribution (the star-forming population plus a distribution of radio-excess AGN), this method should provide a more reliable value than the mean or median of the distribution of detected sources.
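A minimal sketch of this ridgeline construction is given below; the bin widths, the minimum number of sources per bin and the histogram-peak estimator are illustrative choices rather than the exact implementation used in this paper.

```python
import numpy as np

def ridgeline_points(log_sfr, log_l150, sfr_bin=0.2, lum_bin=0.1, min_n=20):
    """Peak of the log L150MHz distribution in narrow bins of log SFR.

    log_sfr, log_l150 : arrays of log10(SFR / Msun yr^-1) and
                        log10(L150MHz / W Hz^-1) for the detected sources.
    Returns the bin centres and the corresponding ridgeline luminosities.
    """
    edges = np.arange(log_sfr.min(), log_sfr.max() + sfr_bin, sfr_bin)
    centres, peaks = [], []
    for lo in edges[:-1]:
        sel = (log_sfr >= lo) & (log_sfr < lo + sfr_bin)
        if sel.sum() < min_n:          # skip poorly populated bins
            continue
        lums = log_l150[sel]
        hist, lum_edges = np.histogram(
            lums, bins=np.arange(lums.min(), lums.max() + lum_bin, lum_bin))
        k = np.argmax(hist)
        peaks.append(0.5 * (lum_edges[k] + lum_edges[k + 1]))
        centres.append(lo + 0.5 * sfr_bin)
    return np.array(centres), np.array(peaks)

# A straight-line fit to (centres, peaks), e.g. with np.polyfit(centres, peaks, 1),
# then gives the SFR-L150MHz ridgeline relation.
```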
The radio luminosities and SFRs of the LoTSS-Deep sources are shown in the upper panel of Fig. 8, along with the calculated ridgeline points, which can be well-fitted by the relation

\[\log_{10}(L_{150\mathrm{MHz}}/\mathrm{W\,Hz^{-1}})=22.24+1.08\log_{10}(\mathrm{SFR}/\mathrm{M_{\odot}\,yr^{-1}}) \tag{2}\]

The uncertainty on the ridgeline gradient is \(\pm 0.06\), and the uncertainty on the intercept at \(\log_{10}(\mathrm{SFR})=1.5\) (the median value, where the errors on the gradient and intercept are uncorrelated) is \(\pm 0.07\). To within \(1\sigma\), there is no difference in this relation between those sources classified as radiative-mode AGN or not. The relation derived from the ridgeline is fully consistent with that of Smith et al. (2021), agreeing within 0.1 dex over the full range of star-formation rates probed. The distribution of radio luminosities below the ridgeline can be reasonably well-fitted by a Gaussian distribution of width 0.22 dex; this also holds in different bins of star-formation rate, with the Gaussian width remaining constant (to \(\pm 0.02\) dex) from low to high SFR. The distribution above the ridgeline shows a much more extended tail, as expected.

In ELAIS-N1 and Lockman Hole, radio-excess sources are here defined as those sources with radio luminosities exceeding the ridgeline value by 0.7 dex, corresponding to approximately \(3\sigma\). It should be noted that this limit corresponds to approximately 0.8 dex above the relation of Smith et al. (2021) at high SFR; these authors derived a scatter in their relation of around 0.3 dex at \(\mathrm{SFR}>10M_{\odot}\,\mathrm{yr^{-1}}\) (at lower SFRs they measured lower scatter, but noted that this might be due to the limiting depth of the radio imaging); Cochrane et al. (2023) also derive a similar value for the scatter. Therefore, the radio-excess selection adopted here also broadly corresponds to a \(3\sigma\) excess relative to the Smith et al. relation. In Bootes (where the input photometry was different), it is found that the scatter in the SFR-radio relation increases towards higher redshifts, and adoption of a fixed 0.7 dex cut-off leads to an excess of radio-AGN at higher redshifts compared to the other two fields. To remedy this, in Bootes the radio excess threshold is modified slightly to \((0.7+0.1z)\) dex, which brings the classifications in this field in line with those in ELAIS-N1 and Lockman (cf. Fig. 9).

There is a small population of radio sources with consensus SFRs well below \(0.01M_{\odot}\,\mathrm{yr^{-1}}\). SFRs at this level cannot be accurately estimated by the SED fitting codes, and thus have large associated uncertainties. This makes a radio-excess classification based on the consensus SFR potentially unreliable for these sources. To avoid this issue, these sources were only classified as radio-excess if their radio luminosity exceeded (by 0.7 dex) that expected for a SFR of \(0.01M_{\odot}\,\mathrm{yr^{-1}}\). If their radio luminosity was below that level, but above the radio-excess limit for their estimated consensus SFR, they were deemed to be unclassifiable in terms of radio excess (0.4 per cent of sources). Finally, a small proportion of sources do not reach the radio-excess selection threshold, but are clearly extended or multi-component radio sources, inconsistent with simply being star-forming galaxies.
Those sources which are either multi-component sources associated through the LOFAR Galaxy Zoo effort (Paper III) with a physical size in excess of 80 kpc, or single component sources with a major axis size in excess of 80 kpc and which also exceed the resolved source threshold defined in Shimwell et al. (2019) by at least a factor of 1.5, were deemed to be clearly extended. These sources were added to the radio-excess sample if they were not already included (just under 0.5 per cent of the sample). The lower panels of Fig. 8 show the ratio of measured radio luminosity over that expected from the consensus SFR as a function of redshift (left) and stellar mass (right); the horizontal dashed lines show the expected relation for star-forming galaxies and the radio-excess threshold, and the blue circles indicate again the peak of the distribution at each redshift. It can be seen that there is a weak variation of the population distribution with redshift, but no consistent trend, and the distribution peak never moves more than 0.2 dex (\(<1\sigma\)) from the ridgeline value. Radio-excess sources are found across all redshifts. The apparent gradual decline in the ratio with increasing redshift at \(z>2.5\) may be due to an increasing incompleteness in the classification of radiative-mode AGN at these redshifts (see Sec. 4), leading to an over-estimate in the SFR of some sources. Regarding stellar mass, it is immediately clear from the lower-right panel of Fig. 8 that the proportion of radio-excess sources increases very strongly with mass, in particular for those objects not selected to be radiative-mode AGN. This is the well-known trend that, in the local Universe, the radio-loud AGN fraction shows a very strong mass dependence (e.g. Best et al., 2005; Sabater et al., 2019). Kondapally et al. (2022) use this LoTSS-Deep sample to investigate the cosmic evolution of this trend. Fig. 8 also shows a weak variation of the peak of the distribution of observed-to-predicted radio luminosity with mass, with a consistent trend of higher-mass galaxies having on average a slightly higher radio luminosity for a given SFR. This has been previously seen in the radio luminosity to SFR relation (e.g. Gurkan et al., 2018; Smith et al., 2021), but it remains unclear to what extent this is due to an intrinsic mass-dependence of the amount of radio emission arising from star formation, as opposed to the effect of a contribution from a population of radio-weak AGN, more prevalent at higher stellar masses, that fall below the selection limit for radio-excess sources. Regardless, the variations in Fig. 8 are sufficiently small (in both redshift and stellar mass) that the use of a single SFR-radio relation does not significantly affect the selection of radio-excess sources. ## 7 Final radio source classifications, and dependencies In the previous sections, LoTSS-Deep sources have been identified as either radiative-mode AGN or not, and either radio-excess sources or not, with a small number of sources being unclassifiable in each case. Here, these are combined to derive a final set of source classifications. * Sources which are neither radiative-mode AGN nor radio-excess sources are classified simply as star-forming galaxies (SFGs). Note that this may include some quiescent galaxies (with SFRs below the stellar mass _vs_ SFR main sequence) whose low redshift nevertheless allows the star formation to be detected by LOFAR.
* Sources which are radiative-mode AGN but which do not display a radio excess are radio-quiet AGN (RQAGN; including the radio-quiet quasars). * Sources which are not radiative-mode AGN but do display a radio excess are the population of jet-mode AGN. Traditionally these sources are referred to as low-excitation radio galaxies (LERGs). * Sources which are both radiative-mode AGN and radio-excess sources are sources such as radio-loud quasars (Type I or Type II). These are traditionally referred to as high-excitation radio galaxies (HERGs). * Finally, any source which could not be reliably classified by either of the two criteria was left as unclassified (the combination logic is illustrated schematically below). Table 2 shows the number of sources of each class in each field. As can be seen, the majority population in LoTSS-Deep DR1 is the star-forming galaxies: these comprise just over two-thirds of the total population, rising to over 70 per cent in the deepest field, ELAIS-N1. Radio-quiet AGN contribute nearly 10 per cent of the total, with the two radio-loud classes contributing around 18 per cent between them, mostly as LERGs. Five per cent of the sources are unclassified. Of these, around 3 per cent are the sources without host galaxy identifications or redshifts for which no SED fitting could be carried out, and the remaining 2 per cent are mostly fainter galaxies for which the SED fitting algorithms either did not provide acceptable fits or provided highly inconsistent results. Table 3 provides the first five lines of the classification data for each source in ELAIS-N1, along with the consensus mass and SFR measurements; the full catalogues for each field are provided electronically. More extensive catalogues, including the key outputs of each SED fitting code that were used to derive these, are made available on the LOFAR Surveys website (lofar-surveys.org). Figure 9 shows the distribution of the different classes of source as a function of various properties of the host galaxy. The top panels show the distribution with respect to the 150-MHz flux density: the left panel shows the fraction at a given flux density, and the right panel shows the cumulative fraction above a given flux density. The population is dominated by radio-loud AGN above flux densities of about a mJy. The bulk of these are the LERGs, but with the fraction of HERGs beginning to rise at the highest flux densities, where the coverage of the sample begins to run out due to lack of sky area for these rarer bright sources. This rise of the HERG population is seen even more starkly in the middle left panel, which shows the distribution as a function of radio luminosity, and is in line with expectations from the relative luminosity functions of these two populations (e.g. Best & Heckman, 2012; Best et al., 2014). At lower flux densities (and below 150 MHz luminosities of around \(10^{25}\)W Hz\({}^{-1}\)), star-forming galaxies take over the sample and quickly become the dominant population, accounting for over 90 per cent of sources at the limiting flux density reached in ELAIS-N1 (and more than 75 per cent of the cumulative population above \(S_{150\rm MHz}\approx 100\mu\)Jy).
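A minimal sketch of the combination logic referred to above is given here; it follows the flag conventions of Table 3 (1 = yes, 0 = no, −1 = unclassifiable) and is intended as an illustration rather than the actual pipeline code.

```python
def overall_class(agn_class: int, radio_excess_flag: int) -> str:
    """Combine the radiative-mode AGN flag and the radio-excess flag
    (1 = yes, 0 = no, -1 = unclassifiable) into the final source class."""
    if agn_class == -1 or radio_excess_flag == -1:
        return "Unc"    # unclassified
    if agn_class == 0 and radio_excess_flag == 0:
        return "SFG"    # star-forming galaxy
    if agn_class == 1 and radio_excess_flag == 0:
        return "RQAGN"  # radio-quiet AGN
    if agn_class == 0 and radio_excess_flag == 1:
        return "LERG"   # jet-mode / low-excitation radio galaxy
    return "HERG"       # radiative-mode AGN with a radio excess
```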
The switch between a star-formation-dominated population and a radio-loud AGN dominated population occurs at around \(S_{150\rm MHz}=1.5\) mJy, which is fully consistent with the switch point at higher frequency of \(S_{1.4\rm GHz}\approx 200\mu\)Jy (found by Smolcic et al., 2017) or \(S_{1.4\rm GHz}\approx 250\mu\)Jy (found by Padovani, 2016), considering the typical radio spectral index of these sources. At all flux densities below a few mJy there is a significant population of radio-quiet AGN, accounting for just under 10 per cent of all sources over the 100 \(\mu\)Jy to 1 mJy flux density range. This is slightly lower than the fraction found in observations at higher frequencies: early work by Simpson et al. (2006) suggested that 20 per cent of sources with \(100\,\mu\)Jy \(\lesssim S_{1.4\rm GHz}\lesssim 300\,\mu\)Jy are radio-quiet AGN, while the COSMOS 3GHz work of Smolcic et al. (2017) indicated between 15 and 20 per cent (as determined from the 70 per cent subset of their 'High Luminosity AGN' sample that shows no radio excess). The origin of this difference is not completely clear. It may be related to different implementations of the radio-loud to radio-quiet separation, but more likely is associated with the radio-quiet AGN having a flatter spectral index than star-forming galaxies (e.g. due to a greater proportional contribution of flatter-spectrum core emission) and therefore lesser prominence at the lower frequencies probed by LOFAR. Given the steepness of the radio source counts, a difference \begin{table} \begin{tabular}{c c c c c c} \hline Source classification & ELAIS-N1 & Lockman Hole & Boötes & Total & Percentage \\ \hline Star-forming galaxies & 22720 & 21044 & 11916 & 55680 & 67.9 \\ Radio-quiet AGN & 2779 & 2633 & 2030 & 7442 & 9.1 \\ Low-excitation radio galaxies & 4287 & 5304 & 3158 & 12749 & 15.6 \\ High-excitation radio galaxies & 510 & 710 & 524 & 1744 & 2.1 \\ Unclassified & 1314 & 1471 & 1551 & 4336 & 5.3 \\ \hline Total & 31610 & 31162 & 19179 & 81951 & 100 \\ \hline \end{tabular} \end{table} Table 2: The number of sources of each class in the LoTSS-Deep DR1 dataset. Figure 8: _Top:_ the distribution of radio luminosity versus SFR for LoTSS Deep Field sources in ELAIS-N1, split into those identified as radiative-mode AGN from their SED (red points) and the sources which are not radiative-mode AGN (‘SED non-AGN’; black points). Within narrow bins in SFR, the ‘ridgeline’ points (larger blue circles) indicate the peak of the distribution of radio luminosities. These can be well-fitted by a power-law distribution shown by the solid blue line, which is in broad agreement with literature relations (green lines). _Bottom:_ the ratio of observed radio luminosity to that predicted from the consensus SFR based on the ridgeline fit, versus redshift (left) and stellar mass (right). The horizontal dashed lines represent the expected relation and the radio-excess threshold. Solid blue points in each plot show the peak of the distribution in narrow bins. These always lie within 0.2 dex of the expected relation. Radio-excess sources are found over the full range of redshifts, but predominantly concentrate at high stellar masses.
Figure 9: The fraction of sources of each different class (star-forming galaxies in grey; radio-quiet AGN in purple; low-excitation radio galaxies in blue; high excitation radio galaxies in orange; unclassifiable sources in yellow) as a function of radio flux density (upper panels; left gives fraction at a given flux density, and right gives cumulative fraction above a flux density), radio luminosity (middle left), stellar mass (middle right; for sources with \(z<1.8\) only - see text), optical r-band magnitude (lower left) and redshift (lower right; out to a final bin of \(4<z<6\)). On each plot, the solid line for each class represents the derived fraction, and the shaded region indicates the calculated uncertainty. The open symbols show the values derived from each individual field (square = ELAIS-N1; asterisk = Lockman Hole; diamond = Boötes), where there are at least 5 sources from that field in the given bin, and demonstrate the broad agreement between fields. Note that the rise of the radio-quiet AGN population at the highest stellar masses is probably an artefact of larger mass uncertainties for these sources; see text for details. of only \(\approx\)0.2 in spectral index between star-forming galaxies and radio-quiet AGN would decrease the proportion of radio-quiet AGN in the sample by about a factor of 2; LOFAR studies of radio-quiet quasars provide evidence in support of such flatter spectral indices (e.g. Gloudemans et al., 2021). The additional panels of Fig. 9 show the distribution of source classes as a function of redshift, stellar mass and optical magnitude. Note the strong rise of unclassified sources at \(z<0.1\); low SFRs for these galaxies can also lead to ambiguous radio excesses, while in addition the aperture photometry and aperture corrections used for the LoTSS Deep Field photometry (Paper III) are not optimised for these low redshifts, and resulting errors will affect the SED fitting. At these redshifts, it is in any case better to use the shallower, wider-area LoTSS surveys. All populations are seen over the full range of optical magnitudes. As expected, the LERG population shows increasing importance at higher stellar masses (note that this panel only includes redshifts \(z<1.8\) as mass estimates become increasingly less reliable at higher redshifts). The radio-quiet AGN show a dramatically increasing importance at stellar masses above \(10^{11.5}M_{\odot}\), but this is likely to be an artefact, driven by larger mass uncertainties for these sources due to the potential AGN contributions to their spectra: the number of sources at these very highest masses is relatively low, and so a few sources scattered up to high masses due to wider uncertainties on their masses, or due to errors in the photometric redshifts pushing them to higher redshift (and hence higher luminosity and mass), can artificially dominate the population. Interestingly, star-forming galaxies are seen across the full range of redshifts studied; this indicates that the LoTSS-Deep sample is not only able to study normal star-forming galaxies in the low and moderate redshift Universe, but also to select starbursting galaxies in the early Universe. All of these results are broadly consistent across the three fields (indicated by the open symbols in Fig. 9). In Sec. 
4, the threshold levels for selection of radiative-mode AGN were set slightly differently in Boötes than in the other two fields, based on the typically higher \(f_{\rm AGN}\) values found for the known spectroscopic and X-ray AGN and colour-selected probable AGN. The consistency of the classifications between fields in Fig. 9 gives confidence that this variation in thresholds is indeed appropriate. The remaining variations are consistent with what might be expected from cosmic variance, and indicate the importance of combining the multiple fields in order to overcome these effects, as well as to build a large statistical sample of sources. ## 8 Comparisons with Simulated Sky Models Radio sky simulations provide a valuable tool for predicting the populations of radio sources that will be observed in a given survey. In addition to the planning of future radio surveys (e.g. Norris et al., 2013) or predictions of the parameter constraints achievable with those (e.g. Raccanelli et al., 2012; Harrison et al., 2016), these simulations also provide a means of assessing the completeness of different radio surveys (e.g. Hale et al., 2023) and of generating random samples for clustering analyses (e.g. Siewert et al., 2020). The two most widely used radio sky simulations in the literature are the SKA Design Study (SKADS) Simulated Skies (Wilman et al., 2008) and the more recent Tiered Radio Extragalactic Continuum Simulation (T-RECS; Bonaldi et al., 2019). The starting point for these simulations is the measured luminosity functions of different source populations, and their cosmic evolution, which has typically been measured out to intermediate redshifts. The luminosity functions are then extrapolated to lower luminosities (lower flux densities), evolved out to higher redshifts, and potentially converted to a different observed frequency. Comparison of the predictions of these models against new deep observations such as the LoTSS Deep Fields provides a critical test of the assumptions that go into the radio sky simulations, and an opportunity to revise and improve these. SKADS provides simulated predictions for four different radio source populations: star-forming galaxies, radio-quiet AGN, and two populations of radio-loud AGN. The two radio-loud AGN populations represent a low-luminosity and a high-luminosity component that Wilman et al. (2008) associated with the FRI and FRII morphological sub-populations (Fanaroff & Riley, 1974), but which also map reasonably well onto the LERG and HERG classifications, respectively, used in this paper. Thus, all four radio source populations can be directly compared between the SKADS simulations and the LoTSS-Deep data. The radio-loud AGN population in T-RECS is constructed from luminosity functions for steep- and flat-spectrum radio sources together with BL Lac objects: these do not map onto the radio-AGN subclasses considered here, so comparisons with T-RECS can only be made with the radio-loud AGN population as a whole. T-RECS also includes predictions for SFGs, but does not include a separate radio-quiet AGN population: instead, T-RECS assumes that the radio emission of radio-quiet AGN is dominated by the on-going star-formation and thus that the radio-quiet AGN are encompassed within the star-forming population.
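For the comparisons that follow, the simulated populations have to be matched to the observational classes used in this paper. The sketch below records this bookkeeping; the population label strings are assumptions about the simulated catalogue format, not the actual column values of SKADS or T-RECS.

```python
# Approximate mapping of the simulated populations onto the classes used here.
SKADS_TO_CLASS = {
    "SFG": "SFG",      # star-forming galaxies
    "RQAGN": "RQAGN",  # radio-quiet AGN
    "FRI": "LERG",     # low-luminosity radio-loud AGN, compared against LERGs
    "FRII": "HERG",    # high-luminosity radio-loud AGN, compared against HERGs
}

def trecs_group(population: str) -> str:
    """T-RECS only separates star formation (including radio-quiet AGN) from
    radio-loud AGN, so comparisons with T-RECS use this coarser split."""
    return "SFG+RQAGN" if population == "SFG" else "radio-loud AGN"
```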
\begin{table} \begin{tabular}{c c c c c c c c c c c} \hline \hline Source Name & Radio ID & \(S_{\rm 150MHz}\) & \(z\) & AGN & log\({}_{\rm 10}\)(Mass) & log\({}_{\rm 10}\)(SFR) & Radio excess & Extended & Radio & Overall \\ & & [Jy] & & class & [M\({}_{\odot}\)] & [M\({}_{\odot}\)/yr] & [dex] & & class & class \\ \hline ILTJ155957.58+550052.4 & 0 & 0.000396 & 2.0437 & 0 & 11.62 & 2.22 & 0.31 & 0 & 0 & SFG \\ ILTJ155958.2+5550105.3 & 1 & 0.000736 & 0.6697 & 0 & 11.00 & 1.58 & 0.15 & 0 & 0 & SFG \\ ILTJ155958.68+550534.6 & 2 & 0.000197 & 1.4289 & 0 & 11.58 & 1.16 & 0.79 & 0 & 1 & LERG \\ ILTJ155959.52+545751.0 & 3 & 0.000158 & 1.7777 & 0 & 11.20 & 1.71 & 0.32 & 0 & 0 & SFG \\ ILTJ160000.65+550723.3 & 4 & 0.000196 & 3.6960 & 1 & 11.42 & 2.87 & -0.13 & 0 & 0 & RQAGN \\ \(\cdots\) & \(\cdots\) & \(\cdots\) & \(\cdots\) & \(\cdots\) & \(\cdots\) & \(\cdots\) & \(\cdots\) & \(\cdots\) & \(\cdots\) & \(\cdots\) \\ \hline \end{tabular} \end{table} Table 3: Classification results and consensus measurements for each source. The table shows the first five sources in ELAIS-N1: full catalogues are available electronically. Columns give the full source identifier, the radio ID number, the total 150 MHz flux density (in Jy), the redshift, the final radiative-mode AGN classification (1=AGN, 0=non-AGN, \(-1\)=unclassifiable), the logarithm of the consensus stellar mass (in solar masses), the logarithm of the consensus SFR (in solar masses per year), the radio excess (in dex), a flag to indicate extended radio sources (as defined in Sec. 6; 1=extended, 0=compact), the final radio-AGN classification (1=radio-AGN, 0=no radio excess, \(-1\)=unclassifiable), and the overall classification (SFG=star-forming galaxy; RQAGN=radio-quiet AGN; LERG=low-excitation (jet-mode) radio galaxy; HERG=high-excitation (quasar-mode) radio galaxy; Unc=unclassified). Values of \(-99\) indicate where no measurement is available. For both the SKADS and T-RECS simulations, a predicted source population was extracted over a randomly-located sky area corresponding to each of the three LoTSS Deep Fields. The simulations include sources to well below the flux limits of the observation and so, to replicate the observations, the LoTSS-Deep completeness simulations of Kondapally et al. (2022) and Cochrane et al. (2023) were used to determine the probability that each simulated source would be detected, and the source was randomly included in, or excluded from, the simulated catalogue in accordance with that probability. Figure 10 shows how the resultant simulated samples compare against the LoTSS-Deep data in both flux density (left panels) and redshift (right panels). Note that the small dip in the redshift distribution of all LoTSS-Deep populations over \(1.0<z<1.5\) is due to an aliasing effect in the photometric redshifts, particularly in the ELAIS-N1 and Lockman Hole fields, probably due to the lack of H-band data; this is discussed in more depth in Cochrane et al. (2023), but is not a significant issue for the analysis in the current paper. The upper panels of Figure 10 show the simulation \(vs\) data comparison for a simple split into the two T-RECS source populations: star-forming galaxies plus radio-quiet AGN, against radio-loud AGN (HERGs + LERGs). Note that as well as allowing a comparison against both T-RECS and SKADS, this population split is arguably the most robustly determined in the LoTSS-Deep dataset, as it depends only on the presence or absence of a radio-excess rather than the (more difficult to establish) evidence for a radiative AGN.
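The completeness filtering of the simulated catalogues described above amounts to a simple Monte Carlo acceptance step, sketched below. The completeness curve is assumed to be tabulated as detection probability versus 150 MHz flux density; the function and argument names are illustrative rather than taken from the published completeness simulations.

```python
import numpy as np

def apply_completeness(sim_flux_jy, comp_flux_jy, comp_fraction, seed=0):
    """Randomly keep each simulated source with a probability given by the
    completeness curve of the relevant field, so that the simulated catalogue
    mimics the selection of the observed sample."""
    rng = np.random.default_rng(seed)
    # Interpolate the tabulated completeness curve at each simulated flux.
    p_detect = np.interp(sim_flux_jy, comp_flux_jy, comp_fraction,
                         left=0.0, right=1.0)
    keep = rng.random(len(sim_flux_jy)) < p_detect
    return keep  # boolean mask selecting the 'detected' simulated sources
```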
The upper panels of Figure 10 show that both T-RECS and SKADS describe fairly well the transition between these two populations with decreasing radio flux density. T-RECS also provides an accurate match to the redshift distribution out to redshift \(z\sim 4\), beyond which the simulated source counts fall below those measured in the data; it is not clear whether this is a shortcoming of the simulation, or whether the photometric redshifts of the highest redshift sources become less reliable. The SKADS simulations also match the data reasonably well out to redshift \(z\sim 2\), but thereafter they over-predict the number of radio-loud AGN and under-predict the star-forming galaxy population. The lower panels of Figure 10 provide further analysis of the SKADS simulations, split into the four sub-populations. Here, significant differences are observed between the simulated and observed datasets. First, SKADS underpredicts the number of SFGs by a factor \(\approx 2\) at all redshifts \(z\gtrsim 0.2\). This is a result which has previously been established (e.g. Bonaldi et al., 2016; Smolcic et al., 2017); Hale et al. (2023) use a 'modified SKADS' model in which they double the number of star-forming galaxies. Second, SKADS substantially overpredicts the number of radio-quiet AGN at lower redshifts and lower flux densities compared to the observations. Although it cannot be excluded that this is due to misclassification of faint radio-quiet AGN as star-forming galaxies in the observational data, a more likely explanation is that, as discussed earlier, this is due to an assumed radio spectral index of 0.7 for the radio-quiet AGN; a flatter spectral index (or curved spectral shape due to low-frequency absorption) would lead to a lower prevalence of these sources at the low frequencies of the LoTSS-Deep data. The combination of fewer SFGs and more RQAGN gives rise to the good agreement at low redshifts in the upper panel. For the radio-loud AGN, the difference in the high-redshift number counts comes primarily from an over-prediction of the LERG population; the high-redshift evolution of these sources was unknown at the time of the SKADS simulations, and so was assumed to be flat beyond \(z\sim 0.7\); recent works (e.g. Kondapally et al., 2022) show this to be a reasonable assumption out to \(z\sim 2\), but with indications of a decline between \(2.0<z<2.5\), suggesting a breakdown of the SKADS assumptions. In conclusion, while the SKADS simulations have been very successful in producing simulated radio skies, datasets such as LoTSS-Deep which probe new parameter space are revealing the shortcomings in the understanding of 15 years ago, when those simulations were first produced. The more modern T-RECS simulations provide a better match to the current dataset, but would be enhanced by the explicit inclusion of a radio-quiet AGN population, since the assumption that the radio emission of these sources is entirely produced through star formation is known not to be true (see e.g. Macfarlane et al., 2021). Furthermore, explicit separation of the radio-loud population into HERG and LERG components in T-RECS would be a valuable addition and allow more detailed comparison of the simulation performance. ## 9 Summary The LoTSS Deep Fields are the widest deep radio survey ever undertaken. The LoTSS-Deep first data release, comprising \(\approx\)80,000 radio sources, is already an order of magnitude larger than previous radio source samples at this depth.
The final LoTSS-Deep sample will detect \(>250,000\) radio-selected sources over a \(35\deg^{2}\) region of sky, split into four different fields to largely overcome cosmic variance. Extensive multi-wavelength photometry from the UV to the far-IR in each field facilitates a huge range of scientific exploration. In this paper, a combination of four different SED fitting codes has been applied to the multi-wavelength photometry of each of the LoTSS-Deep DR1 sources. Two of the four codes (cigale and agnfitter) include an AGN component in their SED modelling, and these offer an estimate of the AGN contribution to the overall galaxy SED. The other two codes (magphys and bagpipes) do not include AGN components, but offer more comprehensive coverage of the parameter space of the stellar component, and therefore are able to provide more accurate results for galaxies without AGN contributions. By combining the AGN fractional contributions estimated by cigale and agnfitter with the relative fitting ability of these two codes compared against magphys and bagpipes, those galaxies with an AGN contribution to their SED are identified. Consensus stellar masses and star-formation rates are determined for each galaxy. For the galaxies without AGN contributions, these are generally based on the magphys and bagpipes results, which show excellent overall agreement with each other. For those which do show an AGN contribution to their spectra, the cigale results are primarily adopted, as cigale is shown to provide more reliable estimates than agnfitter. The consensus star-formation rates are used to determine a relationship between 150 MHz radio luminosity and star-formation rate, using a 'ridgeline' approach to minimise bias from both radio selection effects and weak radio-AGN contributions. The determined relation is \(\log_{10}L_{150\rm MHz}=22.24+1.08\log_{10}(\rm SFR)\), where \(L_{150\rm MHz}\) is in units of W Hz\({}^{-1}\) and SFR in units of \(M_{\odot}\) yr\({}^{-1}\). This is in very good agreement with previous literature studies. Radio-excess sources are then identified as those sources which show at least 0.7 dex (corresponding to \(\approx 3\sigma\)) more radio emission than would be expected based on the star-formation rate. Using these results, the LoTSS Deep Field sources are then classified into four classes: (i) star-forming galaxies, which show neither any evidence for an AGN in their optical/IR SED nor a radio excess; (ii) radio-quiet AGN, which do have an AGN contribution to their optical/IR SED, but show no radio excess; (iii) low-excitation radio galaxies (jet-mode radio-AGN), which show a radio excess but no optical/IR AGN signatures; (iv) high-excitation radio galaxies, which show both AGN emission in their optical/IR SED and a radio excess. Less than 5 per cent of the sources are unable to be classified. Overall, over two-thirds of the sources in the LoTSS Deep Fields are star-forming galaxies, around 16 per cent are LERGs, just under 10 per cent are radio-quiet AGN, and 2 per cent are HERGs. The three LoTSS Deep Fields show strong agreement in their source populations, despite significant differences in the input multi-wavelength photometric data. The star-forming galaxies dominate the population below flux densities of \(S_{\rm 150MHz}\approx 1\) mJy, accounting for \(\approx\)90 per cent of the sources close to the flux limit of the deepest field, \(S_{\rm 150MHz}\lesssim 100\mu\)Jy.
In terms of luminosity, the star-forming galaxies become the largest population below \(L_{\rm 150MHz}\approx 10^{25}\)W Hz\({}^{-1}\). At higher flux densities, and higher luminosities, the LERGs are the dominant population. The proportion of HERGs begins to rise significantly at the very highest flux densities and luminosities, but the LoTSS Deep Fields do not cover enough sky area to probe the regime where these become the dominant population. Star-forming galaxies are observed across all redshifts, ranging from normal star-forming galaxies in the nearby Universe to extreme starbursting systems at \(z>4\). They are also observed across a wide range of optical magnitudes and stellar masses, peaking at around \(10^{10.5}\) solar masses, typical of galaxies towards the upper end of the star-forming main sequence. The proportion of radio-quiet AGN rises noticeably towards higher redshifts; it also rises sharply towards the highest stellar masses, but this is likely to be an artefact of the steep stellar mass function coupled with larger uncertainties on the stellar masses of this population. The LERG population reaches its peak importance at redshifts 1 to 3; however, the proportion of LERGs is smaller than that of the star-forming galaxies at all redshifts, stellar masses and optical magnitudes. The observed populations are compared against the predictions of the SKADS and T-RECS radio sky simulations. SKADS is shown to underpredict the star-forming galaxy population by a factor \(\approx 2\) across all redshifts. It over-predicts the proportion of radio-quiet AGN in the sample. This is likely to be due to the assumption of a radio spectral index of \(\alpha=0.7\) for these sources: a flatter spectral index, as indicated by recent LOFAR observations of radio-quiet quasars, would reduce the prevalence of these sources in these low-frequency observations. Finally, SKADS over-predicts the numbers of LERGs at redshifts \(z>2\), as it does not account for the negative cosmic evolution of this population at high redshift beginning to be observed in the latest datasets. Figure 10: A comparison of the radio source population fractions as a function of 150 MHz flux density (left panels) and the redshift distribution of radio sources (right panels) between the LoTSS-Deep data (solid lines and shaded regions) and the simulated sky predictions from SKADS (Wilman et al., 2008, dashed lines) and T-RECS (Bonaldi et al., 2019, dot-dash lines). The upper panels show the populations split just into star-forming galaxies plus radio quiet AGN (blue) versus radio-loud AGN (green), which can be compared against both SKADS and T-RECS simulations. The lower panels compare the four sub-populations against the SKADS simulation predictions; note that the separation of the two SKADS radio-loud classes does not map precisely onto the HERG/LERG classification used in this paper, although it is reasonably similar (see text). T-RECS provides a good match to the star-forming and radio-loud AGN populations, but its lack of a separate radio-quiet AGN population is a significant shortcoming. The classifications, stellar masses and SFRs derived in this paper form a vital input to many other studies using the LoTSS Deep Fields first data release (Smith et al., 2021; Bonato et al., 2021; Kondapally et al., 2022; McCheyne et al., 2022; Mingo et al., 2022; Cochrane et al., 2023, and others), and the techniques developed to derive these can be applied to future data releases of the LoTSS Deep Fields.
Many advances continue to be made in the LoTSS Deep Fields that, in addition to new deeper radio data, will improve classifications still further. Over the next 5 years, the WEAVE-LOFAR survey (Smith et al., 2016) will obtain around a million optical spectra of LOFAR sources, including all sources detected in the LoTSS Deep Fields, using the new William Herschel Telescope (WHT) Enhanced Area Velocity Explorer (WEAVE) multi-object spectrograph (Jin et al., 2023). WEAVE-LOFAR will provide spectroscopic redshifts for the vast majority of the star-forming galaxies, radio-quiet AGN and HERGs (especially at lower redshifts) due to their strong emission lines, removing one of the largest uncertainties in the SED fitting. It may be possible to obtain spectroscopic redshifts for LERGs from weaker lines or continuum features, and even where this is not the case, the confirmed absence of strong emission lines and AGN features will add confidence to the reliability of the photometric redshifts. For many sources, WEAVE-LOFAR will also improve source classifications through either emission line diagnostics, or emission line to radio flux ratios (cf. Best & Heckman, 2012, at lower redshifts). Future imaging of these fields at 0.3-arcsec resolution, by including the international LOFAR baselines (cf. Morabito et al., 2022; Swejen et al., 2022), will further improve source classification by allowing compact radio cores (AGN), kpc-scale star-forming regions, and small-scale core-jet radio sources to be distinguished by their radio morphology in these fields (Morabito et al., 2022). A comparison between the SED-determined classifications and those from high resolution radio morphology will be very interesting. The final LoTSS-Deep sample, imaged with sub-arcsec radio resolution and coupled with high-resolution optical spectroscopy for each source, will represent an extremely powerful resource for studies of the evolution of galaxies and AGN. ## Acknowledgements PNB, JS and RK are grateful for support from the UK STFC via grant ST/R000972/1 and ST/V000594/1. RK acknowledges support from an STFC studentship via grant ST/R504737/1. MJH and DJBS acknowledge support from STFC via grant ST/V000624/1. BM acknowledges support from STFC under grants ST/R00109X/1, ST/R000794/1, and ST/T000295/1. WLW acknowledges support from the CAS-NWO programme for radio astronomy with project number 629.001.024, which is financed by the Netherlands Organisation for Scientific Research (NWO). KJD acknowledges funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No. 892117 (HIZRAD). CLH acknowledges support from the Leverhulme Trust through an Early Career Research Fellowship. KM is supported by the Polish National Science Centre grant UMO-2018/30/E/ST9/00082. MB and IP acknowledge support from INAF under the SKA/CTA PRIN 'FORECaST' and the PRIN MAIN STREAM 'SauROS' projects. MB also acknowledges support from the Ministero degli Affari Esteri e della Cooperazione Internazionale - Direzione Generale per la Promozione del Sistema Paese Pogetto di Grande Rilevanza ZA18GR02. MJJ acknowledges support from the Oxford Hinze Centre for Astrophysical Surveys. LKM is grateful for support from the UKRI Future Leaders Fellowship (grant MR/T042842/1). RJW acknowledges support from the VIDI research programme with project number 639.042.729, which is financed by the Netherlands Organisation for Scientific Research (NWO). 
We thank the anonymous referee for helpful comments. This paper is based on data obtained with the International LOFAR Telescope (ILT) under project codes LC0_019, LC2_024, LC2_038, LC3_008, LC4_008, LC4_034 and LC10_012. LOFAR (van Haarlen et al., 2013) is the Low Frequency Array designed and constructed by ASTRON. It has observing, data processing, and data storage facilities in several countries, that are owned by various parties (each with their own funding sources), and that are collectively operated by the ILT foundation under a joint scientific policy. The ILT resources have benefited from the following recent major funding sources: CNRS-INSU, Observatoire de Paris and Universite d'Orleans, France; BMBF, MIWF-NRW, MPG, Germany; Science Foundation Ireland (SFI), Department of Business, Enterprise and Innovation (DBEI), Ireland; NWO, The Netherlands; The Science and Technology Facilities Council, UK; Ministry of Science and Higher Education, Poland. This research made use of the LOFAR-UK computing facility located at the University of Hertfordshire and supported by STFC [ST/P000096/11]. For the purpose of open access, the author has applied a Creative Commons Attribution (CC BY) licence to any Author Accepted Manuscript version of this paper. ## Data Availability The data used in this paper come from the LoTSS Deep Fields Data Release 1. The radio images and radio catalogues are presented by Tasse et al. (2021) and Sabater et al. (2021), and were made publicly available through both the Centre de Donnees astronomiques de Strasbourg (CDS) and through the LOFAR Surveys website at [https://lofar-surveys.org/deepfields.html](https://lofar-surveys.org/deepfields.html). The multi-wavelength photometric catalogues and photometric redshifts come from Kondapally et al. (2021) and Duncan et al. (2021) respectively, both of which are also available through CDS and the LOFAR Surveys website. For each field, a table of classifications, stellar masses and SFRs is made available electronically as part of this paper. Furthermore, the adapted input photometric catalogue developed in Sec. 3.3 for the SED fitting, and a table of the key SED fitting results from Secs. 4, 5 and 6 have been made available on [https://lofar-surveys.org/deepfields.html](https://lofar-surveys.org/deepfields.html). More extensive SED fitting results from each code can be made available upon reasonable request to the corresponding author.
2310.18217
Runtime Resolution of Feature Interactions through Adaptive Requirement Weakening
The feature interaction problem occurs when two or more independently developed components interact with each other in unanticipated ways, resulting in undesirable system behaviors. Feature interaction problems remain a challenge for emerging domains in cyber-physical systems (CPS), such as the Internet of Things and autonomous drones. Existing techniques for resolving feature interactions take a "winner-takes-all" approach, where one out of the conflicting features is selected as the most desirable one, and the rest are disabled. However, when multiple of the conflicting features fulfill important system requirements, being forced to select one of them can result in an undesirable system outcome. In this paper, we propose a new resolution approach that allows all of the conflicting features to continue to partially fulfill their requirements during the resolution process. In particular, our approach leverages the idea of adaptive requirement weakening, which involves one or more features temporarily weakening their level of performance in order to co-exist with the other features in a consistent manner. Given feature requirements specified in Signal Temporal Logic (STL), we propose an automated method and a runtime architecture for automatically weakening the requirements to resolve a conflict. We demonstrate our approach through case studies on feature interactions in autonomous drones.
Simon Chu, Emma Shedden, Changjian Zhang, Rômulo Meira-Góes, Gabriel A. Moreno, David Garlan, Eunsuk Kang
2023-10-27T15:45:30Z
http://arxiv.org/abs/2310.18217v1
# Runtime Resolution of Feature Interactions through Adaptive Requirement Weakening ###### Abstract The _feature interaction problem_ occurs when two or more independently developed components interact with each other in unanticipated ways, resulting in undesirable system behaviors. Feature interaction problems remain a challenge for emerging domains in cyber-physical systems (CPS), such as the Internet of Things and autonomous drones. Existing techniques for resolving feature interactions take a "winner-takes-all" approach, where one out of the conflicting features is selected as the most desirable one, and the rest are disabled. However, when multiple of the conflicting features fulfill important system requirements, being forced to select one of them can result in an undesirable system outcome. In this paper, we propose a new resolution approach that allows all of the conflicting features to continue to partially fulfill their requirements during the resolution process. In particular, our approach leverages the idea of _adaptive requirement weakening_, which involves one or more features temporarily _weakening_ their level of performance in order to co-exist with the other features in a consistent manner. Given feature requirements specified in Signal Temporal Logic (STL), we propose an automated method and a runtime architecture for automatically weakening the requirements to resolve a conflict. We demonstrate our approach through case studies on feature interactions in autonomous drones. ## I Introduction Modern software systems are often constructed by composing a set of independently developed components or _features_, each of which is designed to achieve a particular objective or a _requirement_. For instance, a typical automotive system contains an array of software features that are designed to ensure vehicle safety under different circumstances, such as emergency braking and lane-keeping assist. Sometimes, these features can interact with each other in unanticipated ways, resulting in undesirable system behavior. This type of problem, also called the _feature interactions problem_, has been long studied by the software engineering community [1, 2, 3], but remains a major challenge, especially in emerging domains such as the Internet of Things and autonomous systems [4, 5, 6, 7]. There are two main aspects of the feature interactions problem: (1) _detection_ of a possible conflict between features and (2) its _resolution_. In this paper, we focus on the resolution of feature interactions at _run-time_. Existing approaches to resolution leverage some notion of what it means for one feature to be more _desirable_ than others; then when a conflict arises, the most desirable out of the conflicting features is selected as the one that is ultimately executed by the system (and actions from the rest are discarded). One such common notion is based on user-defined _priorities_ or _precedent list_[8, 9, 10, 11]. Other recent approaches include _variable-specific_ resolution (where a conflict between features that modify the same variable is resolved by selecting the action that is considered safest [7, 12]) and _property-based_ resolution (where the feature that is most likely to satisfy a given system requirement is selected [13]). 
These "winner-takes-all" approaches, however, share one major drawback: When multiple of the conflicting features fulfill important system requirements, being forced to select only one of them leads to an outcome that is undesirable from the developer's perspective. For example, when a pair of conflicting features in a vehicle are both designed to perform critical safety functions (e.g., maneuvering around an obstacle while staying within the lane), discarding one or the other might bring the system into an unsafe state in either case. To overcome this challenge, we propose a new resolution approach that allows the conflicting features to continue to (partially) fulfill their requirements during the resolution process. The key idea behind this approach is that of _adaptive requirement weakening_: When a feature conflicts with another, it may be acceptable to temporarily "compromise" the level of its functionality, by weakening the requirement that it is designed to achieve. Consider a pair of features, \(F_{1}\) and \(F_{2}\), that are designed to fulfill requirements \(R_{1}\) and \(R_{2}\), respectively. When in presence of each other, there may be scenarios in which both requirements cannot be fulfilled simultaneously (i.e., \(F_{1}\oplus F_{2}\not\models R_{1}\wedge R_{2}\)). To resolve this conflict, instead of disabling \(F_{1}\) or \(F_{2}\), our approach involves weakening one or more of the given feature requirements (e.g., from \(R_{1}\) to \(R^{\prime}_{1}\)) such that the features are able to function and co-exist with each other in a consistent manner (e.g., \(F_{1}\oplus F_{2}\models R^{\prime}_{1}\wedge R^{\prime}_{2}\)). In addition, we say that our approach is _adaptive_, since the degree to which one or more requirements are weakened depends on the particular environmental context in which a conflict arises. In this paper, we present a realization of this approach in a class of systems called _cyber-physical systems_ (CPS), where software features are used to monitor and control one or more physical entities in the environment [14]. The requirements of individual features are specified using a notation called _Signal Temporal Logic (STL)_[15], which is particularly well-suited for describing continuous-domain and time-sensitive behaviors of CPS. Based on the semantics of STL [15], we provide (1) a formal definition of what it means to _weaken_ a requirement, (2) a method for automatically transforming a pair of conflicting requirements, \(R_{1}\) and \(R_{2}\), into weakened, consistent versions, \(R_{1}^{\prime}\) and \(R_{2}^{\prime}\), and (3) a runtime architecture that leverages this method to dynamically modify the behavior of the components and resolves the conflict. Weakening the requirement of a feature, however, involves degrading the level of its functionality and can reduce the overall utility of the system. Thus, an ideal method would weaken the requirements no more than by a _minimal_ degree that is sufficient to resolve the given conflict. Finding such _minimal weakenings_, however, is a challenging problem, since in general, the space of weakening candidates for a given requirement \(R\) can be enormous, if not infinite. In particular, we demonstrate how this problem can be formulated as an instance of _mixed-integer linear programming (MILP)_[16], where the goal is to synthesize weakened requirements that no longer conflict with each other while being as close to the original requirements as possible. 
We have built a prototype implementation of our runtime resolution approach on top of PX4 Autopilot [17], an open-source autopilot software used in consumer and industrial drones. We have evaluated our approach using four different (possibly conflicting) features in an autonomous drone under a wide range of simulated scenarios. Our evaluation shows that our weakening-based approach is effective at resolving conflicts while allowing the features to continue to satisfy the weakened versions of their requirements. The contributions of this paper are as follows: * A theoretical foundation for the resolution of feature interactions using STL-based requirement weakening (Section IV) * A runtime architecture that leverages requirement weakening to resolve conflicts (Section V), * An approach for finding minimal weakening through translation into MILP (Section V-B), and; * An implementation of the weakening-based resolver (Section VI) and an evaluation on case studies involving autonomous drone features (Section VII). **Relevance to SEAMS**. Our approach can be regarded as performing a type of self-adaptation, as it involves dynamically changing the behavior of components to manage conflict scenarios that are difficult to predict or resolve at the design time. As opposed to the typical control loop used in self-adaptive systems (e.g., MAPE-K [18]), which adapts the system asynchronously, our approach is synchronous with the control loop of the CPS, as later described in Section V, Fig. 2. ## II Motivating Example Consider an autonomous organ delivery drone attempting to deliver an organ from a donor in _Hospital B_ to a recipient in _Hospital A_, inspired by the example from [19]. The drone contains two features: (1) the _delivery path planner_, which ensures the organ is delivered to the receiver's hospital in the most efficient manner possible, and (2) the _safe landing enforcer_, which ensures that the battery on the drone has enough charge to safely make it to the nearest land. Under normal conditions, the _delivery path planner_ will always be active, generating an action (\(a_{deliver}\)) to move the drone to its destination, while the _safe landing enforcer_ remains off by default unless triggered upon sensing a low battery level. Imagine a scenario where the drone encounters unexpected turbulence mid-flight, causing the battery to discharge much faster. When the battery dips below a certain threshold, the safe landing enforcer is activated, which then generates a safe landing action (\(a_{land}\)) to direct the drone to the nearest land. Since the delivery path planner is unaware of the landing enforcer, it continues to generate \(a_{deliver}\), which results in a conflict between the two features, as depicted in Figure 1. Existing methodsOne possible approach to resolving this conflict is to leverage a user-defined list of priorities among the features. The drone operator, for example, may designate the safe landing feature as having the highest priority, and the flight controller could be programmed to disregard the actions from all other features, including the delivery path planner. This approach to resolution results in system behaviors where the requirement of the highest-ranked feature (i.e., safe landing) is satisfied while the others are disregarded; in this example, choosing to land the drone may result in the organ failing to be delivered before its expiry time. 
This type of "winner-takes-all" approach, however, may not be suitable in situations where such a strict ordering among features does not exist, or all of the conflicting features play a critical role in maintaining the system's safety and performance. For instance, while landing the drone safely before running out of battery is certainly important, giving up an organ in the process is also a highly undesirable outcome, as the life of a patient may depend on its timely delivery. Proposed methodOur approach, in comparison, attempts to resolve the conflict in a way that satisfies the requirements of both conflicting features. The key idea is Fig. 1: A possible conflict in an organ delivery drone. that for certain types of requirements in CPS, it may be acceptable to temporarily compromise the degree to which the system satisfies them; i.e., satisfy a weaker version of an original requirement. This notion of requirement satisfaction, also termed _satisficing_[20], can enable a resolution approach that involves relaxing some of the requirements but without entirely giving up on any one of them. For example, suppose that the requirements for the safe landing and delivery planning features are as follows: \(R_{land}\)_: If the battery threshold falls below 10%, the drone should land on the nearest land. \(R_{deliver}\): The drone should fly at a fast-enough speed to reach the destination before the delivery time._ During resolution, one or both of these requirements can be weakened. For example, the requirement for the safe landing feature may be weakened by lowering the threshold that triggers the drone to find a landing spot (e.g., from 10% to 5%). With the new weakened requirement (\(R_{land}\to R^{\prime}_{land}\)), the behavior of the two features becomes consistent again (i.e., \(R^{\prime}_{land}\wedge R_{delivery}\) is satisfiable). Intuitively, this amounts to delaying the safe landing feature in order to allow the drone to complete its organ delivery mission. As a trade-off, the safe landing feature has compromised the level of safety that it originally promised, since there is now an increased risk that the drone may run out of battery before landing. Alternatively, the conflict could be resolved by weakening both requirements. For example, \(R_{delivery}\) may be weakened by reducing the speed of the drone, to allow the battery to be depleted at a slower rate than at the original speed. This would cause a delay in organ delivery, but it would also enable the drone to complete its delivery while weakening the battery threshold from 10% to 8% only. In either case, this weakening-based approach results in an arguably more desirable system outcome than the priority-based method, since both the requirements of the features can be satisfied (at the cost of temporarily compromising their optimal functionality). ChallengesIn general, there may be a large number of ways to resolve a conflict using this approach, weakening one or more requirements by different amounts. Since weakening involves degrading the functionality of the features, an ideal resolution process would involve weakening the requirements no more than needed. At the same time, the system operator may also wish to place a harder constraint on the maximum amount by which a requirement can be weakened (e.g., for \(R_{land}\), the threshold cannot be set below 5%, since that might compromise the safety of the battery beyond what is acceptable). 
We later show (1) how this type of weakening can be formally realized over STL and (2) an approach that uses a MILP solver to generate minimal weakenings. ## III Preliminaries SignalsIn our approach, the behavior of CPS is modeled by real-valued continuous-time _signals_. Formally, a signal \(s\) is a function \(\mathbf{s}:T\to D\) mapping from a time domain, \(T\subseteq\mathbb{R}_{\geq 0}\), to a tuple of \(k\) real numbers, \(D\subseteq\mathbb{R}^{k}\). Intuitively, the value of a signal \(\mathbf{s}(t)=(v_{1},\ldots,v_{k})\) represents different state variables of the system at time \(t\); (e.g., \(v_{1}\) might represent the altitude of the drone). Signal Temporal Logic (STL)STL extends linear temporal logic (LTL) [21] for specifying the time-varying behavior of a system in terms of signals. The basic unit of a formula in STL is a signal predicate in the form of \(f(\mathbf{s}(t))>0\), where \(f\) is a function from \(D\) to \(\mathbb{R}\); i.e., the predicate is true if and only if \(f(\mathbf{s}(t))\) is greater than zero. Then, the syntax of an STL formula \(\varphi\) is defined as: \[\varphi:=\top\mid f(\mathbf{s}(t))>0\mid\neg\varphi\mid\varphi_{1}\wedge \varphi_{2}\mid\varphi_{1}\mathcal{U}_{[a,b]}\varphi_{2}\] where \(\top\) is a Boolean \(true\) constant, \(a,b\in\mathbb{R}\) and \(a<b\). The _until_ operator \(\varphi_{1}\mathcal{U}_{[a,b]}\varphi_{2}\) means that \(\varphi_{1}\) must hold until \(\varphi_{2}\) becomes true within a time interval \([a,b]\). The until operator can be used to define two other important temporal operators: _eventually_ (\(\Diamond_{[a,b]}\varphi:=True\ \mathcal{U}_{[a,b]}\varphi\)) and _always_ (\(\Box_{[a,b]}\varphi:=\neg\Diamond_{[a,b]}\neg\varphi\)). RobustnessTypically, the semantics of temporal logic such as LTL is based on a _binary_ notion of formula satisfaction (i.e., formula \(\varphi\) is either satisfied or violated by the system). Due to its signal-based nature, STL also supports a _quantitative_ notion of satisfaction, which allows reasoning about how "close" or "far" the system is from satisfying or violating a property. This quantitative measure is also called the _robustness_ of satisfaction. Informally, the robustness of signal \(\mathbf{s}\) with respect to formula \(\varphi\) at time \(t\), denoted by \(\rho(\varphi,\mathbf{s},t)\), represents the smallest difference between the actual signal value and the threshold at which the system violates \(\varphi\). For example, if the property \(\varphi\) says that "the drone should maintain an altitude of at least 5.0 meters," then \(\rho(\varphi,\mathbf{s},t)\) represents how close to 5.0 meters the drone maintains its altitude. Formally, robustness is defined over STL formulas as follows: \[\rho(f(\mathbf{s}(t))>0,\mathbf{s},t) \equiv f(\mathbf{s}(t))\] \[\rho(\neg\varphi,\mathbf{s},t) \equiv-\rho(\varphi,\mathbf{s},t)\] \[\rho(\varphi_{1}\wedge\varphi_{2},s,t) \equiv\min\{\rho(\varphi_{1},\mathbf{s},t),\rho(\varphi_{2}, \mathbf{s},t)\}\] \[\rho(\Diamond_{[a,b]}\varphi,\mathbf{s},t) \equiv\sup_{t_{1}\in[t+a,t+b]}\rho(\varphi,\mathbf{s},t_{1})\] \[\rho(\Box_{[a,b]}\varphi,\mathbf{s},t) \equiv\inf_{t_{1}\in[t+a,t+b]}\rho(\varphi,\mathbf{s},t_{1})\] where \(\inf_{x\in X}f(x)\) is the greatest lower bound of some function \(f:X\rightarrow\mathbb{R}\) (and \(\sup\) the least upper bound). The robustness of satisfying predicate \(f(\mathbf{s}(t))>0\) captures how close signal \(\mathbf{s}\) at time \(t\) is above or below zero. 
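As a concrete illustration of these definitions, the short Python sketch below evaluates the robustness of a predicate and of the bounded always/eventually operators over a discretely sampled signal. This is a discrete-time approximation of the continuous-time semantics above, and the altitude values anticipate the example discussed next.

```python
def rho_pred(f, s, t):
    """Robustness of the predicate f(s(t)) > 0 at time index t."""
    return f(s[t])

def rho_always(f, s, t, a, b):
    """Robustness of Box_[a,b](f(s) > 0): the worst case over the window."""
    return min(f(s[k]) for k in range(t + a, t + b + 1))

def rho_eventually(f, s, t, a, b):
    """Robustness of Diamond_[a,b](f(s) > 0): the best case over the window."""
    return max(f(s[k]) for k in range(t + a, t + b + 1))

# Altitude samples 6.0, 3.0, 5.5 m at t = 0, 1, 2, checked against alt - 5 > 0.
alt = [6.0, 3.0, 5.5]
f = lambda v: v - 5.0
print(rho_pred(f, alt, 0))              # 1.0
print(rho_always(f, alt, 0, 0, 2))      # -2.0: the requirement is violated
print(rho_eventually(f, alt, 0, 0, 2))  # 1.0
```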
For example, consider formula \(\varphi\equiv alt(t)-5>0\), capturing the property that "the drone altitude is at least 5.0 meters." If, at time \(t\), the altitude signal is \(alt(t)=10\) meters (i.e., \(5.0\) meters above the required altitude), \(\rho(\varphi,\mathbf{s},t)\) is computed as \(5\). On the other hand, robustness \(\rho(\Box\varphi,\mathbf{s},t)\) describes the point at which the system is furthest away from satisfying \(\varphi\). For instance, consider property \(\phi\equiv\Box_{[0,2]}(alt(t)-5>0)\), which says that the drone must maintain a minimum altitude of 5.0 meters for interval \(t=[0,2]\). Suppose that the system evolves to generate signal \(\mathbf{s}\) with the altitude of 6.0, 3.0, 5.5 meters at \(t=0,1,2\), respectively; then, \(\rho(\phi,\mathbf{s},0)=\rho((alt(t)-5>0),\mathbf{s},1)=-2.0\) (i.e., the system _violates_ \(\phi\) by the robustness value of 2.0). ## IV Requirement Weakening We present an extension to STL that supports the systematic weakening of requirements. It enables us to specify the maximum extent of weakening allowed for a requirement and to quantitatively measure the degree of weakening, which later plays an important role in ensuring that requirements are weakened no more than needed to resolve a conflict. The extension is built upon the observation that the robustness value of an instantiated STL formula can be increased by changing the atomic proposition, or by shortening or enlarging the time intervals of the temporal operators. This increase in robustness leads to a less restrictive requirement, which is easier to satisfy than the original one. ### _weakSTL: STL with Weakening_ We propose _weakSTL_, an extension to STL with weakening semantics. _weakSTL_ introduces additional parameters to atomic propositions and temporal operators to indicate the range of signal values and time intervals allowed for weakening. The syntax of a _weakSTL_ formula is defined as: \[\varphi\ := \top\ \ |\ \ f_{p}(\mathbf{s}(t))>0\ \ |\ \ \neg\varphi\ \ |\ \ \varphi\wedge \psi\ \ |\ \ \varphi\vee\psi\ \ |\] \[\
The degree of weakening between the weaker and initial requirements is measured as \(\Delta(\varphi_{0},\varphi_{\theta})=\Delta(\phi,\varphi_{\theta})=1.0-(-2.0) =3.0\). ## V Runtime Resolution Architecture An overview of our proposed runtime architecture for weakening-based resolution is shown in Figure 2. We assume that each feature in our system periodically observes the state of the environment (through one or more sensors) and generate a command to an actuator in order to influence the state. For example, the safe landing feature from our running example periodically monitors the state of the battery (which is a physical component and is thus considered part of the environment for the software system). If the battery level drops below a safe threshold, the landing feature generates a command to direct the drone to land on the nearest land. There are two major components in the proposed architecture: (1) the _detector_, which detects possible conflicts between the features, and (2) the _resolver_, which resolves a possible conflict by generating a new set of actions that are consistent with each other. Since the focus of this paper is on resolution, for detection, we adopt the approach proposed by [7] and [12], where a pair of features are considered to be in conflict if they generate actions that modify an overlapping part of the environment (e.g., the safe landing and delivery path planning features both affect the direction of the drone's next movement and thus are in conflict). When triggered by the detector, the resolver takes three types of inputs: (1) a conflict, represented by a set of conflicting features, (2) a set of requirements for the conflicting features, and (3) an _environment model_, which describes how the state of the environment evolves based on the action of the system (explained further below). Then, the resolver performs two steps: (1) _weakening_ of one or more of the given requirements, and (2) produce a set of actions of the features that adhere to the weakened requirements (and thus, non-conflicting with each other). In the subsequent sections, we further explain (1) how an environment model is used to predict the behavior of a system given an action and (2) how the weakening and resolution steps by the resolver are carried out using a MILP solver. ### _Environment Model_ Given a set of feature requirements, \(R_{1},...,R_{k}\), the goal of the resolution is to find weaker versions of one or more of them, \(R^{\prime}_{1},...,R^{\prime}_{k}\), such that they are consistent (i.e., it is possible to satisfy \(R^{\prime}_{1}\wedge...\wedge R^{\prime}_{k}\)). In the context of STL, checking the satisfaction of weakened requirement \(R^{\prime}\) involves evaluating it over some signal \(\mathbf{s}\) that represents the possible future states of the system _if_ the component behaved according to \(R^{\prime}\). In the proposed framework, the environment model plays the role of generating such a _predictive signal_. More specifically, the environment model is represented as transition system \(T=(\mathcal{Q},\mathcal{A},\delta,\mathcal{Q}_{i})\), where: * \(\mathcal{Q}\subseteq\mathbb{R}^{k}\) is the set of environment states. Each state is a particular combination of values for signal variables, represented as a k-dimensional tuple; \(q=(v_{1},...v_{k})\in\mathcal{Q}\). * \(\mathcal{A}\) is the set of actuator actions. 
* \(\delta:\mathcal{Q}\times\mathcal{A}\rightarrow\mathcal{Q}\) is the transition function that captures how the system moves from one state to another by performing an action. * \(\mathcal{Q}_{i}\) is the set of initial states. For example, the environment model for the drone example may capture the (x, y, z) location of the drone, its velocity, as well as the amount of remaining battery. The location of the drone changes during each transition depending on the current velocity, which, in turn, may be modified by a system action that accelerates or decelerates the drone. Similarly, the battery level also can be modeled as decreasing at a steady rate (\(drain\_rate\)) while the drone is in movement: \[q^{\prime}=\delta(q,a)\] \[q^{\prime}.battery\_level=q.battery\_level-drain\_rate\] Then, given a sequence of actions \(a_{1},a_{2},...,a_{n}\) and the current state \(q\), the environment model can be executed over these actions to generate a corresponding state sequence, \(q_{0},q_{1},...q_{n}\), which can then be formed into predictive signal \(\mathbf{s}\). Our approach does not prescribe a particular notation for specifying an environment model, as long as it can be used to generate signals as depicted above. For our implementation, we use the MiniZinc modeling language [22], which provides declarative constraints for specifying relationships between different variables of a system. Fig. 2: Overview of the proposed resolution architecture. ### _Weakening-based Resolution as MILP_ In our approach, the weakening and resolution steps inside the resolver (Figure 2) are carried out together by reduction to a constrained optimization problem--in particular, MILP. A standard MILP problem involves finding values for a set of decision variables that maximize (or minimize) an objective function while satisfying a set of constraints. We describe how our problem can be formulated into MILP2. Footnote 2: Without loss of generality, we show a formulation for a MILP problem to resolve a conflict between _two_ features. #### V-B1 Minimal requirements As inputs, the resolver is given a pair of feature requirements in STL, \(R_{1}\) and \(R_{2}\). From these, the resolver first generates \(weakSTL\) formulas for \(R_{1}\) and \(R_{2}\). In addition, the user specifies the maximum allowed degree of weakening by providing values for \(p\) and \(q\) for each _weakSTL_ requirement. These bounds, in effect, define the weakest possible requirement that is allowed by the _weakSTL_ requirement, also called the _minimal requirement_. For example, consider \(R_{1}\equiv(alt(t)>5)\). Suppose that the user is willing to accept \(R_{1}\) to be weakened but no more than \((alt(t)>2)\), since staying below that altitude is considered unacceptable. Then, the user would specify the value of \(3\) for bound \(p\) in _weakSTL_\(\varphi\equiv f_{p}(\mathbf{s}(t))>0\), where \(\varphi_{0}\equiv R_{1}\). 
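As a concrete illustration of the environment model described in Section V-A, the following sketch implements a toy transition function \(\delta\) for the drone (position updated by a velocity action, battery drained at a fixed rate) and rolls it out over an action sequence to obtain a predictive signal; the state variables, step size, and drain rate are illustrative assumptions, not the values used in our MiniZinc model.

```python
# Toy environment model T = (Q, A, delta, Q_i): a state holds the drone's
# position and battery level; an action fixes the velocity for one time step.
# The drain rate and step size are illustrative assumptions.
from dataclasses import dataclass

DRAIN_RATE = 0.5          # battery consumed per step while moving (assumed)

@dataclass
class State:
    x: float
    y: float
    z: float
    battery_level: float

def delta(q: State, a) -> State:
    """Transition function: apply velocity action a = (vx, vy, vz) for one step."""
    vx, vy, vz = a
    return State(q.x + vx, q.y + vy, q.z + vz, q.battery_level - DRAIN_RATE)

def predictive_signal(q0: State, actions):
    """Execute the model over a sequence of actions, yielding s_pred = q0, q1, ..."""
    states = [q0]
    for a in actions:
        states.append(delta(states[-1], a))
    return states

q0 = State(0.0, 0.0, 10.0, 35.0)
s_pred = predictive_signal(q0, [(1.0, 0.0, -0.5)] * 4)
print([round(q.battery_level, 1) for q in s_pred])    # [35.0, 34.5, 34.0, 33.5, 33.0]
```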
#### V-B2 Conflict resolution The weakening-based resolution problem is formulated as follows: **Problem 1**: _Given (i) the original STL formulas \(\varphi_{0},\psi_{0}\) instantiated from \(weakSTL\) formulas \(\varphi\) and \(\psi\), respectively, and (ii) a past signal \(\mathbf{s}_{past}\) such that \(\neg\exists\mathbf{s}_{pred}\bullet(\mathbf{s}_{past}\cdot\mathbf{s}_{pred},0)\models\varphi_{0}\wedge\psi_{0}\) (where \(\cdot\) denotes signal concatenation), find a pair of weakened STL formulas, \(\varphi^{\prime}\in M(\varphi),\psi^{\prime}\in M(\psi)\), and a predictive signal, \(\mathbf{s}_{pred}\), such that_ \[\exists\ \mathbf{s}_{pred}\bullet(\mathbf{s}_{past}\cdot\mathbf{s}_{pred},0)\models\varphi^{\prime}\wedge\psi^{\prime} \tag{2}\] Problem 1 defines a conflict as a condition where no possible future states (as represented by predictive signal \(\mathbf{s}_{pred}\)) can satisfy the conjunction of the original requirements. Intuitively, weakening-based resolution attempts to resolve this conflict by finding weakened requirements \(\varphi^{\prime}\) and \(\psi^{\prime}\) that are satisfiable over some possible future execution (i.e., \(\mathbf{s}_{past}\cdot\mathbf{s}_{pred}\)). #### V-B3 Optimization problem Problem 1 can be formulated as a constrained optimization problem, as follows: **Problem 2**: _Given \(\varphi,\psi\in weakSTL\), transition system \(T=(\mathcal{Q},\mathcal{A},\delta,\mathcal{Q}_{i})\), and state sequence \(q^{t}=q_{0},\ldots,q_{t}\) representing the signal observed from the environment so far, compute:_ \[\text{argmin}_{\theta(\varphi),\theta(\psi),\mathbf{a}}\quad\Delta(\varphi_{0},\varphi_{\theta(\varphi)},\mathbf{s},0)\ +\ \Delta(\psi_{0},\psi_{\theta(\psi)},\mathbf{s},0) \tag{3}\] \[s.t.\quad\theta(\varphi)\in\mathbb{Z}^{k}\] (4) \[\theta(\psi)\in\mathbb{Z}^{m}\] (5) \[\mathbf{s}_{i}=q_{i}\text{ for }i\leq t\] (6) \[\mathbf{s}_{i}=\delta(\mathbf{s}_{i-1},\mathbf{a}_{i-1})\text{ for }t<i\leq t+N\] (7) \[(\mathbf{s},0)\models\varphi_{\theta(\varphi)}\wedge\psi_{\theta(\psi)} \tag{8}\] _where \(k\) and \(m\) are the total number of weakening parameters of \(\varphi\) and \(\psi\) as defined by Eq. 1, \(N\in\mathbb{N}\) is a finite horizon provided by the user, \(\mathbf{s}=s_{0}\ldots s_{t+N}\), and \(\mathbf{a}=a_{t}\ldots a_{t+N-1}\). Moreover, transition system \(T\) is provided by the environment model and is used to predict signal \(\mathbf{s}\) from \(t\) to \(t+N\)._ Problem 2 poses the weakening-based resolution problem as an optimization problem. Intuitively, this optimization problem generates (1) weakening parameters \(\theta(\varphi),\theta(\psi)\) for weakSTL formulas \(\varphi,\psi\) and (2) a sequence of \(N\) actions \(a_{t}\ldots a_{t+N-1}\) that result in the satisfaction of the weakened requirements. These parameters are generated such that the degree of weakening for each of the weakSTL formulas is minimized, i.e., the requirements are weakened no more than necessary (Eq. 3). Signal \(\mathbf{s}\) is the concatenation of the past signal (Eq. 6) and the predicted signal generated via the environment model \(T\) and actions \(\mathbf{a}\) (Eq. 7). Finally, the weakened requirements \(\varphi_{\theta(\varphi)}\wedge\psi_{\theta(\psi)}\) must hold over \(\mathbf{s}\) (Eq. 8). To encode Problem 2 as an MILP instance, we extend the MILP-based formulation of the STL control synthesis problem in [23] to capture the degree of weakening \(\Delta(\varphi_{0},\varphi_{\theta(\varphi)},\mathbf{s},0)\) and \(\Delta(\psi_{0},\psi_{\theta(\psi)},\mathbf{s},0)\). 
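As a very reduced illustration of how Problem 2 reaches a solver, the sketch below encodes a single weakenable requirement \(\square_{[1,N]}(alt(t)>5-\theta)\) with an integer weakening variable bounded by a minimal requirement (\(p=3\)), simple one-dimensional altitude dynamics standing in for \(\delta\), and the weakening amount as the objective. It uses the gurobipy interface to Gurobi, the solver used in our implementation, but the encoding itself (a single always operator, no big-M constraints) is a deliberate simplification of the full STL-to-MILP translation of [23]; all numeric values are illustrative.

```python
# Simplified sketch of weakening-based resolution as a MILP: one requirement,
# one integer weakening variable, linear altitude dynamics. Not the full STL
# encoding of [23]; names and numbers are illustrative.
import gurobipy as gp
from gurobipy import GRB

N = 4                        # finite horizon provided by the user
alt0 = 4.0                   # last observed altitude (end of the past signal)

m = gp.Model("weakening_resolution")
theta = m.addVar(vtype=GRB.INTEGER, lb=0, ub=3, name="theta")    # bound p = 3
alt = m.addVars(N + 1, lb=0.0, ub=50.0, name="alt")              # predicted signal
act = m.addVars(N, lb=-1.0, ub=1.0, name="climb_rate")           # actions a_t .. a_{t+N-1}

m.addConstr(alt[0] == alt0)
for i in range(N):
    m.addConstr(alt[i + 1] == alt[i] + act[i])                   # stand-in for delta
for i in range(1, N + 1):
    m.addConstr(alt[i] >= 5.0 - theta + 0.01)                    # weakened predicate

m.setObjective(theta, GRB.MINIMIZE)                              # degree of weakening
m.optimize()
if m.Status == GRB.OPTIMAL:
    print("theta =", int(theta.X),
          "actions =", [round(act[i].X, 2) for i in range(N)])
else:
    print("UNSAT: no admissible weakening within the minimal requirement")
```

In the full encoding, every temporal operator contributes auxiliary variables and constraints as in [23], and the objective is the sum of the two degrees of weakening from Eq. (3).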
In [23], the control synthesis problem finds a signal \(\mathbf{s}\) that satisfies a given STL formula under an environmental model. In our scenario, we include additional variables to capture the weakening parameters \(\theta(\varphi)\) and \(\theta(\psi)\) to the STL MILP encoding in [23]. As in [23], we assume that every predicate function \(f\) that defines \(f(\mathbf{s}(t))>0\) in STL formula \(\varphi\) must be a linear or affine function. This assumption guarantees that our encoding is expressible as a MILP. Finally, the translated MILP problem is dispatched to an off-the-shelf solver (Gurobi [24] in our implementation). The solver can either find a solution or return UNSAT (meaning it cannot find weakened requirements and actions that satisfy the constraints). If the solver is able to find a solution, the resolver returns the generated action sequence \(\mathbf{a}=a_{t}\ldots a_{t+N-1}\) to be executed by the system as the resolved actions. ## VI Implementation ### _Simulator_ To demonstrate our approach, we have implemented a prototype of our resolution architecture3 on top of PX4, an open source flight control software [25]. To run our experiments, we use the jMAVSim drone simulator (part of PX4), which supports the simulation of the physical dynamics of the drone while it reacts to control actions. Footnote 3: All of the code, models, and experimental data is available at [https://github.com/sychoo/CPS-weakening-based-resolution](https://github.com/sychoo/CPS-weakening-based-resolution) For evaluation, we implemented the following features on top of the PX4 flight control software: * _Delivery Planning Feature_: Plans for the shortest path from point A to point B and generates a set of velocity vectors during the flight to follow the designated path. * _Safe Landing Feature_: Directs the drone to land on the nearest land when the battery level drops below a preset safety threshold. * _Boundary Enforcer_: Ensures that the drone remains within the map boundaries. When active, it generates a velocity vector orthogonal to the map boundary. * _Runaway Enforcer_: Ensures that the drone stays away from drones nearby. When active, it generates a velocity vector to evade a nearby drone. ### _Environment Model_ As discussed in Section V, our framework leverages a model of the environment during the resolution process. For the drone system, the environment model captures (1) 3D Cartesian space around the drone, (2) the physical dynamics of the drone, including how its location is changed by an action that sets its velocity, and how its speed is affected by an acceleration/deceleration action, (3) the amount of remaining battery and its depletion rate (based on the velocity of the ego drone), and (4) the estimated speed and location of a nearby drone (which we call the _chaser_ drone); in particular, the model assumes that the chaser is moving towards the ego drone at a fixed speed. We believe that this model is general enough to capture important aspects of a typical drone environment and reusable across multiple features. The environment model is specified in the MiniZinc modeling language and is automatically translated into the underlying MILP constraints during the resolution process. ## VII Evaluation This section presents the evaluation of our proposed approach. We focus on the following 3 research questions: * **RQ1.** Does the weakening-based approach better achieve the desired feature requirements compared to the existing priority-based approach? 
* **RQ2.** Does the weakening-based approach provide a stronger guarantee for satisfying the minimal feature requirements compared to the existing priority-based approach? * **RQ3.** What is the performance overhead of our approach? Does it interfere significantly with the system operation? To study the proposed questions, we conducted two case studies involving autonomous drones: (1) an organ delivery drone and (2) a surveillance drone. In each case study, we evaluated our weakening-based method to resolve conflicts between two different features of the drone. ### _Experimental Design_ We designed a set of experiments4 to compare the proposed weakening-based approach to a priority-based resolution approach, which uses a fixed ordering among the features and only selects the action generated by the highest-ranking feature. The weakening-based resolution approach does not require any ordering between different features. Instead, it allows the user to define an original requirement and a minimal requirement for each feature (specified as bounds on _weakSTL_ formulas, as described in Section V-B). Footnote 4: All our experiments were run on a macOS machine with 32 GB RAM and a 6-core Intel Core i7. When the MILP solver is unable to generate a solution, it outputs UNSAT (for unsatisfiability); this may occur in certain environmental contexts where it is impossible to weaken the requirements (within the maximum allowed degrees as defined by the minimal requirement). In that case, the resolver simply selects one of the conflicting actions to execute. To test the resolution approaches under diverse scenarios, we randomly generated different configurations for the drone simulation, including the initial starting points of the ego and nearby drones, the maximum speeds of the drones, the mission waypoints (e.g., delivery destination), and the size of the map boundary. Then, for each of these scenarios, we simulated the drone multiple times (each under a different resolution approach) and recorded the system states throughout its execution (i.e., the signal for the entire simulation). To measure the performance of the resolution approaches, we use the robustness of satisfaction of the given feature requirements as the metric. Our rationale behind choosing this metric is that robustness captures how well the system is achieving the objectives of the features, and thus can serve as a reasonable proxy for the desirability of system behavior that results from a particular resolution approach. In particular, to analyze the impact of resolution, we measure and record the robustness values for the _original_ (not the weakened) requirements during the period of a feature interaction; that is, we start collecting the values starting at the time point where both features are activated and stop at the point where both features are deactivated. Finally, before carrying out our experiments, we developed the following hypotheses to be tested: * **H1 (for RQ1).** The weakening-based approach results in a higher overall satisfaction of feature requirements than the priority-based approach. * **H2 (for RQ2).** The weakening-based approach provides a stronger guarantee of minimal requirements than the priority-based approach. * **H3 (for RQ3).** The weakening-based approach incurs non-trivial overhead, but it is not significant enough to disrupt the operation of the existing drone controller. 
### _Organ Delivery Drone Case Study_ As described in Section II, there are two features of interest in the organ delivery drone: the _delivery planner_ and the _the safe landing feature_. A conflict between these two features arises when both features are activated simultaneously (i.e., the battery drops below a safe threshold while the planner is computing the next velocity vector to the destination). The original STL requirements specified for the two features are as follows: \[R_{deliver}:\square_{[0,1]}(curr\_speed>\\ (distance\_to\_dest/remaining\_delivery\_time))\\ R_{land}:\square_{[0,1]}(battery<40\%\rightarrow\Diamond_{[0,1]}(is\_ landing=1))\] In addition, we also specified the following minimal requirements for the two features: \[R_{deliver}:\square_{[0,1]}(curr\_speed>\\ (distance\_to\_dest/remaining\_delivery\_time))\\ R_{land}:\square_{[0,1]}(battery<20\%\rightarrow\Diamond_{[0,1]}(is\_ landing=1))\] Note that the minimal requirement for the delivery planner is the same as the original one (i.e., it cannot be weakened), since timely delivery of the organ is considered critical. For this case study, we generated 25 randomized scenarios by varying the configuration parameters as described in Section VII-A. Then, we ran each scenario four times: (1) weakening-based approach with the delivery planner as the fallback action, (2) weakening with the landing feature as the fallback, (3) priority-based approach with the planner as the preferred feature, and (4) priority with the landing feature preferred. This resulted in a total of 100 scenario runs. ### _Surveillance Drone Case Study_ Consider a drone that performs a surveillance mission, visiting a set of waypoints within a designated boundary of the map (inspired by an example from [26]). The environment contains another simulated drone (called the _chaser_) that constantly flies towards the ego drone. The two features of interest here are the _boundary enforcer_ and the _runaway enforcer_, each of which is tasked with keeping the drone safe from a collision with the boundary or the chaser (respectively). A conflict can occur in situations when the ego drone travels to a position that is close to the boundary and the chaser, as shown in Figure 3. The following original STL requirements were specified for the two features: \[R_{runaway}:\square_{[0,1]}(distance\_to\_chaser>10)\] \[R_{boundary}:\square_{[0,1]}(distance\_to\_boundary<=20\to\] \[\Diamond_{[0,1]}distance\_to\_boundary>20)\] In addition, we also specified the following minimal requirements for the two features: \[R_{runaway}:\square_{[0,1]}(distance\_to\_chaser>2)\] \[R_{boundary}:\square_{[0,1]}(distance\_to\_boundary<=20\to\] \[\Diamond_{[0,1]}distance\_to\_boundary>2)\] As with the organ delivery case study, we generated 25 random scenarios and ran each scenario four times (twice with the weakening-based approach and the other two times with the priority-based approach). ### _Experimental Result_ Figure 4 shows, for each case study, the _overall_ robustness values for the weakening-based and priority-based approaches. The overall robustness value is computed as the average of the normalized robustness values for the feature requirements and is intended to show how the system fulfills the objectives of all the features. 
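Before turning to the results, note that requirements such as \(R_{land}\) above are evaluated through their robustness. The sketch below shows one way such an implication-shaped requirement can be monitored on sampled data, using \(\rho(\varphi_{1}\rightarrow\varphi_{2})=\max(-\rho(\varphi_{1}),\rho(\varphi_{2}))\) (since \(\varphi_{1}\rightarrow\varphi_{2}\equiv\neg\varphi_{1}\vee\varphi_{2}\)) and encoding the equality \(is\_landing=1\) as a \(\pm 1\) margin; both encodings and the sample values are illustrative choices.

```python
# Sketch: monitoring R_land = always_[0,1]( battery < 40 -> eventually_[0,1]( landing ) )
# with quantitative semantics on sampled data. The "< 40" margin is 40 - battery,
# the implication is max(-rho_antecedent, rho_consequent), and is_landing = 1 is
# given a +/-1 margin. Sample values are illustrative.
def rho_land(s, t):
    def antecedent(i):                     # battery < 40%  <=>  40 - battery > 0
        return 40.0 - s[i]["battery"]
    def consequent(i):                     # eventually_[0,1] (is_landing = 1)
        return max(1.0 if s[j]["is_landing"] else -1.0 for j in (i, i + 1))
    def implication(i):
        return max(-antecedent(i), consequent(i))
    return min(implication(i) for i in (t, t + 1))    # always_[0,1]

s = [{"battery": 45.0, "is_landing": 0},
     {"battery": 38.0, "is_landing": 0},
     {"battery": 36.0, "is_landing": 0}]
print(rho_land(s, 0))   # -1.0: battery fell below 40% but landing never started
```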
As seen in Figure 4, the weakening-based approach achieves higher overall robustness than the priority-based approach, as it attempts to satisfice the requirements of both features (unlike the priority-based method, which gives up on the feature that is not selected). One may note that in the organ delivery case study, the overall robustness values for both approaches are negative; this is because during the conflicts in these scenarios, the actions available to the drone are drastically different (land vs. keep flying) and thus selecting one feature will result in a large violation of the other's requirement. Even in such negative scenarios, the weakening-based approach is still able to achieve a lower overall violation of the requirements than the priority-based method, as shown in Figure 4.
Fig. 3: A conflict scenario between the boundary and runaway enforcers. Action \(a_{resolve}\) represents a possible action generated by the weakening-based approach.
Fig. 4: Overall robustness values for the two case studies.
We provide a more detailed analysis of the results, broken down by the individual feature requirements, shown in Fig. 5. #### Vi-C1 Organ delivery drone In Fig. 5, charts (A) and (B) show the average robustness values for the landing and delivery requirements, respectively. The results show that the priority-based approach achieves the highest robustness for the requirement of the feature that it selects (e.g., 7.07 for the delivery planning feature in (B)). At the same time, the requirement of the feature that is discarded by the priority-based method shows a large violation. Both of these outcomes are as expected, since the system achieves the requirement of the feature that it specifically prioritizes. In comparison, the results suggest that the weakening-based approach achieves a compromise between prioritizing one of the features and discarding the other. For example, in chart (B), although the weakening-based approach achieves a lower robustness value than the priority method that selects the delivery feature, it avoids the large violation that would result if this feature was entirely discarded. In (C), it can be seen that the weakening-based approach ensures the satisfaction of the minimal requirement (even in the worst case) while the priority-based method fails to do so. Lastly, in (D), none of the resolution methods can guarantee the delivery planning feature, as it is deemed as a hard constraint that cannot be weakened (i.e., the minimal requirement is the same as the original requirement). We did, however, observe that the weakening-based resolution still obtains a higher robustness value in its worst-case scenario than the priority-based method does. #### Vi-C2 Surveillance drone Similar patterns to those in the prior case study can be observed here. In Fig. 5, charts (E) and (F), it can be seen that on average, the weakening-based approach achieves a compromise between prioritizing one feature and discarding the other, as is done by the priority-based method. In (H), for the boundary enforcer, the weakening-based approach is not able to guarantee the minimal requirement, violating it in its worst-case outcome (-22.26). This is due to the fact that occasionally, the feature interaction scenario forces the drone into a non-recoverable position (i.e., cannot avoid crashing into the boundary), causing the resulting MILP problem to be unsatisfiable. 
Even then, it can be seen that the weakening-based approach avoids the large violation that is caused by the priority-based method (-46.26). #### V-D3 Summary Based on Fig. 4, we conclude that the weakening-based approach attains a higher overall robustness value than the priority-based approach, supporting hypothesis **H1**. This suggests that the weakening-based approach is effective at satisficing the requirements of both conflicting features. In Fig. 5, (C-D), (G-H), it can be seen that the weakening-based approach, in its worst-case outcome, may fail to guarantee the minimal requirement under environmental scenarios that do not permit any weakening solution. This suggests that if satisfying a particular requirement is critical, the priority-based method that always selects that feature may be more desirable. Thus, hypothesis **H2** is not supported. ### _Performance Overhead_ Since the weakening-based approach uses a MILP solver, it incurs significantly more overhead than the priority-based method, which simply involves selecting the preferred action. Based on our timing measurement, the MILP-based resolution process took approximately 0.28 seconds on average across the two case studies. During our simulation runs, we did not observe any noticeable delays or disruptions to the drone operation. This is because the control loop inside the PX4 drone software was running at 2Hz, resulting in a window of 0.5 seconds for each cycle of the controller update (i.e., the resolution process completed before the next control action was to be generated). On the other hand, the amount of overhead depends on the complexity of the feature requirements and the environment model, and thus it is possible that one may run into performance issues for larger, more complex systems than our drone software. As part of future work, we plan to explore other methods that leverage different types of search heuristics (e.g., machine learning-based or generic algorithms) and could potentially provide more efficient resolution. ### _Threats to Validity_ There are three sources of potential errors in our experiments: (1) the selected case study may not be representative of general CPS applications, (2) the scenarios generated may exclude exceptional scenarios, and (3) drone simulation is hardware-dependent. For (1), we believe that the two case studies we conducted embody common characteristics of CPS, as PX4 is a well-established, popular drone software. However, a more extensive validation that involves other types of CPS (e.g., automotive systems) may provide further support for the effectiveness of the proposed resolution approach. To address (2), we generated a comprehensive sampling of scenarios and reduced selection biases by randomizing the configuration parameters. However, our sampling is non-exhaustive and is likely to exclude some exceptional scenarios. For (3), more restrictive hardware may cause an increase in the performance overhead due to the computing resources that are required for the MILP solver and the simulator. Fig. 5: Robustness breakdown by each feature requirement. Chart groups (A)-(D) and (E)-(H) correspond to the organ delivery and surveillance drone case studies, respectively. Each of (A), (B), (E), and (F) compares the average robustness values for three different approaches: (1) weakening-based, (2) priority-based with feature 1, and (3) priority with feature 2. 
Charts (C), (D), (G), and (H) show the lowest robustness values; the red line represents the threshold at which the minimal requirement is violated. ## VIII Related Work There is a large body of work on feature interactions within software engineering [1, 2, 3, 7, 12, 27, 28, 29, 30, 31, 32, 33, 34]. Here, we mainly provide a discussion of related work on resolution (rather than detection) of feature interactions. Gafford et al. [26] proposes a _synthesis-based_ approach to the resolution of feature interactions, where given a pair of conflicting actions, a space of possible alternative actions is enumerated to find an action that best satisfies the objectives of the two features. Their approach is similar to ours in that it also (1) relies on the notion of robustness in STL to define the desirability of an action and (2) attempts to find an action as a middle-ground between the two conflicting actions. However, their approach is limited to cases where the features generate the same type of action (e.g., a pair of velocity vectors), whereas our approach can be applied to features with different types of actions (e.g., landing vs. flying towards the destination). Maia et al. investigates the problem of _defiant components_, where one or more local components, in trying to achieve their individual objectives, conflict with a global system requirement [19]. They propose an approach called _cautious adaption_ to dynamically modify the behavior of the local component and fulfill the global requirement when a conflict arises; adaptation here is carried out by injecting a piece of logic into the local component that overrides its default behavior. Although their approach shares some similarities with ours in that they both involve temporarily changing the objective of a system component, there are some noticeable differences: (1) our work deals with conflicts between competing features (or components) instead of local vs. global requirements and (2) their approach requires wrappers that are crafted at design time to handle known "exceptional situations" (i.e., conflicts), whereas our approach can handle _unexpected_ conflict scenarios, as long as they are captured by the environment model. Requirement relaxation (or weakening) has been investigated in self-adaptive systems. RELAX [35] is a temporal logic to support the specification of requirements that explicitly capture uncertainty about possible system behavior. RELAX can support self-adaptation mechanisms where the system dynamically adjusts its behavior to accommodate for uncertainty or changes in the environment. DeVries et al. use RELAX to investigate the concept of _partial_ feature interactions, where a pair of features only partially satisfy (i.e., satisfice) their individual requirements [36], although they do not discuss a mechanism for resolving such interactions. One interesting future direction is to leverage RELAX as another type of requirements specification language (instead of STL) to support the weakening-based resolution. In [37], the authors propose an iterative, _multi-grained_ approach to requirements relaxation, where requirements of a higher granularity are relaxed first (for computational efficiency) before lower-level requirements. Although our approach currently assumes feature requirements to be at the same level of granularity, their approach may be useful for resolving more complex interactions that involve requirements across different levels of system abstraction. 
Our approach for leveraging an environment model to generate an action that satisfies a desired objective can be regarded as a type of model-predictive control (MPC) [38]. In particular, we adopt the STL-based MPC method developed by Raman et al. [39], where they also leverage a MILP solver to synthesize an action that satisfies an STL property. ## IX Limitations and Future Work We propose a reconciliation-based approach to the resolution of feature interactions, where one or more of the given requirements are weakened to enable the conflicting features to behave consistently. Through case studies on autonomous drones, we have demonstrated that the proposed approach can achieve an overall higher satisfaction of the conflicting features, compared to the conventional priority-based method. Our work makes several assumptions about the characteristics of the underlying system. First, it is most effective for systems where requirements can be assigned a meaningful, quantitative notion of satisfaction (i.e., STL-based requirements), are of equal importance, and where the user is willing to accept temporary degradation in the system performance of the original requirements (i.e., soft requirements). Thus, this approach may not be suited for features that perform a very critical function (e.g., emergency braking feature in a vehicle) where even small degradation is unacceptable; in such cases, a priority-based method may be more suitable. Our approach also relies on an environment model that generates predictive signals for different feature actions. Instead of a deterministic, discrete transition system, a stochastic model (e.g., Markov decision processes) may provide a more realistic model of the environment. Such a model would also require a very different approach to weakening than our MILP-based method and is beyond the scope of this paper. So far, we have evaluated our approach on pairwise feature interactions. Although, in principle, the weakening-based approach should be applicable to any number of features, a more extensive validation involving N-way interactions [40] would further support the generality of the proposed approach. Finally, we believe that the idea of requirements weakening have other applications beside feature interaction resolution (e.g., using weakening to gracefully degrade the quality of a service in response to environmental deviations). We plan to explore such applications as part of future work. ## Acknowledgement This material is based upon work funded and supported by the Department of Defense under Contract No. FA8702-15-D-0002 with Carnegie Mellon University for the operation of the Software Engineering Institute, a federally funded research and development center DM23-0069. This work was also supported in part by award N00014172899 from the Office of Naval Research, by the NSA under Award No. H9823018D0008, and by the National Science Foundation award CCF-2144860.
2302.08215
Aligning Language Models with Preferences through f-divergence Minimization
Aligning language models with preferences can be posed as approximating a target distribution representing some desired behavior. Existing approaches differ both in the functional form of the target distribution and the algorithm used to approximate it. For instance, Reinforcement Learning from Human Feedback (RLHF) corresponds to minimizing a reverse KL from an implicit target distribution arising from a KL penalty in the objective. On the other hand, Generative Distributional Control (GDC) has an explicit target distribution and minimizes a forward KL from it using the Distributional Policy Gradient (DPG) algorithm. In this paper, we propose a new approach, f-DPG, which allows the use of any f-divergence to approximate any target distribution that can be evaluated. f-DPG unifies both frameworks (RLHF, GDC) and the approximation methods (DPG, RL with KL penalties). We show the practical benefits of various choices of divergence objectives and demonstrate that there is no universally optimal objective but that different divergences present different alignment and diversity trade-offs. We show that Jensen-Shannon divergence strikes a good balance between these objectives, and frequently outperforms forward KL divergence by a wide margin, leading to significant improvements over prior work. These distinguishing characteristics between divergences persist as the model size increases, highlighting the importance of selecting appropriate divergence objectives.
Dongyoung Go, Tomasz Korbak, Germán Kruszewski, Jos Rozen, Nahyeon Ryu, Marc Dymetman
2023-02-16T10:59:39Z
http://arxiv.org/abs/2302.08215v2
# Aligning Language Models with Preferences ###### Abstract Aligning language models with preferences can be posed as approximating a target distribution representing some desired behavior. Existing approaches differ both in the functional form of the target distribution and the algorithm used to approximate it. For instance, Reinforcement Learning from Human Feedback (RLHF) corresponds to minimizing a reverse KL from an _implicit_ target distribution arising from a KL penalty in the objective. On the other hand, Generative Distributional Control (GDC) has an _explicit_ target distribution and minimizes a forward KL from it using the Distributional Policy Gradient (DPG) algorithm. In this paper, we propose a new approach, \(f\)-DPG, which allows the use of _any_\(f\)-divergence to approximate _any_ target distribution. \(f\)-DPG unifies both frameworks (RLHF, GDC) and the approximation methods (DPG, RL with KL penalties). We show the practical benefits of various choices of divergence objectives and demonstrate that there is no universally optimal objective but that different divergences are good for approximating different targets. For instance, we discover that for GDC, the Jensen-Shannon divergence frequently outperforms forward KL divergence by a wide margin, leading to significant improvements over prior work. Machine Learning, ICML ## 1 Introduction Language models (LMs) have recently revolutionized the field of Natural Language Processing thanks to their generative capabilities, which are useful in a vast number of tasks (Brown et al., 2020; Srivastava et al., 2022). However, generated texts can also violate widely-held human preferences, e.g. helpfulness (Askell et al., 2021), non-offensiveness (Gehman et al., 2020), truthfulness (Lin et al., 2022) or equal treatment (Cao et al., 2022). Aligning LMs with human preferences is the problem of adapting the LM in such a way that generated content is perceived to match the human's intent (Ouyang et al., 2022) or that it is helpful, honest, and harmless (Askell et al., 2021; Bai et al., 2022). Fundamentally, an aligned LM can be seen as a desired target distribution that we would like to generate from (Korbak et al., 2022). Some approaches leave this distribution implicit, to be defined as a side-effect of the proposed intervention. These include prompting with natural language instructions or demonstrations (Askell et al., 2021), using scorers or safety filters while decoding (Roller et al., 2021; Xu et al., 2021), supervised fine-tuning on curated data (Solaiman and Dennison, 2021; Ngo et al., 2021; Welbl et al., 2021; Chung et al., 2022) or selected samples from the model (Zelikman et al., 2022; Scheurer et al., 2022; Dohan et al., 2022), and fine-tuning the language model using reinforcement learning with a learned reward function that approximates human feedback (Reinforcement Learning from Human Feedback or RLHF; Ziegler et al., 2019; Bai et al., 2022; Ouyang et al., 2022). Instead, Khalifa et al. (2021) propose a framework that they name Generation with Distributional Control (GDC), where they explicitly define the target distribution \(p\) that represents the aligned LM in closed form, and then train generative model \(\pi_{\theta}\) to approximate \(p\) via methods such as Distributional Policy Gradients (DPG; Parshakova et al., 2019), which minimizes the forward Kullback-Leibler (KL) divergence \(\mathrm{KL}(p||\pi_{\theta})\) of \(p\) to \(\pi_{\theta}\). 
The advantage of such an approach is that it decouples the problem of describing the aligned LM from the problem of approximating it. Furthermore, even if RL with KL penalties (Todorov, 2006a; Kappen et al., 2012; Jaques et al., 2017, 2019), the method used to fine-tune an LM in RLHF, is defined only in terms of reward maximization, it has also been shown to be equivalent to minimizing the _reverse_ KL divergence \(\mathrm{KL}(\pi_{\theta}||p)\) of \(\pi_{\theta}\) to a target distribution \(p\) that can also be written explicitly in closed-form (Korbak et al., 2022b). The possibility of approximating various distributions according to different divergence measures begs the question: Does the choice of a divergence measure matter? In principle, all divergences lead to the same optimum, namely the target distribution \(p\). However, when we restrict \(\pi_{\theta}\) to a certain parametric family that does not include \(p\) (i.e., the search space is _mis-specified_), then the minimum can be found at different points, leading to optimal models with different properties. Moreover, different divergences present different loss landscapes: some might make it easier for stochastic gradient descent to find good minima. Finally, the space of possible divergence measures and forms of target distributions is a vast and largely uncharted terrain. Prior work has largely failed to decouple the form of a target distribution and the algorithm used for approximating it. Here, we introduce \(f\)-DPG, a new framework for fine-tuning an LM to approximate any given target distribution by following any divergence in the \(f\)-divergences family, which includes both the forward KL and the reverse KL cited above, but also Total Variation (TV) distance, Jensen-Shannon (JS) divergence, among others. \(f\)-DPG generalizes existing approximation techniques from both DPG and RL with KL penalties algorithms, thus allowing us to investigate new ways to approximate the target distributions defined by the GDC and RLHF frameworks. In particular, we explore the approximation of various target distributions representing different alignment goals, which include imposing lexical constraints, reducing social bias with respect to gender and religion, enforcing factual consistency in summarization, and enforcing compilability of generated code. We focus our experiments on four instantiations of \(f\)-DPG, namely KL-DPG, RKL-DPG, TV-DPG and JS-DPG, whose objective is to minimize the forward KL, reverse KL, TV and JS divergences, respectively, and evaluate each experiment in terms of approximation quality as measured by all of these same \(f\)-divergences. We show that we can obtain significantly improved results over the original KL-DPG algorithm (Parshakova et al., 2019) by minimizing other \(f\)-divergences, even when the approximation quality is evaluated under the lens of the forward KL. Furthermore, we observe that while there is no single best optimization objective for all cases, JS-DPG often strikes a good balance and significantly improves upon prior work (Khalifa et al., 2021; Korbak et al., 2022a), as illustrated in Fig. 1.
Figure 1: On many target distributions, the Jensen-Shannon (JS) divergence (green) outperforms the Kullback-Leibler (KL) divergence (blue) as an _objective_, even when performance is measured in terms of KL from the target \(p\) (left panel, \(\downarrow\) better). See Sec. 4.2.
Overall, the contributions of the paper include: 1. 
Introducing \(f\)-DPG, a unifying framework for approximating any target distribution by minimizing any \(f\)-divergence (Sec. 3.2), and deriving a universal formula for gradient descent with \(f\)-divergences (Theorem 1). 2. Extending \(f\)-DPG to include baselines for variance reduction (Fact 1); and handling conditional target distributions (Fact 2). 3. Investigating the performance of \(f\)-DPG on a diverse array of thirteen LM alignment tasks, three forms of target distributions, four \(f\)-divergence objectives and eight metrics. ## 2 Background We can organize approaches to LM alignment along two axes: how the target distribution is constructed and how it is approximated. The first problem roughly corresponds to representing human preferences through the specification of a probability distribution and the second to allowing the production of samples from that distribution. ### Defining a target distribution The target distribution expresses an ideal notion of an LM, incorporating human preferences, as probabilities \(p(x)\) over texts \(x\) according to how well they satisfy the preferences. Formally, \(p(x)\) is often defined through a non-negative function \(P(x)\) (aka an _energy-based model_ or EBM) such that \(p(x)\propto P(x)\). \(P(x)\) (and \(p(x)\) after normalization) can be used to score samples, but not to directly obtain them because it lacks an autoregressive form. In the rest of the paper, we will focus on target distributions modeling three types of preferences prominently employed in recent literature about GDC (Khalifa et al., 2021) and RLHF (Ziegler et al., 2019; Stiennon et al., 2020; Ouyang et al., 2022; Menick et al., 2022; Bai et al., 2022a). Binary preferencesFor human preferences naturally expressible as a binary constraint \(b(x)\in\{0,1\}\) (e.g. a sample \(x\) must never contain a curse word), Khalifa et al. (2021) proposed the following target distribution: \[p_{\mathrm{GDC,bin}}(x)\propto a(x)b(x), \tag{1}\] where \(a\) is a pretrained LM and \(b(x)=0\) if \(x\) contains a curse and \(b(x)=1\) otherwise. \(p_{\mathrm{GDC,bin}}\) is the distribution enforcing that all samples match the binary constraint, which deviates minimally from \(a\) as measured by \(\mathrm{KL}(p_{\mathrm{GDC,bin}}||a)\). Scalar preferencesSome human preferences, such as helpfulness, are more naturally expressed as scalar scores. Alignment with respect to these is typically addressed with RLHF (Stiennon et al., 2020; Ziegler et al., 2019; Ouyang et al., 2022), which consists of, first, capturing human preferences as a reward function \(r(x)\) (e.g. scores given a reward model trained to predict human preferences) and second, applying RL with KL penalties (Todorov, 2006a; Kappen et al., 2012; Jaques et al., 2017, 2019) to maximize this reward while penalizing departure from \(a(x)\): \[J_{\mathrm{RLKL}}(\theta)=\mathbb{E}_{x\sim\pi_{\theta}}\left[r(x)-\beta\log \frac{\pi_{\theta}(x)}{a(x)}\right]. \tag{2}\] This objective can be equivalently framed as minimizing the reverse KL, \(\mathrm{KL}(\pi_{\theta}||p_{\mathrm{RLKL}})\), where the target distribution \(p_{\mathrm{RLKL}}\) is defined as: \[p_{\mathrm{RLKL}}(x)\propto a(x)\exp(r(x)/\beta), \tag{3}\] where \(\beta\) is a hyperparameter (Korbak et al., 2022). Distributional preferencesFinally, there is a class of distributional preferences (Weidinger et al., 2021) that cannot be expressed as a function of a single sample \(x\) but depend on the entire distribution, e.g. 
a particular gender distribution of persons mentioned in LM samples. Khalifa et al. (2021) model such preferences through distributional constraints using the following exponential family target distribution \[p_{\mathrm{GDC\_dist}}(x)\propto a(x)\exp\Big{[}\sum_{i}\lambda_{i}\phi_{i}(x )\Big{]}, \tag{4}\] where \(\phi_{i}\) are features defined over texts (e.g. the most frequent gender of people mentioned in \(x\)) and \(\lambda_{i}\) are coefficients chosen so that the expected values \(E_{x\sim p}\left[\phi_{i}(x)\right]\) match some desired values \(\bar{\mu}_{i}\) (e.g., 50% gender balance). The resulting distribution \(p_{\mathrm{GDC\_d}}\) matches the target feature moments, while deviating minimally from \(a\) as measured by \(\mathrm{KL}(p_{\mathrm{GDC\_dist}}||a)\). ### Approximating the target distribution Drawing samples from a target distribution \(p\) constitutes the inference problem. There are broadly two approaches to this problem: (i) augmenting decoding from \(a\) at inference time to obtain samples from \(p\) and (ii) training a new parametric model \(\pi_{\theta}\) to approximate \(p\) which can then be sampled from directly. The first family of approaches includes guided decoding methods (Dathathri et al., 2020; Qin et al., 2022), Monte Carlo sampling techniques such as rejection sampling to sample from simple distributions like \(p_{\mathrm{GDC\_bin}}\)(Roller et al., 2021; Ziegler et al., 2022), and Quasi Rejection Sampling (QRS) (Eikema et al., 2022) or MCMC techniques (Miao et al., 2019; Goyal et al., 2022) to sample from more complex distributions, such as \(p_{\mathrm{GDC\_dist}}\). In the rest of the paper, we will focus on the second family: methods that train a new model \(\pi_{\theta}\) to approximate \(p\) by minimizing a divergence measure from \(p\), \(D(\pi_{\theta}||p)\). Khalifa et al. (2021) uses Distributional Policy Gradients (DPG; Parshakova et al., 2019) to approximate the target distribution by minimizing \(\mathrm{KL}(p||\pi_{\theta})\), or equivalently, \(\mathrm{CE}(p,\pi_{\theta})\): \[\nabla_{\theta}\mathrm{CE}(p,\pi_{\theta})=-\mathbb{E}_{x\sim\pi_{\theta}} \frac{p(x)}{\pi_{\theta}(x)}\nabla_{\theta}\log\pi_{\theta}(x). \tag{5}\] ## 3 Formal aspects In this section, we describe the \(f\)-divergence family, and introduce a generic technique, \(f\)-DPG, for minimizing the \(f\)-divergence between a target distribution \(p\) and a model \(\pi_{\theta}\). We then describe the application of \(f\)-DPG to aligning language models with human preferences. ### \(f\)-divergences Consider a convex function \(f:(0,\infty)\rightarrow\mathbb{R}\) with \(f(1)=0\). Let \(f(0)\doteq\lim_{t\to 0}f(t)\) and \(f^{{}^{\prime}}(\infty)\doteq\lim_{t\to 0}tf(\frac{1}{t})\).1 Let \(p_{1},p_{2}\) be two distributions over a discrete set \(\mathcal{X}\). The \(f\)-divergence between \(p_{1}\) and \(p_{2}\) can be defined as Footnote 1: The limits are well-defined and take values in \((-\infty,\infty]\). The convention for \(f^{{}^{\prime}}(\infty)\) is motivated by the fact that \(\lim_{t\rightarrow\infty}f^{{}^{\prime}}(t)=\lim_{t\to 0}tf(\frac{1}{t})\) (Hiriart-Urruty and Lemaréchal, 2013). \[D_{f}(p_{1}||p_{2})\doteq\mathbb{E}_{x\sim p_{2}}\left[f\left( \frac{p_{1}(x)}{p_{2}(x)}\right)\right]+f^{{}^{\prime}}(\infty)\;p_{1}(p_{2}=0) \tag{6}\] where \(p_{1}(p_{2}=0)\) is the \(p_{1}\)-mass of the set \(\{x\in\mathcal{X}:p_{2}(x)=0\}\)(Polyanskiy, 2019; Liese and Vajda, 2006). The function \(f\) is called a generator of \(D_{f}\). 
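As a small illustration, the numpy sketch below evaluates Eq. (6) for two fully supported distributions using one standard parameterization of the generators for the four divergences considered in this paper (constants may differ from the exact entries of Tab. 1); the sanity check at the end confirms that \(f(t)=t\log t\) recovers \(\mathrm{KL}(\pi_{\theta}||p)\).

```python
# Sketch: D_f(p1 || p2) = E_{x ~ p2}[ f(p1(x)/p2(x)) ] (Eq. 6, full support
# assumed) for one standard choice of generators. Constants may differ from Tab. 1.
import numpy as np

GENERATORS = {
    "forward_KL":      lambda t: -np.log(t),            # D_f(pi||p) = KL(p||pi)
    "reverse_KL":      lambda t: t * np.log(t),         # D_f(pi||p) = KL(pi||p)
    "total_variation": lambda t: 0.5 * np.abs(t - 1.0),
    "jensen_shannon":  lambda t: 0.5 * (t * np.log(t) - (1 + t) * np.log((1 + t) / 2)),
}

def f_divergence(f, p1, p2):
    return float(np.sum(p2 * f(p1 / p2)))

pi = np.array([0.5, 0.3, 0.2])      # model pi_theta
p  = np.array([0.6, 0.3, 0.1])      # target p
for name, f in GENERATORS.items():
    print(f"{name:16s} {f_divergence(f, pi, p):.4f}")

# with f(t) = t log t, D_f(pi||p) equals the reverse KL(pi||p)
assert np.isclose(f_divergence(GENERATORS["reverse_KL"], pi, p),
                  np.sum(pi * np.log(pi / p)))
```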
By convention, if \(p_{1}(p_{2}=0)=0\), the last term of Eq. (6) is set to \(0\) regardless of the value of \(f^{{}^{\prime}}(\infty)\) (which can be infinite).2 It can be shown that \(D_{f}(p_{1}||p_{2})\geq 0\) for any \(p_{1}\) and \(p_{2}\), with equality if \(p_{1}=p_{2}\); conversely, if \(D_{f}(p_{1}||p_{2})=0\) and \(f\) is strictly convex at \(1\), then \(p_{1}=p_{2}\). Footnote 2: Based on the commonly made assumption that the support of \(p_{1}\) is dominated by the support of \(p_{2}\) (\(Supp(p_{1})\subset Supp(p_{2})\)), Eq. (6) simplifies to \(D_{f}(p_{1}||p_{2})=\mathbb{E}_{x\sim p_{2}}\left[f\left(\frac{p_{1}(x)}{p_{2} (x)}\right)\right]\). The \(f\)-divergence family includes many important divergence measures, in particular KL divergence \(\mathrm{KL}(p_{1}||p_{2})\), reverse KL divergence \(\mathrm{KL}(p_{2}||p_{1})\), Jensen-Shannon divergence, and Total Variation distance. We list these \(f\)-divergences and their generators in Tab. 1. For more details about notations and properties of \(f\)-divergences, see App. A.1 and also Liese and Vajda (2006); Polyanskiy (2019); Sason and Verdu (2016); Sason (2018). ### Distributional alignment with \(f\)-divergences Let \(\mathcal{X}\) be a discrete countable or finite set, in our case a set of texts. Given a target probability distribution \(p(x)\) over elements \(x\in\mathcal{X}\), our goal is to approximate \(p\) with a generative model (aka policy) \(\pi_{\theta}\). On the other hand, the generative model \(\pi_{\theta}\) is a parametric model, typically an autoregressive neural network, from which we can (i) directly sample and (ii) evaluate probabilities \(\pi_{\theta}(x)\). We approach this problem by attempting to minimize the \(f\)-divergence of \(\pi_{\theta}\) to \(p\):3 Footnote 3: We could have chosen to do \(\min_{\theta\in\Theta}D_{f}(p||\pi_{\theta})\). However the _perspective transform_\(f^{*}(t)\doteq t\ f(\frac{1}{t})\) allows interchangeability of arguments: \(D_{f}(\pi_{\theta}||p)=D_{f^{*}}(p||\pi_{\theta})\), making either form possible. The form in Eq. (7) permits a simpler statement of our main theorem. See App. A.1, A.3 for details. \[\min_{\theta\in\Theta}D_{f}(\pi_{\theta}||p), \tag{7}\] where \(\theta\) varies inside the parametric family \(\Theta\). Note that when the family \(\pi_{\theta},\theta\in\Theta\) is "well-specified", i.e., when \(\exists\theta_{0}\) s.t. \(p=\pi_{\theta_{0}}\), the true minimum of Eq (7) is \(0\), attained at \(\theta_{0}\), whatever divergence \(D_{f}\) is chosen. In contrast, when the family is "mis-specified" i.e. does not include \(p\), the distribution \(\pi_{\theta}\) with minimal divergence can be strongly dependent on the chosen divergence \(D_{f}\). Eq. (7) might be solved approximately using stochastic optimization with samples drawn from the distribution \(p\), as the definition of \(D_{f}(\pi_{\theta}||p)\) involves taking the expectation with respect to \(p\). However, it is often not possible to sample directly from \(p\), while it is possible to sample from \(\pi_{\theta}\). Our optimization technique is then based on the following core result, which we prove in App. A.3. **Theorem 1**.: _Let \(p\) and \(\pi_{\theta}\) be distributions over a discrete set \(\mathcal{X}\) such that at least one of the following conditions holds: (i) \(\forall\theta\in\Theta,\text{Supp}(p)\subset\text{Supp}(\pi_{\theta})\), or (ii) \(\text{Supp}(\pi_{\theta})\) does not depend on \(\theta\). 
Then:_ \[\nabla_{\theta}D_{f}(\pi_{\theta}||p)=E_{x\sim\pi_{\theta}}\left[f^{{}^{ \prime}}\left(\frac{\pi_{\theta}(x)}{p(x)}\right)\nabla_{\theta}\log\pi_{ \theta}(x)\right]. \tag{8}\] Note that it may happen in Eq 8 that \(p(x)=0\) and \(\pi_{\theta}(x)>0\), hence \(\frac{\pi_{\theta}(x)}{p(x)}=\infty\), in which case the expression \(f^{{}^{\prime}}\left(\frac{\pi_{\theta}(x)}{p(x)}\right)\) should be understood as denoting the value \(f^{\prime}(\infty)\) as defined earlier.4 Footnote 4: The derivative \(f^{\prime}(t)\) of any convex function \(f(t)\) is defined almost everywhere, with the possible exception of a countable number of non-differentiable points, at which a subgradient can be used instead (Hiriart-Urruty and Lemarchal, 2013; Rockafellar, 1970). See also App. A.4. In the context of LMs, our domain of application, we will use Thm. 1 in situations where \(\pi_{\theta}\), being a standard softmax-based autoregressive model, has full support over \(\mathcal{X}\) (i.e. \(\text{Supp}(\pi_{\theta})=\mathcal{X}\)) for all \(\theta\)'s, while the support of \(p\) might be strictly included in \(\mathcal{X}\) in some experiments (Sec. 4.2, 4.4). It is instructive to consider Thm. 1 in relation to rewards in RL. In the standard policy gradient algorithm (Williams, 1992), to find the model that maximizes the average reward \(E_{x\sim\pi_{\theta}}\left[r(x)\right]\), one computes the gradient of the loss using the formula \(\nabla_{\theta}E_{x\sim\pi_{\theta}}\left[r(x)\right]=E_{x\sim\pi_{\theta}} \left[r(x)\nabla_{\theta}\log\pi_{\theta}(x)\right]\). The gradient in Eq. 8 is very similar, with a "pseudo-reward" \(r_{\theta}(x)=-f^{{}^{\prime}}(\frac{\pi_{\theta}(x)}{p(x)})\), one difference being that now \(r_{\theta}\) depends on \(\theta\) (see (Korbak et al., 2022b) for related remarks). We refer to the approach in Eq. 8 under the name \(f\)_-DPG_, in reference to the original DPG (Distributional Policy Gradient) approach introduced in (Parshakova et al., 2019), which can be seen as a special case of \(f\)-DPG ("KL-DPG") with \(D_{f}(\pi_{\theta}||p)\) set to \(\text{KL}(p||\pi_{\theta})\) as discussed in Sec. 3.4. ### Adding a baseline Based on the similarity to policy gradients, we adopt the widely used _baseline_ technique from RL, as previously studied in Williams (1992); Baxter and Bartlett (2001); Schulman et al. (2016) and in the context of DPG in (Korbak et al., 2022b). This technique involves subtracting a constant \(B\) from the reward term, and does not introduce bias in the estimate of the gradient at a given \(\theta\). In our case, with \(r_{\theta}(x)\doteq-f^{{}^{\prime}}(\frac{\pi_{\theta}(x)}{p(x)})\), we can write \(\nabla_{\theta}D_{f}(\pi_{\theta}||p)=E_{x\sim\pi_{\theta}}r_{\theta}(x) \nabla_{\theta}\log\pi_{\theta}(x)=E_{x\sim\pi_{\theta}}(r_{\theta}(x)-B)\ \nabla_{ \theta}\log\pi_{\theta}(x)\), based on the observation that \(E_{x\sim\pi_{\theta}}\ \nabla_{\theta}\log\pi_{\theta}(x)=0\) (see also App. A.6). **Fact 1**.: _Subtracting \(B\) from \(r_{\theta}(x)\) does not introduce bias into \(f\)-DPG gradient estimates._ Typically, \(B\) is chosen to be the average of the rewards, \(B\doteq E_{x\sim\pi_{\theta}}\left[r_{\theta}(x)\right]\). In the experiments of Sec. 4, we use the baseline technique where \(B\) is an estimate of the average of pseudo-rewards, unless otherwise specified. ### Recovering some existing methods Various existing methods for aligning LM with preferences can be included in the \(f\)-DPG framework. 
GdcIn GDC, fitting the policy \(\pi_{\theta}\) to the target \(p\) (which is given by either one of Eq. 1 or Eq. 4) is done using DPG (Parshakova et al., 2019), namely by minimizing the **forward KL**, \(\text{KL}(p||\pi_{\theta})\). In the \(f\)-DPG framework, \(\text{KL}(p||\pi_{\theta})=D_{f}(\pi_{\theta}||p)\) with \(f(t)=-\log t\), \(f^{\prime}(t)=-1/t\), and Thm. 1 leads to the formula: \[\nabla_{\theta}D_{f}(\pi_{\theta}||p)=E_{x\sim\pi_{\theta}}-\frac{p(x)}{\pi_{ \theta}(x)}\nabla_{\theta}\log\pi_{\theta}(x),\] which is equivalent to Eq. 5. RL with KL penaltiesLet's rewrite the target distribution of Eq. (3) as \(p(x)\doteq p_{\text{RLKL}}(x)=1/Z\ a(x)\ e^{r(x)/\beta}\), where \(Z\) is a normaliser. Then \(\text{KL}(\pi_{\theta}||p)=D_{f}(\pi_{\theta}||p)\), with \(f(t)=t\log t\) corresponding to **reverse KL**, and \(f^{\prime}(t)=1+\log t\). Thm. 1 implies that: \[\nabla_{\theta}D_{f}(\pi_{\theta}||p)\] \[=E_{x\sim\pi_{\theta}}\left(1+\log\frac{\pi_{\theta}(x)}{Z^{-1}a(x )\exp(r(x)/\beta)}\right)\nabla_{\theta}\log\pi_{\theta}(x)\] \[=E_{x\sim\pi_{\theta}}\left(-\frac{r(x)}{\beta}+\log\frac{\pi_{ \theta}(x)}{a(x)}\right)\nabla_{\theta}\log\pi_{\theta}(x),\] where we have exploited the fact that \(1+\log Z\) is a constant, hence \(E_{x\sim\pi_{\theta}}(1+\log Z)\)\(\nabla_{\theta}\log\pi_{\theta}(x)=0\). Up to the constant factor \(\beta\), this form recovers the usual formula for estimating the gradient of the loss defined in Eq. (2): \(\nabla_{\theta}J_{\mathrm{RLKL}}(\theta)=E_{x\sim\pi_{\theta}}\left(r(x)- \beta\log\frac{\pi_{\theta}(x)}{a(x)}\right)\nabla_{\theta}\log\pi_{\theta}(x)\). ### Estimating \(Z\) The target distribution \(p\) is often defined as \(p(x)\propto P(x)\), where \(P(x)\) is a non-negative function over \(\mathcal{X}\). The distribution \(p\) can then be computed as \(p(x)=1/Z\)\(P(x)\), where \(Z\) is the normalizing constant (partition function) defined by \(\sum_{x\in\mathcal{X}}P(x)\). An estimate of \(Z\) can be obtained by importance sampling, using samples from the current \(\pi_{\theta}\), based on the identity \(Z=\mathbb{E}_{\pi_{\theta}}\frac{P(x)}{\pi_{\theta}(x)}\). Each such estimate is unbiased, and by averaging the estimates based on different \(\pi_{\theta}\)'s, one can obtain a more precise estimate of \(Z\), exploiting _all_ the samples obtained so far. For details about the estimate of \(Z\), see Algorithm 1 in App. A.3, as well as the ablation study in App. H.3. ### Conditional target distributions For a conditional task such as machine translation, summarization or dialogue, where \(\pi_{\theta}\) is defined as a conditional distribution \(\pi_{\theta}(x|c)\), we adapt the conditional generalization of DPG introduced in Korbak et al. (2022). Given a distribution over contexts \(\tau(c)\) and a map from a context \(c\) to a target distribution \(p_{c}\), we have (see App. E for details): **Fact 2**.: \(f\)_-DPG is generalized to the conditional case by optimizing the loss_ \[E_{c\sim\tau(c)}\left[\nabla_{\theta}D_{f}(\pi_{\theta}(\cdot|c)||p_{c}(\cdot ))\right]. \tag{9}\] ## 4 Experiments We study four instantiations of \(f\)-DPG, namely KL-DPG, RKL-DPG, TV-DPG and JS-DPG, corresponding to minimizing the forward KL, reverse KL, Total Variation, and Jensen-Shannon divergences, respectively. We use an exponential moving average baseline with weight \(\alpha=0.99\) for all, except for KL-DPG, where we use the analytically computed value of the pseudo-reward expectation, which amounts to \(1\)(Korbak et al., 2022). 
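To make a single training step concrete before describing the tasks, the PyTorch sketch below runs \(f\)-DPG on a toy problem in which \(\pi_{\theta}\) is a softmax over a small finite set of sequences and the target is given unnormalized as \(P(x)\): it estimates \(Z\) by importance sampling, forms the pseudo-rewards \(r_{\theta}(x)=-f^{\prime}(\pi_{\theta}(x)/p(x))\) (here with the JS generator), subtracts an exponential-moving-average baseline, and takes a gradient step on a surrogate loss whose gradient matches Theorem 1. The toy setup and all hyperparameters are illustrative; the actual experiments fine-tune GPT-2 autoregressively.

```python
# Toy f-DPG loop (illustrative): pi_theta is a softmax over 8 "sequences",
# the target is given unnormalized as P(x), and f is the JS generator.
import torch

V = 8
logits = torch.zeros(V, requires_grad=True)             # parameters of pi_theta
P = torch.tensor([4., 2., 1., 1., .5, .5, .5, .5])       # unnormalized target P(x)
opt = torch.optim.Adam([logits], lr=0.05)
baseline, Z_est, alpha = 0.0, None, 0.99

def f_prime_js(t):                                       # derivative of the JS generator
    return 0.5 * torch.log(2 * t / (1 + t))

for step in range(300):
    pi = torch.softmax(logits, dim=0)
    x = torch.multinomial(pi.detach(), num_samples=64, replacement=True)
    # importance-sampling estimate of Z = E_{pi}[P(x)/pi(x)], smoothed over steps
    z_batch = (P[x] / pi.detach()[x]).mean()
    Z_est = z_batch if Z_est is None else 0.9 * Z_est + 0.1 * z_batch
    p_x = P[x] / Z_est                                   # target probabilities p(x)
    reward = -f_prime_js(pi.detach()[x] / p_x)           # pseudo-rewards r_theta(x)
    baseline = alpha * baseline + (1 - alpha) * reward.mean().item()
    # surrogate whose gradient is E_pi[ f'(pi/p) grad log pi ] (Theorem 1), with baseline
    loss = -((reward - baseline) * torch.log(pi[x])).mean()
    opt.zero_grad(); loss.backward(); opt.step()

print(torch.softmax(logits, 0).detach().numpy().round(3))
print((P / P.sum()).numpy().round(3))                    # pi_theta should approach p
```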
We evaluate them on a diverse array of tasks including imposing sentiment constraints (Sec. 4.1), lexical constraints (Sec. 4.2), debiasing genders' prevalence and religious groups' regard (Sec. 4.3), and context-conditioned tasks, such as enforcing factual consistency in summarization (Sec. 4.4) or compilability of generated code (see App. E.1). Unless specified otherwise, we use a pretrained GPT-2 "small" (Radford et al., 2019) with 117M parameters for the initial model. Implementation details and hyper-parameters are available in App. C. MetricsWe report the following key metrics. We add task-specific metrics if needed. 1. \(D_{f}(\pi_{\theta}||p)\), the \(f\)-divergence between \(p\) and \(\pi_{\theta}\), with four different \(f\)'s corresponding to forward KL, \(\mathrm{KL}(p||\pi_{\theta})\); reverse KL, \(\mathrm{KL}(\pi_{\theta}||p)\); Total Variation, \(\mathrm{TV}(\pi_{\theta}||p)\); and Jensen-Shannon, JS\((\pi_{\theta}||p)\). We use importance sampling to estimate these divergences. 2. \(\mathrm{KL}(\pi_{\theta}||a)\), a measure of the divergence from original LM \(a\)(Ziegler et al., 2019; Khalifa et al., 2021). 3. Moments \(\mathbb{E}_{x\sim\pi_{\theta}}\phi(x)\) of a feature of interest \(\phi(x)\). 4. Normalized Entropy (Berger et al., 1996), a measure of diversity in probability distribution normalized by number of tokens. 5. Standard deviation of a minibatch's pseudo-rewards, \(\mathrm{std}(r_{\theta}(x))\), where \(r_{\theta}\) is defined as in Sec. 3.3. ### Alignment with scalar preferences TaskWe begin with the task of maximizing a scalar preference with KL penalties, whose target distribution, \(p_{\mathrm{RLKL}}\), is defined in Eq. 3. We set \(r(x)=\log\phi(x)\) where \(\phi(x)\) is the probability returned by a sentiment classifier fine-tuned from Distil-BERT (HF Canonical Model Maintainers, 2022). This reward function is optimal for modeling a decision-maker which given \(k\) different samples \(x_{1},\dots,x_{k}\), will pick \(x_{i}\) with probability proportional to \(\phi(x_{i})\) (see Appendix F). We set \(\beta=0.1\), which is in line with the range of values explored by Ziegler et al. (2019). Note that applying RKL-DPG on \(p_{\mathrm{RLKL}}\) is equivalent to the RL with KL penalties method, as described in Sec. 3.4. However, through \(f\)-DPG we can explore alternative objectives to approximate the same target. ResultsFig. 2 shows the evolution of the above-mentioned metrics. Further details are given in Fig. 10 in the Appendix. We observe that whereas RKL-DPG achieves by far the best performance in terms of reverse KL, \(\mathrm{KL}(\pi_{\theta}||p)\) (top-right), it fails to minimize all other divergence metrics. This shows that minimizing one divergence does not necessarily imply that other divergences will follow. Notably, RKL-DPG yields the highest value of \(E_{\pi_{\theta}}[\phi(x)]\) at the cost of a significant departure from \(a\). We connect this to the strong influence that low values \(p(x)\) have on RKL-DPG, which induces a large pseudo-reward for strongly reducing \(\pi_{\theta}(x)\) on those samples (see Sec 5) and produces the spike at the beginning of training in \(\mathrm{std}(\mathrm{rewards})\). This can lead \(\pi_{\theta}(x)\) to concentrate on high-probability regions of \(p(x)\), at the cost of diversity, which can also be seen in the low entropy of the generated samples. Interestingly, the three remaining variants of DPG (KL, TV and JS) consistently minimize all four tracked divergences, with JS-DPG performing best overall. 
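As a side note on how the divergence metrics above and the normalising constant from the Estimating \(Z\) subsection are obtained, the importance-sampling estimators can be sketched in a few lines. The toy finite setting and all names below are illustrative assumptions, not the paper's code.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy finite setting: P(x) is an unnormalised target, pi is the current policy.
n_items = 16
pi = rng.dirichlet(np.ones(n_items))
P_unnorm = rng.random(n_items)

xs = rng.choice(n_items, size=20_000, p=pi)      # samples from pi_theta

# Z = E_{x ~ pi_theta}[P(x) / pi_theta(x)], estimated by importance sampling.
Z_hat = np.mean(P_unnorm[xs] / pi[xs])

def f_divergence_estimate(f, xs):
    """D_f(pi || p) = E_{x ~ p}[f(pi/p)] = E_{x ~ pi}[(p/pi) f(pi/p)],
    estimated from samples of pi with the plug-in normaliser Z_hat."""
    p_hat = P_unnorm / Z_hat
    ratio = pi[xs] / p_hat[xs]
    return np.mean(f(ratio) / ratio)

forward_kl = f_divergence_estimate(lambda t: -np.log(t), xs)     # KL(p || pi)
reverse_kl = f_divergence_estimate(lambda t: t * np.log(t), xs)  # KL(pi || p)
print(f"Z_hat={Z_hat:.3f}  KL(p||pi)~{forward_kl:.3f}  KL(pi||p)~{reverse_kl:.3f}")
```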
In App. D.1, we show additional metrics on generated sentences, which show low diversity but high quality for RKL-DPG, compared to other \(f\)-DPGs, suggesting it captures a subset of the target distribution ("mode collapse"), as commonly observed in other generative models (Huszar, 2015; Che et al., 2017; Mescheder et al., 2018).

### Alignment with lexical constraints

**Task** In this task, we constrain the presence of a specific word in the generated text. Following Khalifa et al. (2021), we formulate this goal as a binary preference on the LM by using a target distribution \(p_{\mathrm{GDC\_bin}}\), where \(b(x)=1\) iff the target word appears in the sequence \(x\), and using a scalar preference target distribution \(p_{\mathrm{RLKL}}\) where \(r(x)\) is set in the same way as \(b(x)\) above. Note that in the GDC framework, \(p_{\mathrm{GDC\_bin}}(x)=0\) when \(b(x)=0\), implying that reverse KL, namely \(\mathrm{KL}(\pi_{\theta}||p)\), becomes infinite, so RKL-DPG cannot be used (nor measured) for that target. We use four words with different occurrence frequencies: "amazing" (\(1\cdot 10^{-3}\)), "restaurant" (\(6\cdot 10^{-4}\)), "amusing" (\(6\cdot 10^{-5}\)), and "Wikileaks" (\(8\cdot 10^{-6}\)).

**Results** The aggregated evolution of the metrics for both the GDC and RL with KL penalties frameworks is presented in Fig. 3 (Fig. 1 shows a simplified view of Fig. 3 (a)). Disaggregated results for each task are presented in App. G. We see that all variants of \(f\)-DPG reduce the divergence from the target distribution across all measured \(f\)-divergences. Furthermore, as expected, convergence to the target is connected with the success ratio in producing the desired word, \(\mathbb{E}_{\pi_{\theta}}\left[b(x)\right]\), while balancing it with a moderate divergence from \(a\), \(\mathrm{KL}(\pi_{\theta}||a)\). This reflects that approaching the optimal distribution \(p\) translates into gains on the downstream-task metrics. Strikingly, the original KL-DPG is outperformed by all other variants of \(f\)-DPG, even in terms of forward KL. We hypothesize that this is linked to the high variance of the pseudo-rewards in KL-DPG, as visualized in the last panel of Fig. 3 (a) and (b). In Sec. 5, we suggest an interpretation for this. We also observe that RKL-DPG tends to produce distributions with lower normalized entropy. Despite this effect, we found no significant difference in diversity among the generated sentences (see Tab. 4 in App. D.1).

### Alignment with distributional constraints

**Task** We now investigate enforcing distributional preferences on the LM. We focus on debiasing the pretrained model on two kinds of preferences, namely genders' prevalence (Khalifa et al., 2021) and regard relative to religious groups. The preferences for the genders' debiasing task are defined as \(\phi_{1}(x)=1\) iff \(x\) contains more female than male pronouns, with desired moment \(\bar{\mu}_{1}=0.5\), and \(\phi_{2}(x)=1\) iff \(x\) contains at least one of the words in the 'science' word list compiled by Dathathri et al. (2020), with desired moment \(\bar{\mu}_{2}=1\).

Figure 2: Comparison of \(f\)-DPG on sentiment preference. Evaluation metrics: four \(f\)-divergences \(D_{f}(\pi_{\theta}||p)\) (\(\downarrow\) better), \(E_{\pi_{\theta}}[\phi(x)]\) (\(\uparrow\) better), Entropy (\(\uparrow\) better), standard deviation of pseudo-reward \(\mathrm{std}(r_{\theta}(x))\).
For regard debiasing, we use a single distributional constraint where \(0<\phi(x)<1\) is a regard score of the sentence when prompted with Muslims, evaluated with a pretrained classifier (Sheng et al., 2019). We set the desired moment \(\bar{\mu}=0.568\), the regard score observed for Christians. The initial average regard score given Muslims is \(0.385\). For the first experiment, we use GPT-2 small as the initial model \(a\), additionally fine-tuned on the WikiBio dataset (Lebret et al., 2016), whereas for the last one we use vanilla GPT-2 small.

**Results** We report the results of both experiments in Fig. 4. For the regard score rebalancing, we considerably reduce bias in the regard score for two different demographic groups, from an initial regard score ratio \(E\left[\phi(x)|\texttt{Christians}\right]:E\left[\phi(x)|\texttt{Muslims}\right]=1:0.677\) to \(E\left[\phi(x)|\texttt{Christians}\right]:E\left[\phi(x)|\texttt{Muslims}\right]=1:0.801\) on average. Interestingly, this task showcases a weakness of TV-DPG: because the original distribution is already close to the target, the hard-thresholded pseudo-reward has a large variance (last panel of Fig. 4(b)), inducing noisy gradient estimates and, consequently, sub-optimal convergence. Concerning the gender debiasing experiments, we can see that all other variants of \(f\)-DPG outperform the original KL-DPG explored in Khalifa et al. (2021), with RKL-DPG giving the best results and better matching the pointwise constraint, although seemingly at the cost of lower diversity as measured by the entropy.

### Alignment with conditional constraints

**Task** We adopt the conditional task from Korbak et al. (2022), which aims to constrain the T5 (Raffel et al., 2020) language model to generate more factually faithful summaries (Maynez et al., 2020; Nan et al., 2021). Specifically, let \(\text{NER}(\cdot)\) denote the set of named entities found in a text. Then, \(b(x,c)=1\) iff \([\text{NER}(x)\subseteq\text{NER}(c)]\wedge[|\text{NER}(x)|\geq 4]\), and \(0\) otherwise. Following the authors, we sample source documents from the CNN/Daily Mail dataset (Nallapati et al., 2016), i.e. \(\tau(c)\) is a uniform distribution over a given subset of source documents. In addition to the divergences, we evaluate the performance using Rouge (Lin, 2004), a measure of summarization quality in terms of unigram overlap between the generated summary and the ground truth summary (see App. E for additional metrics and more experiments on code generation with compilability preferences).

Figure 4: Comparison of \(f\)-DPG aggregated on distributional constraints. Evaluation metrics: four \(f\)-divergences \(D_{f}(\pi_{\theta}||p)\) (\(\downarrow\) better), \(E_{\pi_{\theta}}[\phi(x)]\) (\(\uparrow\) better), Entropy (\(\uparrow\) better), standard deviation of pseudo-reward \(\text{std}(r_{\theta}(x))\).

Figure 3: Comparison of \(f\)-DPG aggregated on four single-word targets. Standard deviations are suppressed for clarity. Evaluation metrics: four \(f\)-divergences \(D_{f}(\pi_{\theta}||p)\) (\(\downarrow\) better), \(E_{\pi_{\theta}}[\phi(x)]\) (\(\uparrow\) better), Entropy (\(\uparrow\) better), standard deviation of pseudo-reward \(\text{std}(r_{\theta}(x))\).

**Results** We present the evolution of metrics in Fig. 5.
The results show that \(f\)-DPG increases the fraction of consistent named entities in summarization, and interestingly, this also leads to an indirect improvement in the overall quality of generated summaries compared to ground truth, even though ground truth summaries are not used in training. As also observed in Sec. 4.2, JS-DPG leads to better convergence to \(p\) than KL-DPG as used in Korbak et al. (2022).

### Ablation study

This section presents just the key findings of our study. Full results and detailed discussions can be found in App. H.

**Effect of parameter family capacity** All experiments presented so far correspond to possibly mis-specified target distributions. To understand whether the observed behavior of the different variants of \(f\)-DPG is affected by this factor, we used pre-trained models with the same architecture for both \(\pi_{\theta}\) and \(p\). We found that KL-DPG again lags considerably in terms of divergence, while presenting a high variance in the pseudo-reward. RKL-DPG shows a significant drop of entropy in the initial phase, but with the full capacity of the parameter family, the model can recover and cover the rest of the distribution. Additionally, applying the fine-tuned LMs zero-shot to a summarization task, following Radford et al. (2019), we found that they recover to a large extent the quality of the target distribution.

**Effect of training scheme** We examined different training schemes for the lexical constraint on "amazing" with a scalar preference from Sec. 4.2. We saw that the use of a baseline technique improves the performance of the \(f\)-DPG method, with RKL-DPG showing the greatest benefit. Additionally, we found that even though a large batch size is effective at reducing the variance of KL-DPG, we still observe KL-DPG to perform comparatively worse than other divergences. Finally, we observe that our importance sampling estimates converged to the true value of \(Z\).

## 5 Discussion and conclusion

A plausible hypothesis would have been that each variant of \(f\)-DPG is comparatively better at least in terms of the \(f\)-divergence objective being optimized. Surprisingly, we found that, save for a few exceptions (Sec. 4.1), for a given target there is one or a few variants that are the best across all measured divergences. Furthermore, we observed that divergence measures can have a significant impact on the performance of the model depending on the target distribution. Fig. 6 illustrates the differences between pseudo-rewards for distinct \(f\)-divergences. The forward KL loss aims to ensure coverage of the subset where \(p(x)>0\), giving a large pseudo-reward for samples with \(p(x)\gg\pi_{\theta}(x)\). However, the optimization can be sensitive to sampling noise in the finite sample approximation (see, e.g., Sec. 4.2). Conversely, the reverse KL loss results in extreme negative rewards for samples with \(p(x)\ll\pi_{\theta}(x)\), leading \(\pi_{\theta}\) to avoid such regions and resulting in distributional collapse (Sec. 4.1). The Total Variation loss is robust to outliers thanks to its hard-thresholded pseudo-reward; however, it can lead to high-variance behavior when \(\pi_{\theta}\approx p\) (Sec. 4.3). On the other hand, the Jensen-Shannon loss gives smooth and robust rewards in both directions and prevents \(\pi_{\theta}\) from heavily relying on a single direction, making it a reasonable default choice, as confirmed by our experiments.
To conclude, we propose a flexible framework for approximating a target distribution by minimizing any \(f\)-divergence, unifying earlier approaches for aligning language models. Our results on a diverse array of tasks show that minimizing well-chosen \(f\)-divergences leads to significant gains over previous work. Figure 5: Comparison of \(f\)-DPG on factual summarization. Evaluation metrics: 3 \(f\)-divergences \(D_{f}(\pi_{\theta}||p)\) (\(\downarrow\) better), number of named entities (\(\uparrow\) better), Rouge \(E_{\pi_{\theta}}[\phi(x)]\) (\(\uparrow\) better). Figure 6: Pseudo-rewards for various \(f\)-divergences. The \(x\)-axis denotes \(\frac{p(x)}{\pi_{\theta}(x)}\) and the \(y\)-axis denotes the pseudo-reward. The dotted line denotes the point where \(p(x)=\pi_{\theta}(x)\).
2301.11214
Returning The Favour: When Regression Benefits From Probabilistic Causal Knowledge
A directed acyclic graph (DAG) provides valuable prior knowledge that is often discarded in regression tasks in machine learning. We show that the independences arising from the presence of collider structures in DAGs provide meaningful inductive biases, which constrain the regression hypothesis space and improve predictive performance. We introduce collider regression, a framework to incorporate probabilistic causal knowledge from a collider in a regression problem. When the hypothesis space is a reproducing kernel Hilbert space, we prove a strictly positive generalisation benefit under mild assumptions and provide closed-form estimators of the empirical risk minimiser. Experiments on synthetic and climate model data demonstrate performance gains of the proposed methodology.
Shahine Bouabid, Jake Fawkes, Dino Sejdinovic
2023-01-26T16:44:15Z
http://arxiv.org/abs/2301.11214v2
# Returning The Favour: When Regression Benefits From Probabilistic Causal Knowledge

###### Abstract

A directed acyclic graph (DAG) provides valuable prior knowledge that is often discarded in regression tasks in machine learning. We show that the independences arising from the presence of collider structures in DAGs provide meaningful inductive biases, which constrain the regression hypothesis space and improve predictive performance. We introduce _collider regression_, a framework to incorporate probabilistic causal knowledge from a collider in a regression problem. When the hypothesis space is a reproducing kernel Hilbert space, we prove a strictly positive generalisation benefit under mild assumptions and provide closed-form estimators of the empirical risk minimiser. Experiments on synthetic and climate model data demonstrate performance gains of the proposed methodology.
## 1 Introduction

We constrain the hypothesis space to the subspace of functions that comply with the independences arising from the collider. By considering the projection operator \(P\) that maps onto this subspace, we propose a framework called _collider regression_ to incorporate probabilistic causal knowledge from a collider in a regression problem. We show that when the data generating process follows a collider, projecting any given regressor onto this subspace provides a positive generalisation benefit. We then consider the specific case where the hypothesis space is a reproducing kernel Hilbert space (RKHS). Because RKHSs are rich functional spaces that also enjoy closed analytical solutions to the least-squares regression problem, they allow us to build intuition for the general case. We prove a strictly positive generalisation benefit from projecting the least-squares empirical risk minimiser in a RKHS, where the size of the generalisation gap increases with the complexity of the problem.
We also show that for a RKHS, it is possible to solve the least-squares regression problem directly inside the projected hypothesis subspace and provide closed-form estimators. We experimentally validate the effectiveness of our methodology on a synthetic dataset and on a real world climate science dataset. Results demonstrate that collider regression consistently provides an improvement in generalisation at test time in comparison with standard least-squares regressors. Results also suggest that collider regression is particularly beneficial when few training samples are available, but samples from the covariates can easily be obtained, i.e. in a semi-supervised learning setting. ## 2 Background Regression notationLet \(Y\) be our target variable over \(\mathcal{Y}\subseteq\mathbb{R}\) and \(X\) be our covariates over \(\mathcal{X}\). Our goal is a standard regression task where we have access to a dataset \(\mathcal{D}=\{\mathbf{x},\mathbf{y}\}\in(\mathcal{X}\times\mathcal{Y})^{n}\) of \(n\) samples \((x^{(i)},y^{(i)})\) from \((X,Y)\). We aim to minimise the regularised empirical risk \[\hat{f}=\operatorname*{arg\,min}_{f\in\mathcal{F}}\frac{1}{n}\sum_{i=1}^{n} \left(y^{(i)}-f(x^{(i)})\right)^{2}+\lambda\Omega(f) \tag{1}\] where \(\mathcal{F}\) is a specified hypothesis space of functions \(f\colon\mathcal{X}\!\!\to\!\!\mathcal{Y}\), \(\lambda>0\) and \(\Omega(f)>0\) is a regularisation term. This corresponds to finding a function \(\hat{f}\) that best estimates the optimal regression function for the squared loss \[f^{*}(x)=\mathbb{E}[Y|X=x]. \tag{2}\] For any two functions \(h,h^{\prime}\in\mathcal{F}\), the squared-error generalisation gap between \(h\) and \(h^{\prime}\) is defined as the difference in their true risk \[\Delta(h,h^{\prime})=\mathbb{E}[(Y-h(X))^{2}]-\mathbb{E}[(Y-h^{\prime}(X))^{2 }]. \tag{3}\] Therefore if \(\Delta(h,h^{\prime})\geq 0\), it means that \(h^{\prime}\) generalises better from the training data than \(h\). Reproducing kernel Hilbert spacesLet \(\mathcal{X}\) be some non-empty space. A real-valued RKHS \((\mathcal{H},\langle\cdot,\cdot\rangle_{\mathcal{H}})\) is a complete inner product space of functions \(f:\mathcal{X}\to\mathbb{R}\) that admits a bounded evaluation functional. For \(x\in\mathcal{X}\), the Riesz representer of the evaluation functional is denoted \(k_{x}\in\mathcal{H}\) and satisfies the _reproducing property_\(f(x)=\langle f,k_{x}\rangle_{\mathcal{H}}\), \(\forall f\in\mathcal{H}\). The bivariate symmetric positive definite function defined by \(k(x,x^{\prime})=\langle k_{x},k_{x^{\prime}}\rangle_{\mathcal{H}}\) is referred to as the _reproducing kernel_ of \(\mathcal{H}\). Conversely, the Moore-Aronszaj theorem (Aronszajn, 1950) shows that any symmetric positive definite function \(k\) is the unique reproducing kernel of an RKHS. For more details on RKHS theory, we refer the reader to Berlinet & Thomas-Agnan (2011). Conditional Mean EmbeddingsConditional mean embeddings (CMEs) provide a powerful framework to represent conditional distributions in a RKHS (Fukumizu et al., 2004; Song et al., 2013; Muandet et al., 2016). Given random variables \(X,Z\) on \(\mathcal{X},\mathcal{Z}\) and an RKHS \(\mathcal{H}\subseteq\mathbb{R}^{\mathcal{X}}\) with reproducing kernel \(k:\mathcal{X}\times\mathcal{X}\to\mathbb{R}\), the CME of \(\mathbb{P}(X|Z=z)\) is defined as \[\mu_{X|Z=z}=\mathbb{E}[k_{X}|Z=z]\in\mathcal{H}. 
\tag{4}\] It corresponds to the Riesz representer of the conditional expectation functional \(f\mapsto\mathbb{E}[f(X)|Z=z]\) and can thus be used to evaluate conditional expectations by taking an inner product \(\mathbb{E}[f(X)|Z=z]=\langle f,\mu_{X|Z=z}\rangle_{\mathcal{H}}\). Introducing a second RKHS \(\mathcal{G}\subseteq\mathbb{R}^{\mathcal{Z}}\) with reproducing kernel \(\ell:\mathcal{Z}\times\mathcal{Z}\to\mathbb{R}\), Grunewalder et al. (2012) propose an alternative view of CMEs as the solution to the least-square regression of canonical feature maps \(\ell_{Z}\) onto \(k_{X}\) \[\begin{cases}E^{*}=\operatorname*{arg\,min}_{C\in\mathsf{B}_{2}(\mathcal{G}, \mathcal{H})}\mathbb{E}[\|k_{X}-C\ell_{Z}\|_{\mathcal{H}}^{2}]\\ \mu_{X|Z=z}=E^{*}\ell_{z}\end{cases} \tag{5}\] where \(\mathsf{B}_{2}(\mathcal{G},\mathcal{H})\) denotes the space of Hilbert-Schmidt operators1 from \(\mathcal{G}\) to \(\mathcal{H}\). Given a dataset \(\mathcal{D}=\{\mathbf{x},\mathbf{z}\}\), this perspective allows to compute an estimate of the associated operator \(E^{*}:\mathcal{G}\to\mathcal{H}\) as the solution to the regularised empirical least-squares problem as Footnote 1: i.e. bounded operators \(A:\mathcal{G}\to\mathcal{H}\) such that \(\operatorname{Tr}(A^{*}A)<\infty\). \(\mathsf{B}_{2}(\mathcal{G},\mathcal{H})\) has a Hilbert space structure for the inner product \(\langle A,B\rangle_{\mathsf{B}_{2}}=\operatorname{Tr}(A^{*}B)\). \[\begin{cases}\hat{E}^{*}&=\operatorname*{arg\,min}_{C\in\mathsf{B}_{2}(\mathcal{G },\mathcal{H})}\frac{1}{n}\sum_{i=1}^{n}\|k_{x^{(i)}}\!-\!C\ell_{z^{(i)}}\|_{ \mathcal{H}}^{2}+\gamma\|C\|_{\mathsf{B}_{2}}^{2}\\ &=\mathbf{k}_{\mathbf{x}}^{*}(\mathbf{L}+\gamma\mathbf{I}_{n})^{-1}\mathbf{\ell}_{ \mathbf{z}}\\ \hat{\mu}_{X|Z=z}=\tilde{E}^{*}\ell_{z}=\mathbf{k}_{\mathbf{x}}^{*}(\mathbf{L}+ \gamma\mathbf{I}_{n})^{-1}\mathbf{\ell}_{\mathbf{z}}(z)\end{cases} \tag{6}\] where \(\gamma>0\), \(\mathbf{L}=\ell(\mathbf{z},\mathbf{z})\), \(\mathbf{k}_{\mathbf{x}}=k(\mathbf{x},\cdot)\) and \(\mathbf{\ell}_{\mathbf{z}}=\ell(\mathbf{z},\cdot)\). We refer the reader to (Muandet et al., 2017) for a comprehensive review of CMEs. ## 3 DAG inductive biases for regression In this section, we aim to answer how knowledge of the causal graph of the underlying data generating process can help to perform regression. We start by reviewing the concept of Markov boundaries and how it is used for feature selection. We then show that even after feature selection has been performed, there is still residual information from colliders that is relevant for a regression problem. ### Markov boundary for feature selection Since we are focusing on regression, we are interested in how the DAG can inform us about \(\mathbb{P}(Y|X)\). Suppose that for some vertex \(X_{i}\), the DAG informs us that \(Y\perp\!\!\!\perp X_{i}\mid\ X\setminus X_{i}\). Stated in terms of mutual information we have that2\(I(Y;X)=I(Y;X\setminus X_{i})\), therefore we can discard \(X_{i}\) from our set of covariates without any loss of probabilistic information for \(\mathbb{P}(Y|X)\). Footnote 2: This follows from \(I(Y;X)=I(Y;X\setminus X_{i})+I(Y;X_{i}|X\setminus X_{i})\) and the conditional independence gives \(I(Y;X_{i}|X\setminus X_{i})=0\). From a functional perspective, we can interpret this as incorporating the inductive bias that the regressor need only depend on \(X\setminus X_{i}\), allowing us to learn simpler functions which should generalise better from the training set. 
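As a concrete illustration of the empirical CME estimator in Eq. (6), which the collider regression estimators of Section 4 build on, a minimal NumPy sketch with Gaussian kernels could look as follows; the data-generating process and hyper-parameters are purely illustrative assumptions.

```python
import numpy as np

def gaussian_kernel(A, B, lengthscale=1.0):
    """k(a, b) = exp(-||a - b||^2 / (2 * lengthscale^2)) for row-stacked inputs."""
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * sq / lengthscale**2)

rng = np.random.default_rng(0)
n = 200
z = rng.normal(size=(n, 1))                      # samples of Z
x = np.sin(z) + 0.1 * rng.normal(size=(n, 1))    # samples of X, dependent on Z

gamma = 1e-2
L = gaussian_kernel(z, z)                        # L = l(z, z)
# Weights of mu_hat_{X | Z = 0}: (L + gamma I)^{-1} l(z_train, 0), cf. Eq. (6).
alpha = np.linalg.solve(L + gamma * np.eye(n), gaussian_kernel(z, np.zeros((1, 1))))

# Conditional expectation of any f in the RKHS via <f, mu_hat_{X|Z=0}>_H.
# Here f = k(., 0.5), so f(x_i) = k(x_i, 0.5).
f_at_train = gaussian_kernel(x, np.array([[0.5]]))
print("E[k(X, 0.5) | Z = 0] ~", (f_at_train.T @ alpha).item())
```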
By repeating the process of removing features, we can iteratively construct a minimal set of necessary covariates that still retain all the probabilistic information about \(\mathbb{P}(Y|X)\). This is known as feature selection (Dash and Liu, 1997). Such a set, \(S\), should satisfy \(Y\perp\!\!\!\perp X\setminus S|S\) and we should not be able to remove a vertex from \(S\) without losing information about \(\mathbb{P}(Y|X)\). A set of this form is known as the Markov boundary of \(Y\) (Statnikov et al., 2013), denoted by \(\operatorname{Mb}(Y)\). If the only independences in the distribution are those implied by the DAG structure3, then the Markov boundary is uniquely given by Footnote 3: An assumption known as faithfulness (Meek, 1995) which we take throughout. \[\operatorname{Mb}(Y)=\operatorname{Pa}(Y)\cup\operatorname{Ch}(Y)\cup\operatorname{Sp}(Y), \tag{7}\] where \(\operatorname{Pa}(Y)\) are the parents of \(Y\), \(\operatorname{Ch}(Y)\) are the children of \(Y\) and \(\operatorname{Sp}(Y)\) are the spouses of \(Y\), i.e. the children's other parents. In Figure 2 the Markov boundary of \(Y\) is highlighted in blue.

### Extracting inductive bias for regression

By construction the Markov boundary of \(Y\) cannot contain independence relationships of the form \(Y\perp\!\!\!\perp X_{i}|X\setminus X_{i}\). However, it can still contain unused independence statements that involve \(Y\), and therefore provides useful information about the conditional distribution \(\mathbb{P}(Y|X)\). For example, the graphical structure in Figure 2 gives that \(Y\perp\!\!\!\perp X_{4}\) and \(Y\perp\!\!\!\perp X_{5}\mid X_{3}\). This implies that \(\mathbb{P}(Y|X_{4})=\mathbb{P}(Y)\) and \(\mathbb{P}(Y|X_{3},X_{5})=\mathbb{P}(Y|X_{3})\), which by marginalisation constrains \(\mathbb{P}(Y|X)\) and so gives us extra information about it. The presence of these independence relationships inside \(\operatorname{Mb}(Y)\) is only possible because a collider, \(X_{6}\), has allowed for the spouses \(X_{4}\) and \(X_{5}\) to be within the Markov boundary without being adjacent to \(Y\). Hence, the presence of collider structures within the Markov boundary of \(Y\) provides additional independence relationships involving \(Y\). The following proposition shows that the presence of a collider is not only a sufficient condition, but also necessary.

**Proposition 3.1**.: _The Markov boundary of \(Y\) contains a collider if and only if there exists \(Z\in\operatorname{Mb}(Y)\) and \(S_{Z}\subset\operatorname{Mb}(Y)\) such that \(Y\perp\!\!\!\perp Z\mid S_{Z}\)._

Proof.: We have a conditional independence between two variables if and only if they are not adjacent (Lemma 3.1, 3.2 Koller and Friedman (2009)) and \(\operatorname{Mb}(Y)\) contains a variable not adjacent to \(Y\) if and only if it contains a collider.

The collider structures are thus the only graphical structures that provide conditional independence statements relevant to \(\mathbb{P}(Y|X)\) within the Markov boundary. To the best of our knowledge, this information is currently left unused when addressing a regression problem. However, unlike for the feature selection process, we cannot simply use these independence statements to discard covariates and reduce the set of features. This is because while the spouses of \(Y\) are uninformative on their own, they become informative in the presence of other covariates. Namely, in Figure 2, while \(Y\perp\!\!\!\perp X_{4}\) we have \(Y\not\perp\!\!\!\perp X_{4}\mid X_{6}\) because \(X_{6}\) is a collider.
Therefore, we have that \(I(Y;X)>I(Y;X\setminus X_{4})\) and discarding \(X_{4}\) would constitute a loss of information.

Figure 2: A causal graph with the Markov boundary of \(Y\) highlighted in blue and vertices outside the Markov boundary highlighted in red. Whilst \(Y\) and \(X_{4}\) are marginally independent, the presence of the collider \(X_{6}\) opens the path between \(Y\) and \(X_{4}\).

## 4 Collider Regression

In this section, we present a method for incorporating probabilistic inductive bias from a collider structure into a regression problem, and provide guarantees of improved generalisation error. For the sake of clarity, our exposition focuses on the simple collider structure depicted in Figure 3. We emphasise, however, that this simplification does not harm the generality of our contribution, and Section 5 shows how collider regression can be extended to general DAG structures.

### Simple collider regression setup

Let \(X_{1},X_{2},Y\) be random variables following the DAG structure in Figure 3 and taking values in \(\mathcal{X}_{1}\subseteq\mathbb{R}^{d_{1}}\), \(\mathcal{X}_{2}\subseteq\mathbb{R}^{d_{2}}\) and \(\mathcal{Y}\subseteq\mathbb{R}\) respectively. Without loss of generality, we assume that \(\mathbb{E}[Y]=0\). Under the squared loss, the optimal regressor is given by \[f^{*}(x_{1},x_{2})=\mathbb{E}[Y|X_{1}=x_{1},X_{2}=x_{2}]. \tag{8}\] Since the collider gives the independence relationship \(Y\perp\!\!\!\perp X_{2}\), we have that \[\mathbb{E}[f^{*}(X_{1},X_{2})|X_{2}]=\mathbb{E}\big[\mathbb{E}[Y|X_{1},X_{2}]\mid X_{2}\big]=\mathbb{E}[Y|X_{2}]=\mathbb{E}[Y]=0. \tag{9}\] Hence, the optimal regressor \(f^{*}\) lies in the subspace of functions that have zero \(X_{2}\)-conditional expectation. To incorporate the knowledge from the DAG into our regression procedure, we should therefore ensure that our estimate \(\hat{f}\) lies within the same subspace of functions, i.e. we want \[\hat{f}\in\big\{f\in\mathcal{F}\mid\mathbb{E}[f(X_{1},X_{2})|X_{2}]=0\big\}. \tag{$\star$}\] We propose to investigate how such a constraint can be enforced onto our hypothesis and how it benefits generalisation, starting with the general case of square-integrable functions. In what follows, we will use the shorthand concatenated notations \(X=(X_{1},X_{2})\), \(\mathcal{X}=\mathcal{X}_{1}\times\mathcal{X}_{2}\), \(x=(x_{1},x_{2})\in\mathcal{X}\) and \(\mathbf{x}=(\mathbf{x}_{1},\mathbf{x}_{2})\in\mathcal{X}^{n}\).

### Respecting the collider structure in the hypothesis

Let \(L^{2}(X)\) denote the space of square-integrable functions with respect to the probability measure induced by \(X\) and suppose \(\mathcal{F}=L^{2}(X)\). Let \(E:L^{2}(X)\to L^{2}(X)\) denote the conditional expectation operator defined by \[Ef(x_{1},x_{2})=\mathbb{E}[f(X_{1},X_{2})|X_{2}=\pi_{2}(x_{1},x_{2})], \tag{10}\] where \(\pi_{2}(x_{1},x_{2})=x_{2}\) is simply the mapping that discards the first component4. Footnote 4: This notation emphasises that \(Ef\) is formally a function of \((x_{1},x_{2})\) and belongs to \(L^{2}(X)\). The operator \(E\) classically defines an orthogonal projection over the subspace of \(X_{2}\)-measurable functions. \(L^{2}(X)\) thus orthogonally decomposes into its image, denoted \(\mathrm{Range}(E)\), and its null-space, denoted \(\mathrm{Ker}(E)\), as \[L^{2}(X)=\mathrm{Ker}(E)\oplus\mathrm{Range}(E). \tag{11}\] Using this notation, satisfying condition (\(\star\)) corresponds to having \(\hat{f}\in\mathrm{Ker}(E)\).
Alternatively, if we denote \[P=\mathrm{Id}-E, \tag{12}\] the orthogonal projection onto \(\mathrm{Ker}(E)\), then we want to take \(\mathcal{F}=\mathrm{Range}(P)\) as our hypothesis space. In general, it may be hard to constrain the hypothesis space directly to be \(\mathrm{Range}(P)\). However, the solution to the empirical risk minimisation problem (1) will always orthogonally decompose within \(L^{2}(X)\) as \[\hat{f}=P\hat{f}+E\hat{f}, \tag{13}\] where only \(P\hat{f}\in\mathrm{Range}(P)\) satisfies (\(\star\)). It turns out that discarding \(E\hat{f}\) -- the part that does not satisfy the constraint -- will always yield generalisation benefits.

**Proposition 4.1**.: _Let \(h\in L^{2}(X)\) be any regressor from our hypothesis space. We have_ \[\Delta(h,Ph)=\|Eh\|_{L^{2}(X)}^{2}. \tag{14}\]

The generalisation gap is therefore always non-negative. Hence, for any given regressor \(\hat{f}\), projecting it onto \(\mathrm{Range}(P)\) can never degrade, and in general improves, its test performance. In practice, a simple estimator of \(P\hat{f}\) can be obtained by subtracting an estimate of \(\mathbb{E}[\hat{f}(X_{1},X_{2})|X_{2}]\) as \[\hat{P}\hat{f}(x_{1},x_{2})=\hat{f}(x_{1},x_{2})-\hat{\mathbb{E}}[\hat{f}(X_{1},X_{2})|X_{2}=x_{2}] \tag{15}\] by following the procedure outlined in Algorithm 1.

```
1: Regress \((X_{1},X_{2})\to Y\) to get \((x_{1},x_{2})\mapsto\hat{f}(x_{1},x_{2})\)
2: Regress \(X_{2}\to\hat{f}(X_{1},X_{2})\) to get \(x_{2}\mapsto\hat{\mathbb{E}}[\hat{f}(X_{1},X_{2})|X_{2}=x_{2}]\)
3: Take \(\hat{P}\hat{f}(x_{1},x_{2})=\hat{f}(x_{1},x_{2})-\hat{\mathbb{E}}[\hat{f}(X_{1},X_{2})|X_{2}=x_{2}]\)
```
**Algorithm 1** General procedure to estimate \(P\hat{f}\)

It is worth noting that the second step of Algorithm 1 does not require observations from \(Y\). As such, it naturally fits a semi-supervised setup where additional observations \(\mathcal{D}^{\prime}=\{\mathbf{x}^{\prime}_{1},\mathbf{x}^{\prime}_{2}\}\) are available, and can be used to produce a better estimate of the conditional expectation \(\mathbb{E}[\hat{f}(X_{1},X_{2})|X_{2}]\).

Figure 3: Simple collider structure

### Theoretical guarantees in a RKHS

RKHSs are mathematically convenient functional spaces and, under mild assumptions on the reproducing kernel, they can be proven to be dense in \(L^{2}(X)\) (Sriperumbudur et al., 2011). This makes them a powerful tool for theoretical analysis and for building intuition which can be expected to carry over to more general function spaces. For this reason, in this section we study the case where the hypothesis space is a RKHS \(\mathcal{F}=\mathcal{H}\). We denote its inner product by \(\langle\cdot,\cdot\rangle_{\mathcal{H}}\) and its reproducing kernel \(k:\mathcal{X}\times\mathcal{X}\to\mathbb{R}\). When solving the least-squares regression problem in a RKHS, it is known that for Tikhonov regularisation \(\Omega(f)=\|f\|_{\mathcal{H}}^{2}\), the solution to the empirical risk minimisation problem (1) in \(\mathcal{H}\) enjoys a closed-form expression given by \[\hat{f}=\mathbf{y}^{\top}\left(\mathbf{K}+\lambda\mathbf{I}_{n}\right)^{-1}\boldsymbol{k}_{\mathbf{x}}, \tag{16}\] where \(\mathbf{K}=k(\mathbf{x},\mathbf{x})\) and \(\boldsymbol{k}_{\mathbf{x}}=k(\mathbf{x},\cdot)\).
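Algorithm 1 can be instantiated with any pair of regressors; using the kernel ridge closed form (16) for both stages gives the following minimal NumPy sketch. This is our own illustration with an assumed Gaussian kernel and toy collider data, not code from the paper.

```python
import numpy as np

def gaussian_kernel(A, B, lengthscale=1.0):
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * sq / lengthscale**2)

def ridge_weights(K, y, reg):
    """Weights of the closed-form solution (16): (K + reg I)^{-1} y."""
    return np.linalg.solve(K + reg * np.eye(len(y)), y)

rng = np.random.default_rng(0)
n = 300
x2 = rng.normal(size=(n, 1))                    # X2, independent of Y
y = rng.normal(size=(n, 1))                     # Y, with E[Y] = 0
x1 = y + x2 + 0.1 * rng.normal(size=(n, 1))     # collider X1 <- (Y, X2)
x = np.hstack([x1, x2])
lam, gamma = 1e-2, 1e-2

# Step 1 of Algorithm 1: regress (X1, X2) -> Y.
alpha = ridge_weights(gaussian_kernel(x, x), y, lam)
f_hat = lambda x_new: gaussian_kernel(x_new, x) @ alpha

# Step 2: regress X2 -> f_hat(X1, X2) to estimate E[f_hat(X1, X2) | X2].
beta = ridge_weights(gaussian_kernel(x2, x2), f_hat(x), gamma)
cond_exp = lambda x2_new: gaussian_kernel(x2_new, x2) @ beta

# Step 3: the plug-in projected regressor of Eq. (15).
collider_regressor = lambda x_new: f_hat(x_new) - cond_exp(x_new[:, 1:2])

print(collider_regressor(np.array([[0.5, -1.0]])))
```

The second regression only consumes covariate samples, which is where the additional unlabelled observations of the semi-supervised setup mentioned above would enter.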
Therefore, if we now project \(\hat{f}\) onto \(\mathrm{Range}(P)\) as previously, the projected empirical risk minimiser can be written as \[P\hat{f}=\mathbf{y}^{\top}\left(\mathbf{K}+\lambda\mathbf{I}_{n}\right)^{-1}P\boldsymbol{k}_{\mathbf{x}} \tag{17}\] with the notation abuse \(P\boldsymbol{k}_{\mathbf{x}}=[Pk_{x^{(1)}}\dots Pk_{x^{(n)}}]^{\top}\). Leveraging these analytical expressions, the following result establishes a strictly non-zero generalisation benefit from projecting \(\hat{f}\). The proof technique follows that of Elesedy (2021), but is adapted to our particular setup, relaxing assumptions about the projection orthogonality5 and the form of the data generating process. Footnote 5: \(P\) is not necessarily orthogonal anymore as a projection of \(\mathcal{H}\).

**Theorem 4.2**.: _Suppose \(M=\sup_{x\in\mathcal{X}}k(x,x)<\infty\). Then, the generalisation gap between \(\hat{f}\) and \(P\hat{f}\) satisfies_ \[\mathbb{E}[\Delta(\hat{f},P\hat{f})]\geq\frac{\mathbb{E}\big[Y^{2}\|\mu_{X|X_{2}}(X)\|_{L^{2}(X)}^{2}\big]}{\left(\sqrt{n}M+\lambda/\sqrt{n}\right)^{2}} \tag{18}\] _where \(\mu_{X|X_{2}}=\mathbb{E}[k_{X}|X_{2}]\) is the CME of \(\mathbb{P}(X|X_{2})\)._

This demonstrates that in a RKHS, projecting the empirical risk minimiser is strictly beneficial in terms of generalisation error. Specifically, if there exists a set with non-zero measure on which \(Y\neq 0\) and \(\mu_{X|X_{2}}\neq 0\) almost-everywhere, then the lower bound is strictly positive. The magnitude of the lower bound depends on the variance of \(Y\|\mu_{X|X_{2}}(X)\|_{L^{2}(X)}\). This indicates that problems with more complex marginals \(\mathbb{P}(Y)\) and conditional distributions \(\mathbb{P}(X|X_{2})\) should enjoy a larger generalisation gap. The theorem also suggests that the lower bound on the generalisation benefit decreases at the rate \(\mathcal{O}(1/n)\) as the number of samples \(n\) grows. Since for the well-specified kernel ridge regression problem, the excess risk upper bound also decreases at rate \(\mathcal{O}(1/n)\) (Bach, 2021; Caponnetto and De Vito, 2007), we have that \(\mathbb{E}[\Delta(\hat{f},P\hat{f})]=\Theta(1/n)\). In a RKHS, \(P\hat{f}\) can be rewritten using CMEs as \[Pf(x_{1},x_{2})=f(x_{1},x_{2})-\langle f,\mu_{X|X_{2}=x_{2}}\rangle_{\mathcal{H}}. \tag{19}\] Therefore, introducing a kernel \(\ell:\mathcal{X}_{2}\times\mathcal{X}_{2}\to\mathbb{R}\), the CME estimate from (6) allows us to devise an estimator of \(P\hat{f}\) as \[\hat{P}\hat{f}=\mathbf{y}^{\top}(\mathbf{K}+\lambda\mathbf{I}_{n})^{-1}\big(\boldsymbol{k}_{\mathbf{x}}-\mathbf{K}(\mathbf{L}+\gamma\mathbf{I}_{n})^{-1}\boldsymbol{\ell}_{\mathbf{x}_{2}}\big) \tag{20}\] where \(\mathbf{L}=\ell(\mathbf{x}_{2},\mathbf{x}_{2})\), \(\boldsymbol{\ell}_{\mathbf{x}_{2}}=\ell(\mathbf{x}_{2},\cdot)\) and \(\gamma>0\).

### Respecting the collider structure in a RKHS

Similarly to the \(L^{2}(X)\) case, the solution to the empirical risk minimisation problem in \(\mathcal{H}\) will also decompose as \(\hat{f}=P\hat{f}+E\hat{f}\). Thus, we can proceed similarly by simply discarding \(E\hat{f}\) to improve performance. However, it turns out that, using elegant functional properties of RKHSs, it is possible to take a step further and directly take \(\mathcal{F}=\mathrm{Range}(P)\).
In doing so, we can ensure that our hypothesis space only contains functions that satisfy constraint (\(\star\)). Under assumptions detailed in Appendix C, we can view the projection \(P\) as a well-defined RKHS projection6 \(P:\mathcal{H}\to\mathcal{H}\). In particular, an important assumption is that the kernel takes the form

Footnote 6: \(E\) then corresponds to what is referred to as a conditional mean operator in the kernel literature (Fukumizu et al., 2004).

\[k\left(x,x^{\prime}\right)=\left(r\left(x_{1},x_{1}^{\prime}\right)+1\right)\ell\left(x_{2},x_{2}^{\prime}\right), \tag{21}\] where \(r:\mathcal{X}_{1}\times\mathcal{X}_{1}\to\mathbb{R}\) and \(\ell:\mathcal{X}_{2}\times\mathcal{X}_{2}\to\mathbb{R}\) are also positive semi-definite kernels. This ensures that \(\mathcal{H}\) contains functions that are constant with respect to \(x_{1}\). Thus, the conditional expectation mapping \((x_{1},x_{2})\mapsto\mathbb{E}[f(X_{1},X_{2})|X_{2}=x_{2}]\) belongs to the same RKHS. If these assumptions are met, we denote \(\mathcal{H}_{P}=\mathrm{Range}(P)\). The following result characterises \(\mathcal{H}_{P}\) as a RKHS.

**Proposition 4.3**.: _Let \(P^{*}\) be the adjoint operator of \(P\) in \(\mathcal{H}\). Then \(\mathcal{H}_{P}\) is also a RKHS with reproducing kernel_ \[k_{P}(x,x^{\prime})=\langle P^{*}k_{x},P^{*}k_{x^{\prime}}\rangle_{\mathcal{H}} \tag{22}\] _with \(P^{*}k_{x}=k_{x}-\mu_{X|X_{2}=\pi_{2}(x)}\)._

Using the projected RKHS kernel \(k_{P}\), it becomes possible to solve the least-square regression problem directly inside \(\mathcal{F}=\mathcal{H}_{P}\). By taking \(\Omega(f)=\|f\|_{\mathcal{H}_{P}}^{2}\), the empirical risk minimisation problem becomes a standard kernel ridge regression problem in \(\mathcal{H}_{P}\) which admits the closed-form solution \[\hat{f}_{P}=\mathbf{y}^{\top}\left(\mathbf{K}_{P}+\lambda\mathbf{I}_{n}\right)^{-1}\boldsymbol{k}_{P,\mathbf{x}}, \tag{23}\] where \(\mathbf{K}_{P}=k_{P}(\mathbf{x},\mathbf{x})\) and \(\boldsymbol{k}_{P,\mathbf{x}}=k_{P}(\mathbf{x},\cdot)\). From a learning theory perspective, performing empirical risk minimisation inside \(\mathcal{H}_{P}\) should provide tighter bounds on the generalisation error than on the entire space \(\mathcal{H}\). Indeed, since \(\mathcal{H}_{P}\subset\mathcal{H}\), the Rademacher complexity of \(\mathcal{H}_{P}\) is smaller than that of \(\mathcal{H}\). It should be noted that \(k_{P}\) depends on the CME \(\mu_{X|X_{2}=\pi_{2}(x)}\), which needs to be estimated. Therefore, in practice, our hypothesis will not lie in the true \(\mathcal{H}_{P}\) but in an approximation of \(\mathcal{H}_{P}\), and the approximation error will depend directly on the CME estimation error.

```
1: Let \(\hat{P}^{*}=\operatorname{Id}-\boldsymbol{k}_{\mathbf{x}}^{\top}(\mathbf{L}+\gamma\mathbf{I}_{n})^{-1}\boldsymbol{\ell}_{\mathbf{x}_{2}}\)
2: Let \(\hat{k}_{P}(x,x^{\prime})=\langle\hat{P}^{*}k_{x},\hat{P}^{*}k_{x^{\prime}}\rangle_{\mathcal{H}}\)
3: Evaluate \(\hat{\mathbf{K}}_{P}=\hat{k}_{P}(\mathbf{x},\mathbf{x})\) and \(\hat{\boldsymbol{k}}_{P,\mathbf{x}}=\hat{k}_{P}(\mathbf{x},\cdot)\)
4: Take \(\hat{f}_{P}=\mathbf{y}^{\top}(\hat{\mathbf{K}}_{P}+\lambda\mathbf{I}_{n})^{-1}\hat{\boldsymbol{k}}_{P,\mathbf{x}}\)
```
**Algorithm 2** RKHS procedure to estimate \(\hat{f}_{P}\)

The estimation of (23) is again a two-stage procedure, outlined in Algorithm 2. The distinction with the general \(L^{2}(X)\) case is that we do not estimate the conditional expectation of any specific function.
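A minimal NumPy sketch of Algorithm 2 is given below. It is illustrative only: the Gaussian kernels and the values of \(\lambda\) and \(\gamma\) are assumptions made for the example (in particular, the product kernel of (21) is not enforced here), and the CME is estimated with the same plug-in form as in (20).

```python
import numpy as np

def rbf(A, B, ls=1.0):
    """Gaussian kernel Gram matrix between the rows of A and B (illustrative choice)."""
    return np.exp(-0.5 * ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1) / ls ** 2)

def projected_krr(X1, X2, y, X1_new, X2_new, lam=1e-2, gamma=1e-2):
    """Kernel ridge regression in an estimate of the projected RKHS H_P (Algorithm 2)."""
    X, X_new = np.hstack([X1, X2]), np.hstack([X1_new, X2_new])
    n = len(y)
    K, K_new = rbf(X, X), rbf(X_new, X)        # k on (x1, x2)
    L, L_new = rbf(X2, X2), rbf(X2, X2_new)    # ell on x2 only
    # CME weights: column j of A expands hat{mu}_{X|X2=x2^(j)} in the span of the k_{x^(i)}
    A = np.linalg.solve(L + gamma * np.eye(n), L)
    A_new = np.linalg.solve(L + gamma * np.eye(n), L_new)
    # hat{k}_P(x, x') = <k_x - hat{mu}, k_{x'} - hat{mu}'>_H, expanded with Gram matrices
    Kp = K - K @ A - A.T @ K + A.T @ K @ A
    Kp_new = K_new - K_new @ A - A_new.T @ K + A_new.T @ K @ A
    alpha = np.linalg.solve(Kp + lam * np.eye(n), y)   # (hat{K}_P + lam I)^{-1} y
    return Kp_new @ alpha                              # hat{f}_P at the new points
```

Semi-supervised observations of \((X_{1},X_{2})\) would simply enlarge the sample used to build \(\mathbf{L}\) and the CME weights in this sketch.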
Rather, we estimate the conditional expectation operator itself, through \(\hat{\mu}_{X|X_{2}=x_{2}}\), and then use it via \(\hat{P}^{*}\) to constrain the hypothesis space. This is possible because in a RKHS, the estimation of the conditional expectation operator can be achieved independently of the function it is applied to. The estimation of \(P^{*}\) in line 1 only requires observations from \(X_{1},X_{2}\). Thus, like in the \(L^{2}(X)\) case, additional observations \(\mathcal{D}^{\prime}=\{\mathbf{x}_{1}^{\prime},\mathbf{x}_{2}^{\prime}\}\) can help better estimate CMEs, and thus better approximate the projected RKHS \(\mathcal{H}_{P}\).

## 5 Collider Regression on a general DAG

We now return to a general Markov boundary. Any Markov boundary may be partitioned following Figure 5, where \(X_{1}\) contains all direct children of \(Y\), \(X_{3}\) contains all parents of \(Y\) and all other variables belong to \(X_{2}\). This provides us with the probabilistic information that \(Y\perp\!\!\!\perp X_{2}\mid X_{3}\) but \(Y\not\perp\!\!\!\perp X_{2}\mid X_{3},X_{1}\), which implies in expectation that \(\mathbb{E}[Y|X_{3}]=\mathbb{E}[Y|X_{2},X_{3}]\). If we now denote \(X=(X_{1},X_{2},X_{3})\) and \(f_{0}(x)=\mathbb{E}[Y|X_{3}=x_{3}]\), then the optimal least-square regressor \(f^{*}(x)=\mathbb{E}[Y|X=x]\) satisfies \[\mathbb{E}\big{[}f^{*}(X)-f_{0}(X)\mid X_{2},X_{3}\big{]} \tag{24}\] \[=\mathbb{E}\left[\mathbb{E}[Y|X]\mid X_{2},X_{3}\right]-\mathbb{E}\left[\mathbb{E}[Y|X_{3}]\mid X_{2},X_{3}\right]\] \[=\mathbb{E}[Y|X_{2},X_{3}]-\mathbb{E}\left[\mathbb{E}[Y|X_{2},X_{3}]\mid X_{2},X_{3}\right]\] \[=0.\] Therefore, if we center our hypothesis space on \(f_{0}\), then like in Section 4.1, we want our centered estimate \(\hat{f}-f_{0}\) to lie within the following subspace: \[\hat{f}-f_{0}\in\big{\{}f\in\mathcal{F}\mid\mathbb{E}\big{[}f(X)\mid X_{2},X_{3}\big{]}=0\big{\}}. \tag{25}\] When \(\mathcal{F}=L^{2}(X)\), this space can again be seen as the range of an orthogonal projection, this time defined by \[P^{\prime}=\operatorname{Id}-E^{\prime} \tag{26}\] where \(E^{\prime}:L^{2}(X)\to L^{2}(X)\) denotes the conditional expectation functional with respect to \((X_{2},X_{3})\) \[E^{\prime}f(x_{2},x_{3})=\mathbb{E}[f(X)|X_{2}=x_{2},X_{3}=x_{3}]. \tag{27}\] While we focus in Section 4 on the simple collider structure for the sake of exposition, our results are stated for a general projection operator and still hold for \(P^{\prime}\) -- modulo a shift by \(f_{0}\). Hence, we can still apply the techniques we have presented to encode probabilistic information from the general DAG in Figure 5 into a regression problem, with similar guarantees on the generalisation benefits.

**Proposition 5.1**.: _Let \(h\in L^{2}(X)\) be any regressor from our hypothesis space. We have_ \[\Delta(h,f_{0}+P^{\prime}h)=\|E^{\prime}h-f_{0}\|_{L^{2}(X)}^{2}. \tag{28}\]

This means that, for any given regressor \(\hat{f}\), we can always improve its test performance by first projecting it onto \(\operatorname{Range}(P^{\prime})\), and then shifting it by \(f_{0}\). In practice, the estimation strategies introduced in Section 4 can still be applied to obtain an estimate of \(P^{\prime}\hat{f}\).
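Concretely, the analogue of Algorithm 1 only changes the conditioning set of the second regression. The sketch below uses generic scikit-learn regressors; the choice of a linear model for the second stage is an illustrative assumption, not a prescription.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def estimate_P_prime_f(f_hat, X1, X2, X3):
    """Estimate P'f_hat = f_hat - E[f_hat(X) | X2, X3] for an already fitted regressor f_hat."""
    X = np.hstack([X1, X2, X3])
    X23 = np.hstack([X2, X3])
    preds = f_hat.predict(X)                        # f_hat evaluated on the available inputs
    cond_exp = LinearRegression().fit(X23, preds)   # estimate of E[f_hat(X) | X2, X3]
    def p_prime_f(x1, x2, x3):
        return (f_hat.predict(np.hstack([x1, x2, x3]))
                - cond_exp.predict(np.hstack([x2, x3])))
    return p_prime_f
```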
An additional procedure to estimate \(f_{0}\) will however be needed. This can be achieved by regressing \(Y\) onto \(X_{3}\).

## 6 Experiments

This section provides empirical evidence that incorporating probabilistic causal knowledge into a regression problem benefits performance. First, we demonstrate our method on an illustrative simulation example. We conduct an ablation study on the number of training samples, the dimensionality of \(X_{2}\) and the use of additional semi-supervised samples. Then, we address a challenging climate science problem that respects the collider structure. Our results underline the benefit of enforcing constraint (\(\star\)) on the hypothesis. Code and data are made available (Bouabid et al., 2023).

Figure 5: General Markov boundary collider structure.

**Models.** We compare five models:

1. _RF_: A baseline random forest model.
2. _P-RF_: The baseline RF model projected following Algorithm 1 and using a linear regression to estimate \(\mathbb{E}[\hat{f}(X_{1},X_{2})|X_{2}=x_{2}]\).
3. _KRR_: A baseline kernel ridge regression.
4. _P-KRR_: The KRR model projected following (19).
5. _\(\mathcal{H}_{P}\)-KRR_: A kernel ridge regression model fitted directly in the projected RKHS following Algorithm 2.

For both KRR and RF, we use Proposition 4.1 to compute Monte Carlo estimates of the expected generalisation gap \(\mathbb{E}[\Delta(\hat{f},P\hat{f})]\), which we denote as \(\Delta\)-KRR and \(\Delta\)-RF respectively. This provides an indicator of the greatest achievable generalisation gain if we had access to the exact projection \(P\). Hyperparameters are tuned using a cross-validated grid search and model details are specified in Appendix D.

### Simulation example

**Data generating process.** We propose the following construction that follows the simple collider structure from Figure 3. Let \(d_{1},d_{2}\geq 1\) denote respectively the dimensionalities of \(X_{1}\) and \(X_{2}\). We first generate a fixed positive definite matrix \(\Sigma\) of size \((d_{1}+d_{2}+1)\) which has zero off-diagonals on the \((d_{1}+d_{2})^{\text{th}}\) row and column. We then follow the generating process described in Algorithm 3 (a NumPy sketch is given at the end of this subsection) and generate a dataset of \(n\) observations \(\mathcal{D}=\{\mathbf{x}_{1},\mathbf{x}_{2},\mathbf{y}\}\). The zero off-diagonal terms in \(\Sigma\) ensure that we satisfy \(Y\perp\!\!\!\perp X_{2}\), and \(g_{1},g_{2}\) are nontrivial mappings that introduce a non-linear dependence (details in Appendix D).

```
1: Input: \(\Sigma\succcurlyeq 0,\sigma>0\), \(g_{1}:\mathbb{R}^{d_{1}}\rightarrow\mathbb{R}^{d_{1}}\), \(g_{2}:\mathbb{R}^{d_{2}}\rightarrow\mathbb{R}^{d_{2}}\)
2: \(\left[X_{1}\quad X_{2}\quad Y\right]^{\top}\sim\mathcal{N}(0,\Sigma)\), \(\varepsilon\sim\mathcal{N}(0,\sigma^{2})\)
3: \(X_{1}\gets g_{1}(X_{1})+\varepsilon\)
4: \(X_{2}\gets g_{2}(X_{2})\)
5: Return \(X_{1},X_{2},Y\)
```
**Algorithm 3** Data generating process for the simulation example

**Results.** Figure 4(a) provides empirical evidence that, for both KRR and RF, incorporating probabilistic inductive biases from the collider structure in the hypothesis benefits the generalisation error. In addition, Figure 4(b)(c)(d) shows that the empirical generalisation benefit is greatest when: fewer training samples are available, semi-supervised samples can easily be obtained, and the dimensionality of \(X_{2}\) is larger.
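For completeness, a NumPy sketch of the generating process of Algorithm 3 follows. The construction of \(\Sigma\) below (zero covariance between \(Y\) and the \(X_{2}\) block, with a Schur-complement argument to keep \(\Sigma\) positive definite) and the choices \(g_{1}=\tanh\), \(g_{2}=\sin\) are illustrative assumptions; the exact settings are those of Appendix D.

```python
import numpy as np

def make_sigma(d1, d2, seed=0):
    """Positive definite Sigma of size (d1+d2+1) with Cov(X2, Y) = 0, so that Y is independent of X2."""
    rng = np.random.default_rng(seed)
    A = rng.normal(size=(d1 + d2, d1 + d2))
    sigma_xx = A @ A.T + np.eye(d1 + d2)                      # covariance of (X1, X2)
    c = np.concatenate([rng.normal(size=d1), np.zeros(d2)])   # Cov((X1, X2), Y); zero on the X2 block
    var_y = c @ np.linalg.solve(sigma_xx, c) + 1.0            # keeps the Schur complement equal to 1 > 0
    return np.block([[sigma_xx, c[:, None]], [c[None, :], np.array([[var_y]])]])

def simulate(n, d1, d2, sigma, noise=0.1, seed=0):
    """Sketch of Algorithm 3 with illustrative nonlinearities g1 = tanh and g2 = sin."""
    rng = np.random.default_rng(seed)
    z = rng.multivariate_normal(np.zeros(d1 + d2 + 1), sigma, size=n)
    x1_raw, x2_raw, y = z[:, :d1], z[:, d1:d1 + d2], z[:, -1]
    x1 = np.tanh(x1_raw) + noise * rng.normal(size=(n, 1))    # X1 <- g1(X1) + eps
    x2 = np.sin(x2_raw)                                       # X2 <- g2(X2)
    return x1, x2, y
```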
These observations are in keeping with Theorem 4.2, which predicts that the benefit will be larger when we have fewer labelled samples and a more complicated relationship between \(X_{2}\) and \(X_{1}\). Because the decision nodes learnt by RF largely rely on \(X_{1}\) and the early dimensions of \(X_{2}\), increasing the dimensionality of \(X_{2}\) has little or even negative effect, as shown in Figure 4(d).

### Aerosol radiative forcing

**Background.** Radiative forcing is defined as the difference between the incoming and outgoing flux of energy in the Earth system. At equilibrium, the radiative forcing should be 0 W m\({}^{\text{-2}}\). Carbon dioxide emissions from human activity contribute a positive radiative forcing of +1.89 W m\({}^{\text{-2}}\), which causes warming of the Earth system (Bellouin et al., 2020). Aerosols are microscopic particles suspended in the atmosphere (e.g. dust, sea salt, black carbon) that contribute a negative radiative forcing by helping reflect solar radiation, which cools the Earth. However, the magnitude of their forcing represents the largest uncertainty in assessments of global warming, with uncertainty bounds that could offset global warming or double its effects. It is thus critical to obtain better estimates of the aerosol radiative forcing. The carbon dioxide and aerosol forcings are independent factors6 that contribute to the observed global temperatures.

Figure 4: (a): Test MSEs for the simulation experiment; the dataset is generated using \(d_{1}=3\), \(d_{2}=3\), \(n=50\) and 100 semi-supervised samples; experiments are run for 100 datasets generated with different seeds; statistical significance is confirmed in Appendix D. (b, c, d): Ablation study on the number of training samples, number of semi-supervised samples and dimensionality of \(X_{2}\); experiments are run for 40 datasets generated with different seeds; \(\uparrow/\downarrow\) indicates higher/lower is better; we report 1 s.d.; \(\dagger\) indicates our proposed methods.

Hence, by setting \(Y=\) "aerosol forcing", \(X_{2}=\) "CO\({}_{2}\) forcing" and \(X_{1}=\) "global temperature", this problem has a collider structure, and observations from global temperature and CO\({}_{2}\) forcing can be used to regress the aerosol forcing.

**Data generating process.** FaIR (for Finite amplitude Impulse Response) is a deterministic model that proposes a simplified low-order representation of the climate system (Millar et al., 2017; Smith et al., 2018). Surrogate climate models like FaIR -- referred to as _emulators_ -- have been widely used, notably in reports of the Intergovernmental Panel on Climate Change (Masson-Delmotte et al., 2021), because they are fast and inexpensive to compute. We use a modified version of FaIRv2.0.0 (Leach et al., 2021) where we introduce variability by adding white noise to the forcing to account for climate internal variability (Hasselmann, 1976; Cummins et al., 2020). To generate a sample, we run the emulator over historical greenhouse gas and aerosol emission data and retain scalar values for \(y=\) "aerosol forcing in 2020", \(x_{2}=\) "CO\({}_{2}\) forcing in 2020" and \(x_{1}=\) "global temperature anomaly in 2020". We perform this \(n\) times to generate the dataset \(\mathcal{D}=\{\mathbf{x}_{1},\mathbf{x}_{2},\mathbf{y}\}\).

**Results.** Results are reported in Table 1. We observe that incorporating the inductive bias from the collider results in consistently improved performance for both RF and the KRR models.
This shows that while the proposed methodology is only formulated in terms of the squared error, it can also improve performance under other metrics.

## 7 Discussion and Related Work

**Regression and Causal Inference.** Currently, causal inference is most commonly used in regression problems when reasoning about invariance (Peters et al., 2016; Arjovsky et al., 2019). These methods aim to use the causal structure to guarantee the predictors will transfer to new environments (Gulrajani and Lopez-Paz, 2020), and recent work discusses how causal structure plays a role in the effectiveness of these methods (Wang and Veitch, 2022). Our work takes a complementary route in asking how causal structure can benefit regression, and, in contrast to prior work, focuses on a fixed environment.

**Causal and Anti-causal Learning.** Our work is closely related to work on anti-causal learning (Scholkopf et al., 2012), which argues that \(\mathbb{P}(X)\) will only provide additional information about \(\mathbb{P}(Y|X)\) if we are working in an anti-causal prediction problem \(Y\to X\). This leads the authors to hypothesise that additional unlabelled semi-supervised samples will be most helpful in the anti-causal direction. In our work, we go further and prove a concrete generalisation benefit from using additional samples from \(\mathbb{P}(X)\) when the data generating process follows a collider, a graphical structure which is inherently anti-causal as it relies on \(Y\) having shared children with another vertex.

**Independence Regularisation and Fair Learning.** Our work is related to the large body of recent work aiming to force conditional independence constraints, either for fairness (Kamishima et al., 2011) or domain generalisation (Pogodin et al., 2022). However, it is important to note that if \(Y\) satisfies a conditional independence, this does not mean that the optimal least-square regressor \(\mathbb{E}[Y|X]\) will satisfy the same conditional independence. For example, let \[\begin{cases}Y,X_{2}\sim\mathcal{N}(0,1)\text{ with }Y\perp\!\!\!\perp X_{2}\\ X_{1}=Y\,\mathbb{1}\{X_{2}>0\}\end{cases} \tag{29}\] Then we have \(\mathbb{E}[Y|X_{1},X_{2}]=X_{1}\,\mathbb{1}\{X_{2}>0\}\), hence \(\mathbb{E}[Y|X_{1},X_{2}]\) is constant when \(X_{2}<0\) but not otherwise. Therefore \(\mathbb{E}[Y|X_{1},X_{2}]\not\perp\!\!\!\perp X_{2}\), even though \(Y\perp\!\!\!\perp X_{2}\). Our methodology is thus closer to ensuring independence in expectation. Specifically, the RKHS methodology is related to work on fair kernel learning (Perez-Suay et al., 2017; Li et al., 2022b). However, in contrast to the work on fair kernel learning, where regularisation terms for encouraging independence are proposed, we go further by enforcing the mean independence constraint directly onto the hypothesis space.

## 8 Conclusion

In this work we have demonstrated that collider structures within causal graphs constitute a useful form of inductive bias for regression that benefits generalisation performance. Whilst we focused on least-square regression, we expect that the collider regression framework should benefit a wider range of machine learning problems that aim to make inferences about \(\mathbb{P}(Y|X)\). For example, a natural extension of this work should investigate collider regression for classification or quantile regression tasks.
| | MSE \(\downarrow\) | SNR \(\uparrow\) | Correlation \(\uparrow\) |
| --- | --- | --- | --- |
| RF | \(0.90\pm 0.04\) | \(0.44\pm 0.19\) | \(0.32\pm 0.08\) |
| \(P\)-RF\({}^{\dagger}\) | **\(0.89\pm 0.03\)** | **\(0.49\pm 0.15\)** | **\(0.34\pm 0.07\)** |
| KRR | \(0.88\pm 0.04\) | \(0.56\pm 0.21\) | \(0.37\pm 0.05\) |
| \(P\)-KRR\({}^{\dagger}\) | **\(0.86\pm 0.03\)** | **\(0.65\pm 0.14\)** | **\(0.40\pm 0.02\)** |
| \(\mathcal{H}_{P}\)-KRR\({}^{\dagger}\) | **\(0.86\pm 0.03\)** | **\(0.64\pm 0.14\)** | \(0.39\pm 0.03\) |

Table 1: MSE, signal-to-noise ratio (SNR) and correlation on test data for the aerosol radiative forcing experiment; \(n=50\) and 200 semi-supervised samples; statistical significance is confirmed in Appendix D; experiments are run for 100 datasets generated with different seeds; \(\uparrow/\downarrow\) indicates higher/lower is better; we report 1 standard deviation; \(\dagger\) indicates our proposed methods.
2310.08004
On the Rational Degree of Boolean Functions and Applications
We study a natural complexity measure of Boolean functions known as the (exact) rational degree. For total functions $f$, it is conjectured that $\mathrm{rdeg}(f)$ is polynomially related to $\mathrm{deg}(f)$, where $\mathrm{deg}(f)$ is the Fourier degree. Towards this conjecture, we show that symmetric functions have rational degree at least $\mathrm{deg}(f)/2$ and monotone functions have rational degree at least $\sqrt{\mathrm{deg}(f)}$. We observe that both of these lower bounds are tight. In addition, we show that all read-once depth-$d$ Boolean formulae have rational degree at least $\Omega(\mathrm{deg}(f)^{1/d})$. Furthermore, we show that almost every Boolean function on $n$ variables has rational degree at least $n/2 - O(\sqrt{n})$. In contrast to total functions, we exhibit partial functions that witness unbounded separations between rational and approximate degree, in both directions. As a consequence, we show that for quantum computers, post-selection and bounded-error are incomparable resources in the black-box model.
Vishnu Iyer, Siddhartha Jain, Matt Kovacs-Deak, Vinayak M. Kumar, Luke Schaeffer, Daochen Wang, Michael Whitmeyer
2023-10-12T03:14:44Z
http://arxiv.org/abs/2310.08004v1
# On the Rational Degree of Boolean Functions and Applications ###### Abstract We study a natural complexity measure of Boolean functions known as the (exact) rational degree. For total functions \(f\), it is conjectured that \(\operatorname{rdeg}(f)\) is polynomially related to \(\deg(f)\), where \(\deg(f)\) is the Fourier degree. Towards this conjecture, we show that symmetric functions have rational degree at least \(\deg(f)/2\) and monotone functions have rational degree at least \(\sqrt{\deg(f)}\). We observe that both of these lower bounds are tight. In addition, we show that all read-once depth-\(d\) Boolean formulae have rational degree at least \(\Omega(\deg(f)^{1/d})\). Furthermore, we show that almost every Boolean function on \(n\) variables has rational degree at least \(n/2-\mathcal{O}(\sqrt{n})\). In contrast to total functions, we exhibit partial functions that witness unbounded separations between rational and approximate degree, in both directions. As a consequence, we show that for quantum computers, post-selection and bounded-error are incomparable resources in the black-box model. ## 1 Introduction Starting with the seminal work of Minsky and Papert [14], a long line of research has sought to relate various measures of Boolean function complexity. In [13], Nisan and Szegedy proved that the deterministic decision tree complexity \(\operatorname{D}(f)\) of a Boolean function \(f\) is polynomially related to its degree \(\deg(f)\) as a multilinear polynomial. The same paper posed two open questions. One of them conjectures that the sensitivity and block sensitivity of a Boolean function are polynomially related. This conjecture was recently proven in a breakthrough by Huang [12]. Huang's result brought sensitivity into a "happy flock" of complexity measures on total Boolean functions that are all polynomially related: sensitivity, degree, approximate degree, and notions of query complexity. Another natural measure of Boolean function complexity is the minimal degree of a rational polynomial which represents the function exactly, called the _rational degree_ (denoted rdeg). However, rdeg is _not_ known to be either polynomially related to or separated from the complexity measures mentioned above. In fact, this was the other open question posed over 30 years ago in the paper of Nisan and Szegedy (via personal communication with Fortnow) [13]. This question was reiterated by Aaronson _et al._[1] yet very little progress has been made toward its resolution. **Question 1** (Fortnow [13]).: Does there exist \(c>1\) such that for all total Boolean functions \(f\), \(\deg(f)\leq\mathcal{O}(\operatorname{rdeg}(f)^{c})\)? One of the motivations for Fortnow's question was complexity-theoretic: is the intersection of \(\mathsf{C}_{=}\mathsf{P}\) and \(\mathsf{coC}_{=}\mathsf{P}\) strictly contained in \(\mathsf{PP}\) with respect to a generic oracle [10]? \(\mathsf{C}_{=}\mathsf{P}\) and \(\mathsf{coC}_{=}\mathsf{P}\) are "counting classes" [1] which we define later, and rational degree corresponds to the black-box version of their intersection. The rational degree is also related to quantum query complexity. 
In particular, de Wolf defined the _non-deterministic degree_\(\operatorname{ndeg}(f)\) of a Boolean function \(f\) as the minimal degree of a polynomial whose zero set is precisely the set of inputs on which \(f\) evaluates to false [15], and related it to the rational degree through the identity \(\operatorname{rdeg}(f)=\max\{\operatorname{ndeg}(f),\operatorname{ndeg}(\bar{ f})\}\). de Wolf also proved that the non-deterministic degree \(\operatorname{ndeg}(f)\) equals the _non-deterministic quantum query complexity_ up to a constant factor. In the same manuscript, de Wolf stated the following conjecture which, together with the inequality \(\deg(f)\leq\operatorname{D}(f)\), would resolve Fortnow's question in the affirmative with \(c=2\). **Conjecture 1** (de Wolf [15]).: _For all Boolean functions \(f\), \(\operatorname{D}(f)\leq\mathcal{O}(\operatorname{ndeg}(f)\operatorname{ndeg}( \bar{f}))\)._ Mahadev and de Wolf showed [16] an even tighter connection between the notion of rational degree and quantum query complexity: denoting by \(\operatorname{rdeg}_{\varepsilon}(f)\) the minimum degree of a rational polynomial that \(\varepsilon\)-approximates \(f\) pointwise, they showed that \(\operatorname{rdeg}_{\varepsilon}(f)\) equals (up to a constant factor) the query complexity of a quantum algorithm with _post-selection_1 that computes \(f\) with error \(\varepsilon\). For partial functions, it can be shown that the rational degree gives a lower bound on the query complexity of algorithms with post-selection, though the opposite direction is not known to be true. Furthermore, this result extends to the case of \(\varepsilon=0\), the so-called "zero-error" setting. Footnote 1: Post-selection is an operation that allows for projection onto an efficiently computable set of basis states for free, even if this set accounts for an arbitrarily small fraction of the probability mass. ### Our Results We prove lower bounds on the rational degree for certain classes of total Boolean functions. We summarize our results according to section: 1. For symmetric functions we show that \(\deg(f)/2\leq\operatorname{rdeg}(f)\). This lower bound is tight, as witnessed by the \(\mathsf{PARITY}_{n}\) function. Our technique for symmetric functions generalizes to classes of functions including ones which are constant on many Hamming weights. 2. We employ the lower bound on symmetric functions to show that for depth-\(d\) Boolean formulae, \(\operatorname{rdeg}(f)\geq\Omega(\deg(f)^{1/d})\). For \(d=2\) this is tight, as witnessed by the \(\mathsf{AND}_{n}\circ\mathsf{OR}_{n}\) function. 3. For monotone functions we prove that \(\operatorname{rdeg}(f)=\operatorname{s}(f)\geq\sqrt{\deg(f)}\). This is also tight as witnessed by the \(\mathsf{AND}_{n}\circ\mathsf{OR}_{n}\) function. 4. Our final lower bound on total functions is extremal: we prove that almost all Boolean functions on \(n\) bits have rational degree at least \(n/2-\mathcal{O}(\sqrt{n})\). On the other hand, we show that for partial functions, the rational and approximate degrees can be unboundedly separated in both directions. These separations also resolve an open question of Fortnow [14]. 1. We give a partial function \(\textsc{MajOrNone}_{n}\) on \(n\) bits with constant quantum query complexity yet rational degree \(\Omega(n)\). As a result, \(\textsc{MajOrNone}_{n}\) has constant approximate degree and \(\Omega(n)\) zero-error post-selected quantum query complexity. 2. 
On the other hand, we give a partial function \(\textsc{Imbalance}_{n}\) on \(n\) bits with approximate degree \(\Omega(n)\) yet constant rational degree. As a result, \(\textsc{Imbalance}_{n}\) has constant zero-error post-selected quantum query complexity and quantum query complexity \(\Omega(n)\). Now, employing the framework of standard complexity results such as [10], we can argue that post-selection and bounded error are incomparable resources in the black-box setting. To formalise this, we define \(\mathsf{PostEQP}\) as the class of decision problems which can be decided deterministically in polynomial time by quantum computers with access to post-selection. In particular, there exists a bidirectional separation between \(\mathsf{PostEQP}\) and \(\mathsf{BQP}\) with respect to generic oracles. Formally, combining the results of Corollaries 18 and 22 we get the following statement. **Corollary 1**.: _There exist oracles \(O_{1}\) and \(O_{2}\) such that \(\mathsf{BQP}^{O_{1}}\not\subseteq\mathsf{PostEQP}^{O_{1}}\) yet \(\mathsf{PostEQP}^{O_{2}}\not\subseteq\mathsf{BQP}^{O_{2}}\)._ These complexity-theoretic consequences are summarized in Figure 2. As the figure illustrates, these are the strongest possible separations in the black-box model. In addition to these consequences for \(\mathsf{PostEQP}\), our lower bound also resolves Fortnow's complexity-theoretic question. We show that not only is \(\mathsf{C}_{=}\mathsf{P}\cap\mathsf{coC}_{=}\mathsf{P}\) strictly contained in \(\mathsf{PP}\), even \(\mathsf{RP}\) is not in this intersection with respect to a generic oracle. The class \(\mathsf{C}_{=}\mathsf{P}\) is the set of languages decidable by an \(\mathsf{NP}\) machine such that if the string is in the language, the number of accepting paths is _exactly_ equal to the number of rejecting paths. Finally, to contextualize the power of \(\mathsf{PostEQP}\), we provide strong evidence that zero-error post-selection can offer an advantage over efficient classical computation, even in the non-relativized setting: we show in Section 4.3 that \(\mathsf{PostEQP}\) contains \(\mathsf{NP}\cap\mathsf{coNP}\). We remark that \(\mathsf{NP}\cap\mathsf{coNP}\) is not even believed to be contained in \(\mathsf{BPP}\). ## 2 Preliminaries In this section we review some of the notation and definitions used in our paper. For a more comprehensive introduction to the analysis of Boolean functions see [11, 12]. We denote by \([n]\) the set \(\{1,2,...,n\}\). Given a function \(f\colon S\to\mathbb{R}\) we denote by \(\left\lVert f\right\rVert_{1}\) its \(l_{1}\) norm, \(\left\lVert f\right\rVert_{1}=\sum_{x\in S}|f(x)|\). For a bitstring \(x\in\{0,1\}^{n}\), we denote by \(|x|\) the Hamming weight of \(x\): the number of indices equal to \(1\). If \(x\in\{-1,1\}^{n}\) the Hamming weight is the number of bits that equal \(-1\). Figure 1: A table summarizing our lower bounds on rational degree for total functions. The third column gives an example of a function that demonstrates the tightness of our lower bound, where applicable. ### Boolean Functions A (total) Boolean function is any function \(f\colon\Sigma^{n}\to\Sigma\) where \(\Sigma\) is some two-element set. We will refer to the set \(\Sigma^{n}\) as the Boolean hypercube. We will primarily work over the sets \(\Sigma=\{0,1\}\) and \(\Sigma=\{-1,1\}\). The mapping \(t\mapsto(t+1)/2\) maps \(\{-1,1\}\) onto \(\{0,1\}\). 
While not all Boolean complexity measures are left invariant by this change of representation, all of the measures considered in this paper are preserved. We also consider restrictions of Boolean functions to proper subsets of the Boolean cube \(D\subset\Sigma^{n}\). We refer to such functions \(f\colon D\to\Sigma\) as _partial_ Boolean functions. Given a Boolean function \(f\), we denote its negation by \(\bar{f}\). We can define an inner product on the space of functions \(f\colon\{-1,1\}^{n}\to\mathbb{R}\): \[\langle f,g\rangle=2^{-n}\sum_{x\in\{-1,1\}^{n}}f(x)g(x).\] For each \(S\subseteq[n]\) we define the _character function \(\chi_{S}\) on \(S\)_ as \(\chi_{S}(x)=\prod_{i\in S}x_{i}\). The character functions \(\chi_{S}\) form an orthonormal basis under the above inner product. Thus each function over \(\{-1,1\}^{n}\) can be uniquely expressed via its _Fourier representation_: \[f=\sum_{S\subseteq[n]}\widehat{f}(S)\cdot\chi_{S},\] where we refer to \(\widehat{f}(S)=\langle f,\chi_{S}\rangle\) as the _Fourier coefficient of \(f\) at \(S\)_. We say an input \(i\in[n]\) is _relevant_ for \(f\) if \(x_{i}\) appears in the Fourier expansion for \(f\). In other words, \(f\) depends on \(x_{i}\) in a nontrivial manner. Figure 2: Relevant complexity classes. We are able to obtain the strongest possible oracle separations in this picture. An arrow \(\mathsf{A}\to\mathsf{B}\) means \(\mathsf{A}\subseteq\mathsf{B}\) relative to all oracles. A dashed arrow \(\mathsf{A}\dashrightarrow\mathsf{B}\) means \(\mathsf{A}\not\subseteq\mathsf{B}\) relative to some oracle. ### Polynomials As described above, each Boolean function can be represented uniquely as a formal multilinear polynomial through its Fourier representation. We define the Fourier _degree_ (or simply degree) of \(f\) as \(\deg(f)=\max\{|S|:\hat{f}(S)\neq 0\}\). We can extend this notion to polynomials that pointwise approximate \(f\): **Definition 2**.: Let \(D\subseteq\{-1,1\}^{n}\) and \(f\colon D\to\{-1,1\}\). A polynomial \(p:\{-1,1\}^{n}\to\mathbb{R}\) is said to \(\varepsilon\)-approximate \(f\) if for all \(x\in D\), \(|p(x)-f(x)|\leq\varepsilon\) and for all \(x\in\{-1,1\}^{n}\), \(|p(x)|\leq 1\). The _\(\varepsilon\)-approximate degree_ of \(f\), denoted \(\widetilde{\operatorname{deg}}_{\varepsilon}(f)\), is defined as the minimum degree of any polynomial that \(\varepsilon\)-approximates \(f\). The _degree_ of \(f\), denoted \(\deg(f)\), is defined as \(\widetilde{\operatorname{deg}}_{0}(f)\). The _approximate degree_ of \(f\), denoted \(\widetilde{\operatorname{deg}}(f)\), is defined as \(\widetilde{\operatorname{deg}}_{1/3}(f)\). In this paper, we are primarily concerned with representations of \(f\) via rational polynomials. This gives rise to a measure known as _rational degree_, which is formally defined as follows. **Definition 3**.: Let \(D\subseteq\{-1,1\}^{n}\) and \(f\colon D\to\{-1,1\}\). If \(p\colon D\to\mathbb{R}\) and \(q\colon D\to\mathbb{R}\) are polynomials such that \(|f(x)-p(x)/q(x)|\leq\varepsilon\) for all \(x\in D\), we say that \(p/q\) is an \(\varepsilon\)-approximate rational representation of \(f\). The _\(\varepsilon\)-approximate rational degree_ of \(f\), denoted \(\operatorname{rdeg}_{\varepsilon}(f)\), is defined as the minimum value of \(\max\{\deg(p),\deg(q)\}\) such that \(p/q\) is an \(\varepsilon\)-approximate rational representation of \(f\). The _rational degree_ of \(f\), denoted \(\operatorname{rdeg}(f)\), is defined as \(\operatorname{rdeg}_{0}(f)\). 
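To make the Fourier-side definitions concrete, the following brute-force sketch computes the coefficients \(\widehat{f}(S)\) and the exact degree of a small total function given as a \(\pm 1\)-valued oracle. It is purely illustrative and uses \(-1\) for logical true, which is an assumption of the example rather than a convention fixed above.

```python
import itertools
import numpy as np

def fourier_coefficients(f, n):
    """Brute-force Fourier coefficients f_hat(S) = <f, chi_S> of f: {-1,1}^n -> {-1,1}."""
    points = list(itertools.product([-1, 1], repeat=n))
    subsets = itertools.chain.from_iterable(
        itertools.combinations(range(n), r) for r in range(n + 1))
    coeffs = {}
    for S in subsets:
        chi = [np.prod([x[i] for i in S]) for x in points]  # chi_S(x) = prod_{i in S} x_i
        coeffs[S] = sum(fx * c for fx, c in zip(map(f, points), chi)) / 2 ** n
    return coeffs

def exact_degree(f, n, tol=1e-9):
    """Fourier degree: the largest |S| with a non-zero coefficient."""
    return max((len(S) for S, v in fourier_coefficients(f, n).items() if abs(v) > tol), default=0)

# Example (with -1 as logical true): the 2-bit AND has full degree 2.
and2 = lambda x: -1 if x == (-1, -1) else 1
assert exact_degree(and2, 2) == 2
```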
Unlike in the definition of approximate degree, there is no requirement for an approximate rational representation to be bounded outside of \(D\). Whether or not such a boundedness condition is imposed matters significantly for the degree (see [1]) but not for the rational degree (see Appendix A). ### Sensitivity and Certificate Complexity We now define some useful combinatorial measures of Boolean function complexity. Let \(f\) be a Boolean function, \(x\in\{-1,1\}^{n}\), and \(B\subseteq[n]\). We say that \(B\) is a sensitive block of \(f\) at \(x\) if \(f(x)\neq f(x^{B})\) where \(x^{B}\) denotes the bitstring obtained by flipping all bits of \(x\) indexed by \(B\). We define, and denote by \(\operatorname{bs}_{f}(x)\), the _block sensitivity of \(f\) at \(x\)_ as the maximum number of disjoint blocks that are all sensitive at \(x\). By restricting our attention to sensitive blocks that are singletons we obtain the analogous notion of the _sensitivity of \(f\) at \(x\)_, denoted \(\operatorname{s}_{f}(x)\). The block sensitivity of \(f\) is defined as \(\operatorname{bs}(f)=\max_{x}\operatorname{bs}_{f}(x)\). Similarly the _sensitivity of \(f\)_ is defined as \(\operatorname{s}(f)=\max_{x}\operatorname{s}_{f}(x)\). For \(b\in\{0,1\}\), we also write \(\operatorname{s}^{(b)}(f)=\max_{x\in f^{-1}(b)}\operatorname{s}_{f}(x)\). A partial assignment is some function \(\rho:[n]\to\{-1,1,\star\}\). We define, and denote by \(|\rho|\), the size of the partial assignment \(\rho\) as cardinality of the set \(\{i\in[n]:\rho(i)\neq\star\}\). We say that a partial assignment \(\rho\) is _consistent_ with some \(x\in\{-1,1\}^{n}\) if \(x_{i}=\rho(i)\) for all \(i\) with \(\rho(i)\neq\star\). Given a Boolean function \(f\) we denote by \(f|_{\rho}\) the restriction of \(f\) to the set of inputs \(x\in\{-1,1\}^{n}\) that are consistent with \(\rho\). Given \(b\in\{-1,1\}\), we say that a partial assignment \(\rho\) is a _\(b\)-certificate for \(f\)_ if \(f|_{\rho}(x)=b\) for all \(x\in\operatorname{Dom}(f|_{\rho})\). The _\(b\)-certificate complexity_ of \(f\) is defined as \[\operatorname{C}_{b}(f)=\max_{x\in f^{-1}(b)}\min\{|\rho|:\rho\text{ is a $b$- certificate for $f$ consistent with $x$}\}.\] The _certificate complexity of \(f\)_ is defined as \(\operatorname{C}(f)=\max_{b\in\{-1,1\}}\operatorname{C}_{b}(f)\). ### Sign and Non-Deterministic Degree For a Boolean function \(f\) we say that a polynomial \(p:\{-1,1\}^{n}\to\mathbb{R}\) is a _(strong) sign representation_ if \(\operatorname{sgn}(p(x))=f(x)\) for all \(x\in\{-1,1\}^{n}\) and \(p(x)\neq 0\) on the entire hypercube. The _(strong) sign degree_ of \(f\) is defined as the minimum degree of any polynomial that strongly sign represents \(f\). Alon [1] and Anthony [1] have shown that all but a negligible fraction of \(n\)-bit Boolean functions have sign degree at least \(n/2\). Later, O'Donnell and Servedio proved [10] that almost every Boolean function has sign degree at most \(n/2+\mathcal{O}(\sqrt{n\log n})\). A less common but somewhat similar notion is that of a _non-deterministic polynomial_ introduced by de Wolf [20]. In this context, it is customary to consider Boolean functions using the \(\{0,1\}^{n}\to\{0,1\}\) representation. We say that \(p:\{0,1\}^{n}\to\mathbb{R}\) is a non-deterministic polynomial for \(f\colon\{0,1\}^{n}\to\{0,1\}\) if \(p(x)=0\) if and only if \(f(x)=0\). 
An easy calculation establishes the following relationship between the rational and non-deterministic degrees \[\operatorname{rdeg}(f)=\max\{\operatorname{ndeg}(f),\operatorname{ndeg}(\bar{f})\}.\] As mentioned in the introduction, de Wolf conjectured that \(\operatorname{D}(f)\leq\mathcal{O}(\operatorname{ndeg}(f)\operatorname{ndeg}(\bar{f}))\) for all total Boolean functions. By showing that \(\operatorname{ndeg}(f)\leq\operatorname{C}_{1}(f)\), de Wolf also established the inequality \(\operatorname{rdeg}(f)\leq\operatorname{C}(f)\)[20]. ### Quantum Query Complexity and Post-selection We assume basic familiarity with concepts in quantum information. While we review some of these, we direct the reader to, e.g. [11], for background. Consider a Boolean function \(f\) over a domain \(D\). We say an \(\varepsilon\)-error quantum algorithm computes \(f\) if it outputs a bit \(a(x)\) such that for all \(x\in D\), \(\Pr[a(x)=f(x)]\geq 1-\varepsilon\). \(\mathsf{BQP}\) is the class of problems that have efficient (polynomial-time) quantum algorithms with error \(1/3\) and \(\mathsf{EQP}\) is the analogous class of zero-error algorithms. We can also define complexity classes corresponding to quantum algorithms augmented with the power of _post-selection_. **Definition 4**.: \(\mathsf{PostBQP}\) is the set of languages decidable by a polynomial time quantum algorithm that outputs two bits \(a,b\) such that for all inputs \(x\in\{0,1\}^{n}\): 1. \(\Pr[a(x)=1]>0\). 2. If \(x\in L\), then \(\Pr[b(x)=1|a(x)=1]\geq 2/3\). 3. If \(x\not\in L\), then \(\Pr[b(x)=1|a(x)=1]\leq 1/3\). \(\mathsf{PostEQP}\) is the corresponding class of _zero-error_ algorithms with post-selection. Each of these computational complexity classes has an associated query measure. Formally, we say an algorithm has query access to a string \(w\) if it has black-box access to a unitary s.t. \(U\ket{i}\ket{b}=\ket{i}\ket{b\oplus w_{i}}\). When the input \(w\) encodes the truth table of a Boolean function \(f\), we will often write this as \(U\ket{x}\ket{b}=\ket{x}\ket{b\oplus f(x)}\), where \(x\in\{0,1\}^{n}\). The number of calls an algorithm makes to the unitary \(U\) is its _query complexity_. By \(\mathsf{Q}_{\varepsilon}(f)\) and \(\mathsf{Q}_{E}(f)\) we denote the query complexities of \(\varepsilon\)-error and zero-error quantum algorithms, respectively. \(\mathsf{PostQ}_{\varepsilon}(f)\) and \(\mathsf{PostQ}_{E}(f)\) are defined analogously for quantum algorithms with post-selection. For simplicity of notation, \(\mathsf{Q}(f)\) and \(\mathsf{PostQ}(f)\) are understood to correspond to \(\varepsilon=1/3\). A seminal result by Beals _et al._ gives a lower bound on quantum query complexity using polynomials [1]. Formally, we have \(\mathsf{Q}_{\varepsilon}(f)\geq\widetilde{\operatorname{deg}}_{\varepsilon}(f)/2\) for all (possibly partial) Boolean functions \(f\). As a special case, \(\mathsf{Q}_{E}(f)\geq\operatorname{deg}(f)/2\). This result gave rise to the so-called _polynomial method_ for quantum query lower bounds. Similarly, it was shown by Mahadev and de Wolf that \(\mathsf{PostQ}_{\varepsilon}(f)=\Theta(\operatorname{rdeg}_{\varepsilon}(f))\) and \(\mathsf{PostQ}_{E}(f)=\Theta(\operatorname{rdeg}_{0}(f))\) for total functions \(f\)[14]. It is not difficult to extend this result to partial functions, see Appendix A. Nonetheless, it is surprising that the result does still hold for partial functions since the analogous result for quantum query complexity and approximate degree was recently shown to be false in [1]. 
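As an illustration of the preceding definitions, the non-deterministic and rational degrees of very small total functions can be computed by brute force: for each candidate degree \(d\), one checks whether some degree-\(d\) multilinear polynomial vanishes exactly on \(f^{-1}(0)\). The sketch below does this with a null-space computation; it is meant only for tiny \(n\) and assumes \(f\) is non-constant.

```python
import itertools
import numpy as np

def _row(x, monomials):
    return np.array([np.prod([x[i] for i in S]) for S in monomials], dtype=float)

def ndeg(f, n, tol=1e-9):
    """Brute-force non-deterministic degree of a non-constant f: {0,1}^n -> {0,1}: the least
    degree of a multilinear polynomial whose zero set is exactly f^{-1}(0)."""
    points = list(itertools.product([0, 1], repeat=n))
    zeros = [x for x in points if f(x) == 0]
    ones = [x for x in points if f(x) == 1]
    for d in range(n + 1):
        monomials = [S for r in range(d + 1) for S in itertools.combinations(range(n), r)]
        M0 = np.array([_row(x, monomials) for x in zeros])   # evaluations on f^{-1}(0)
        _, s, vt = np.linalg.svd(M0)
        null = vt[(s > tol).sum():]   # basis of degree-<=d polynomials vanishing on f^{-1}(0)
        # a suitable polynomial exists iff no 1-input annihilates the whole null space
        if len(null) and all(np.linalg.norm(null @ _row(x, monomials)) > tol for x in ones):
            return d
    return n

def rdeg(f, n):
    """Rational degree via rdeg(f) = max(ndeg(f), ndeg(f complement))."""
    return max(ndeg(f, n), ndeg(lambda x: 1 - f(x), n))

# Example: OR on 3 bits has ndeg(OR) = 1 (take p(x) = x1 + x2 + x3) but rdeg(OR) = 3.
OR3 = lambda x: int(any(x))
assert ndeg(OR3, 3) == 1 and rdeg(OR3, 3) == 3
```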
## 3 Rational Degree Lower Bounds In this section, we present rational degree lower bounds for certain classes of Boolean functions. Our results constitute progress towards showing that rational degree is polynomially related to Fourier degree for total functions. First, we establish the tight lower bound \(\deg(f)/2\leq\operatorname{rdeg}(f)\) for symmetric functions. This result then becomes key in proving a rational degree lower bound for read-once Boolean formulae, which is tight for formulae of depth \(2\). Next, we prove that the rational degree equals the sensitivity for monotone functions, which implies that \(\sqrt{\deg(f)}\leq\operatorname{rdeg}(f)\) for monotone functions. This lower bound is also tight. Finally, we show that almost all Boolean functions \(f\colon\{-1,1\}^{n}\to\{-1,1\}\) have rational degree at least \(n/2-\mathcal{O}(\sqrt{n})\). ### Symmetric Functions Our first lower bounds are for (possibly partial) functions which are constant on a large number of Hamming slices. Of course, this subsumes the class of symmetric functions. This lemma will later be useful in obtaining an unbounded separation of rational degree from quantum query complexity (and thus approximate degree) in the case of partial functions. **Lemma 5**.: _Let \(f\) be a (possibly partial) nonconstant Boolean function over input domain \(D\subseteq\{0,1\}^{n}\) and define \(S_{0}=\{k\in[n]\colon|x|=k\implies f(x)=0\}\), \(S_{1}=\{k\in[n]\colon|x|=k\implies f(x)=1\}\). Then \(\operatorname{rdeg}(f)\geq\max(|S_{0}|,|S_{1}|)\)._ We use the Minsky-Papert symmetrization technique, which converts a multivariate polynomial over \(\{0,1\}^{n}\) to a univariate polynomial over \(\mathbb{R}\)[13]. Formally, given \(p:\{0,1\}^{n}\to\mathbb{R}\) we define \(P(k)\coloneqq\mathbb{E}_{|x|=k}[p(x)]\). Proof.: Since \(\operatorname{rdeg}(f)=\operatorname{rdeg}(\bar{f})\), we can assume without loss of generality that \(|S_{0}|\geq|S_{1}|\). It suffices to show that \(\operatorname{rdeg}(f)\geq|S_{0}|\). Indeed, let \(f=p/q\) be a rational representation of \(f\). Applying the Minsky-Papert symmetrization technique to \(p(x)\) we obtain a univariate polynomial \(P(k)\) such that \(\deg(p)\geq\deg(P)\) and \(P(k)=0\) for any \(k\in S_{0}\). On the other hand, there exists at least one \(k\in[n]\) such that \(P(k)\neq 0\), since \(f\) is nonconstant. Thus \(\deg(p)\geq\deg(P)\geq|S_{0}|\). Since this holds for every rational representation of \(f\), the result follows. Of course, a special case of this result is a strong lower bound for symmetric total functions. **Corollary 6**.: _If \(f\colon\{0,1\}^{n}\to\{0,1\}\) is symmetric then \(\operatorname{rdeg}(f)\geq\deg(f)/2\)._ ### Read-Once Formulae We now turn our attention to a generalized version of read-once Boolean formulae, where each gate is an arbitrary nonconstant symmetric gate. The key observations behind the lower bound are that these formulae can be written as compositions of symmetric gates, and that any depth \(d\) tree must contain a node with branching factor \(\geq n^{1/d}\). **Lemma 7**.: _Let \(f\colon\{0,1\}^{n}\to\{0,1\}\) and \(g_{i}\colon\{0,1\}^{n_{i}}\to\{0,1\}\) be Boolean functions where every variable in each function is relevant. 
Defining \(h:\{0,1\}^{\sum n_{i}}\to\{0,1\}\) to be \(h(x^{1},\ldots,x^{n})=f(g_{1}(x^{1}),\ldots,g_{n}(x^{n}))\), we have that_ \[\operatorname{rdeg}(h)\geq\max\{\operatorname{rdeg}(f),\operatorname{rdeg}(g_{1}),\ldots,\operatorname{rdeg}(g_{n})\}.\] Proof.: Since every variable is relevant for each \(g_{i}\), there exist restrictions \(\rho^{i}\) to all but \(1\) variable in each \(x^{i}\) such that \(g_{i}|_{\rho^{i}}(x^{i})=x^{i}_{k_{i}}\) or \((1-x^{i}_{k_{i}})\) for some \(1\leq k_{i}\leq n_{i}\). Considering the restriction \(\rho=\rho^{1}\cup\dots\cup\rho^{n}\), it is evident that \(h|_{\rho}(x)=f(x^{1}_{k_{1}},\dots,x^{n}_{k_{n}})\) up to negations. Therefore, \[\operatorname{rdeg}(h)\geq\operatorname{rdeg}(h|_{\rho})\geq\operatorname{rdeg}(f). \tag{1}\] Now pick an arbitrary \(i\). Since, by assumption, every variable of \(f\) is relevant, there exists an assignment \(x_{j}=z_{j}\) for all \(j\neq i\) such that \(f(z_{1},\dots z_{i-1},x_{i},z_{i+1},\dots,z_{n})=x_{i}\) or \(\overline{x_{i}}\). Since \(g_{i}\) is nonconstant, it follows that there exists an assignment to the variables \((x^{j})_{j\neq i}\) such that each \(g_{j}(x^{j})\) is fixed to \(z_{j}\). Let \(\tau\) be the restriction induced by this partial assignment. Then \[h|_{\tau}(x)=f(z_{1},\dots z_{i-1},g_{i}(x^{i}),z_{i+1},\dots z_{n})=g_{i}(x^{i})\text{ or }\overline{g_{i}(x^{i})}.\] Consequently, \[\operatorname{rdeg}(h)\geq\operatorname{rdeg}(h|_{\tau})\geq\operatorname{rdeg}(g_{i}). \tag{2}\] Combining Equations (1) and (2) gives the desired result. **Lemma 8**.: _Let \(f\) be written as a read-once formula with symmetric gates where the maximum branching factor of any node is \(w\). Then \(\operatorname{rdeg}(f)=\Omega(w)\)._ Proof.: Let \(p/q\) be a rational representation of \(f\). We can assume without loss of generality that \(f\) is monotone (i.e. only the literals \(x_{i}\), and not \(\overline{x_{i}}\), appear in the formula). Now consider the node \(G\) with branching factor \(w\). Let \(F\) be the subformula with top gate \(G\) and let \(F_{1},\dots,F_{w}\) be the read-once subformulas below \(G\), with disjoint variable sets \(V_{1},\dots,V_{w}\). Each \(F_{i}\) is nonconstant, which implies the existence of a restriction \(\rho_{i}\) of all but \(1\) variable in each \(V_{i}\) such that toggling the sole live variable (say \(x_{k_{i}}\)) toggles the value of \(F_{i}\) (i.e. \(F_{i}|_{\rho_{i}}=x_{k_{i}}\) or \(\overline{x_{k_{i}}}\)). As the \(V_{i}\) are disjoint (as \(f\) is read-once), these restrictions together define a unified restriction \(\rho\) such that \(F|_{\rho}(x)=G(x_{k_{1}},\dots,x_{k_{w}})\) up to negations. Inductively using Lemma 7 on the formula \(f|_{\rho}\) by starting at the top node and going down the path to \(G\), it follows that \[\operatorname{rdeg}(f)\geq\operatorname{rdeg}(f|_{\rho})\geq\operatorname{rdeg}(F|_{\rho})=\operatorname{rdeg}(G)\geq w/2, \tag{3}\] where the last inequality follows from Lemma 5. Now we can prove polynomial rational degree lower bounds on read-once formulae. **Corollary 9**.: _Let \(f\) be written as a depth-\(d\) read-once formula with symmetric gates. Then \(\operatorname{rdeg}(f)=\Omega(\operatorname{deg}(f)^{1/d})\)._ Proof.: The result follows from Lemma 8 and a simple contradiction argument: if all nodes have branching factor strictly less than \(n^{1/d}\) then there must be strictly fewer than \(n\) literals. Note that \(n\geq\operatorname{deg}(f)\) so the lower bound \(\Omega(n^{1/d})\) is stronger. 
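To make the Minsky-Papert symmetrization step behind Lemma 5 (and invoked again in the proof of Lemma 8) concrete, the sketch below averages a polynomial over each Hamming slice; the example polynomial is an illustrative choice only.

```python
import itertools
import numpy as np

def symmetrize(p, n):
    """Minsky-Papert symmetrization: P(k) = average of p over inputs of Hamming weight k."""
    totals, counts = np.zeros(n + 1), np.zeros(n + 1)
    for x in itertools.product([0, 1], repeat=n):
        k = sum(x)
        totals[k] += p(x)
        counts[k] += 1
    return totals / counts

# As in the proof of Lemma 5: if p vanishes on every input whose weight lies in S_0, then the
# univariate P vanishes on S_0, so deg(p) >= deg(P) >= |S_0| whenever P is not identically zero.
p = lambda x: x[0] * x[1] * x[2]      # vanishes on all inputs of weight 0, 1 and 2
print(symmetrize(p, 3))               # [0. 0. 0. 1.]
```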
The lower bound of Corollary 9 is tight for \(d=2\), as witnessed by the AND-of-ORs function \(f=\operatorname{AND}_{\sqrt{n}}\circ\operatorname{OR}_{\sqrt{n}}\). Indeed, \(\operatorname{rdeg}\left(f\right)\leq\operatorname{C}(f)=n^{1/2}=\sqrt{\operatorname{deg}(f)}\). However, for larger depth \(d>2\), it is unclear whether a depth-\(d\) read-once formula witnessing the tightness of Corollary 9 exists: the natural candidate, a depth-\(d\) AND-OR tree, has certificate complexity, and hence rational degree, at most \(\sqrt{n}\) (see Figure 3). ### Monotone Functions A Boolean function \(f\colon\{0,1\}^{n}\to\{0,1\}\) is said to be monotone if \(\forall x,y\in\{0,1\}^{n}\), \(x\leq y\) implies \(f(x)\leq f(y)\), where \(x\leq y\) is taken pointwise. In this subsection we prove that \(\operatorname{rdeg}(f)=\operatorname{s}(f)\) for monotone Boolean functions \(f\). We note that it suffices to prove that \(\operatorname{s}(f)\leq\operatorname{rdeg}(f)\). This is because the certificate complexity of a monotone Boolean function \(f\) equals its sensitivity, \(\operatorname{C}(f)=\operatorname{s}(f)\)[10]. Combining this with the fact that \(\operatorname{rdeg}(f)\leq\operatorname{C}(f)\) we can already conclude the other inequality. **Claim 10**.: _For monotone Boolean functions \(f\colon\{0,1\}^{n}\to\{0,1\}\), \(\operatorname{s}(f)\leq\operatorname{rdeg}(f)\)._ Our proof is similar to the proof that \(\operatorname{s}(f)\leq\operatorname{deg}(f)\) for all monotone Boolean functions, as presented in [1, Proposition 4]. Proof.: Suppose without loss of generality that \(f\) is monotone increasing. 
We prove the claim by showing that \[\operatorname{s}_{0}(f)\leq\operatorname{ndeg}(\bar{f})\quad\text{and}\quad\operatorname{s}_{1}(f)\leq\operatorname{ndeg}(f). \tag{4}\] We only prove the first inequality as the second can be proven analogously. Let \(x\) be such that \(\operatorname{s}_{0}(f)=\operatorname{s}_{f}(x)\). All sensitive variables must be \(0\) in \(x\) since \(f\) is monotone increasing. Moreover, setting any sensitive variable to \(1\) changes the value of \(f\) from \(0\) to \(1\). Therefore, fixing all variables in \(x\) except for the \(\operatorname{s}_{0}(f)\) many sensitive variables yields the \(\operatorname{\mathsf{OR}}_{m}\) function on \(m:=\operatorname{s}_{0}(f)\) variables. Since \(\operatorname{ndeg}(\overline{\operatorname{\mathsf{OR}}}_{m})\geq m\), \(\operatorname{ndeg}(\bar{f})\geq\operatorname{s}_{0}(f)\). Since \(\operatorname{s}(f)=\operatorname{C}(f)\) and \(\sqrt{\operatorname{deg}(f)}\leq\operatorname{s}(f)\) for monotone functions, we have the following corollary. **Corollary 11**.: _For monotone Boolean functions \(f\), \(\operatorname{rdeg}(f)=\operatorname{s}(f)\). In particular, \(\sqrt{\operatorname{deg}(f)}\leq\operatorname{rdeg}(f)\)._ Figure 3: A depth-\(d\) AND-OR tree with certificate complexity \(\sqrt{n}\), and thus rational degree at most \(\sqrt{n}\). Indeed, setting all input wires to \(1\) for AND functions and a single input wire to \(1\) for OR functions along a single path to the root gives a \(1\)-certificate, and setting all input wires to \(0\) for OR functions and a single input wire to \(0\) for AND functions gives a \(0\)-certificate. One can easily verify that both of these certificates are of size \(\sqrt{n}\). Note that this bound is tight, as witnessed by the AND-of-ORs function, \(f=\mathsf{AND}_{n}\circ\mathsf{OR}_{n}\), on \(n^{2}\) bits, which has \(\operatorname{rdeg}\left(f\right)\leq\operatorname{C}(f)=n=\sqrt{\deg(f)}\). We remark that Claim 10 cannot be extended to all Boolean functions, as evidenced by the Kushilevitz function \(K_{m}\colon\{0,1\}^{6^{m}}\to\{0,1\}\)[13]. Indeed, \(K_{m}\) has full sensitivity, but its degree is \(3^{m}\). ### Random Functions As our final piece of evidence that rational degree is polynomially related to degree, we prove that all but a negligible fraction of Boolean functions \(f\colon\{-1,1\}^{n}\to\{-1,1\}\) have rational degree at least \(n/2-\mathcal{O}(\sqrt{n})\). As mentioned in the introduction, Alon [1] and Anthony [1] used counting arguments to show that all but a negligible fraction of \(n\)-bit Boolean functions have sign degree at least \(n/2\). Below we restate a variant of the function counting theorem used by Anthony. Given a finite set \(X\) and a mapping \(\phi\colon X\to\mathbb{R}^{d}\), we say that a \(\phi\)-separable dichotomy of \(X\) is a partition of \(X\) into subsets \(X^{+}\cup X^{-}\) such that there exists some \(w\in\mathbb{R}^{d}\) for which \(w\cdot\phi(x)>0\) for all \(x\in X^{+}\) and \(w\cdot\phi(x)<0\) for all \(x\in X^{-}\). **Theorem 12** (Function counting theorem, [12]).: _Let \(\phi\colon S\to\mathbb{R}^{d}\). Let \(X=\{x_{1},\ldots,x_{N}\}\subseteq S\). 
If a \(\phi\)-surface (i.e., a set of the form \(\{x\in S:w\cdot\phi(x)=0\}\) for some \(w\in\mathbb{R}^{d}\)) contains a set of points \(Y=\{y_{1},y_{2},\ldots,y_{k}\}\subseteq S\), where \(\phi(y_{i})\) are linearly independent for all \(i\), and where the projection of \(\phi(x_{1}),\ldots,\phi(x_{N})\) onto the orthogonal subspace to the space spanned by the \(\phi(y_{i})\)'s is in general position, then there are \(C(N,d-k)\) many \(\phi\)-separable dichotomies of \(X\), where_ \[C(N,d)=2\sum_{i=0}^{d-1}\binom{N-1}{i}.\] We consider the following adaptation of the above theorem. Consider a set of \(N\) points \(S=\{v_{1},\ldots,v_{n}\}\) in \(\mathbb{R}^{D}\). Given a \(2\)-coloring of the points \(f\colon[N]\to\{-1,1\}\), we say that the coloring \(f\) is separable by two hyperplanes if there exist hyperplanes \(H_{j}=\{v:\ \alpha_{j}\cdot v=0\}\) for \(j=1,2\) such that \[\forall i\in[N]\colon\ f(i)=\operatorname{sgn}((\alpha_{1}\cdot v_{i})( \alpha_{2}\cdot v_{i})).\] **Corollary 13**.: _Given \(N\) points in \(\mathbb{R}^{M}\), the number of two colorings \(f\colon[N]\to\{-1,1\}\) that are separable by two hyperplanes is at most \(C(N,M)^{2}\)._ Proof.: Let \(S\subset\mathbb{R}^{M}\) be given and suppose that \(f\) is a coloring that is separated by the hyperplanes \(H_{1}\) and \(H_{2}\). Then there exist colorings \(f_{1},f_{2}\) that are separated by the hyperplanes \(H_{1}\) and \(H_{2}\) respectively. Since there are at most \(C(N,M)\) choices for each of \(f_{1}\) and \(f_{2}\), the number of such colorings \(f\) is bounded by \(C(N,M)^{2}\). **Lemma 14**.: _Let \(m\leq n\) be two positive integers. The number of Boolean functions \(f\colon\{-1,1\}^{n}\to\{-1,1\}\) with rational degree at most \(m\) is at most \(C(2^{n},\binom{n}{\leq m})^{2}\)._ Proof.: Let \(M=\binom{n}{\leq m}\). For each \(x\in\{-1,1\}^{n}\) define \(v_{x}\in\mathbb{C}^{M}\) by letting \((v_{x})_{S}=\chi_{S}(x)\) for \(S\subseteq[n]\), \(|S|\leq m\). Suppose \(p/q\) is a rational representation of \(f\) of degree at most \(m\). Then \[f(x)=\operatorname{sgn}(p(x)/q(x))=\operatorname{sgn}(p(x)q(x))= \operatorname{sgn}((\widehat{p}\cdot v_{x})(\widehat{q}\cdot v_{x})).\] Thus the coloring given by \(f\) is separated by the hyperplanes defined by \(\widehat{p}\) and \(\widehat{q}\). The result follows by Corollary 13. We can now state and prove our extremal lower bound on rational degree. **Corollary 15**.: _All but a negligible fraction of Boolean functions on \(n\) variables have rational degree at least \(n/2-\mathcal{O}(\sqrt{n})\)._ Proof.: Write \(m=n/2-\sqrt{cn}\) for some constant \(c\), and let \(N=2^{n}\). Then by Corollary 13 there are at most \(C\big{(}N,{n\choose\leq m}\big{)}^{2}\) Boolean functions on \(n\) variables of rational degree less than \(m\). By the Chernoff bound \({n\choose\leq n/2-\lambda}<2^{n}e^{-2\lambda^{2}/n}\) and the Hamming bound, we have that the proportion of Boolean functions with rational degree strictly less \(m\) is bounded above by \[\frac{C\big{(}N,{n\choose\leq m}\big{)}^{2}}{2^{N}}\leq\frac{C(N,Ne^{-2c})^{2} }{2^{N}}\leq O\big{(}2^{N(2h_{2}(e^{-2c})-1)}\big{)},\] where \(h_{2}(\cdot)\) denotes the binary entropy function. Solving the inequality \(h_{2}(e^{-2c})<1/2\) numerically we find that for \(c\geq 1.104\), the above bound tends to \(0\). 
## 4 Applications in Complexity Theory In this section, we give two functions: one whose rational degree is unboundedly higher than its approximate degree and one which has approximate degree unboundedly higher than its rational degree. These examples in turn give bidirectional separations between \(\mathsf{BQP}\) and \(\mathsf{PostEQP}\) with respect to generic oracles. We conclude the section by giving evidence that zero-error quantum computation with post-selection gives advantage over bounded-error randomized algorithms, providing context to our results. ### Post-Selection can be a Weak Resource In this subsection, we give an oracle which witnesses that \(\mathsf{BQP}\not\subseteq\mathsf{PostEQP}\) (in fact, even \(\mathsf{RP}\not\subseteq\mathsf{PostEQP}\)). This is accomplished by constructing a partial function which has constant \(1\)-sided error randomized query complexity but maximal \(\mathsf{PostQ}_{E}\). In fact, this problem also demonstrates that the rational degree can be arbitrarily higher than the approximate degree for partial functions. **Problem 16** (Majority or None).: _The \(\textsc{MajOrNone}_{n}\) function is defined as a partial Boolean function on the set of bitstrings \(x\in\{0,1\}^{n}\) that have Hamming weight either \(0\) or at least \(n/2\). The function \(\textsc{MajOrNone}_{n}\) takes value \(0\) in the former case, and takes value \(1\) otherwise._ **Theorem 17**.: _The \(\textsc{MajOrNone}_{n}\) function can be decided by a quantum algorithm using constantly many queries, yet its rational degree is at least \(\Omega(n)\). Consequently, \(\textsc{MajOrNone}_{n}\) witnesses the following separations:_ \[\widetilde{\deg}(\textsc{MajOrNone}_{n}) \leq\mathcal{O}(1)\quad\text{ yet }\quad\operatorname{rdeg}(\textsc{MajOrNone}_{n}) \geq\Omega(n),\] \[\quad\mathsf{Q}(\textsc{MajOrNone}_{n}) \leq\mathcal{O}(1)\quad\text{ yet }\quad\mathsf{PostQ}_{E}(\textsc{MajOrNone}_{n}) \geq\Omega(n).\] Proof.: The \(\textsc{MajOrNone}_{n}\) function even has constant \(\mathsf{RP}\) query complexity. Indeed, we may simply query a constant number of random bits and output \(1\) if any of them are \(1\). Therefore, \(\textsc{MajOrNone}_{n}\) has constant quantum query complexity, which in turn implies a constant approximate degree. We show via a rational degree lower bound that \(\mathsf{PostQ}_{E}(\textsc{MajOrNone}_{n})=\Omega(n)\). In particular, we show that \(\operatorname{rdeg}(\textsc{MajOrNone}_{n})=\Omega(n)\). Using the notation of Lemma 5, for \(\textsc{MajOrNone}_{n}\) we have \(|S_{1}|\geq n/2\), giving us \[\mathsf{PostQ}_{E}(\textsc{MajOrNone}_{n})\geq\operatorname{rdeg}(\textsc{ MajOrNone}_{n})=\Omega(n).\qed\] The complexity classes \(\mathsf{Q}\) and \(\mathsf{PostQ}_{E}\) are the query complexity equivalents of \(\mathsf{BQP}\) and \(\mathsf{PostEQP}\), respectively. As such, our unbounded separation between these complexity measures gives a separation of \(\mathsf{BQP}\) and \(\mathsf{PostEQP}\) with respect to a generic oracle. **Corollary 18**.: _There exists an oracle \(O\) such that \(\mathsf{RP}^{O}\not\subseteq\mathsf{PostEQP}^{O}\). _ ### Post-Selection can be a Strong Resource On the other hand, we can give an oracle which witnesses \(\mathsf{PostEQP}\not\subseteq\mathsf{BQP}\). We do this by constructing a promise problem \(f\) for which \(\mathsf{PostQ}_{E}(f)=\mathcal{O}(1)\) but \(\mathsf{Q}_{\varepsilon}(f)\geq\Omega(n)\). This problem also witnesses the fact that approximate degree can be unboundedly larger than rational degree. 
**Problem 19** (Imbalance).: _Let \(n=4m+2\) for some positive integer \(m\). Define the functions \(L,R:\{-1,1\}^{n}\to\mathbb{R}\) as \(L(x)=x_{1}+x_{2}+\ldots+x_{2m+1}\) and \(R(x)=x_{2m+2}+\ldots+x_{4m+2}\). Then the \(\textsc{Imbalance}\colon\{-1,1\}^{n}\to\mathbb{R}\) function is defined as \(\textsc{Imbalance}(x)=\frac{L(x)}{R(x)}\)._

Note that we chose \(n=4m+2\) (so that \(4\nmid n\)) to ensure that each half consists of an odd number of coordinates; in particular, the denominator \(R(x)\) cannot be \(0\).

**Problem 20** (Boolean Imbalance).: _Let \(m\) and \(n\) be as in the above problem. We define the Boolean Imbalance \(\mathrm{BI}_{n}\) function as the restriction of Imbalance to the union \(S_{-}\cup S_{+}\) where we let_ \[\begin{aligned} S_{+}&=\{(x_{L},x_{R})\colon|x_{L}|=|x_{R}|=m\},\\ S_{-}&=\{(x_{L},x_{R})\colon|x_{L}|+|x_{R}|=2m+1\text{ and }|x_{L}|,|x_{R}|\geq m\}.\end{aligned}\]

Note that \(\mathrm{BI}_{n}(x)=1\) for any \(x\in S_{+}\) since the numerator and denominator evaluate to the same quantity. On the other hand, for any \(x\in S_{-}\) we have that \(\mathrm{BI}_{n}(x)=-1\), since \(L(x)\) and \(R(x)\) are both \(\pm 1\) but with opposite signs. By a generalisation of the equivalence of Mahadev and de Wolf (Lemma 25) we have an upper bound of 2 on \(\mathsf{PostQ}_{E}(\mathrm{BI}_{n})\). We now show that it has a linear lower bound on the approximate degree.

**Lemma 21**.: _The \(\mathrm{BI}_{n}\) function can be decided by a postselected quantum algorithm using only 2 queries, yet its approximate degree is at least \(\Omega(n)\). Consequently, \(\mathrm{BI}_{n}\) witnesses the following separations:_

\[\operatorname{rdeg}(\mathrm{BI}_{n})\leq\mathcal{O}(1)\quad\text{ yet }\quad\widetilde{\deg}(\mathrm{BI}_{n})\geq\Omega(n),\] \[\mathsf{PostQ}_{E}(\mathrm{BI}_{n})\leq\mathcal{O}(1)\quad\text{ yet }\quad\mathsf{Q}(\mathrm{BI}_{n})\geq\Omega(n).\]

Proof.: Note that \(\mathrm{BI}_{n}\) is defined on inputs of Hamming weight \(2m\) and \(2m+1\). By a result of Nayak and Wu, any function which is constant on the Hamming slices \(l\) and \(l+1\) but flips its value between them has approximate degree \(\Omega(\max\{l,n-l\})\)[11]. In this case, since the function value flips between Hamming weights \(2m\) and \(2m+1\), we get a lower bound of \(\Omega(\max\{2m+1,2m\})=\Omega(n)\).

Finally, just like in the previous section, this separation between complexity measures allows us to construct an oracle relative to which \(\mathsf{PostEQP}\) is not contained in \(\mathsf{BQP}\).

**Corollary 22**.: _There exists an oracle \(O\) such that \(\mathsf{PostEQP}^{O}\not\subseteq\mathsf{BQP}^{O}\). _

Our unbounded separation of rational degree and approximate degree gives a generic oracle separation of \(\mathsf{PostEQP}\) and \(\mathsf{BQP}\). Combined with Corollary 18, this tells us that zero-error post-selection and bounded error are "incomparable" resources in the black-box model: one is not stronger than the other.

### Post-Selection and Non-Determinism

To conclude the section, we provide more context to our results by giving evidence that zero-error quantum computation with post-selection gives an advantage over efficient classical computation.

**Claim 23**.: \(\mathsf{NP}\cap\mathsf{coNP}\subseteq\mathsf{PostEQP}\)_._

Proof.: Let \(L\in\mathsf{NP}\cap\mathsf{coNP}\). Since \(L\in\mathsf{NP}\), there is an efficient algorithm \(M_{1}\) and a polynomial \(p_{1}\) such that for every \(x\in L\), there exists \(u_{1}\in\{0,1\}^{p_{1}(|x|)}\) such that \(M_{1}(x,u_{1})=1\), and for every \(x\not\in L\) and \(u\in\{0,1\}^{p_{1}(|x|)}\) we have \(M_{1}(x,u)=0\).
Similarly, since \(L\in\mathsf{coNP}\), there is an efficient algorithm \(M_{2}\) and a polynomial \(p_{2}\) such that for every \(x\not\in L\), there exists \(u_{2}\in\{0,1\}^{p_{2}(|x|)}\) such that \(M_{2}(x,u_{2})=1\), and for every \(x\in L\) and \(u_{2}\in\{0,1\}^{p_{2}(|x|)}\) we have \(M_{2}(x,u_{2})=0\).

Now, given \(x\), our quantum computer can generate a uniform superposition over all the possible certificates for both \(M_{1}\) and \(M_{2}\) (concatenated together), and post-select on the event that either \(M_{1}(x,u_{1})=1\) or \(M_{2}(x,u_{2})=1\). Then, the quantum algorithm can measure all registers, simulate both \(M_{1}(x,u_{1})\) and \(M_{2}(x,u_{2})\), and see which one is \(1\). By definition, only one of \(M_{1}\) and \(M_{2}\) will accept, and whichever one accepts tells us whether \(x\in L\) or not.

It is widely believed that \(\mathsf{NP}\cap\mathsf{coNP}\) is not contained in \(\mathsf{P}\) or even \(\mathsf{BPP}\). As such, there is reason to believe that zero-error quantum algorithms with post-selection can offer an advantage over efficient classical computation.

## 5 Open Questions

In this paper, we considered the problem of lower bounding the rational degree of Boolean functions in terms of their Fourier degree. While we could not answer this question in its full generality, we showed that the square root of the degree lower bounds the rational degree for both monotone and symmetric Boolean functions. We conjecture that this lower bound extends to all total Boolean functions.

**Conjecture 2**.: _For all Boolean functions \(f\colon\{-1,1\}^{n}\to\{-1,1\}\), \(\sqrt{\deg(f)}\leq\operatorname{rdeg}(f)\)._

Answering this conjecture in the affirmative would place rational degree within a plethora of Boolean function complexity measures, all of which are polynomially related.

Recall that for partial functions, we have unbounded separations between the rational and approximate degrees in both directions. In this work, we showed that a hypothetical total function that witnesses any such separation must lack a certain level of structure: in particular, it cannot be symmetric, monotone, or expressible by a low-depth read-once Boolean formula. In this direction, an easier question is whether there are other classes of functions for which rational degree cannot be separated from Fourier degree. Some candidates that may be amenable to current techniques include unate and transitive-symmetric functions. In particular, showing that unate functions have polynomial rational degree would, in turn, imply polynomial rational degree lower bounds for read-once \(\mathsf{TC}_{0}\) circuits by adapting our result for read-once Boolean formulae with symmetric gates.

We also proved that almost all Boolean functions \(f\colon\{-1,1\}^{n}\to\{-1,1\}\) have rational degree at least \(n/2-\mathcal{O}(\sqrt{n})\). As mentioned in the preliminaries, O'Donnell and Servedio proved [1] that almost all Boolean functions \(f\colon\{-1,1\}^{n}\to\{-1,1\}\) have sign degree at most \(n/2+\mathcal{O}(\sqrt{n\log n})\). It would be interesting to know if an analogous result can be established for the rational degree.

**Conjecture 3**.: _All but a negligible fraction of Boolean functions \(f\colon\{-1,1\}^{n}\to\{-1,1\}\) have rational degree at most \(n/2+o(n)\)._

### Acknowledgements

The authors thank Scott Aaronson, Yuval Filmus, Lance Fortnow, Sabee Grewal, Robin Kothari, Geoffrey Mon, Rocco Servedio, Avishay Tal, Ronald de Wolf, and David Zuckerman for helpful conversations.
VI and SJ are supported by Scott Aaronson's Vannevar Bush Fellowship from the US Department of Defense, the Berkeley NSF-QLCI CIQC Center, a Simons Investigator Award, and the Simons "It from Qubit" collaboration. VI is supported by a National Science Foundation Graduate Research Fellowship. MKD and DW acknowledge support from the Army Research Office (grant W911NF-20-1-0015) and the Department of Energy, Office of Science, Office of Advanced Scientific Computing Research, Accelerated Research in Quantum Computing program. VMK acknowledges support from NSF Grant CCF-2008076 and a Simons Investigator Award (#409864, David Zuckerman). MW was supported by NSF grant CCF-2006359.
2302.00891
Role of Bootstrap Averaging in Generalized Approximate Message Passing
Generalized approximate message passing (GAMP) is a computationally efficient algorithm for estimating an unknown signal $w_0\in\mathbb{R}^N$ from a random linear measurement $y= Xw_0 + \epsilon\in\mathbb{R}^M$, where $X\in\mathbb{R}^{M\times N}$ is a known measurement matrix and $\epsilon$ is the noise vector. The salient feature of GAMP is that it can provide an unbiased estimator $\hat{r}^{\rm G}\sim\mathcal{N}(w_0, \hat{s}^2I_N)$, which can be used for various hypothesis-testing methods. In this study, we consider the bootstrap average of an unbiased estimator of GAMP for the elastic net. By numerically analyzing the state evolution of \emph{approximate message passing with resampling}, which has been proposed for computing bootstrap statistics of the elastic net estimator, we investigate when the bootstrap averaging reduces the variance of the unbiased estimator and the effect of optimizing the size of each bootstrap sample and hyperparameter of the elastic net regularization in the asymptotic setting $M, N\to\infty, M/N\to\alpha\in(0,\infty)$. The results indicate that bootstrap averaging effectively reduces the variance of the unbiased estimator when the actual data generation process is inconsistent with the sparsity assumption of the regularization and the sample size is small. Furthermore, we find that when $w_0$ is less sparse, and the data size is small, the system undergoes a phase transition. The phase transition indicates the existence of the region where the ensemble average of unbiased estimators of GAMP for the elastic net norm minimization problem yields the unbiased estimator with the minimum variance.
Takashi Takahashi
2023-02-02T05:46:32Z
http://arxiv.org/abs/2302.00891v3
# Role of Bootstrap Averaging in Generalized Approximate Message Passing ###### Abstract Generalized approximate message passing (GAMP) is a computationally efficient algorithm for estimating an unknown signal \(w_{0}\in\mathbb{R}^{N}\) from a random linear measurement \(y=Xw_{0}+\epsilon\in\mathbb{R}^{M}\), where \(X\in\mathbb{R}^{M\times N}\) is a known measurement matrix and \(\epsilon\) is the noise vector. The salient feature of GAMP is that it can provide an unbiased estimator \(\hat{\mathbf{r}}^{\hat{G}}\sim\mathcal{N}(w_{0},\hat{s}^{2}I_{N})\), which can be used for various hypothesis-testing methods. In this study, we consider the bootstrap average of an unbiased estimator of GAMP for the elastic net. By numerically analyzing the state evolution of _approximate message passing with resampling_, which has been proposed for computing bootstrap statistics of the elastic net estimator, we investigate when the bootstrap averaging reduces the variance of the unbiased estimator and the effect of optimizing the size of each bootstrap sample and hyperparameter of the elastic net regularization in the asymptotic setting \(M,N\to\infty,M/N\to\alpha\in(0,\infty)\). The results indicate that bootstrap averaging effectively reduces the variance of the unbiased estimator when the actual data generation process is inconsistent with the sparsity assumption of the regularization and the sample size is small. Furthermore, we find that when \(w_{0}\) is less sparse, and the data size is small, the system undergoes a phase transition. The phase transition indicates the existence of the region where the ensemble average of unbiased estimators of GAMP for the elastic net norm minimization problem yields the unbiased estimator with the minimum variance. ## I Introduction Consider estimating an unknown signal \(\mathbf{w}_{0}\in\mathbb{R}^{N}\) from a random linear measurement \(\mathbf{y}\in\mathbb{R}^{M}\) in the form, \[\mathbf{y}=X\mathbf{w}_{0}+\mathbf{\epsilon}. \tag{1}\] In (1), \(X\in\mathbb{R}^{M\times N}\) is a known measurement matrix whose elements are independent and identically distributed (i.i.d.) standard Gaussian variables, and \(\mathbf{\epsilon}\sim\mathcal{N}(\mathbf{0},\Delta I_{M})\) is the measurement noise. We also assume that each element of the unknown signal \(\mathbf{w}_{0}\) is i.i.d., according to a distribution \(q_{0}\). Generalized approximate message passing (GAMP) [1, 2] is a computationally efficient algorithm for solving this problem. A striking feature of GAMP is its applicability to various hypothesis testing [3]. Specifically, GAMP can provide an unbiased estimator \(\hat{\mathbf{r}}^{\hat{G}}\sim\mathcal{N}(\mathbf{w}_{0},\hat{s}^{2}I_{M})\)[2] in a high-dimensional asymptotic setting with \(M,N\to\infty,M/N\to\alpha\in(0,\infty)\), where the variance \(\hat{s}^{2}\) depends on the quality of the measurement \(\mathbf{y}\) and denoising function used in GAMP. This unbiased estimator has been used to test the significance of estimated signals [3, 4, 5, 6]. The statistical power of these tests depends on the variance \(\hat{s}^{2}\), with a lower variance leading to a higher statistical power. However, reducing the variance \(\hat{s}^{2}\) is not a trivial task. Replacing the denoising function used in GAMP with a powerful one based on nonconvex regularization, for example, can worsen the convergence of GAMP [7] or, even if convergence is achieved, the improvement may be insignificant [8]. 
This study aims to find an alternative way to reduce the variance without nonconvex regularization. To reduce the variance, we use the _bootstrap averaging_[9] of computational statistics (commonly known as _ensemble learning_ in machine learning [10, 11]). Specifically, we consider averaging the unbiased estimators of GAMP for multiple bootstrap samples with arbitrary size \(M\mu_{B},\,\mu_{B}\in(0,\infty]\). For the denoising function of GAMP, we consider the one for the elastic net [12]. However, the efficient computation and theoretical analysis of an averaged unbiased estimator remain unresolved. To resolve this problem, we use _AMP with resampling_ (AMPR) [13, 14], which has been proposed for computing bootstrap statistics of the elastic net estimator by running a variant of GAMP once. We will argue that AMPR is actually computing the bootstrap average of the unbiased estimators of GAMP. That is, the averaged unbiased estimator can be computed efficiently, and its variance can be analyzed using the state evolution (SE) of AMPR, which has been developed to analyze the performance of AMPR. We then conduct a thorough numerical analysis of the SE of AMPR to investigate when bootstrap averaging reduces the variance of the unbiased estimator, and what phenomena occur when optimizing the bootstrap sample size and the hyperparameter of the elastic net regularization. The findings of this study are summarized as follows:

* As mentioned above, the averaged unbiased estimator is obtained by AMPR. Furthermore, its variance can be estimated using the output of AMPR without knowing the actual signal \(\mathbf{w}_{0}\). Thus, we can minimize the variance by adjusting the bootstrap sample size and the hyperparameters of the elastic net (see Sections III and IV).
* The variance of the averaged unbiased estimator can be reduced via bootstrapping, especially when the true data generation process is inconsistent with the sparsity assumption of the regularization and the data size is insufficient (see Section V-B).
* When \(\mathbf{w}_{0}\) is less sparse and the data size is small, a phase transition occurs. This phase transition indicates the existence of the region where the value of the regularization parameter is infinitesimally small, and the number of unique data points in each bootstrap sample is less than the dimension of \(\mathbf{w}_{0}\). That is, in this region, the ensemble average of the unbiased estimators of GAMP for the elastic net norm minimization problem (also known as the minimum norm _interpolation_ in machine learning [15, 16, 17]) yields the best averaged unbiased estimator (see Section V-C).

### _Notation_

\(\mathcal{N}(\mu,\sigma^{2})\) denotes a Gaussian distribution with mean \(\mu\) and variance \(\sigma^{2}\) and \(\mathrm{Poisson}(\mu_{B})\) denotes a Poisson distribution with mean \(\mu_{B}\). For a random variable \(X\sim p_{X}\), we denote by \(\mathbb{E}_{X}[\dots]\) an average \(\int(\dots)p_{X}(x)dx\). Given a vector \(\mathbf{x}\in\mathbb{R}^{N}\) and a scalar function \(f:\mathbb{R}\rightarrow\mathbb{R}\), we write \(f(\mathbf{x})\) for the vector obtained by applying \(f\) componentwise. For a vector \(\mathbf{x}=(x_{1},x_{2},\dots,x_{N})^{\top}\in\mathbb{R}^{N}\), we denote by \(\mathbf{x}^{2}=(x_{1}^{2},x_{2}^{2},\dots,x_{N}^{2})^{\top}\) the componentwise square and by \(\langle\mathbf{x}\rangle=N^{-1}\sum_{i=1}^{N}x_{i}\) the empirical average.

## II Background on AMPR

### _AMPR_

Algorithm 1 shows AMPR [13] with an elastic net denoising function.
Function \(g:\mathbb{R}\times(0,\infty)\rightarrow\mathbb{R}\) is the elastic net denoising function and \(g^{\prime}\) is a derivative of \(g\) with respect to the first argument: \[g(h,\hat{Q})=\left\{\begin{aligned} & 0&\text{if }|h|\leq\lambda\gamma,\\ &\frac{h-\mathrm{sgn}(h)\gamma\lambda}{\hat{Q}+\lambda(1-\gamma)}& \text{otherwise,}\end{aligned}\right. \tag{2}\] \[g^{\prime}(h,\hat{Q})=\left\{\begin{aligned} & 0&\text{if }|h|\leq\lambda\gamma,\\ &\frac{1}{\hat{Q}+\lambda(1-\gamma)}&\text{otherwise.} \end{aligned}\right. \tag{3}\] where \(\lambda>0\) represents the regularization strength and \(\gamma\in[0,1]\) is the \(\ell_{1}\)-ratio. At a fixed point, AMPR offers bootstrap statistics of the elastic net estimator as follows: **Proposition 1** (bootstrap statistics based on AMPR [13]): _Let \(\hat{\mathbf{w}}^{*}\) be the elastic net estimator for a bootstrap sample \(D^{*}\) of size \(\mu_{B}M\)._ \[\hat{\mathbf{w}}^{*}=\operatorname*{argmin}_{\mathbf{w}\in\mathbb{R}^{N}}\sum_{\mu=1} ^{M}\frac{c_{\mu}}{2\mu_{B}}(y_{\mu}-\mathbf{x}_{\mu}^{\top}\mathbf{w})^{2}+\sum_{i=1} ^{N}\lambda(\gamma|w_{i}|+\frac{1-\gamma}{2}w_{i}^{2}), \tag{4}\] _where \(c_{\mu}\sim_{\mathrm{i.i.d.}}\mathrm{Poisson}(\mu_{B})\) represents the number of times the data point \((\mathbf{x}_{\mu},y_{\mu})\), \(\mathbf{x}_{\mu}\) being the \(\mu\)-th row of \(X\), appears in the bootstrap sample \(D^{*}\). Then once the AMPR reaches its fixed point at sufficiently large \(T_{\mathrm{it}}\), the bootstrap statistics of \(\hat{\mathbf{w}}^{*}\) can be computed as_ \[\mathbb{E}_{\mathbf{c}}[\psi(\hat{w}_{i}^{*})]=\mathbb{E}_{\eta}[\psi(g(h_{i}+ \sqrt{\hat{v}}\eta,\hat{Q}))],\quad\eta\sim\mathcal{N}(0,1), \tag{5}\] _where \(\psi:\mathbb{R}\rightarrow\mathbb{R}\) is such that the expression in (5) is well-defined and otherwise arbitrary. The variables without iteration indexes \((h_{i},\hat{v},\hat{Q})\) are the output of AMPR at a fixed point._ _Note that the average \(\mathbb{E}_{\eta}[g(h+\sqrt{\hat{v}}\eta;\hat{Q})],\mathbb{E}_{\eta}[g(h+ \sqrt{\hat{v}}\eta;\hat{Q})^{2}]\) and \(\mathbb{E}_{\eta}[g^{\prime}(h+\sqrt{\hat{v}}\eta;\hat{Q})]\) have closed-form expressions and \(\mathbb{E}_{c}[\dots]\) is an average over a one-dimensional random variable. Hence the computational complexity of computing the RHS of (5) is dominated by the matrix-vector product operations in lines 10-11 of Algorithm 1 instead of repeatedly computing \(\hat{\mathbf{w}}^{*}\) for numerous realizations of \(\mathbf{c}\), making AMPR a computationally efficient algorithm for computing the bootstrap statistics._ ### _SE of AMPR_ AMPR displays remarkable behavior. Let \(\hat{\mathbf{r}}_{t}=\mathbf{h}_{t}/\hat{Q}_{t},t=1,2,\dots,T_{\mathrm{it}}\). Then \(\hat{\mathbf{r}}_{t}\) behaves like a white Gaussian noise-corrupted version of the true signal \(\mathbf{w}_{0}\)[13]. Furthermore, the variance can be estimated using SE. **Proposition 2** (SE of AMPR [13]): \(\hat{\mathbf{r}}_{t}\) behaves as a white Gaussian noise-corrupted version of the actual signal \(\mathbf{w}_{0}\): \[\hat{\mathbf{r}}_{t}\sim\mathcal{N}(\mathbf{w}_{0},\hat{\sigma}_{t}^{2}),\quad\hat{ \sigma}_{t}^{2}=\hat{\chi}_{t}/\hat{Q}_{t}^{2}, \tag{6}\] for some positive value \(\hat{\chi}_{t}\), indicating that \(\hat{\mathbf{r}}_{t}\) can be used as an unbiased estimate of \(\hat{\mathbf{w}}_{0}\). The variance is predicted in the asymptotic setting \(M,N\rightarrow\infty,M/N\rightarrow\alpha\in(0,\infty)\) using the scalar SE specified in Algorithm 2. 
There, \(\mathcal{E}_{t},t=1,2,\dots\) corresponds to the mean squared error (MSE) of the AMPR estimate \(\hat{\mathbf{w}}_{t}\): \(\mathcal{E}_{t}=N^{-1}\|\hat{\mathbf{w}}_{t}-\mathbf{w}_{0}\|_{2}^{2}\). To track the performance of AMPR, \(\mathcal{E}_{0}\) should be input as the MSE of the initial estimate \(\hat{\mathbf{w}}_{0}\). Using the SE of AMPR, we can predict the variance of the unbiased estimator in the asymptotic setting \(M,N\rightarrow\infty,M/N\rightarrow\alpha\in(0,\infty)\) for each value of \(\mu_{B},\lambda\), and \(\gamma\). Hence the variance can be minimized by tuning \((\mu_{B},\lambda,\gamma)\) using versatile black-box optimization methods implemented in various optimization libraries [18, 19].

## III Averaged unbiased estimator

Here, we explain the meaning of the SE of AMPR. Subsequently, we argue that \(\hat{\mathbf{r}}_{t}\) of AMPR is the bootstrap average of the unbiased estimators of GAMP. The first point is the meaning of \(\xi\) and \(\eta\) that appear in SE. Propositions 1 and 2 indicate that, once the AMPR reaches its fixed point at sufficiently large \(T_{\rm it}\), for any functions \(\phi,\psi:\mathbb{R}\to\mathbb{R}\) such that the following expression is well defined, the following holds in the asymptotic setting \(M,N\to\infty,M/N\to\alpha\in(0,\infty)\):
\[\frac{1}{N}\sum_{i=1}^{N}\phi(\mathbb{E}_{\rm c}[\psi(\hat{w}_{i}^{*})])\to\mathbb{E}_{w_{0},\xi}\Big{[}\phi\Big{(}\mathbb{E}_{\eta}[\psi(g(\hat{Q}w_{0}+\sqrt{\hat{\chi}}\xi+\sqrt{\hat{v}}\eta,\hat{Q}))]\Big{)}\Big{]},\quad\xi,\eta\sim\mathcal{N}(0,1), \tag{7}\]
where \(\xi\) and \(\eta\) are independent. That is, \(\xi\) describes the Gaussian fluctuation of the effective field \(h_{i}\simeq\hat{Q}w_{0,i}+\sqrt{\hat{\chi}}\xi\) around the true signal, which is common to all bootstrap samples, while \(\eta\) describes the additional fluctuation induced by the bootstrap resampling.

**Proposition 3** (averaged unbiased estimator): _The Gaussian noise of the unbiased estimator \(\hat{\mathbf{r}}_{t}^{\rm G}(\mathbf{c})\) of GAMP for the bootstrap sample specified by \(\mathbf{c}\) decomposes into a part shared by all bootstrap samples and a part independent across them,_
\[\hat{\mathbf{r}}_{t}^{\rm G}(\mathbf{c})=\mathbf{w}_{0}+\frac{\sqrt{\hat{\chi}_{t}}}{\hat{Q}_{t}}\mathbf{\xi}+\frac{\sqrt{\hat{v}_{t}}}{\hat{Q}_{t}}\mathbf{\eta}(\mathbf{c}),\quad\mathbf{\xi},\mathbf{\eta}(\mathbf{c})\sim\mathcal{N}(\mathbf{0},I_{N}), \tag{8}\]
_so that \(\hat{\mathbf{r}}_{t}=\mathbb{E}_{\mathbf{c}}[\hat{\mathbf{r}}_{t}^{\rm G}(\mathbf{c})]\) is the bootstrap average of the unbiased estimators of the GAMP algorithm for computing the elastic net estimator of \(D^{*}\) [1]._

Although SE prediction of the variance of \(\hat{\mathbf{r}}_{t}\) requires information on the unknown signal \(\mathbf{w}_{0}\), we can predict the variance from the data. In other words, estimating the variance of \(\hat{\mathbf{r}}_{t}\) does not require explicit knowledge of \(\mathbf{w}_{0}\).
**Proposition 4** (Variance estimation from data): _In the asymptotic setting \(M,N\rightarrow\infty,M/N\rightarrow\alpha\in(0,\infty)\), the variance \(\hat{\sigma}_{t}^{2}\) of the unbiased estimate \(\hat{\mathbf{r}}_{t}\) can be estimated as_
\[\hat{\sigma}_{t}^{2}=\alpha\langle\mathbf{a}_{t}^{2}\rangle/\hat{Q}_{t}^{2}. \tag{9}\]

In SE of AMPR, \(\hat{\chi}_{t}/\hat{Q}_{t}^{2}\) is determined by the MSE \(\mathcal{E}_{t-1}\) and variance of measurement noise \(\Delta\) as \(\hat{\chi}_{t}/\hat{Q}_{t}^{2}=\alpha^{-1}(\mathcal{E}_{t-1}+\Delta)\). For linear models, \(\mathcal{E}_{t-1}+\Delta\) corresponds to the prediction error for a new sample and can be estimated using the leave-one-out error (LOOE) [20]. LOOE can be estimated from \(\mathbf{a}_{t}\) because it is proportional to the leave-one-out estimate for the data point \(\mu\): \(a_{t,\mu}=\hat{Q}_{t}\alpha^{-1}(y_{\mu}-\mathbf{x}_{\mu}^{\top}\hat{\mathbf{w}}_{t}^{\backslash\mu})\), where \(\hat{\mathbf{w}}_{t}^{\backslash\mu}\) is the AMPR's estimate of \(\mathbf{w}_{0}\) without the sample \(\mu\) (Equation (19) of reference [13]).

Propositions 3 and 4 indicate that the variance \(\hat{\sigma}^{2}\) can be minimized even if the signal \(\mathbf{w}_{0}\) is unknown. However, we will use SE for the theoretical assessment in the next section for convenience.

## V Numerical analysis

In the sequel, by numerically minimizing the variance using SE, we investigate when bootstrapping reduces the variance of the unbiased estimator and the phenomena that occur when optimizing the bootstrap sample size \(\mu_{B}\) and the hyperparameter of the elastic net regularization \((\lambda,\gamma)\). For this, we searched for the optimal parameter \((\mu_{B}^{\star},\lambda^{\star},\gamma^{\star})\) that yielded the minimum variance using the SE of AMPR and the Nelder-Mead algorithm in the _Optim.jl_ library [18]. We obtained the fixed point of the SE by iterating the SE a sufficient number of times. For comparison, the same optimization was performed for the non-bootstrap case. For the signal distribution \(q_{0}\), we consider the Gauss-Bernoulli model: \(q_{0}=\rho\delta_{0}+(1-\rho)\mathcal{N}(0,1)\), with sparsity \(\rho\in(0,1)\). In this section, we denote the outputs of the AMPR or GAMP at fixed points by unindexed variables.

### _Distribution of the unbiased estimator_

We verified the interpretation of AMPR's output \(\hat{\mathbf{r}}\) described in Section III. For this, we compared the output of GAMP \(\hat{\mathbf{r}}^{\mathrm{G}}(\mathbf{c})\) for each realization of \(\mathbf{c}\) as in (4), and the output of AMPR. The parameters used to produce the figure were set as \((N,\alpha,\Delta,\lambda,\gamma,\mu_{B},\rho)=(4096,0.8,0.25,0.1,0.5,0.5,0.1)\). Fig. 1 shows the sample quantiles of \(\hat{\mathbf{r}}-\hat{\mathbf{r}}^{\mathrm{G}}(\mathbf{c})\) versus the normal distribution with variance \(\hat{v}/\hat{Q}^{2}\). The scattered points are approximately aligned with a line with a slope of \(1\) and an intercept of \(0\). Fig. 2 shows the scatter plot of \(\hat{\mathbf{r}}\) versus \(\mathbb{E}_{c}[\hat{\mathbf{r}}^{\mathrm{G}}(\mathbf{c})]\). Again, the scattered points are approximately aligned with a line with a slope of \(1\) and an intercept of \(0\). This is consistent with the decomposition of the Gaussian noise (8). Thus, AMPR computes the bootstrap average of the unbiased estimator of GAMP.
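To make the objects compared above concrete, the following NumPy sketch implements the elastic net denoiser \(g\) and its derivative \(g^{\prime}\) of (2)-(3) and evaluates the bootstrap average on the right-hand side of (5) by simple Monte Carlo over \(\eta\). It is only an illustration added here: the paper relies on the closed-form expressions for the specific averages needed by AMPR, and the function names and parameter values below are arbitrary choices rather than part of AMPR itself.

```python
import numpy as np

def g(h, Q, lam, gamma):
    """Elastic net denoiser of Eq. (2): soft-threshold at lam*gamma, then shrink by Q + lam*(1-gamma)."""
    h = np.asarray(h, dtype=float)
    out = np.zeros_like(h)
    mask = np.abs(h) > lam * gamma
    out[mask] = (h[mask] - np.sign(h[mask]) * lam * gamma) / (Q + lam * (1 - gamma))
    return out

def g_prime(h, Q, lam, gamma):
    """Derivative of the denoiser with respect to its first argument, Eq. (3)."""
    return (np.abs(np.asarray(h, dtype=float)) > lam * gamma) / (Q + lam * (1 - gamma))

def bootstrap_average(psi, h, Q, v_hat, lam, gamma, n_mc=100_000, seed=0):
    """Monte Carlo estimate of E_eta[psi(g(h + sqrt(v_hat)*eta, Q))], the right-hand side of Eq. (5)."""
    rng = np.random.default_rng(seed)
    eta = rng.standard_normal((n_mc, 1))
    fields = np.asarray(h, dtype=float)[None, :] + np.sqrt(v_hat) * eta
    return psi(g(fields, Q, lam, gamma)).mean(axis=0)

# Averaged denoised values for a few effective fields h_i (illustrative numbers only).
h = np.array([-0.4, 0.05, 0.8])
print(bootstrap_average(lambda w: w, h, Q=1.0, v_hat=0.2, lam=0.1, gamma=0.5))
```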
### _Variance reduction_

We quantitatively compare the variance \(\hat{\sigma}^{2}\) of the averaged unbiased estimator with the variance \(\hat{s}^{2}\) obtained without bootstrapping. Fig. 3 shows the ratio of the optimal \(\hat{\sigma}^{2}\) and the optimal \(\hat{s}^{2}\). Panels (a) and (b) show the results when the \(\ell_{1}\)-ratio \(\gamma\) is fixed and panel (c) shows when the \(\ell_{1}\)-ratio is optimized. As expected from Proposition 3, the variance is reduced compared to the case without bootstrapping. The magnitude of the reduction is larger when the \(\ell_{1}\)-ratio is fixed and close to \(1\) (LASSO [21] case). In particular, the largest improvement is obtained when \(\rho\) is large (less sparse) and the measurement ratio \(\alpha\) is small. The improvement is minor when the \(\ell_{1}\)-ratio is optimized. This suggests that using the bootstrap average is effective when the actual data generation process is inconsistent with the sparsity assumption of the regularization and the data size is insufficient. However, when the data size is too small, meaningful improvement cannot be obtained.

Fig. 3: Ratio of optimal variances \(\hat{\sigma}^{2}/\hat{s}^{2}\) is plotted against the sparsity of the true signal \(\rho\) and the measurement ratio \(\alpha\) as heat maps. In panels (a) and (b), the \(\ell_{1}\)-ratio of the elastic net regularization is fixed. In panel (c), the \(\ell_{1}\)-ratio is also optimized. The measurement noise is set as \(\Delta=0.15\).

### _Phase transition to an ensemble of interpolators_

Fig. 4 shows the optimal regularization strength \(\lambda^{\star}\) for \(\hat{\sigma}^{2}\). Panels (a) and (b) show the results when the \(\ell_{1}\)-ratio is fixed, and panel (c) shows when the \(\ell_{1}\)-ratio is optimized. In all cases, it is clear that a phase transition has occurred in which \(\lambda^{*}\) drops to an infinitesimally small value (although for numerical reasons, \(\lambda\) is constrained to exceed \(10^{-7}\)) as the measurement ratio \(\alpha\) decreases or \(\rho\) increases. Moreover, Fig. 5 shows \(\alpha(1-e^{-\mu_{B}^{*}})\), the typical number of unique data points in each bootstrap sample scaled by \(N\), i.e., \(\lim_{M,N\to\infty}M(1-e^{-\mu_{B}^{*}})/N\). From Fig. 5, it is clear that the typical number of unique data points is always smaller than \(1\) in the region where \(\lambda^{*}\simeq+0\). This holds even if the measurement ratio \(\alpha>1\). Thus, when \(\lambda^{*}\simeq+0\), the elastic net estimator in (4) becomes the minimum elastic net norm estimator as
\[\hat{\mathbf{w}}^{*}=\operatorname*{argmin}_{\mathbf{w}\in\mathbb{R}^{N}}\sum_{i=1}^{N}\Big{(}\gamma|w_{i}|+\frac{1-\gamma}{2}w_{i}^{2}\Big{)},\ \text{subject to}\ \mathbb{1}[c_{\mu}>0](y_{\mu}-\mathbf{x}_{\mu}^{\top}\mathbf{w})=0,\,\mu=1,2,\ldots,M, \tag{10}\]
which is commonly known as the minimum elastic net norm _interpolator_ in machine learning [15, 16, 17]. These observations suggest that when elastic net regularization cannot determine an appropriate sparse structure of \(\mathbf{w}_{0}\), it is better to use an over-parameterized setting in which the number of unique data points in each bootstrap sample is smaller than the dimension of \(\mathbf{w}_{0}\) and use an ensemble of interpolators.

Fig. 4: The optimal regularization parameter \(\log_{10}\lambda\) for the bootstrap averaged unbiased estimator is plotted against the sparsity of the true signal \(\rho\) and the measurement ratio \(\alpha=M/N\) as a heat map. The measurement noise is set as \(\Delta=0.15\).

Fig. 5: The region where the typical number of unique samples in each bootstrap sample \(\alpha(1-e^{-\mu_{B}^{*}})\) scaled by \(N\) is visualized. In the purple region, \(\alpha(1-e^{-\mu_{B}^{*}})<1\), and in the yellow region, \(\alpha(1-e^{-\mu_{B}^{*}})\geq 1\). The red dashed line shows \(\alpha=1\). The measurement noise is set as \(\Delta=0.15\).

## VI Summary and discussion

In this study, we investigated the behavior of the bootstrap-averaged unbiased estimator of GAMP using AMPR and its SE.
We found that the bootstrap averaging procedure can effectively reduce the variance of the unbiased estimator when the actual data generation process is inconsistent with the sparsity assumption of the regularization and the data size is insufficient. We also found a phase transition where the regularization strength drops to infinitesimally small values by decreasing the measurement ratio \(\alpha\) or increasing \(\rho\). Although increasing the variance of weak learners is a key to the success of ensemble learning [22, 23, 24], the phase transition to an ensemble of interpolators may be unexpected. Investigating whether similar phase transitions occur in other more sophisticated machine-learning models, such as neural networks, would be an interesting future direction.

On the technical side, the key to this study was the precise performance characterization of the averaged estimator by AMPR, which was developed by combining GAMP and the replica method of statistical physics [25, 26]. Such a combination of approximate inference algorithms and the replica method has been used to develop approximate computation algorithms [13, 27, 28, 29, 30] and has not been applied to precise performance analysis of ensemble methods. It would be an interesting direction to try a similar performance analysis for other bootstrap methods or ensemble learning.

## Acknowledgement

This study was supported by JSPS KAKENHI Grant Number 21K21310.
2308.04474
HMC real numbers in Countable Mathematical Analysis
We develop a theory of real numbers as rational Cauchy sequences, in which any two of them, $(a_n)$ and $(b_n)$, are equal iff $\lim\,(a_n-b_n)=0$. We need such reals in the Countable Mathematical Analysis ([4]) which allows to use only hereditarily at most countable (HMC) sets.
Martin Klazar
2023-08-08T13:17:26Z
http://arxiv.org/abs/2308.04474v1
# HMC real numbers in Countable Mathematical Analysis ###### Abstract We develop a theory of real numbers as rational Cauchy sequences, in which any two of them, \((a_{n})\) and \((b_{n})\), are equal iff \(\lim\left(a_{n}-b_{n}\right)=0\). We need such reals in the Countable Mathematical Analysis ([4]) which allows to use only hereditarily at most countable (HMC) sets. ## 1 Introduction A set is _at most countable_ if it is finite or countable, where the latter means that the set is in bijection with \(\omega=\{0,1,\dots\}\). A set is _uncountable_ if it is not at most countable. A set \(x\) is _hereditarily at most countable_, abbreviated HMC, if for every \(n\in\omega\) and every chain of sets \[x_{n}\in x_{n-1}\in\dots\in x_{0}=x\;,\] the set \(x_{n}\) is at most countable. (By the axiom of foundation every chain of sets \(x_{0}\ni x_{1}\ni\dots\) is finite.) Refer to [3] for set-theoretical terminology and notions. Does one really need uncountable sets in Mathematical Analysis and in Number Theory? For example, does one need them to prove by the identities \[\int_{0}^{+\infty}x^{n}\mathrm{e}^{-x}\,\mathrm{dx}=n!,\ n\in\omega\;,\] that the Euler number \(\mathrm{e}=2.71828\dots\) is transcendental? One does not, in [4] we carry out the proof, due to D. Hilbert, just with HMC sets. Thus the transcendence of \(\mathrm{e}\) belongs to _Countable Mathematical Analysis_, abbreviated CMA, respectively to _Countable Number Theory_, abbreviated CNT, where one can use only HMC sets. What did we do with the fact that the integrands \[\{(x,\,x^{n}\mathrm{e}^{-x})\mid x\in[0,\,+\infty)\}\] are uncountable sets? In [4] we work with their HMC restrictions to fractions \[\{(x,\,x^{n}\mathrm{e}^{-x})\mid x\in\mathbb{Q}\wedge x\geq 0\}\;.\] And what did we do with the _min-max principle_ by which every continuous function \(f\colon[a,b]\to\mathbb{R}\), where \(a<b\) are real numbers, attains on the interval its minimum and maximum? For rational restrictions to \[[a,\,b]_{\mathbb{Q}}:=\{x\in\mathbb{Q}\mid a\leq x\leq b\}\] the principle fails as stated, there are unbounded continuous functions from \([a,b]_{\mathbb{Q}}\) to \(\mathbb{R}\). In [4] we use a HMC variant of the principle where compactness is replaced with uniform continuity. It is based on an extension theorem. **Theorem 1.1** (extensions): _Let \(x\) be a real number, \(M\subset\mathbb{Q}\) and \(f\colon M\to\mathbb{R}\) be a uniformly continuous function. Then for every sequence \((a_{n})\subset M\) with \(\lim a_{n}=x\), the sequence_ \[(f(a_{n}))=(f(a_{1}),\,f(a_{2}),\,\dots)\] _converges to a unique real number independent of \((a_{n})\) and denoted by \(f(x)\)._ If there exists a sequence \((a_{n})\subset M\) with \(\lim a_{n}=x\), we say that \(x\)_is close to_\(M\). The theorem says that every uniformly continuous real function defined on a set of fractions \(M\) has a unique limit extension to any real \(x\) close to \(M\). Our HMC _min-max principle_ is as follows. **Theorem 1.2** (HMC min-max principle): _For every uniformly continuous function \(f\colon M\to\mathbb{R}\) defined on a nonempty bounded set \(M\subset\mathbb{Q}\) there exist real numbers \(y\) and \(y^{\prime}\) that are close to \(M\) and are such that_ \[\forall\,x\in M\left(f(y)\leq f(x)\leq f(y^{\prime})\right)\,.\] Thus the extended \(f\) attains "on \(M\)" at \(y\) a minimum value and at \(y^{\prime}\) a maximum value. 
In the displayed formula one can replace \(M\) with any set \(M^{\prime}\) arising from \(M\) by adding to it at most countably many real numbers close to \(M\). The result in [4] on which everything hinges is not the min-max principle but a HMC version of the _vanishing derivative principle_. One of the standard formulations of it says that if a function \(f\colon(a,b)\to\mathbb{R}\) has derivative \(f^{\prime}(c)\neq 0\), where \(a<c<b\) are real numbers, then \(f\) does not have at \(c\) local extreme. See [4] for our HMC version of this principle. In CMA real numbers play role of ideal elements which are invoked when they are needed. In Theorem 1.2, \(f\) need not attain extremal value at any element of \(M\) but there always exist ideal elements \(y\) and \(y^{\prime}\), which are Cauchy sequences in \(M\), which do the job. Standard Cantorean real numbers are equivalence blocks in \(C/\!\sim\) where \(C\) is the set of rational Cauchy sequences and \(\sim\) is the equivalence relation given by \((a_{n})\sim(b_{n})\) iff \(\lim\left(a_{n}-b_{n}\right)=0\). We cannot use such real numbers in CMA because each of them is uncountable. This cannot be fixed by the axiom of choice (AC) by selecting from each equivalence block one representing rational Cauchy sequence. Each resulting real number is a HMC set but AC was applied to an uncountable set of uncountable sets. We need real numbers that are HMC from the start. Such real numbers are well known, they are the Dedekindean real numbers introduced in [2]. Historically this was the first formalization of real numbers, by means of (Dedekind) _cuts_ on the set of fractions \(\mathbb{Q}\). Recall that \(X\subset\mathbb{Q}\) is a _cut_ if (i) \(X,\mathbb{Q}\setminus X\neq\emptyset\), (ii) always \(a\in\mathbb{Q}\), \(b\in X\), \(a<b\Rightarrow a\in X\) and (iii) \(X\) does not have maximum element. But cuts do not capture the required feature of real numbers as arbitrarily precise rational approximations, see Theorem 1.1. Therefore in the rest of our article we develop HMC Cantorean real numbers. Also, the arithmetic of cuts is a bit cumbersome. We will proceed in a quite detailed manner because Cantor's (and Heine's and Meray's) construction of real numbers as equivalence blocks of rational Cauchy sequences is well known, but its modification that we need in CMA and CNT is, as far as we know, new. The belief in indispensability of uncountable sets in Mathematical Analysis is universal. It is supported by the fact, often taught in courses of analysis, that the set \(\mathbb{R}\) of real numbers is uncountable. Typical function in real analysis like \(f\colon I\to\mathbb{R}\), where \(I\subset\mathbb{R}\) is a nontrivial real interval, is an uncountable set. We regard uncountable sets as problematic because almost all of their elements cannot be described by finite means. But we also know that for many mathematicians they are their second nature. Individual real numbers, as originally conceived by R. Dedekind in [2], are HMC sets. Also, it is not written in stone that in analytical arguments one has to use everything of the mentioned sets \(\mathbb{R}\) and \(f\), maybe some tiny countable parts would suffice for the considered problem. Exactly this we did in [4] for the transcendence of e. We think that this approach can be extended to many other results in Mathematical Analysis and Number Theory, and regard the interest and importance of this undertaking as self-evident. 
In Section 2 we briefly review constructions of natural numbers, of the ring of integers and of the ordered field of fractions. Section 3 is devoted to the construction of HMC Cantorean real numbers and to the proofs that they form a weak ordered field (Theorem 3.4) and have the weak least upper bound property (Theorem 3.5). The qualification "weak" indicates that in some parts of the result the equality relation \(=\) is relaxed to the equivalence relation \(\sim\). In the last Section 4 we give concluding comments. ## 2 Natural numbers, integers, fractions We begin with the _natural numbers_ \[\omega=\{0,\,1,\,2,\,\dots\}\] where \(0=\emptyset\), \(1=\{0\}\), \(2=\{0,1\}\) and so on. More precisely, by the axiom of infinity there exists an inductive set, and we define \(\omega\) as the intersection of all inductive sets. Then one introduces standard addition \(+\) and multiplication on \(\omega\) and shows that both (binary) operations are commutative and associative, that \(\cdot\) is distributive to \(+\) and that \(0\), resp. \(1\), is neutral to \(+\), resp. \(\cdot\). But additive inverses are missing. We set \[\mathbb{Z}:=\omega\cup((\omega\setminus\{0\})\times\{0\})\] and write, as usual, \(-n\) instead of \((n,0)\in\mathbb{Z}\). We set \(-0:=0\). We call the elements of \(\mathbb{Z}\)_integers_. One easily extends both operations \(+\) and \(\cdot\) from \(\omega\) to \(\mathbb{Z}\). Their previous properties are preserved and since \[\forall\,n\in\omega\left(n+(-n)=0\right)\,,\] we get additive inverses. So \((\mathbb{Z},0,1,+,\cdot)\) is a commutative ring with identity. Multiplicative inverses are still missing. We set \(Z:=\mathbb{Z}\times(\mathbb{Z}\setminus\{0\})\) and write, as usual, \(\frac{m}{n}\) or \(m/n\) for \((m,n)\in Z\). The _identity relation_\(\sim\) on \(Z\) is \[k/l\sim m/n\iff kn=lm\;.\] It is an equivalence relation on \(Z\). Thus we set \[\mathbb{Q}:=Z/\!\sim\] and call the elements of \(\mathbb{Q}\), which are equivalence blocks \([m/n]_{\sim}\), _rational numbers_ or _fractions_. Every \(\alpha\in\mathbb{Q}\) is a countable HMC set and the question if \(k/l\sim m/n\) is algorithmicly decidable. We will abuse notation and write often, as is common, simply \(\frac{m}{n}\) or \(m/n\) instead of \([m/n]_{\sim}\). We say that a fraction \(\alpha\in\mathbb{Q}\) is _integral_ if \(\alpha=[m/1]_{\sim}\). The map \[\mathbb{Z}\ni m\mapsto[m/1]_{\sim}\in\mathbb{Q}\] is a ring isomorphism. One easily extends the operations \(+\) and \(\cdot\) on \(\mathbb{Z}\) from integral fractions to \(\mathbb{Q}\). All previous properties of \(+\) and \(\cdot\) are preserved and since \[[m/n]_{\sim}\cdot[n/m]_{\sim}=[mn/nm]_{\sim}=[1/1]_{\sim}=1_{\mathbb{Q}}\;,\] we get multiplicative inverses. Thus \((\mathbb{Q},0_{\mathbb{Q}},1_{\mathbb{Q}},+,\cdot)\) is a field. It is even an ordered field: if \(l,n>0\) (here \(>\) is the standard linear order on \(\mathbb{Z}\), obtained from the linear order \((\omega,\in)\)) then \[[k/l]_{\sim}<[m/n]_{\sim}\iff kn<lm\;.\] One shows that \((\mathbb{Q},<)\) is a linear ordering and that \[(\mathbb{Q},\,0_{\mathbb{Q}},\,1_{\mathbb{Q}},\,+,\,\cdot,\,<)\] is an ordered field. One thing is still missing. The ordered field \(\mathbb{Q}\) does not have the _least upper bound property._ For example, the nonempty set \[\{\alpha\in\mathbb{Q}\mid\alpha^{2}<2\}\subset\mathbb{Q}\] is in \(<\) bounded from above, but has no least upper bound. HMC Cantorean real numbers As is well known, in _real numbers_ the last deficiency is removed. 
We turn to them now. In HMC reals there will be some twists. Let \(X\) be a set, \(+:X\times X\to X\) be a (binary) operation on \(X\), \(A\subset X\times X\) be a (binary) relation on \(X\) and \(\sim\) be an equivalence relation on \(X\). We say that \(+\) is _congruent to \(\sim\)_ if for every \(a\), \(a^{\prime}\), \(b\), \(b^{\prime}\) in \(X\) it holds that \[(a\sim a^{\prime}\wedge b\sim b^{\prime})\Rightarrow a+b\sim a^{\prime}+b^{ \prime}\;.\] Similarly, \(A\) is _congruent to \(\sim\)_ if for every \(a\), \(a^{\prime}\), \(b\), \(b^{\prime}\) in \(X\) it holds that \[(a\sim a^{\prime}\wedge b\sim b^{\prime})\Rightarrow(aAb\iff a^{\prime}Ab^{ \prime})\;.\] **Definition 3.1** (ordered fields congruent to \(\sim\)): _Let \(X\neq\emptyset\) be a set and \(\sim\) be an equivalence relation on \(X\). An ordered field (on \(X\)) congruent to \(\sim\) is a six-tuple_ \[X_{\mathrm{OF}\sim}:=(X,\,0_{X},\,1_{X},\,+,\,\cdot,\,<)\] _such that \((X,\,0_{X},\,1_{X},\,+,\,\cdot)\) is a commutative ring with identity, the operations \(+\) and \(\cdot\) on \(X\) are congruent to \(\sim\), \(<\) is an irreflexive and transitive relation on \(X\) that is congruent to \(\sim\), the two ordering axioms hold, namely for every \(a,b,c\in X\) one has that_ \[a<b\Rightarrow a+c<b+c\,\text{ and }\,a,\,b>0_{X}\Rightarrow a\cdot b>0_{X}\;,\] _and \(X_{\mathrm{OF}\sim}\) has two more properties. First, weak multiplicative inverses exist,_ \[\forall\,a\in X\left(a\not\sim 0_{X}\Rightarrow\exists\,b\in X\left(a\cdot b \sim 1_{X}\right)\right)\,.\] _Second, \(<\) is weakly trichotomic,_ \[\forall\,a,\,b\in X\left(a<b\lor b<a\lor a\sim b\right)\,.\] We remind that the requirement on \((X,0_{X},1_{X},+,\cdot)\) means that \(+\) and \(\cdot\) are associative and commutative, \(\cdot\) is distributive to \(+\), the element \(0_{X}\) (resp. \(1_{X}\)) is neutral to \(+\) (resp. \(\cdot\)) and every \(a\in X\) has the additive inverse \(-a\in X\). For example, if \(=\) is the standard set-theoretic equality, which the axiom of extensionality characterizes by the equivalence \[x=y\text{ iff }\forall\,z\left(z\in x\iff z\in y\right)\,,\] then \[\mathbb{Q}_{\mathrm{OF}}=\mathbb{Q}_{\mathrm{OF}=}:=(\mathbb{Q},\,0/1,\,1/1, \,+,\,\cdot,\,<)\] is an ordered field congruent to \(=\). This is a cumbersome way of saying that \(\mathbb{Q}_{\mathrm{OF}}\) is an ordered field (we defined it in the previous section). Now we define an ordered field congruent to an equivalence relation weaker than \(=\). Symbols \(k\), \(l\), \(m\), \(n\), \(n_{0}\), \(n_{1}\),..., \(n_{1}^{\prime}\), \(n_{2}^{\prime}\),... refer to elements of \[\mathbb{N}:=\omega\setminus\{0\}\;.\] A _sequence \((a_{n})\) in (a set) \(X\)_ is a function \(a\colon\mathbb{N}\to X\) from \(\mathbb{N}\) to \(X\), i.e., a set of ordered pairs \((a_{n})\subset\mathbb{N}\times X\) such that for every \(m\in\mathbb{N}\) there is exactly one \(y\in X\) with \((m,y)\in(a_{n})\). One writes \(a_{m}\) for this unique \(y\). We denote the set of all sequences in \(X\) by \(X^{\mathbb{N}}\). We say that a sequence \((a_{n})\) in \(\mathbb{Q}\) is _Cauchy_ if \[\forall\,k\,\exists\,n_{0}\,\big{(}m,\,n\geq n_{0}\Rightarrow|a_{m}-a_{n}| \leq 1/k\big{)}\;.\] We denote the set of all such _rational Cauchy sequences_ by \(C\). 
The _closeness relation_\(\sim\) on \(C\) is \[(a_{n})\sim(b_{n})\;\stackrel{{\rm def}}{{\Longleftrightarrow} }\;\forall\,k\,\exists\,n_{0}\,\big{(}n\geq n_{0}\Rightarrow|a_{n}-b_{n}| \leq 1/k\big{)}\;.\] Since \((a_{n})\) and \((b_{n})\) are Cauchy, we can equivalently replace the last implication with \[m,\,n\geq n_{0}\Rightarrow|a_{m}-b_{n}|\leq 1/k\;.\] It is easy to see that \(\sim\) is an equivalence relation on \(C\). In the Introduction we mentioned that the _standard Cantorean real numbers_\(\mathbb{R}\) are \[\mathbb{R}:=C/\sim\;.\] They are not HMC sets as every \(\alpha\in\mathbb{R}\) is uncountable. We modify them as follows. **Definition 3.2** (HMC reals): _We define_ HMC _real numbers simply by setting_ \[\mathbb{R}:=C\;.\] _So (our) real numbers are exactly rational Cauchy sequences._ Clearly, every HMC real number is a HMC set. Their set \(C\) is uncountable. We define arithmetic on \(C\) by means of the ordered field \(\mathbb{Q}_{\rm OF}\). Suppose that \((a_{n})\) and \((b_{n})\) lie in \(C\). We set \(0_{C}:=(0/1,0/1,\dots)\), \(1_{C}:=(1/1,1/1,\dots)\), \((a_{n})+(b_{n}):=(a_{n}+b_{n})\), \((a_{n})\cdot(b_{n}):=(a_{n}\cdot b_{n})=(a_{n}b_{n})\) and \[(a_{n})<(b_{n})\;\stackrel{{\rm def}}{{\Longleftrightarrow}}\; \exists\,k\,\exists\,n_{0}\,\big{(}n\geq n_{0}\Rightarrow a_{n}<b_{n}-1/k \big{)}\;.\] Again, since \((a_{n})\) and \((b_{n})\) are Cauchy, we can equivalently replace the last implication with \[m,\,n\geq n_{0}\Rightarrow a_{m}<b_{n}-1/k\;.\] The notation \((a_{n})\lesssim(b_{n})\) means that \((a_{n})<(b_{n})\) or \((a_{n})\sim(b_{n})\), and similarly for \(\gtrsim\). We show that \((C,0_{C},1_{C},+,\cdot,<)\) is an ordered field congruent to \(\sim\). Its ring structure is immediate from the following more general construction. **Proposition 3.3** (\(\mathbb{N}\)-th powers of rings): \((R,0_{R},1_{R},+,\cdot)\) _is a commutative ring with identity and \(P:=R^{\mathbb{N}}\). Then_ \[P_{\rm R}:=(P,\,0_{P},\,1_{P},\,+,\,\cdot)\;,\] _where \(0_{P}:=(0_{R},0_{R},\dots)\), \(1_{P}:=(1_{R},1_{R},\dots)\) and the operations \(+\) and \(\cdot\) on \(P\) are defined component-wisely from those on \(R\), is a commutative ring with identity._ Proof.: Satisfaction of the axioms of a commutative ring with identity in \(P_{\mathrm{R}}\) is immediate because they hold in every component. CMA views HMC reals as follows. **Theorem 3.4** (HMC reals form a weak ordered field): _The structure_ \[\mathbb{R}:=(C,\,0_{C},\,1_{C},\,+,\,\cdot,\,<)\] _defined above is an ordered field congruent to the closeness relation \(\sim\), in the sense of Definition 3.1._ Proof.: It is clear that \(0_{C}\) and \(1_{C}\) are in \(C\). Let \((a_{n})\) and \((b_{n})\) lie in \(C\). Clearly, \((a_{n})+(b_{n})=(a_{n}+b_{n})\in C\). We treat \((a_{n})\cdot(b_{n})=(a_{n}b_{n})\) in more detail. For a given \(k\) there is an \(n_{0}\) such that \(m,n\geq n_{0}\Rightarrow|a_{m}-a_{n}|,|b_{m}-b_{n}|\leq 1/k\). Hence there is an \(l\) (independent of \(k\)) such that \(\forall\,n\,\bigl{(}|a_{n}|,|b_{n}|\leq l\bigr{)}\). Thus for every \(m,n\geq n_{0}\), \[|a_{m}b_{m}-a_{n}b_{n}|\leq|a_{m}|\cdot|b_{m}-b_{n}|+|a_{m}-a_{n}|\cdot|b_{n}| \leq 2l/k\] and we see that \((a_{n})\cdot(b_{n})\in C\). One can similarly prove that \(+\) and \(\cdot\) are congruent to \(\sim\). We have shown that \(C\subset\mathbb{Q}^{\mathbb{N}}\) contains \(0_{C}\) and \(1_{C}\) and is closed to the operations \(+\) and \(\cdot\). By Proposition 3.3, \((C,0_{C},1_{C},+,\cdot)\) is a commutative ring with identity. 
We show that it has weak multiplicative inverses. If \((a_{n})\in C\) with \((a_{n})\not\sim 0_{C}\) then \(|a_{n}|\geq 1/k\) for every \(n\geq n_{0}\) and some \(k\). We define \((b_{n})\in\mathbb{Q}^{\mathbb{N}}\) by \[b_{n}:=\left\{\begin{array}{lcl}0&\ldots&a_{n}=0\,\,\,\mbox{and}\\ 1/a_{n}&\ldots&a_{n}\neq 0\;.\end{array}\right.\] Since for a given \(l\) there is an \(n_{1}\) such that \(m,n\geq n_{1}\Rightarrow|a_{m}-a_{n}|\leq 1/l\) and we can also assume that \(n\geq n_{1}\Rightarrow|a_{n}|\geq 1/k\), for every \(m,n\geq n_{1}\) it holds that \[|b_{m}-b_{n}|=\left|\frac{1}{a_{m}}-\frac{1}{a_{n}}\right|=\frac{|a_{n}-a_{m} |}{|a_{m}|\cdot|a_{n}|}\leq\frac{k^{2}}{l}\] and \((b_{n})\in C\). Since \(a_{n}b_{n}=1/1\) for every \(n\geq n_{0}\), we see that \((a_{n})\cdot(b_{n})\sim 1_{C}\). We verify the properties of \(\mathbb{R}\) concerning \(<\). Clearly, \(<\) is irreflexive. If \((a_{n})<(b_{n})\) and \((b_{n})<(c_{n})\) then there exist \(k\) and \(n_{0}\) such that for every \(n\geq n_{0}\), \[a_{n}<b_{n}-1/k\,\,\,\mbox{and}\,\,\,b_{n}<c_{n}-1/k\;.\] Hence \(a_{n}<c_{n}-2/k<c_{n}-1/k\) for every \(n\geq n_{0}\) and \(<\) is transitive. We show that \(<\) is congruent to \(\sim\). Suppose that \((a_{n})\), \((a^{\prime}_{n})\), \((b_{n})\) and \((b^{\prime}_{n})\) lie in \(C\), \((a_{n})<(b_{n})\), \((a_{n})\sim(a^{\prime}_{n})\) and \((b_{n})\sim(b^{\prime}_{n})\). Then there exist \(k\) and \(n_{0}\) such that \[n\geq n_{0}\Rightarrow a_{n}<b_{n}-1/k\;.\] Since \((a_{n})\sim(a^{\prime}_{n})\) and \((b_{n})\sim(b^{\prime}_{n})\), there exist an \(n_{1}\geq n_{0}\) such that \[n\geq n_{1}\Rightarrow a^{\prime}_{n}<b^{\prime}_{n}-1/2k\;.\] Hence \((a^{\prime}_{n})<(b^{\prime}_{n})\). Suppose that \((a_{n}),(b_{n})\in C\) with \((a_{n})\not\sim(b_{n})\). Then there is a \(k\) such that \(|a_{n}-b_{n}|>1/k\) for infinitely many \(n\). Thus \(a_{n}<b_{n}-1/k\) for infinitely many \(n\) or \(b_{n}<a_{n}-1/k\) for infinitely many \(n\). Since \((a_{n}),(b_{n})\in C\), in the former case there is an \(n_{0}\) such that \(n\geq n_{0}\Rightarrow a_{n}<b_{n}-1/2k\) and \((a_{n})<(b_{n})\). In the latter case the same argument gives that \((b_{n})<(a_{n})\). We have shown that \(<\) is weakly trichotomic. Let \((a_{n})\), \((b_{n})\) and \((c_{n})\) lie in \(C\). If \((a_{n})<(b_{n})\) then there exist \(k\) and \(n_{0}\) such that \[n\geq n_{0}\Rightarrow a_{n}<b_{n}-1/k\;.\] Thus \(a_{n}+c_{n}<b_{n}+c_{n}-1/k\) for every \(n\geq n_{0}\) and \((a_{n})+(c_{n})<(b_{n})+(c_{n})\). Similarly, if \((a_{n}),(b_{n})>0_{C}\) then there exist \(k\) and \(n_{0}\) such that \[n\geq n_{0}\Rightarrow 1/k<a_{n},\,b_{n}\;.\] Thus \(1/k^{2}<a_{n}b_{n}\) for every \(n\geq n_{0}\) and \((a_{n})\cdot(b_{n})>0_{C}\). This proves the two ordering axioms for \(\mathbb{R}\) and concludes the proof of the theorem. \(\Box\) Before we turn to the proof of the least upper bound property for \(\mathbb{R}\) we have to clarify how \(\mathbb{Q}\) is contained in \(\mathbb{R}\). The situation is actually similar to the containment of \(\mathbb{Z}\) in \(\mathbb{Q}\). We call the constant sequences \[E(\tfrac{m}{n}):=(\tfrac{m}{n},\,\tfrac{m}{n},\,\dots)\in C,\;m/n\in\mathbb{Q}\;,\] _rational_ HMC _reals_ and denote their (countable) set by \(\mathbb{Q}_{C}\). The structure of the ordered field \(\mathbb{R}\) congruent to \(\sim\) restricts on \(\mathbb{Q}_{C}\) to the structure of an ordinary ordered field (i.e., on \(\mathbb{Q}_{C}\) the relation \(\sim\) upgrades to \(=\)). 
The map \(E\colon\mathbb{Q}\to\mathbb{Q}_{C}\) is then an isomorphism of ordered fields. We show that, unlike \(\mathbb{Q}_{\mathrm{OF}}\), HMC reals have the least upper bound property. In the weak sense, though, with \(=\) relaxed to \(\sim\). In CMA one can use only at most countable subsets of \(C\), but the result holds for any subset and we prove it as such. **Theorem 3.5** (\(\mathbb{R}\) has weak LUBP): HMC _real numbers have the weak least upper bound property. Namely, for every nonempty set \(B\subset C\) if \((b_{n})\lesssim(a_{n})\) for every \((b_{n})\in B\) and some \((a_{n})\in C\), then \(B\) has a least upper bound. It is a (\(\sim\)-unique) sequence \((a^{\prime}_{n})\in C\) such that_ * \((b_{n})\lesssim(a^{\prime}_{n})\) _for every_ \((b_{n})\in B\) _and_ * _for every_ \((c_{n})\in C\) _with_ \((c_{n})<(a^{\prime}_{n})\) _there is a_ \((b_{n})\in B\) _with_ \((c_{n})<(b_{n})\)_._ _Proof._ Suppose that \(B\subset C\) is a nonempty set and \((a_{n})\in C\) is an upper bound of \(B\). Clearly, we may take \((a_{n})\) to be \(E(m/1)\) for some \(m\in\mathbb{N}\). In other words, \(\mathbb{R}\) is Archimedean. In the following procedure with four commands we inductively define two rational sequences \((a^{\prime}_{n})\) and \((b_{n})\) in \(\mathbb{Q}\). 1. (initialization) \(a^{\prime}_{1}:=m/1\) and \(b_{1}:=1/1\). 2. (branching) Suppose that the fractions \(a^{\prime}_{1}\),..., \(a^{\prime}_{n}\) and \(b_{1}\),..., \(b_{n}\) have been defined. Is \(E(a^{\prime}_{n}-b_{n})\) still an upper bound of \(B\)? 3. If YES, set \(a^{\prime}_{n+1}:=a^{\prime}_{n}-b_{n}\), \(b_{n+1}:=b_{n}\) and go back to command 2. 4. If NO, set \(a^{\prime}_{n+1}:=a^{\prime}_{n}\), \(b_{n+1}:=\frac{1}{1+1/b_{n}}\) and go back to command 2. The sequence \((a^{\prime}_{n})\) is non-increasing and for every \(n\), \(E(a^{\prime}_{n})\) is an upper bound of \(B\). We show that \((a^{\prime}_{n})\in C\) and is the desired least upper bound of \(B\). Clearly, command 4 is performed infinitely many times. Thus the sequence \[(b_{n})=\left(\tfrac{1}{1},\,\tfrac{1}{1},\,\dots,\,\tfrac{1}{1},\,\tfrac{1}{ 2},\,\tfrac{1}{2},\,\dots,\,\tfrac{1}{2},\,\tfrac{1}{3},\,\tfrac{1}{3},\, \dots,\,\tfrac{1}{3},\,\tfrac{1}{4},\,\dots,\,\dots,\,\dots\right)\] and goes to \(\tfrac{0}{1}\). We denote by \(1\leq m_{1}<m_{2}<\dots\) those steps \(n=m_{i}\) in the procedure when command 4 is performed, and select elements \[b^{\prime}_{i}=(d^{i}_{n})\in B\] such that in step \(n=m_{i}\) one has that \(E(a^{\prime}_{n}-b_{n})<b^{\prime}_{i}\). The last inequality means that there exist \(l_{i}\) and \(n^{\prime}_{i}\) in \(\mathbb{N}\) such that \(n\geq n^{\prime}_{i}\Rightarrow a_{m_{i}}-1/i<d^{i}_{n}-1/l_{i}\). Then for every \(i\in\mathbb{N}\), \[n\geq m_{i}\Rightarrow E(a^{\prime}_{m_{i}})\geq E(a^{\prime}_{n})\gtrsim b^ {\prime}_{i}>E(a^{\prime}_{m_{i}}-1/i)\;.\] Thus for every \(i\) for large \(n\) one has that \(a^{\prime}_{m_{i}}\geq a^{\prime}_{n}>a^{\prime}_{m_{i}}-2/i\) and \((a^{\prime}_{n})\in C\). We show that \((a^{\prime}_{n})\) is an upper bound of \(B\). Suppose for the contradiction that \((a^{\prime}_{n})<b\) for some \(b=(d_{n})\in B\). Then there exist \(k\) and \(n_{0}\) such that \[n\geq n_{0}\Rightarrow a^{\prime}_{n}<d_{n}-1/k\;.\] Since \((a^{\prime}_{n})\in C\), there is an \(n_{1}\) such that \(m,n\geq n_{1}\Rightarrow|a^{\prime}_{m}-a^{\prime}_{n}|\leq 1/2k\). But then \[n\geq N:=\max(\{n_{0},\,n_{1}\})\Rightarrow a^{\prime}_{N}<d_{n}-1/2k\;.\] Thus \(E(a^{\prime}_{N})<b\), a contradiction. 
It remains to show that \((a^{\prime}_{n})\) is the least upper bound of \(B\). Let \((c_{n})\in C\) be any sequence with \((c_{n})<(a^{\prime}_{n})\). Then there exist \(k\) and \(n_{0}\) such that \[n\geq n_{0}\Rightarrow c_{n}<a^{\prime}_{n}-1/k\;.\] But then for every \(n\geq\max(\{n_{0},m_{k}\})\), \[E(c_{n})<E(a^{\prime}_{n}-1/k)\leq E(a^{\prime}_{m_{k}}-1/k)<b^{\prime}_{k}\;.\] Thus for every \(n\geq\max(\{n_{0},m_{k},n^{\prime}_{k}\})\) one has that \[c_{n}<d^{k}_{n}-1/l_{k}\;.\] So \((c_{n})<b^{\prime}_{k}\) and \((c_{n})\) is not an upper bound of \(B\). \(\Box\)

Concluding remarks. \(\mathbb{R}\) is not just an ordinary set in set theory; it is a set that in a sense gave birth to set theory, and rightly [3] devotes Chapter 4 and ten pages to it. There are several modern books on real numbers, of which we explicitly mention only [1] and [5]. In this article we highlighted a facet of real numbers that is not considered in these books. In [4] we developed a fragment of Countable Mathematical Analysis and Countable Number Theory. The present version 3 of [4] has to be revised to take into account the treatment of real numbers in this article. We apologize to the readers of [4] for this deficiency and hope to produce the corresponding revision soon.
2306.12962
PyKoopman: A Python Package for Data-Driven Approximation of the Koopman Operator
PyKoopman is a Python package for the data-driven approximation of the Koopman operator associated with a dynamical system. The Koopman operator is a principled linear embedding of nonlinear dynamics and facilitates the prediction, estimation, and control of strongly nonlinear dynamics using linear systems theory. In particular, PyKoopman provides tools for data-driven system identification for unforced and actuated systems that build on the equation-free dynamic mode decomposition (DMD) and its variants. In this work, we provide a brief description of the mathematical underpinnings of the Koopman operator, an overview and demonstration of the features implemented in PyKoopman (with code examples), practical advice for users, and a list of potential extensions to PyKoopman. Software is available at http://github.com/dynamicslab/pykoopman
Shaowu Pan, Eurika Kaiser, Brian M. de Silva, J. Nathan Kutz, Steven L. Brunton
2023-06-22T16:55:01Z
http://arxiv.org/abs/2306.12962v1
# PyKoopman: A Python Package for Data-Driven Approximation of the Koopman Operator ###### Abstract PyKoopman is a Python package for the data-driven approximation of the Koopman operator associated with a dynamical system. The Koopman operator is a principled linear embedding of nonlinear dynamics and facilitates the prediction, estimation, and control of strongly nonlinear dynamics using linear systems theory. In particular, PyKoopman provides tools for data-driven system identification for unforced and actuated systems that build on the equation-free dynamic mode decomposition (DMD) [1] and its variants [2, 3, 4]. In this work, we provide a brief description of the mathematical underpinnings of the Koopman operator, an overview and demonstration of the features implemented in PyKoopman (with code examples), practical advice for users, and a list of potential extensions to PyKoopman. Software is available at [https://github.com/dynamicslab/pykoopman](https://github.com/dynamicslab/pykoopman). _Keywords-_ system identification, dynamical systems, Koopman operator, open source, python ## 1 Introduction Engineers have long relied on linearization to bridge the gap between simplified, linear descriptions where powerful analytical tools exist, and the intricate complexities of nonlinear dynamics where analytical solutions are elusive [5, 6]. Local linearization, implemented via first-order Taylor series approximation, has been widely used in system identification [5], optimization [6], and many other fields to make problems tractable. However, many real-world systems are fundamentally nonlinear and require solutions outside of the local neighborhood where linearization is valid. Rapid progress in machine learning and big data methods are driving advances in the data-driven modeling of such nonlinear systems in science and engineering [7] Koopman operator theory in particular has emerged as a principled approach to embed nonlinear dynamics in a linear framework that goes beyond simple linearization [4]. In the diverse landscape of data-driven modeling approaches, Koopman operator theory has received considerable attention in recent years [8, 9, 10, 11, 12, 13]. These strategies encompass not only linear methodologies [5, 14] and dynamic mode decomposition (DMD) [1, 2, 15], but also more advanced techniques such as nonlinear autoregressive algorithms [16, 17], neural networks [18, 19, 20, 21, 22, 23, 24, 25, 26, 27], Gaussian process regression [28], operator inference, and reduced-order modeling [29, 30, 31], among others [32, 33, 34, 35, 36, 37, 38]. The Koopman operator perspective is unique within data-driven modeling techniques due to its distinct aim of learning a coordinate system in which the nonlinear dynamics become linear. This methodology enables the application of closed-form, convergence-guaranteed methods from linear system theory to general nonlinear dynamics. To fully leverage the potential of data-driven Koopman theory across a diverse range of scientific and engineering disciplines, it is critical to have a central toolkit to automate state-of-the-art Koopman operator algorithms. PyKoopman is a Python package designed to approximate the Koopman operator associated with both natural and actuated dynamical systems from measurement data. Specifically, PyKoopman offers tools for designing observables (i.e., functions of the system state) and inferring a finite-dimensional linear operator that governs the dynamic evolution of these observables in time. 
These steps can either be conducted sequentially [10, 39] or combined, as demonstrated in more recent neural network models [40, 41, 21, 42]. Once a linear embedding is discovered from the data, the linearity of the transformed dynamical system can be leveraged for enhanced interpretability [43] or for designing near-optimal observers [44] or controllers for the original nonlinear system [45, 46, 47, 48, 49]. The PyKoopman package is designed for both researchers and practitioners, enabling anyone with access to data to discover embeddings of nonlinear systems where the dynamics become approximately linear. Following PySINDy [50] and Deeptime [51], PyKoopman is structured to be user-friendly for those with basic knowledge of linear systems, adhering to scikit-learn standards, while also offering modular components for more advanced users. ## 2 Background PyKoopman provides Python implementations of several leading algorithms for the data-driven approximation of the Koopman operator associated with a dynamical system \[\frac{d}{dt}\mathbf{x}(t)=\mathbf{f}(\mathbf{x}(t),\mathbf{u}(t)), \tag{1}\] where \(\mathbf{x}\in\mathcal{M}\subseteq\mathbb{R}^{n}\) is the state of the system and \(\mathbf{f}\) is a vector field describing the dynamics and the effect of control input \(\mathbf{u}\in\mathbb{R}^{q}\). For the sake of simplicity, we will only present the background for the autonomous dynamical system, and more details for non-autonomous dynamical systems can be found in appendix A. Consider the autonomous system \[\frac{d}{dt}\mathbf{x}(t)=\mathbf{f}(\mathbf{x}(t)). \tag{2}\] Data are typically sampled discretely in time in intervals of \(\Delta t\), and the corresponding discrete-time dynamical system is given by the nonlinear map \(\mathbf{F}:\mathcal{M}\mapsto\mathcal{M}\), \[\mathbf{x}(t+\Delta t)=\mathbf{F}(\mathbf{x}(t)), \tag{3}\] where \(\mathbf{F}(\mathbf{x})=\mathbf{x}(t)+\int_{t}^{t+\Delta t}\mathbf{f}(\mathbf{ x}(s))\,ds\). Given data in the form of measurement vectors \(\mathbf{x}(t)\), the goal of data-driven Koopman theory (see fig. 1) is to find a new coordinate system \[\mathbf{z}:=\mathbf{\Phi}(\mathbf{x}), \tag{4}\] where the dynamics are simplified, or ideally, linearized in the sense of either continuous dynamics, \[\frac{d}{dt}\mathbf{z}=\mathbf{A}_{c}\mathbf{z}, \tag{5}\] or discrete-time dynamics, \[\mathbf{z}(t+\Delta t)=\mathbf{A}\mathbf{z}(t), \tag{6}\] where the subscript \(c\) is for continuous-time and \(\mathbf{A}=\exp(\Delta t\mathbf{A}_{c})\). For simplicity, PyKoopman is focused on the discrete dynamical system in eq. (6), which is consistent with the majority of the literature [2, 3, 4]. The goal of learning the coordinates \(\mathbf{\Phi}\) and linear dynamics \(\mathbf{A}\) may be posed as a regression problem in terms of finding the linear operator that best maps the state of the system, or a transformed version of the state, forward in time. 
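As a plain-NumPy sketch of this regression viewpoint (an added illustration, not the PyKoopman implementation itself; the toy one-step map and the monomial observables below are chosen by us so that the lifted dynamics close exactly), one can lift each snapshot and fit the linear operator by ordinary least squares. The general data-matrix formulation of the same problem is given next.

```
import numpy as np

def lift(X):
    # observables Phi(x) = [x1, x2, x1^2]; one snapshot per column
    return np.vstack([X[0], X[1], X[0] ** 2])

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(2, 500))            # snapshots x(t_k)
Xprime = np.vstack([0.9 * X[0],                       # toy discrete-time map standing in for F
                    0.5 * X[1] + 0.2 * X[0] ** 2])    # x2 is driven quadratically by x1

PhiX, PhiXprime = lift(X), lift(Xprime)
A = PhiXprime @ np.linalg.pinv(PhiX)                  # least-squares fit of Phi(X') ~ A Phi(X)
print(np.round(A, 3))  # approximately [[0.9, 0, 0], [0, 0.5, 0.2], [0, 0, 0.81]]
```

Because the span of these three observables is invariant under the chosen map, the fitted \(\mathbf{A}\) reproduces the nonlinear dynamics exactly; for general systems the fit is only approximate.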
This may be formulated in terms of the following two data matrices, \[\mathbf{X}=\begin{bmatrix}\vline&\vline&\vline&\vline&\vline\\ \mathbf{x}(t_{1})&\mathbf{x}(t_{2})&\cdots&\mathbf{x}(t_{m})\\ \vline&\vline&\vline&\vline&\vline\end{bmatrix},\quad\mathbf{X}^{\prime}= \begin{bmatrix}\vline&\vline&\vline&\vline&\vline\\ \mathbf{x}(t^{\prime}_{1})&\mathbf{x}(t^{\prime}_{2})&\cdots&\mathbf{x}(t^{ \prime}_{m})\\ \vline&\vline&\vline&\vline\end{bmatrix}, \tag{7}\] or the transformed data matrices of candidate nonlinear observations \[\mathbf{\Phi}(\mathbf{X})=\begin{bmatrix}\vline&\vline&\vline&\vline&\vline &\vline\\ \mathbf{\Phi}(\mathbf{x}(t_{1}))&\mathbf{\Phi}(\mathbf{x}(t_{2}))&\cdots& \mathbf{\Phi}(\mathbf{x}(t_{m}))\\ \vline&\vline&\vline&\vline\end{bmatrix},\mathbf{\Phi}(\mathbf{X}^{\prime} )=\begin{bmatrix}\vline&\vline&\vline&\vline&\vline\\ \mathbf{\Phi}(\mathbf{x}(t^{\prime}_{1}))&\mathbf{\Phi}(\mathbf{x}(t^{\prime} _{2}))&\cdots&\mathbf{\Phi}(\mathbf{x}(t^{\prime}_{m}))\\ \vline&\vline&\vline\end{bmatrix}. \tag{8}\] The following regression is then performed to approximately solve \[\mathbf{\Phi}(\mathbf{X}^{\prime})\approx\mathbf{A}\mathbf{\Phi}(\mathbf{X}) \tag{9}\] Figure 1: Lifting of the state \(\mathbf{x}\) of the continuous autonomous dynamical system in eq. (2) into a new coordinate system, in which the original nonlinear dynamics become linear and are easier to handle. One can also linearly reconstruct the state \(\mathbf{x}\) from the new coordinate system. This is facilitated with PyKoopman in a data-driven manner. for an unknown \(\mathbf{A}\). The choice of \(\mathbf{\Phi}\) is problem dependent. Popular choices are polynomial features [10], implicit features defined by kernel functions [39], radial basis functions [10], time delay embedding [13], and random Fourier features [52]. While most early formulations of data-driven Koopman approximation rely heavily on ordinary least squares [7] or SVD-DMD [1], one can use any regression from the DMD community (for example, using PyDMD [53]) to solve eq.9, including total least squares (tlsDMD) [54], optimized DMD (optDMD) [55], etc. Although originating in the field of fluid dynamics [1, 15] for modal analysis [56, 57, 43, 58], the Koopman operator and its variants have inspired numerous ideas in the control community, such as Koopman optimal control [47, 59], Koopman model predictive control (MPC) [60], Koopman reinforcement learning [61], and Koopman-based observers and Kalman filters [44]. Furthermore, the application of the Koopman operator has been extensively employed in control-oriented model identification in fields such as robotics [62, 63], weather forecasting [64], and time series prediction [65]. However, there is currently no standard open-source implementation for approximating the Koopman operator from data. Consequently, researchers are required to develop their own versions, even though their primary interests may be in the downstream applications of the Koopman operator. This has motivated this current work to standardize the implementation of the Koopman operator by creating PyKoopman. This platform is designed to serve as a central hub for Koopman operator education, experimentation with various techniques, and an off-the-shelf toolkit for end-users to seamlessly integrate data-driven Koopman algorithms into their task pipelines. ## 3 Features The core component of the PyKoopman package is the Koopman model class. 
To make this package accessible to a broader user base, this class is implemented as a scikit-learn estimator. The external package dependencies are illustrated in fig. 2. Additionally, users can create sophisticated pipelines for hyperparameter tuning and model selection by integrating pykoopman with scikit-learn. As illustrated in fig. 3, PyKoopman is designed to lift nonlinear dynamics into a linear system with linear actuation. Specifically, our PyKoopman implementation involves two major steps:

1. observables: the nonlinear observables used to lift \(\mathbf{x}\) to \(\mathbf{z}\), and reconstruct \(\mathbf{x}\) from \(\mathbf{z}\);
2. regression: the regression used to find the best-fit dynamics operator \(\mathbf{A}\).

Additionally, we have a differentiation module that evaluates the time derivative from a trajectory and an analytics module for sparsifying arbitrary approximations of the Koopman operator.

Figure 2: External package dependencies of PyKoopman.

Figure 3: Broad categorization of model types that can be identified with current PyKoopman. While the dotted parts (marked with “.”) can be simultaneously discovered within the framework, they are typically ignored for control purposes.

At the time of writing, we have the following features implemented:

* Observable library for lifting the state \(\mathbf{x}\) into the observable space
  * Identity (for DMD/DMDc or in case users want to compute observables themselves): Identity
  * Multivariate polynomials: Polynomial [10]
  * Time delay coordinates: TimeDelay [13, 66]
  * Radial basis functions: RadialBasisFunctions [10]
  * Random Fourier features: RandomFourierFeatures [52]
  * Custom library (defined by user-supplied functions): CustomObservables
  * Concatenation of observables: ConcatObservables
* System identification method for performing regression
  * Dynamic mode decomposition [15, 67, 1, 68]: PyDMDRegressor
  * Dynamic mode decomposition with control [69]: DMDc
  * Extended dynamic mode decomposition [10]: EDMD
  * Extended dynamic mode decomposition with control [46]: EDMDc
  * Kernel dynamic mode decomposition [39]: KDMD
  * Hankel DMD [13]: HDMD
  * Hankel DMD with control: HDMDC
  * Neural Network DMD [21, 40, 41, 42, 69]: NNDMD
* Sparse construction of Koopman invariant subspace
  * Multi-task learning based on linearity consistency [43]: ModesSelectionPAD21
* Numerical differentiation for computing \(\dot{\mathbf{X}}\) from \(\mathbf{X}\)
  * Finite difference: FiniteDifference
  * 4th order central finite difference: Derivative(kind='finite_difference')
  * Savitzky-Golay with cubic polynomials: Derivative(kind='savitzky-golay')
  * Spectral derivative: Derivative(kind='spectral')
  * Spline derivative: Derivative(kind='spline')
  * Regularized total variation derivative: Derivative(kind='trend_filtered')
* Common benchmark dynamical systems
  * Discrete-time random, stable, linear state-space model: drss
  * Van der Pol oscillator: vdp_osc
  * Lorenz system: lorenz
  * Two-dimensional linear dynamics: Linear2Ddynamics
  * Linear dynamics on a torus: torus_dynamics
  * Forced Duffing Oscillator: forced_duffing
  * Cubic-quintic Ginzburg-Landau equation: cqgle
  * Kuramoto-Sivashinsky equation: ks
  * Nonlinear Schrodinger equation: nls
  * Viscous Burgers equation: vbe
* Validation routines for consistency checks

## 4 Examples

The PyKoopman GitHub repository1 provides several helpful Jupyter notebook tutorials. Here, we demonstrate the usage of the PyKoopman package on three low-dimensional nonlinear systems.
Footnote 1: [https://github.com/dynamicslab/pykoopman](https://github.com/dynamicslab/pykoopman)

First, consider the dynamical system \[\begin{split}\dot{x}_{1}&=-0.05x_{1}\\ \dot{x}_{2}&=-x_{2}+x_{1}^{2}.\end{split} \tag{10}\] In Python, the right-hand side of eq. (10) can be expressed as follows:

```
def slow_manifold(x, t):
    return [
        -0.05 * x[0],
        -x[1] + x[0] ** 2,
    ]
```

To prepare training data, we take 100 initial conditions within \([-1,1]^{2}\) and then collect the corresponding trajectories by integrating eq. (10) forward in time:

```
import numpy as np
from scipy.integrate import odeint

dt = 0.02
t = np.arange(0, 50, dt)

X = []
Xnext = []
for x0_0 in np.linspace(-1, 1, 10):
    for x0_1 in np.linspace(-1, 1, 10):
        x0 = np.array([x0_0, x0_1])
        x_tmp = odeint(slow_manifold, x0, t)
        X.append(x_tmp[:-1, :])     # snapshots at times t_k
        Xnext.append(x_tmp[1:, :])  # snapshots one step later, at t_k + dt
X = np.vstack(X)
Xnext = np.vstack(Xnext)
```

Note that X and Xnext correspond to X and X\({}^{\prime}\) in eq. (7). We plot X in fig. 4, while Xnext is omitted for brevity. Almost all PyKoopman objects support this "one-step ahead" format of data, except when time delay is explicitly required, such as in HAVOK [13]. Furthermore, NNDMD not only supports the standard "one-step ahead" format but also accommodates data with multiple-step trajectories.

Figure 4: Demonstration on the slow manifold problem. **Left:** measurement data simulated using the slow manifold in eq. (10). **Right:** Trajectories of ground truth and predictions from EDMD implemented in PyKoopman given unseen initial conditions.

The PyKoopman package is built around the Koopman class, which approximates the discrete-time Koopman operator from data. To begin, we can create an observable function and an appropriate regressor. These two objects will then serve as input for the Koopman class. For instance, we can employ EDMD to approximate the slow manifold dynamics as shown in eq. (10).

```
from pykoopman import Koopman
from pykoopman.observables import Polynomial
from pykoopman.regression import EDMD

model = Koopman(observables=Polynomial(2), regressor=EDMD())
model.fit(X, Xnext)
```

Once the Koopman object has been fit, we can use the model.simulate method to make predictions over an arbitrary time horizon. For example, the following code demonstrates the usage of model.simulate to make predictions for 50 unseen initial conditions sampled on the unit circle.

```
import matplotlib.pyplot as plt

plt.figure(figsize=(4, 4))
theta = np.random.rand(1, 50) * 2 * np.pi
x0_test_array = np.stack((np.cos(theta), np.sin(theta)), axis=0).T
for x0_test in x0_test_array:
    xtest_true = odeint(slow_manifold, x0_test.flatten(), t)
    xtest_pred = model.simulate(x0_test, n_steps=t.size - 1)
    xtest_pred = np.vstack([xtest_true[0], xtest_pred])
    plt.plot(xtest_true[:, 0], xtest_true[:, 1], 'k')
    plt.plot(xtest_pred[:, 0], xtest_pred[:, 1], 'r--')
plt.xlabel(r'$x_1$')
plt.ylabel(r'$x_2$')
```

Figure 4 displays the excellent agreement between ground truth and the EDMD prediction from the aforementioned Koopman model on randomly generated unseen test data. The official GitHub repository2 contains additional useful examples.

Footnote 2: [https://github.com/dynamicslab/pykoopman/tree/master/docs](https://github.com/dynamicslab/pykoopman/tree/master/docs)

## 5 Practical tips

In this section, we offer practical guidance for using PyKoopman effectively. We discuss potential pitfalls and suggest strategies to overcome them.

### Observables selection

The use of nonlinear observables makes the approximation of the Koopman operator fundamentally different from DMD. However, choosing observables in practice can be a highly non-trivial task.
Although we used monomials as observables in the previous example, such polynomial features are not scalable for practical systems in robotics or fluid dynamics. As a rule of thumb in practice, one can try the thin-plate radial basis function [12] as a first choice. If the number of data snapshots in time is only a few hundred (e.g., as in fluid dynamics), one can opt for kernel DMD [43], but tuning the hyperparameters within the kernel function can be critical. If the number of data points exceeds a few thousand (e.g., multiple trajectories of simulated robotic systems), one can choose to approximate the kernel method with random Fourier features in observables.RandomFourierFeatures as observables [52]. Another useful approach is time-delay observables [13], which can be interpreted as using the reverse flow map function recursively as observables. However, it does not self-start. Just like autoregressive models, the number of delays determines the maximum number of linearly superposable modes that the model can capture. The number of delays also has a somewhat surprising effect on the numerical condition [70]. Furthermore, one may find it beneficial to use customized observables informed by the governing equation in eq. (2) [71] by calling observables.CustomObservables with lambda functions. If all the above methods fail, one may choose to use a neural network to search for the observables; this approach is typically more expressive but is also more computationally expensive. ### Optimization Once the observables are chosen, the optimization step finds the best-fit linear operator that maps observable at the current time step to the next time step. Although most of the time the standard least-squares regression or pseudo-inverse is sufficient, one can use any regressor from PyDMD. Additionally, one can use NNDMD to concurrently search for the observables and optimal linear fit. Regarding NNDMD, we have found that using the recurrent loss leads to more accurate and robust model performance than the standard one-step loss, which is adopted in more traditional algorithms. Thanks to the dynamic graph in PyTorch, NNDMD can minimize the recurrent loss progressively, starting from minimizing only the one future step loss to multiple steps in the future. Moreover, we have found that using second-order optimization algorithms, such as L-BFGS [72], significantly accelerates training compared to the Adam optimizer [73]. However, occasionally the standard L-BFGS can diverge, especially when trained over a long period of time. With PyTorch.Lightning, NNDMD can easily take advantage of the computing power of various hardware platforms. ## 6 Extensions In this section, we list potential extensions and enhancements to our PyKoopman implementation. We provide references for the improvements that are inspired by previously conducted research and the rationale behind the other potential changes. * **Bilinearization:** Although ideally we would like to have a standard linear input-output system in the transformed coordinates, this can lead to inconsistencies with the original system. A number of studies [74, 31, 75] have shown the advantages of using bilinearization instead of standard linearization. It is worth noting that bilinearization has been incorporated into another Python package, pykoop[76]. * **Continuous spectrum:** Most existing algorithms assume a discrete, pointwise spectrum reflected in the data. 
As a result, these algorithms may struggle with chaotic systems, which contain a continuous spectrum. There are several approaches for handling continuous spectra, including the use of time delay coordinates [13]. Recent approaches including resDMD, MP-EDMD, and physics informed DMD all show promise for continuous-spectrum dynamics [77, 78, 79]. * **Extended libraries:** The linear system identified in the lifted space can be further exploited to facilitate the design of optimal control for nonlinear systems. For example, the classic LQR has been extended to nonlinear systems [47]. Moreover, nonlinear MPC can be converted to linear MPC using the identified linear system from the Koopman operator, which transforms the original non-convex optimization problem into a convex optimization problem. In the future, we believe open-source libraries for Koopman-based control synthesis integrated with PyKoopman will be widely used by the community. ## 7 Acknowledgments The authors would like to acknowledge support from the National Science Foundation AI Institute in Dynamic Systems (Grant No. 2112085) and the Army Research Office (W911NF-17-1-0306 and W911NF-19-1-0045). ## Appendix A Koopman operator theory In this section, we will briefly describe Koopman operator theory for dynamical systems [4]. Specifically, the theory for autonomous dynamical systems is presented in appendix A.1 while the theory for controlled systems is presented in appendix A.2. ### Koopman operator theory for dynamical systems Given the following continuous-time dynamical system, \[\frac{d}{dt}\mathbf{x}(t)=\mathbf{f}(\mathbf{x}(t)), \tag{11}\] the flow map operator, or time-\(t\) map, \(\mathbf{F}^{t}:\mathcal{M}\to\mathcal{M}\) maps initial conditions \(\mathbf{x}(0)\) to points on the trajectory \(t\) time units in the future, so that trajectories evolve according to \(\mathbf{x}(t)=\mathbf{F}^{t}(\mathbf{x}(0))\). The Koopman operator \(\mathcal{K}^{t}:\mathcal{G}(\mathcal{M})\mapsto\mathcal{G}(\mathcal{M})\) maps the measurement function \(g\in\mathcal{G}(\mathcal{M})\) evaluated at a point \(\mathbf{x}(t_{0})\) to the same measurement function evaluated at a point \(\mathbf{x}(t_{0}+t)\): \[\mathcal{K}^{t}g(\mathbf{x})=g(\mathbf{F}^{t}(\mathbf{x})), \tag{12}\] where \(\mathcal{G}(\mathcal{M})\) is a set of _measurement functions_\(g:\mathcal{M}\to\mathbb{C}\). The infinitesimal generator \(\mathcal{L}\) of the time-\(t\) Koopman operator is known as the Lie operator [80], as it is the Lie derivative of \(g\) along the vector field \(\mathbf{f}(\mathbf{x})\) when the dynamics is given by eq. (2). This follows from applying the chain rule to the time derivative of \(g(\mathbf{x})\): \[\frac{d}{dt}g(\mathbf{x}(t))=\nabla g\cdot\dot{\mathbf{x}}(t)=\nabla g\cdot \mathbf{f}(\mathbf{x}(t))=\mathcal{L}g(\mathbf{x}(t)). \tag{13}\] In continuous-time, a Lie operator eigenfunction \(\varphi(\mathbf{x})\) satisfies \[\frac{d}{dt}\varphi(\mathbf{x})=\mathcal{L}\varphi(\mathbf{x})=\mu\varphi( \mathbf{x}). \tag{14}\] An eigenfunction \(\varphi\) of \(\mathcal{L}\) with eigenvalue \(\mu\) is then an eigenfunction of \(\mathcal{K}^{t}\) with eigenvalue \(\lambda^{t}=\exp(\mu t)\). However, we often take multiple measurements of a system, which we will arrange in a vector \(\mathbf{g}\): \[\mathbf{g}(\mathbf{x})=\begin{bmatrix}g_{1}(\mathbf{x})\\ g_{2}(\mathbf{x})\\ \vdots\\ g_{p}(\mathbf{x})\end{bmatrix}. 
\tag{15}\] The vector of observables, \(\mathbf{g}\), can be expanded in terms of a basis of eigenfunctions \(\varphi_{j}(\mathbf{x})\): \[\mathcal{K}^{t}\mathbf{g}(\mathbf{x})=\sum_{j=1}^{\infty}\lambda_{j}^{t}\varphi_ {j}(\mathbf{x})\mathbf{v}_{j}, \tag{16}\] where \(\mathbf{v}_{j}:=[\langle\varphi_{j},g_{1}\rangle,\langle\varphi_{j},g_{2} \rangle,\ldots,\langle\varphi_{j},g_{p}\rangle]\) is the \(j\)-th _Koopman mode_ associated with the eigenfunction \(\varphi_{j}\). For a discrete-time system \[\mathbf{x}_{k+1}=\mathbf{F}(\mathbf{x}_{k}), \tag{17}\] where \(\mathbf{x}_{k}=\mathbf{x}(t_{k})=\mathbf{x}(k\Delta t)\), the Koopman operator \(\mathcal{K}\) governs the one-step evolution of the measurement function \(g\), \[\mathcal{K}g(\mathbf{x}_{k})=g(\mathbf{F}(\mathbf{x}_{k}))=g(\mathbf{x}_{k+1}). \tag{18}\] In this case, a Koopman eigenfunction \(\varphi(\mathbf{x})\) corresponding to an eigenvalue \(\lambda\) satisfies \[\varphi(\mathbf{x}_{k+1})=\mathcal{K}\varphi(\mathbf{x}_{k})=\lambda\varphi( \mathbf{x}_{k}). \tag{19}\] ### Koopman theory for controlled systems The continuous-time dynamics for a controlled system is given by \[\frac{d}{dt}\mathbf{x}(t)=\mathbf{f}(\mathbf{x}(t),\mathbf{u}(t)). \tag{20}\] Following Proctor et al. [81] and Kaiser et al. [47], instead of the usual state \(\mathbf{x}\), we consider measurement functions defined on an extended state \(\tilde{\mathbf{x}}=(\mathbf{x},\mathbf{u})\), where the corresponding flow map is \(\tilde{\mathbf{F}}^{t}(\mathbf{x},\mathbf{u})=[\mathbf{F}^{t}(\mathbf{x}, \mathbf{u}),\boldsymbol{\Theta}^{t}(\mathbf{u})]\), and \(\boldsymbol{\Theta}^{t}(\mathbf{u})\) is the shift map by time \(t\) units so that \(\boldsymbol{\Theta}^{t}(\mathbf{u})(s)=\mathbf{u}(s+t)\). In summary, the Koopman operator on controlled system governs the measurement function of the extended state, \[\mathcal{K}^{t}g(\mathbf{x},\mathbf{u})=g(\tilde{\mathbf{F}}^{t}(\mathbf{x}, \mathbf{u})). \tag{21}\] The corresponding Koopman mode decomposition for a vector of observables, \[\mathbf{g}(\mathbf{x},\mathbf{u})=\begin{bmatrix}g_{1}(\mathbf{x},\mathbf{u}) \\ g_{2}(\mathbf{x},\mathbf{u})\\ \vdots\\ g_{p}(\mathbf{x},\mathbf{u})\end{bmatrix}, \tag{22}\] can be written as, \[\mathcal{K}^{t}\mathbf{g}(\mathbf{x},\mathbf{u})=\sum_{j=1}^{\infty}\lambda_{ j}^{t}\varphi_{j}(\mathbf{x},\mathbf{u})\mathbf{v}_{j}, \tag{23}\] where the Koopman eigenfunction is \[\varphi(\mathbf{x},\mathbf{u},t)=\mathcal{K}^{t}\varphi(\mathbf{x},\mathbf{u} )=\lambda\varphi(\mathbf{x},\mathbf{u}). \tag{24}\] If the continuous-time controlled system is control-affine, \[\mathbf{f}(\mathbf{x}(t),\mathbf{u}(t))=\mathbf{f}_{0}(\mathbf{x})+\sum_{i=1}^{q }\mathbf{f}_{i}(\mathbf{x})u_{i}, \tag{25}\] where \(u_{i}\) is \(i\)th component of input \(\mathbf{u}\), then the Lie operator (along the vector field \(\mathbf{f}\)) on the measurement function \(g(\mathbf{x})\) becomes, \[\mathcal{L}g(\mathbf{x})=\nabla_{\mathbf{x}}g(\mathbf{x})\cdot\dot{\mathbf{x}} =\nabla_{\mathbf{x}}g(\mathbf{x})\cdot\mathbf{f}_{0}(\mathbf{x})+\nabla_{ \mathbf{x}}g(\mathbf{x})\cdot\sum_{i=1}^{q}\mathbf{f}_{i}(\mathbf{x})u_{i}. \tag{26}\] Similarly, after we define the Lie operator along the vector field \(\mathbf{f}_{0}\) as \(\mathcal{A}\) and that along \(\mathbf{f}_{i}\) as \(\mathcal{B}_{i}\), we have the bilinearization for the control-affine system, \[\frac{\mathrm{d}}{\mathrm{d}t}g(\mathbf{x})=\mathcal{A}g(\mathbf{x})+\sum_{i= 1}^{q}u_{i}\mathcal{B}_{i}g(\mathbf{x}). 
\tag{27}\] Assuming \(\varphi\) is an eigenfunction of \(\mathcal{A}\), we have \[\frac{\mathrm{d}}{\mathrm{d}t}\varphi(\mathbf{x})=\mu\varphi(\mathbf{x})+ \nabla_{\mathbf{x}}\varphi(\mathbf{x})\cdot\sum_{i=1}^{q}\mathbf{f}_{i}( \mathbf{x})u_{i}. \tag{28}\] Furthermore, if the vector space spanned by \(D\) such eigenfunctions \(\{\varphi_{i}\}_{i=1}^{D}\) is invariant under \(\mathcal{B}_{1},\ldots,\mathcal{B}_{q}\)[82], we have \[\forall i=1,\ldots,q,\quad\mathcal{B}_{i}\boldsymbol{\varphi}=\mathbf{B}_{i} \boldsymbol{\varphi}, \tag{29}\] where \(\boldsymbol{\varphi}=\begin{bmatrix}\varphi_{1}&\ldots&\varphi_{D}\end{bmatrix}^ {\top}\). Plugging this into eq. (28), we have the well-known _Koopman bilinear form_ for the control-affine systems, \[\frac{\mathrm{d}}{\mathrm{d}t}\boldsymbol{\varphi}(\mathbf{x})=\boldsymbol{ \Lambda}_{c}\boldsymbol{\varphi}(\mathbf{x})+\sum_{i=1}^{q}u_{i}\mathbf{B}_{i }\boldsymbol{\varphi}. \tag{30}\] For general discrete-time system, \[\mathbf{x}_{k+1}=\mathbf{F}(\mathbf{x}_{k},\mathbf{u}_{k}), \tag{31}\] where \(\mathbf{x}_{k}=\mathbf{x}(t_{k})=\mathbf{x}(k\Delta t)\), the Koopman operator governs the one-step evolution of the measurement function \(g\) of the extended state \(\tilde{x}=(\mathbf{x},\mathbf{u})\), \[\mathcal{K}g(\mathbf{x}_{k},\mathbf{u}_{k})=g(\mathbf{F}(\mathbf{x}_{k}, \mathbf{u}_{k}))=g(\mathbf{x}_{k+1},\mathbf{u}_{k+1}). \tag{32}\] A Koopman eigenfunction \(\varphi(\mathbf{x})\) corresponding to an eigenvalue \(\lambda\) satisfies \[\varphi(\mathbf{x}_{k+1},\mathbf{u}_{k+1})=\mathcal{K}\varphi(\mathbf{x}_{k}, \mathbf{u}_{k})=\lambda\varphi(\mathbf{x}_{k},\mathbf{u}_{k}). \tag{33}\]
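As a small added illustration of the eigenfunction relations above (an example of ours, not taken from the cited references), consider the scalar autonomous linear map \(x_{k+1}=ax_{k}\). Every monomial observable is then a Koopman eigenfunction, since \[\mathcal{K}\varphi_{j}(x_{k})=\varphi_{j}(ax_{k})=a^{j}x_{k}^{j}=a^{j}\varphi_{j}(x_{k})\qquad\text{for}\quad\varphi_{j}(x)=x^{j},\] so \(\varphi_{j}\) satisfies eq. (19) with eigenvalue \(\lambda=a^{j}\), and any polynomial observable evolves linearly within the span of finitely many such eigenfunctions.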
2310.11555
Integrating 3D City Data through Knowledge Graphs
CityGML is a widely adopted standard by the Open Geospatial Consortium (OGC) for representing and exchanging 3D city models. The representation of semantic and topological properties in CityGML makes it possible to query such 3D city data to perform analysis in various applications, e.g., security management and emergency response, energy consumption and estimation, and occupancy measurement. However, the potential of querying CityGML data has not been fully exploited. The official GML/XML encoding of CityGML is only intended as an exchange format but is not suitable for query answering. The most common way of dealing with CityGML data is to store them in the 3DCityDB system as relational tables and then query them with the standard SQL query language. Nevertheless, for end users, it remains a challenging task to formulate queries over 3DCityDB directly for their ad-hoc analytical tasks, because there is a gap between the conceptual semantics of CityGML and the relational schema adopted in 3DCityDB. In fact, the semantics of CityGML itself can be modeled as a suitable ontology. The technology of Knowledge Graphs (KGs), where an ontology is at the core, is a good solution to bridge such a gap. Moreover, embracing KGs makes it easier to integrate with other spatial data sources, e.g., OpenStreetMap and existing (Geo)KGs (e.g., Wikidata, DBPedia, and GeoNames), and to perform queries combining information from multiple data sources. In this work, we describe a CityGML KG framework to populate the concepts in the CityGML ontology using declarative mappings to 3DCityDB, thus exposing the CityGML data therein as a KG. To demonstrate the feasibility of our approach, we use CityGML data from the city of Munich as test data and integrate OpenStreeMap data in the same area.
Linfang Ding, Guohui Xiao, Albulen Pano, Mattia Fumagalli, Dongsheng Chen, Yu Feng, Diego Calvanese, Hongchao Fan, Liqiu Meng
2023-10-17T20:00:21Z
http://arxiv.org/abs/2310.11555v1
# Integrating 3D City Data through Knowledge Graphs ###### Abstract CityGML is a widely adopted standard by the Open Geospatial Consortium (OGC) for representing and exchanging 3D city models. The representation of semantic and topological properties in CityGML makes it possible to query such 3D city data to perform analysis in various applications, e.g., security management and emergency response, energy consumption and estimation, and occupancy measurement. However, the potential of querying CityGML data has not been fully exploited. The official GML/XML encoding of CityGML is only intended as an exchange format but is not suitable for query answering. The most common way of dealing with CityGML data is to store them in the 3DCityDB system as relational tables and then query them with the standard SQL query language. Nevertheless, for end users, it remains a challenging task to formulate queries over 3DCityDB directly for their ad-hoc analytical tasks, because there is a gap between the conceptual semantics of CityGML and the relational schema adopted in 3DCityDB. In fact, the semantics of CityGML itself can be modeled as a suitable ontology. The technology of Knowledge Graphs (KGs), where an ontology is at the core, is a good solution to bridge such a gap. Moreover, embracing KGs makes it easier to integrate with other spatial data sources, e.g., OpenStreetMap and existing (Geo)KGs (e.g., Wikidata, DBPedia, and GeoNames), and to perform queries combining information from multiple data sources. In this work, we describe a CityGML KG framework to populate the concepts in the CityGML ontology using declarative mappings to 3DCityDB, thus exposing the CityGML data therein as a KG. To demonstrate the feasibility of our approach, we use CityGML data from the city of Munich as test data and integrate OpenStreeMap data in the same area. Finally, we collect real-world geospatial analytical tasks and show that they can be formulated as intuitive GeoSPARQL queries. We test three state-art-of-art KG systems, Ontop, Apache Jena, and GraphDB, and confirm that the queries can be evaluated efficiently over the generated KG. _Keywords--_ CityGML, OpenStreetMap, Data Integration, Query Answering, Knowledge Graph, Ontology ## 1 Introduction 3D city data has been increasingly used to perform analysis in various applications, e.g., security management and emergency response, energy consumption and estimation, and occupancy measurement. A widely adopted standard by the Open Geospatial Consortium (OGC) for representing and exchanging 3D city models is _CityGML_[5, 22]. It defines the three-dimensional geometry, topology, semantics, and appearance of the most relevant topographic objects in urban or regional contexts. The representation of semantic and topological properties in CityGML makes it possible to query such 3D city data to perform analysis. At the implementation level, CityGML is defined as a GML application schema for the Geography Markup Language (GML) [5]. In its most common implementation, CityGML datasets consist of a set of XML files and possibly some accompanying image files that are used as textures. Each text file can represent a part of the dataset, such as a specific region, a specific type of object (such as a set of roads), or a predefined Level of Detail (LoD). The structure of a CityGML file is a hierarchy that ultimately reaches down to individual objects and their attributes. These objects have a geometry that is described using GML. 
Another important implementation of CityGML is 3DcityDB [11], which is a free 3D geo-database solution for CityGML-based 3D city models. 3DcityDB has been developed as an open source and platform-independent software suite to facilitate the development and deployment of 3D city model applications. The 3DcityDB software package consists of a database schema for spatially enhanced relational database management systems (Oracle Spatial or PostgreSQL/PostGIS) with a set of database procedures and software tools allowing to import, manage, analyze, visualize, and export virtual 3D city models according to the CityGML standard. However, the potential of querying CityGML data has not been fully exploited. The official GML/XML encoding of CityGML is only intended as an exchange format but is not suitable for query answering. The most common way of dealing with CityGML data is to store them in the 3DcityDB system as relational tables and then query them with the standard SQL query language. Nevertheless, for end users, it remains a challenging task to formulate queries over 3DcityDB directly for their ad-hoc analytical tasks, because there is a gap between the conceptual semantics of CityGML and the relational schema adopted in 3DcityDB. One possibility to bridge this gap is to use _semantic technology_, which is concerned with the challenges posed by data with a complex structure and associated knowledge. At the core of solutions based on semantic technology, we typically have an _ontology_ to provide semantics to the data. In computer science, the term "ontology" denotes a concrete artifact that conceptualizes a domain of interest and allows one to view the information and data relevant for that domain in a sharable and coherent way. In the CityGML standard, the semantics is defined as a collection of UML diagrams, which can be naturally regarded as an ontology. The instances of an ontology are _knowledge graphs (KGs)_[19], where data is structured in the form of a graph. Domain objects and data values are represented as nodes of such a graph, and properties of objects are encoded as edges. For CityGML, the nodes in a KG could represent instances of buildings, streets, and surfaces, among others. Moreover, embracing KGs makes it possible to integrate with existing KGs, e.g., Wikidata [8], DBPedia [9], GeoNames, and LinkedGeoData [6, 18]. This allows us to express interesting queries that require combining information from multiple sources. In this work, we describe a CityGML KG framework to expose CityGML data as a Knowledge Graph and to integrate it with other data (e.g., OSM data). The CityGML KG or the integrated KG can be queried using the standard GeoSPARQL query language. To demonstrate the feasibility of this framework, we use the 3D CityGML building data at LoD2 of the municipality of Munich, Germany as test data. We adopt and extend the CityGML ontology created by the University of Geneva and develop a suitable R2RML mapping to 3DCityDB. Moreover, as a demonstration of the capability of this methodology for integrating CityGML data with other datasets, we collect OSM data in the same test area. The selection of OSM data as an example is because it is one of the most popular crowdsourcing data worldwide and it contains complementary spatial and semantic information with CityGML data that can be combined for interesting queries and applications. 
To show the usefulness of the generated KG, we collect real-world geospatial analytical tasks and formulate them as intuitive GeoSPARQL queries, which show a high degree of expressiveness. Finally, we test three popular KG systems, i.e., Ontop, Apache Jena, and GraphDB, and confirm that the queries can be evaluated efficiently.

## 2 Related work

### CityGML

CityGML is a data model and exchange format for 3D digital modeling of cities and landscapes [27]. The main advantage of CityGML in comparison to other data formats is that it offers the possibility to integrate semantic information within 3D city models. In 2008, CityGML became an international standard of the Open Geospatial Consortium (OGC) [5]. Since then, CityGML has drawn more and more attention from mapping authorities, industries, and academic societies. Nowadays, it is widely used for different applications in many countries and regions. Aiming at modeling cities in 3D in the digital world, CityGML covers almost all types of features that can appear in urban areas, namely Building, Water body, Terrain, Transportation, Bridge, City Furniture, Land Use, Tunnel, etc. These features are organized into modules in CityGML. Although CityGML defines levels of detail (LoDs) for all types of features, the LoD of building objects is the most widely agreed-upon and recognized concept in the 3D city modeling community. In total, there are 5 LoDs defined for building objects in CityGML, ranging from coarse models (LoD0) to very detailed models (LoD4) with geometries and semantic information. As denoted in Figure 1, an LoD0 building model in CityGML is actually a 2D footprint in a closed polygon which is semantically indicated as a building object and can be enriched with various attributes. An LoD1 building in CityGML is a block model with height information, while an LoD2 building model needs to have detailed roof models. In LoD3, architectural details on facades, such as windows, doors, and other elements, can be modeled in addition to LoD2 models. As a further step, interior objects can be modeled in LoD4. CityGML is very powerful for modeling 3D cities with rich semantic information. However, its complex and hierarchical structure, as well as interoperability issues, create difficulties and complexity when transforming and decoding it for visualization and application scenarios [13]. In order to overcome this issue, CityJSON was developed [12] by combining the advantages of JSON and CityGML. In other words, CityJSON can be regarded as a JSON implementation of a subset of CityGML version 2.0. In 2021, CityJSON was accepted as an OGC standard. Currently, people are working to adjust CityJSON to the CityGML 3.0 conceptual model.

### Knowledge Graphs and Geospatial Knowledge Graphs

The Semantic Web research area is concerned with the challenges posed by data with a complex structure and associated knowledge. A prominent technology within the Semantic Web is that of knowledge graphs (KGs) [20], where data is structured in the form of a graph. Domain objects and data values are represented as nodes of such a graph, and properties of objects are encoded as edges. At the core of solutions based on KGs we typically have an ontology to provide semantics to the data. In computer science, the term "ontology" denotes a concrete artifact that conceptualizes a domain of interest and allows one to view the information and data relevant for that domain in a sharable and coherent way.
To simplify the sharing and reuse of ontologies, the World Wide Web Consortium (W3C)1 has defined standard languages.

Footnote 1: [https://www.w3.org/](https://www.w3.org/)

We refer here to the Resource Description Framework (RDF) [1], providing a simple mechanism to represent the data in a certain domain, and the Web Ontology Language (OWL) [3], providing a very rich language to encode complex knowledge in the domain of interest. GeoSPARQL is an OGC standard for the representation and querying of geospatial KGs [26]. GeoSPARQL provides a topological ontology in RDFS/OWL for representation using Geography Markup Language (GML) and well-known text (WKT) geometry literals, and topological relationship vocabularies and ontologies for qualitative reasoning. GeoSPARQL also provides a SPARQL query interface using a set of topological SPARQL extension functions for quantitative reasoning.

Figure 1: Five LoDs in CityGML

Geospatial KGs are often converted from geospatial data sources which are stored in spatial databases or in other popular formats like Shapefiles. A systematic approach to such conversion is the _ontology-based data access_ (OBDA) paradigm, which enables end users to access data sources through a domain ontology. Typically, this ontology imports the GeoSPARQL ontology, and is semantically linked to the data sources by means of a mapping, which is expressed in the R2RML language [4] standardized by the W3C. OBDA can be realized in a materialized or virtual fashion:

* In the _Materialized Knowledge Graph (MKG)_ approach, the original data sources are first materialized as RDF graphs using systems such as GeoTriples [10] and Ontop [16], and then loaded into RDF stores that support geospatial KGs, e.g., Apache Jena2, GraphDB3, and Stardog4.
* In the _Virtual Knowledge Graph (VKG)_ approach, the content of the KG is not generated but kept virtual. The ontology and mapping together, called a _VKG Specification_, expose the underlying data source as a virtual RDF graph and make it accessible at query time. For example, Ontop [16] is a popular VKG system that supports GeoSPARQL.

Footnote 2: [https://jena.apache.org/](https://jena.apache.org/)

Footnote 3: [https://graphdb.ontotext.com/](https://graphdb.ontotext.com/)

Footnote 4: [https://www.stardog.com/](https://www.stardog.com/)

One of the most famous geospatial KG projects is LinkedGeoData [6, 18], which mostly relies on the VKG approach to expose the OSM data as geospatial KGs. To evaluate systems for geospatial KGs, Jovanovik _et al._ proposed a GeoSPARQL Compliance Benchmark [21], and Li _et al._ [24] carried out an extensive evaluation of several geospatial RDF triple stores, showing that both MKG and VKG systems have their advantages.

### Semantic technologies for 3D city models

There have been early attempts to convert CityGML to knowledge graphs. In [17], the authors use a straightforward ad-hoc implementation. They first refined an existing CityGML ontology from the University of Geneva, and then extended a corresponding data transformation tool that was originally designed to work alongside CityGML, which allowed for the transformation of original data into a form of semantic triples. Various scalable technologies for this semantic data storage were compared and Blazegraph was chosen due to the required geospatial search functionality.
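Once CityGML data is exposed as such a geospatial KG, it can be queried with GeoSPARQL. The following Python sketch is purely illustrative: the endpoint URL, the bldg: prefix IRI, and the property names are placeholders rather than the actual vocabulary and deployment used in this work; only the geo: terms are taken from the GeoSPARQL standard.

```
from SPARQLWrapper import SPARQLWrapper, JSON

# Placeholder endpoint and prefix IRIs; adapt them to the actual KG deployment.
sparql = SPARQLWrapper("http://localhost:7200/repositories/citygml")
sparql.setQuery("""
PREFIX geo:  <http://www.opengis.net/ont/geosparql#>
PREFIX bldg: <http://example.org/citygml/building#>

SELECT ?building ?height ?wkt WHERE {
  ?building a bldg:Building ;
            bldg:measuredHeight ?height ;
            geo:hasGeometry/geo:asWKT ?wkt .
  FILTER (?height > 30)
}
LIMIT 10
""")
sparql.setReturnFormat(JSON)
for row in sparql.query().convert()["results"]["bindings"]:
    print(row["building"]["value"], row["height"]["value"])
```

Such queries can be evaluated either over a virtual KG (e.g., by Ontop, which translates them into SQL over the underlying database) or over a materialized KG loaded into a geospatial triple store.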
Many applications in the context of urban informatics require detailed information about the physical urban environment, which requires the integration of 3D city data with other data sources. The work [23] studies the integration of OpenStreetMap and CityGML using the formal concept analysis approach. Most work focuses on the integration of CityGML and BIM. The work [14] proposed two approaches to integrate and reconcile city models and BIM in the context of solar energy simulations, where BIM data is stored in IFC and the city model in CityGML (LOD2). The first approach is to perform a schema matching in an ETL tool, so as to convert and import window information from the IFC file into the CityGML model to create a LOD2-3 building model. In the second approach, they adopted a semantic web approach, in which both the BIM and city models are transformed into knowledge graphs (linked data). City models and BIM utilize their respective but interlinked domain ontologies. Particularly, two ontologies are investigated for BIM data, i.e., the ifcOWL ontology and the building topology ontology (BOT). Methodology In what follows we describe the approach we adopted for generating the target KG from CityGML data and integrating other data sources. An overall view, denoted as a pipeline, is unveiled through the utilization of the _Business Process Model and Notation (BPMN)_5 diagram represented in Figure 2. Footnote 5: Note thatBPMN is a conceptual modeling language adopted to represent tasks and procedures within a system. To get more information aboutBPMN, the authors refer the readers to [2]. The scope of the whole process is twofold. Firstly, it allows for the generation of a KG to support query answering over CityGML data. Secondly, it allows for the evolution of the created KG by importing new data and knowledge, thus enabling the possibility of extending question-answering services. Let us delve into the specific aspects of the approach. As shown by Figure 2 we have four main groups of elements, which we may also call "phases", namely _(i) Initialization with CityGML data_ (hereafter "Initialization"), _(ii) KG Construction_, _(iii) Integration of further resources_ (hereafter "Integration"), and _(iv) Application_. These phases are composed of sub-tasks or steps, which, in turn, may receive and/or produce different kinds of data. Differently, the _KG_ group represents the final output to be used as main support for the query-answering activities, which are represented in the _Application_ phase group. The output KG can be created either as a VKG or MKG. The VKG contains two sub-components, namely an _ontology_, and a _mapping_ function which are used to generate the RDF triples from the physical storage on demand. Representing the KG as an MKG instead eliminates the need to maintain a virtualized pipeline for RDF data, with the trade-off of larger space requirements and the need to rematerialize the RDF triples every time the source data changes. All these components can be then evolved through the steps composing the _Integration_ phase. In this setting, the _Initialization_ phase has the primary goal of generating a reference data storage out of _CityGML_ data and, also, providing the reference CityGML ontology to be used as a baseline version of the knowledge graph. 
For the creation of the reference data storage _the input CityGML dataset is embedded into a relational database_ (See Generate SQL in Figure 2), mapping each row in the data into entities and columns in specific information fields so that the data can be then queried, retrieved, stored, and possibly updated. Note that, to address this step, albeit multiple automatic and comprehensive solutions are available, some _ad hoc_ customization sub-steps may be involved. This is due to the fact that the output physical storage of this phase must be compliant with the technology used to generate the knowledge graph in the following phase. For instance, if the solution for generating SQL databases from the input data uses XML attributes containing (semi-)structured information, a customization step would be needed to generate multiple fields out of each XML attribute6. Footnote 6: To have more information about implementation issues please refer to Section 4 The second phase _KG Construction_ crafts a KG that can be used as a reference point for the query-answering activities. Note that this step can be iterated multiple times. The first time is after the initialization phase, and the inputs for KG construction will naturally be the _CityGML_ ontology, the _Physical Storage_ hosting the _CityGML_ data, and the _mapping_ that is necessary to connect the former to the latter. In the following times, the inputs of this phase are the result of the integration phase. Either way, the KG Construction phase is mainly concerned with the definition of the ontology and the related mappings, with the main scope of _(i)_ defining the set of concepts, relationships, and properties within the reference domain of knowledge; _(ii)_ capturing the meaning and semantics of the stored information, by enabling extended reasoning and inference capabilities; and _(iii)_ fostering interoperability among the data sources to be integrated. A key aspect here is also to find an already existing ontology that covers as much of the semantics of the selected data as possible. Once the ontology is selected a mapping step is performed. The database generated through the initialization phase is then aligned with the ontology concepts. When the information cannot be straightforwardly mapped, a manual intervention is required. This mainly involves a modification of the selected ontology in order to properly account for the information in the physical storage. For example, if a property that is present in the input dataset is not present in the ontology, the ontology can be manually extended with the required property. Once the ontology is tuned according to the dataset requirements, the KG is created and ready to support the query-answering application. After the adoption of both the creation and KG construction phases, as we anticipated above, we already have a reference KG enabling query answering. However, at this point, our approach allows for the integration of more data sources and then for the creation of an extended KG, on top of the previous version. This evolution is addressed through the _Integration_ phase. Here the main tasks are _(i)_ the selection of new data by the user, _(ii)_ the integration of the new data with the existing physical Figure 2: CityGML KG Creation and Evolution: Overall view storage, and _(iii)_ the selection of a new ontology or new ontological information by the user, in order to account for the newly integrated data. 
Task _(ii)_ takes place with the support of an automatic component that involves two sub-steps, namely _(ii.a) post-processing_ where heterogeneous geo-object types, data formats, and coordinate reference systems (CRS) are harmonized and unified and _(ii.b) spatial matching_ where the similarity between geo-entities from different datasets is calculated. Finally, the phase named _Application_ is dedicated to using the output KG. Here the user, via an _ad hoc_ interface, is enabled to request information via SPARQL queries, which can potentially be returned in either textual or visual format. ## 4 System Architecture In this section, we describe how the conceptual methodology framework in Section 3 can be realized in a concrete system. As shown in Figure 3, in the context of our particular application scenario, we examine two typical 3D and 2D geospatial data sources, namely _CityGML_ and _OpenStreetMap (OSM)_. CityGML data is used throughout the whole phases, while OSM data, one of the most comprehensive and widely used geospatial data sources, is adopted to illustrate the _Integration_ phase. More specifically, we use LOD2 CityGML building data retrieved from the Bavarian Open Data portal7 in the central area of Munich, Germany and OSM data in the same area as a demonstration. Please refer to Section 5.1 for further details on the test area and datasets. Footnote 7: [https://geodaten.bayern.de/opengeodata/OpenDataDetail.html?pn=lod2](https://geodaten.bayern.de/opengeodata/OpenDataDetail.html?pn=lod2) Figure 3: VKG over CityGML: Architecture ### Initialization with CityGML data The initialization phase generates a reference data storage out of CityGML data. We selected the _3DCityDB schema_ as the preferred solution to import CityGML data into an SQL database. 3DCityDB as a software solution provides both a predefined SQL schema8 and importer-exporter9 tool which can process the cumulative addition of an arbitrary number of CityGML files into the database. The software provides the option to use PostgreSQL or Oracle as the backend for the relational data storage. In this work, we chose PostgreSQL in particular because it is open source and its geospatial extension PostGIS is renowned for its high adoption and maturity in the geospatial domain. Footnote 8: [https://github.com/3dcitydb/3dcitydb](https://github.com/3dcitydb/3dcitydb) Footnote 9: [https://github.com/3dcitydb/importer-exporter](https://github.com/3dcitydb/importer-exporter) An excerpt of the table building with three records is presented in Table 1 (omitting any attributes with missing data). Every building is uniquely identified by the attribute id and has a corresponding LOD2 solid identifier lod2_solid_id. The latter is mapped onto its respective polyhedral surface serialization in the surface_geometry table. Further sample data is provided in Figure 5. Finally, we note that despite the comprehensive default SQL schema provided by 3DCityDB, a further step is needed to _tune DB and add constraints_. For example, the attribute of address in the default database schema is encoded as XML strings and has to be decomposed into more specific attributes, e.g. administrative area, thoroughfare, etc. Relevant constraints like primary and foreign keys are added to enhance the efficiency. ### Knowledge Graph Construction In this architecture, we support constructing KGs in both VKG and MKG fashions. We first construct the VKG utilizing the _Ontop_ system. 
The main activity is to develop/refine the _ontology_ and create _mappings_ to link the terms (classes and properties) in the ontology to the data sources. We will also use Ontop to materialize the RDF triples into an MKG, and load it to a triple store system like Jena or GraphDB. _Ontology._ We adopt the most prominent and well-known _CityGML_ ontology version 2.010 developed by the University of Geneva for the ontology component of the KG construction phase. We first _load and tune ontologies_ from the ontology provided. Following validation of the ontology, there were 92 declarations of object and data properties using the same IRIs, which makes the ontology invalid. The same inconsistencies have been diagnosed in previous research [17]. We tackled the issue in a similar fashion resolving any inconsistencies manually depending on the most intuitive \begin{table} \begin{tabular}{l l l l l l} \hline \hline id & objectclass\_id & building\_root\_id & roof\_type & measured\_height & lod2\_solid\_id \\ \hline 10 & 26 & 10 & 1000 & 13.363 & 117 \\ 54 & 26 & 54 & 3100 & 13.99 & 315 \\ 248 & 26 & 248 & 1000 & 17.362 & 1258 \\ \hline \hline \end{tabular} \end{table} Table 1: Excerpt from the CityGML building table definition of each property. In order to review and resolve these inconsistencies and get an overview of the collection of ontologies we utilized the open-source tool Protege version 5.6.1. Figure 4 shows part of the top-level classes of the selected ontology and the sub-tree starting from Feature and Building, which are of primary interest in this study. We also note that in addition to the primary CityGML ontology, auxiliary ontologies were also specifically geosparql, gml, sf, sosa, core, dublin_core_elements. These ontologies do not define additional concepts but rather help define constraints within the CityGML ontology. The GeoSPARQL ontology [26], for instance, allows the differentiation of geometric classes such as polyhedral surfaces from standard surfaces while complying with the standard OGC recommendations. _Mappings._ Mapping design is the most crucial user-centric step in generating a VKG. Individual RDB2RDF mappings exploit attributes from the PostGIS database to populate the RDF graph of CityGML. Consequently, SQL queries have to be written to individually map 3DCityDB attributes to their respective ontological concepts. Due to the limitation of LOD2 Bavarian data (and any open CityGML data we can find), which contains exclusively buildings, and lack of any complementary real-world LOD3 files, many 3DCityDB tables are empty. Therefore, while for completeness purposes any column from the 3DCityDB schema that could be mapped has been mapped to an ontological concept, in practice no triples can be generated from many of these mappings. A mapping consists of three components: a mapping ID, a source, and a target. A mapping ID is an arbitrary but unique mapping identifier. A source refers to an SQL query expressed over a relational database for retrieving data. A target is RDF triple pattern(s) that uses the answer variables from the preceding SQL query as placeholders. Three example mappings written in Ontop Protege plugin editor are illustrated in Figure 5 to respectively define a building, link a building to its respective solid geometry, and define the serialization of that geometry. 
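Since Figure 5 reproduces the mappings only as screenshots, the sketch below spells out, as plain Python data, roughly what one such entry contains in terms of the three components just described (mapping ID, source, target). The column names follow the 3DCityDB excerpt in Table 1, while the mapping ID and the IRI template are illustrative rather than the exact ones used in this work.

```python
# Schematic illustration of one RDB2RDF mapping entry (mapping ID, source,
# target). Column names follow the building table in Table 1; the mapping ID
# and IRI template are illustrative, not the exact mappings used here.
building_mapping = {
    "mappingId": "bldg-building",
    "source": "SELECT id, measured_height FROM building",
    "target": (
        "<https://example.org/citygml/building/{id}> "
        "a bldg:Building ; "
        "bldg:measuredHeight {measured_height} ."
    ),
}

# Instantiating the target pattern with the first row of Table 1 yields the
# kind of triples that Ontop exposes virtually or materializes.
print(building_mapping["target"].format(id=10, measured_height=13.363))
```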
More specifically, in the first mapping in Figure 4(a), the class Building is mapped with its respective data properties such as building height, storeys above and below ground, function, year of construction, etc. The second mapping in Figure 4(b) shows that object property bldg:lod2solid links Figure 4: A subset of the concepts in the CityGML ontology with the subtree of the class Feature highlighted any building with its respective solid geometry identifier. We distinguish between a solid geometry class and its respective serializations which are properties of the class. In the third mapping in Figure (c)c, class sf:PolyhedralSurface defines objects of type polyhedral surface and their respective Well-Known Text (WKT) geometry serialization. _KG Materialization_ With the completion of all the ontology and mappings, the CityGML Figure 5: Three example mappings in Ontop Protege Plugin VKG has been successfully created. This VKG can be queried already by Ontop, or be materialized to use native MKG systems. The MKG is constructed by utilizing the functionality of Ontop to generate RDF triples or data assertions based on the ontology, mappings, and physical storage, which were discussed in detail in the previous section. The resulting triples can be in turn loaded into RDF triple stores like GraphDB, Apache Jena, RDF4J, and similar tools to facilitate query answering via SPARQL. Further details on how these triple stores can exploit the materialized MKG are provided in section 5. Examples of sets of triples generated by our application pipeline are depicted in Figure 6. They reflect the triples generated from the three mappings described in the previous section. Specifically, Figure 5(a) shows a sub-graph from the example mappings representing a building and its LOD2 geometry and address. ### Integration of OSM data Below we describe the steps of integrating further geospatial data sources in our architecture by using OSM data as an example. For any other generic geospatial data, the integration task will be contingent on the type of data we wish to integrate and any existing popular ontologies. For example, for what concerns _OSM data_, as mentioned in Section 2, the LinkedGeoData project already leveraged the loading of OSM data into PostgreSQL and developed ontology and mapping, which can be reused in this work. What's missing is the linking between CityGML and OSM data at both the data level and the ontology level. This requires computing the correspondence between the data items, i.e. buildings, between these two data sets, and creating additional suitable mappings and ontological axioms to capture these correspondences. As anticipated above, in order to handle this heterogeneity issue we leverage an entity resolution step that produces a reference linkage table to be used for the generation of the output physical storage as PostgreSQL DB. Below we present a geometry-based method for linking entities in CityGML and OSM data sources and how to incorporate the results in the KG. #### 4.3.1 OSM and CityGML data linking Because of the heterogeneity between the CityGML and OSM datasets, we cannot expect that the resulting data linking is always 1:1. The building information of OSM data mainly consists of the building footprint layer (polygons) and the point of interest (POI) layer (points). In contrast, the CityGML data is 3D, hence we primarily rely on the ground surfaces of the CityGML buildings. The CityGML dataset normally has more detailed information about the buildings. 
In particular, CityGML buildings frequently encompass minor ancillary features like stairs and garages, which are often absent in OSM building footprints, and may lead to a n:1 matching result. We propose a three-step method of data linking: 1. Computing direct spatial correspondence using CityGML ground surfaces and OSM polygons, 2. Exploiting the adjacent ground surfaces in CityGML to enrich the results, and 3. Matching OSM POI points with CityGML buildings. Note that more sophisticated approaches, _e.g._, formal concept analysis [23], can be adopted in this architecture as well, but to simplify the presentation, we only use the geometry-based approach in this work. Figure 6: Example of triples in the Knowledge Graphs (1) Spatial matching.Since the CityGML data in this study only contains building information, we refer to the linking of CityGML and OSM polygons as the linking of building information between them. Given any CityGML building (\(bldg\)) and OSM polygon (\(osm\)), their direct spatial correspondences are identified based on Equation 1[7, 25], derived from individual areas of any CityGML and OSM polygons (\(\text{Area}\left(bldg_{i}\right)\), \(\text{Area}\left(osm_{j}\right)\)). \[\frac{\text{Area}\left(osm_{i}\cap bldg_{j}\right)}{\min\left(\text{Area} \left(osm_{i}\right),\text{Area}\left(bldg_{j}\right)\right)}\geq t \tag{1}\] where \(\text{Area}\left(osm_{i}\cap bldg_{j}\right)\) represents the overlapping area of the \(i\)-th OSM polygon and the \(j\)-th CityGML polygon; \(t\) is an empirical hyperparameter, which can be adjusted based on the spatial consistency between the two datasets. Following [7], there are four possible matching results based on the ratio: 1:1, 1:n, m:1, m:n (examples illustrated in Figure 7). Relation 1:1 indicates that an OSM building and a CityGML building are uniquely matched with each other (Figure 7(a)). Relation m:1 represents multiple OSM buildings matching with one CityGML building (Figure 7(b) and relation 1:n the opposite case (Figure 7(c)). Relation m:n represents at least two OSM buildings matched together with at least two CityGML buildings (Figure 7(d)). (2) Identification of adjacent polygons.This step specifically endeavors to include adjacent amenity objects as secondary matched (adjacent) relations, guaranteeing the inclusion of all amenities. In the examples illustrated in Figure 8, \(bldg2\) and \(osm1\) are matched as adjacent if the following three conditions are met: (a) \(bldg1\) directly matches \(osm1\), (b) \(bldg1\) and \(bldg2\) are adjacent, and (c) there is no other match for \(bldg2\). (3) Matching OSM POIs with CityGML buildings.This step aims to match the CityGML data with OSM POIs to enhance the semantic information of CityGML. The OSM POIs contain the place information in the buildings, e.g., various shops on different floors. The spatial locations of building-related POIs are based on the building footprints. Thus, we applied OSM's building footprints as a mediator to determine the spatial relationship between OSM POIs and CityGML buildings. As shown in Figure 9, given any POI, if the OSM building footprint where it is located matches a CityGML ground surface, the POI also matches the corresponding CityGML ground surface. 
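The direct correspondence test of Equation 1 is easy to prototype; the following is a minimal sketch using shapely, in which the toy polygons and the threshold value are illustrative rather than taken from the case study.

```python
from shapely.geometry import Polygon

def matches(osm_poly: Polygon, bldg_poly: Polygon, t: float = 0.5) -> bool:
    """Direct spatial correspondence test following Equation 1."""
    inter = osm_poly.intersection(bldg_poly).area
    if inter == 0:
        return False
    return inter / min(osm_poly.area, bldg_poly.area) >= t

# Toy example: two unit squares shifted by 0.3 overlap on 70% of the smaller
# polygon's area, so they are matched under a 50% threshold.
a = Polygon([(0, 0), (1, 0), (1, 1), (0, 1)])
b = Polygon([(0.3, 0), (1.3, 0), (1.3, 1), (0.3, 1)])
print(matches(a, b))  # True
```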
Empirical matching resultsAs for the selected study area in Munich, the input OSM data contain 3,839 building footprints while the input CityGML data contain Figure 7: Schematic diagram of the four spatial matching relations, namely (a) 1:1 relation, (b) m:1 relation, (c) 1:n relation, and (d) m:n relation. 5,728 ground surfaces. Previous studies have considered a minimum threshold \(t\) of 30% is necessary to determine the matching relationship [7, 25]. Empirically, we tried several settings and chose to set the tolerance threshold \(t\) to 50% in this case study according to the performance on both OSM and CityGML data. In order to evaluate the correctness of spatial linking workflow, 50 randomly selected building polygons were manually examined, where all of them were correctly identified as matched or adjacent. Figure 10(a) shows the results of _Step 1 - spatial matching_ between CityGML and OSM polygons in the study area. The majority of CityGML ground surfaces are successfully matched with OSM polygons. The 1:1 relations account for 42.92% (2090 buildings). The 1:n relations (shown in Figure 10(c)(d)) make up 16.20% (789 buildings) while the m:1 relations only account for 5.87% (286 buildings) of the total in _Step 1_. This indicates that CityGML provides more detailed information, representing individual building accessories (e.g., staircases) as separate ground surfaces. Zero-to-one relations account for 21.17% (1031 buildings) in _Step 1_ due to the same reason. For instance, in Figure 11(a) and (b), the CityGML polygons pointed by the arrows should be included in their main buildings and linked with the corresponding OSM polygons. After _Step 2_, these unmatched CityGML polygons are defined as matched by their adjacent OSM polygons. As a result, the 0:1 relations decrease from 21.17% (1031 cases) to 5.54% (270 cases), which demonstrates the necessity of including the adjacent structures as _Step 2_. Additionally, there is a 12.83% (625 cases) occurrence of one-to Figure 8: Schematic diagram of the two adjacent identification situations in the OSM building _osm1_ perspective: (a) 1:1 relation with adjacency, and (b) 1:n relation with adjacency. Figure 9: Schematic diagram of the spatial match between OSM POIs and CityGML ground surfaces. zero relations, where certain OSM buildings are absent in the CityGML data. This is mainly due to the slightly larger coverage of the downloaded OSM data compared to the CityGML data (as demonstrated in Figure 10), ensuring that no CityGML buildings on the tile edges are missing in OSM. Ideally, overlapping polygons within a dataset's ground layer should be avoided. For instance, locations marked by arrows in Figure 11 (c) and (d) shouldn't serve as two buildings' foundations. The integration step is intentionally designed to account for such specialized relations. In this study area, only 49 cases of m:n relation (1%) were identified. As for _step 3_, within the specified study area, we have successfully matched 2,718 OSM POIs with CityGML buildings. Figure 11: Cases of 0:1 relation converted into adjacent relation (a-b) and cases of m:n relation (c-d). Figure 10: Distribution of (a) the spatial integration result in the case study and the cases of four common spatial relations, i.e., (b) 1:1, (c) 1:n, (d) m:1, and (e) adjacent relations. #### 4.3.2 Modeling the linking results into the KG Integration at the relational database level needs to be further upstreamed to the ontology and knowledge graph level. 
Firstly, in order to query OSM concepts, a supplementary ontology, namely the _LinkedGeoData (LGD)_ ontology, originally defined by [6], was adopted. LGD defines over 1,200 classes and 700 properties by leveraging the most ubiquitous tags present in OSM data. It enriches the CityGML KG with classes that represent OSM points of interest such as hotels, residential buildings, and primary highways, as well as properties such as business opening hours and websites. Secondly, the connections between the still disjoint CityGML and OSM sub-KGs have to be modeled at the KG level. Reusing existing OWL terms like owl:sameAs is insufficient, since we model not only identity or matches between individual buildings but also other relations such as adjacency, and the potential associations between buildings could be enriched further beyond that. Modeling the spatial matching results from the database in the KG necessitates the reification of the association between a CityGML building and an OSM building. In practice, this requires creating an additional class to represent this relationship, which we define as Association_CityGML_OSM. This reification allows further properties to be attached to an Association_CityGML_OSM instance, e.g. it can now model both matched buildings and adjacent buildings. An example of how an association is expressed in the KG is shown in Figure 12. The upper part illustrates the schema (i.e., classes, properties, and their relations), and the lower part shows concrete instances corresponding to the example from Figure 8 (a), where an OSM building lgdo:way/osm1 is linked to both a matching and an adjacent CityGML building surface, defined respectively by gmlid/bldg1 and gmlid/bldg2. Both the match and the adjacency are modeled as subproperties of the linkage.

## 5 Experiments

In this section, we conduct a series of experiments in order to evaluate the following aspects:

1. the expressiveness capabilities of the KG. We determine whether a KG constructed over CityGML data and further integrated with additional ad hoc data sources suffices to answer legitimate semantic queries designed by domain experts;
2. the performance of query evaluation with representative KG systems, including both VKG and MKG systems. This will serve to determine whether results can be retrieved within a time frame that domain experts would consider reasonable.

The experiments are reproducible by executing the respective queries and following the setup described in the online appendix11 as well as Appendix A.

Figure 12: Modelling CityGML and OSM Association in the KG

### Experimental setup

The experiments are conducted on a laptop with a 4-core CPU (Intel(R) Xeon(R) Gold 6154 @ 3.00GHz), 16GB RAM, and a 350GB SSD, running the Ubuntu operating system. The whole experiment environment has been set up as Docker containers that encapsulate all necessary software, which means the experiments can be conducted under any operating system. For storing and querying CityGML data, we use the Docker image of 3DCityDB v4.1.0, which comes with PostgreSQL v15 and PostGIS v3.3, corresponding to the latest versions available at the time.
Three KG systems that support GeoSPARQL have been selected for the experiments: one VKG system Ontop and two MKG systems (a.k.a triple stores) Apache Jena and GraphDB. For GraphDB and Jena, a preparatory step of materializing all the triples is necessary. We carry this out using Ontop, and then load the file into Apache Jena and GraphDB as an input. Further descriptions of the evaluated systems are provided below: * Ontop12 is an open-source software project that focuses on providing a platform for efficient querying of relational databases using Semantic Web technologies, specifically the RDF data model, SPARQL query language, OWL 2 QL ontology, and R2RML mapping language. Ontop also supports the GeoSPARQL query language over PostgreSQL/PostGIS database. We use Ontop v5.0.2 in this experiment. Footnote 12: [https://ontop-vkg.org/](https://ontop-vkg.org/) * Apache Jena13 is an open-source framework for Java that allows for reading, writing, and querying RDF graphs. Apache Jena Fuseki14 is a sub-project of Jena which is a SPARQL server, combined with a UI for admin and query. TDB is a component of Jena for RDF storage and query which can be used as a high-performance RDF store. Unlike Ontop, Apache Jena Fuseki does not handle any SPARQL-to-SQL translation or virtualization but rather utilizes RDF triples as input. GeoSPARQL and spatial index support are available via the jena-fuseki-geosparql extension. Note that an RDF dataset needs to be "wrapped" as a GeoSPARQL dataset since the default Apache Jena Fuseki installation does not provide support for GeoSPARQL query functionalities. We use Apache Jena v4.8.0 in this experiment. Footnote 13: [https://jena.apache.org/index.html](https://jena.apache.org/index.html) * GraphDB15 is an RDF store developed by Ontotext, which supports SPARQL 1.1. OWL reasoning and is compliant with W3C Standards. It is a materialization-based system in a similar sense to Apache Jena, but although commercial, it does offer a free limited version. For the purpose of our experiments, we use GraphDB 10.2.2, which supports all SPARQL 1.1 and GeoSPARQL functionalities. Footnote 15: [https://jgraphdb.ontotext.com/](https://jgraphdb.ontotext.com/) Figure 13: Study area. _Geographic Area of Interest._ The experiments are carried out in the central area of Munich, Germany (Figure 13 (a) and (b)), which is a construction-dense area and can provide an adequate gauge of query performance. Figure 13 (c) and (d) depict the test datasets in the area of interest from CityGML and OSM respectively. ### Expressiveness test Geospatial queries are used in many application scenarios, e.g., urban planning or management, disaster management, tourism, and energy (solar panels). In order to test the expressiveness of GeoSPARQL queries that can be formulated over the KGs constructed from CityGML and OSM data in this work, we have collected 10 queries from domain experts and tried to formalize them as GeoSPARQL queries. These queries encompass not only conventional question types about 3D buildings but also those designed to address pragmatic real-world demands. #### 5.2.1 Queries Queries Q1-Q5 represent basic information needs for 3D buildings: * Q1: Find the addresses of buildings with height above 30 meters * Q2: Find buildings with the address "Stephansplatz" * Q3: Find 10 buildings that have the maximum number of roof surfaces * Q4: Find roof surfaces of buildings over 30 meters * Q5: Find 3D geometries of buildings over 30 meters Queries 6-10 make use of both CityGML data and OSM data. 
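Before turning to the integrated queries, we note that all three systems expose a SPARQL endpoint over HTTP, so the queries in this section can also be submitted programmatically. The sketch below uses SPARQLWrapper and Query 1 from Appendix A; the endpoint URL is a placeholder whose concrete form depends on how Ontop, Jena Fuseki, or GraphDB is deployed.

```python
from SPARQLWrapper import SPARQLWrapper, JSON

# Placeholder endpoint: e.g. an Ontop SPARQL endpoint, a Fuseki dataset, or a
# GraphDB repository, depending on which system is being tested.
sparql = SPARQLWrapper("http://localhost:8080/sparql")
sparql.setReturnFormat(JSON)

# Query 1 from Appendix A: addresses of buildings higher than 30 m.
sparql.setQuery("""
PREFIX bldg: <http://www.opengis.net/citygml/building/2.0/>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
SELECT ?address_label WHERE {
  ?building bldg:address ?address_id .
  ?address_id rdfs:label ?address_label .
  ?building bldg:measuredHeight ?buildingHeight .
  FILTER(?buildingHeight > 30)
}
""")

for binding in sparql.query().convert()["results"]["bindings"]:
    print(binding["address_label"]["value"])
```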
Each case would encompass the possibility of being applied to a practical task involving the retrieval or analysis of real-world geospatial data. A summary of these queries are listed in Table 3. If a researcher aims to perform a geometric analysis across OSM and CityGML data for different height ranges or building usage types, they might potentially have the following query: * Q6: Find CityGML ground surfaces and OSM building polygons for all residential buildings. For tourists who are seeking a hotel with a superior city view, they might inquire: * Q7: Find hotels over 30 meters high In the context of emergency evacuation during a hurricane disaster, the inquiry might be posed as: * Q8: Find residential buildings over 30m high. Various roof types such as gabled roofs and hip roofs possess the potential for installing photovoltaic panels. In the studied dataset, the roof type codes in CityGML follow the German cadastre information ALKIS codelists16 for CityGML 2.0. This query could be framed as: * Q9: Find residential buildings with non-flat roofs. Within the scope of urban renewal, individuals could inquire about the structures that could be affected and the conceivable expense or workload that might be earmarked for demolition: * Q10: Find buildings along a certain road within 20m and calculate the total affected area in m\({}^{2}\). #### 5.2.2 Results All of the ten queries could be successfully formulated via the SPARQL query language. Below we provide the respective scripts for queries 9 and 10, while the remaining queries for our experiments can be found in Appendix A. The prefixes utilized are listed in Table 3 and refer to the base namespaces of the CityGML and LGD ontologies, where the remaining prefixes reference authoritative vocabularies such as RDFS and GeoSPARQL. _Query 9_ uses CityGML roof type codes in conjunction with their respective labels derived from ALKIS definitions and translated into English. \begin{table} \begin{tabular}{l l l l} \hline \hline & **CityGML** & **OSM** & **Filter** \\ \hline Q6 & Building Geometry & Residential Building & Building Height \\ Q7 & Building, Building Height & Hotel & Building Height \\ Q8 & Building, Building Height & Residential Building & Building Height \\ Q9 & Building, Roof Type & Residential Building & ALKIS RoofType \\ Q10 & Building & Highway & Buffer and Intersection \\ \hline \hline \end{tabular} \end{table} Table 2: Summary of the features used in Q6–Q10 \begin{table} \begin{tabular}{l l} \hline \hline Prefix & IRI Namespace \\ \hline : & [https://github.com/yuzzfeng/D2G2/citygml](https://github.com/yuzzfeng/D2G2/citygml)\# \\ bldg: & [http://www.opengis.net/citygml/building/2.0/](http://www.opengis.net/citygml/building/2.0/) \\ geo: & [http://www.opengis.net/ont/geosparql](http://www.opengis.net/ont/geosparql)\# \\ rdfs: & [http://www.w3.org/2000/01/rdf-schema](http://www.w3.org/2000/01/rdf-schema)\# \\ lgdo: & [http://linkedgeodata.org/ontology/](http://linkedgeodata.org/ontology/) \\ \hline \hline \end{tabular} \end{table} Table 3: List of prefixes used for SPARQL queries. ?osmclassnamerdf:type?residentialclasses. #Filternon-flatroofs ?citygmlentitybldg:roofSurface/bldg:roofType?roofType. FILTER(?roofType!="flatroof"). #Retrievegeometry ?citygmlentitybldg:boundedBy?citygmlsurface. ?citygmlsurfaceabldg:RoofSurface. ?citygmlsurfacegeo:hasGeometry/geo:asWKT?citygmlGeom. } _Query 10_: utilizes two GeoSPARQL functions geof:buffer and geof:sfIntersects to both buffer and intersect geometries. 
It provides an example of how powerful geospatial functions can also be applied to linked data. SELECT?citygmlGeom?citygmlGeomAreaSqm { #Filterhighwaysofinterest ?osmlinkagea:Association_OSM_Class. ?osmlinkage:hasosmid?osmentity. ?osmlinkage:hasosmclassid?osmclassname. VALUES?highwayclasses{lgdo:SecondaryHighwaylgdo:TertiaryHighway lgdo:HighwayServicelgdo:UnclassifiedHighway}. ?osmclassnamerdf:type?highwayclasses. ?osmentityrdfs:label?street_name. FILTER(CONTAINS(?street_name,"EliisenstraBe")). ?osmentitygeo:hasGeometry/geo:asWKT?osmGeom. #Setimpactareabuffer BIND(geof:buffer(?osmGeom,20,uom:metre)AS?impactArea).?citygmlentitybldg:boundedBy?citygmlsurface.?citygmlsurfaceabldg:GroundSurface.?citygmlsurfacegeo:hasGeometry/geo:asWKT?citygmlGeom. #Filterbuildingswithinrangeofimpactarea FILTER(geof:sfIntersects(?impactArea,?citygmlGeom))?citygmlsurfacegeo:hasGeometry/geo:hasMetricArea?citygmlGeomAreaSqm. } ### Performance test In some application scenarios, e.g. disaster management, expressiveness is not the sole aspect that matters, but the aspect of query execution time becomes critical. Given our dual virtual and materialized KG setup, we exploit this chance to assess whether our queries can be executed within a reasonable time but also how the KG setting might impact these results. The quantitative measures we analyze are database storage and query response time. #### 5.3.1 Database size Although evaluating the storage requirements of each MKG and VKG solution was not a primary goal of our analysis, it can provide a useful indicator of scalability. The CityGML and OSM data took up 501M of storage in PostgreSQL, which includes not only the data but also geospatial indices. Ontop acting as a lightweight layer does not require additional storage, whereas Apache Jena and GraphDB store materialized triples generated by Ontop. Both Apache Jena and GraphDB use a 1.1G Turtle file of RDF triples to store the materialized triples. Moreover, for both these MKG solutions, we cannot create a geospatial index because an index cannot be added to a polyhedral surface or geometry collection respectively, rendering this figure a lower bound. #### 5.3.2 Query response time The results in terms of query response time in seconds are provided in Table 4 and visualized in Figure 14. Most of the queries can be evaluated by all the systems. All of the queries can be executed in under 10 seconds, with only one query exceeding 3 seconds. Given the relatively quick response time we deem overall performance satisfactory for a subjective domain practitioner for all queries. Due to limitations in their handling of non-simple features such as polyhedral surfaces for Apache Jena and geometry collections in the case of GraphDB, it was not possible to run Q10 which involves a geospatial function. Adding a geospatial index is not possible for either of these software solutions, polyhedral surfaces are part of the CityGML dataset whereas geometry collections of the OSM dataset. PostGIS is a more mature geospatial extension, and PostGIS version 3 does not exhibit any issues with indexing these geometry datatypes. A further drawback in the comparison for GraphDB is that the level of precision is limited to one decimal figure. While there are variations across individual queries, performance is relatively similar for both the VKG and MKG solutions, no system outperforms across most queries. 
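For reference, measurements of this kind can be reproduced with a small client-side harness that times the full round trip of each query; the sketch below is one such harness and is not necessarily the procedure used to obtain Table 4. The endpoint URL is a placeholder, the number of repetitions is arbitrary, and the median over several runs is reported to dampen warm-up effects.

```python
import statistics
import time

from SPARQLWrapper import SPARQLWrapper, JSON

# Placeholder endpoint of the system under test (Ontop, Fuseki, or GraphDB).
ENDPOINT = "http://localhost:8080/sparql"

def run_timed(query: str, repetitions: int = 5) -> float:
    """Return the median wall-clock time (seconds) to evaluate `query`."""
    sparql = SPARQLWrapper(ENDPOINT)
    sparql.setReturnFormat(JSON)
    times = []
    for _ in range(repetitions):
        sparql.setQuery(query)
        start = time.perf_counter()
        sparql.query().convert()  # force the full result set to be retrieved
        times.append(time.perf_counter() - start)
    return statistics.median(times)

# Query 3 from Appendix A: ten buildings with the most roof surfaces.
q3 = """
PREFIX bldg: <http://www.opengis.net/citygml/building/2.0/>
SELECT ?building (COUNT(?surface) AS ?totalsurface) WHERE {
  ?building a bldg:Building .
  ?building bldg:boundedBy ?surface .
  ?surface a bldg:RoofSurface .
} GROUP BY ?building ORDER BY DESC(?totalsurface) LIMIT 10
"""
print(f"median response time: {run_timed(q3):.3f} s")
```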
RDF stores tend to outperform Ontop for integrated queries mostly due to the large number of UNION clauses needed to assemble potential matching OSM data for multi \begin{table} \begin{tabular}{c c c c} \hline \hline **Query** & **Ontop** & **Jena** & **GraphDB** \\ \hline Q1 & 0.411 & 0.235 & 0.3 \\ Q2 & 0.109 & 0.085 & 0.2 \\ Q3 & 0.178 & 1.285 & 0.5 \\ Q4 & 0.375 & 0.259 & 0.2 \\ Q5 & 0.209 & 0.278 & 0.4 \\ Q6 & 0.521 & 0.943 & 0.3 \\ Q7 & 0.592 & 0.441 & 0.1 \\ Q8 & 2.125 & 1.533 & 0.1 \\ Q9 & 2.259 & 2.007 & 0.6 \\ Q10 & 9.886 & NA & NA \\ \hline \hline \end{tabular} \end{table} Table 4: Query response time ple tag types i.e. node, way, relation and points of interest (e.g., ResidentialBuilding, House). GraphDB seems to display better performance compared to even Apache Jena for these large integrated scenarios. ### Qualitative comparison with pure relational databases In this section, we provide a qualitative comparison between SPARQL queries over the KGs, and equivalent SQL queries over relational database storing the original data. In general, since KGs represent a higher level of abstraction with the terminology used in the domain, it is easier to formulate queries in SPARQL, and the resulting SPARQL queries are more understandable. Generating simple queries which rely solely on CityGML would be comparable in both SQL and SPARQL. A user would for example run a query retrieving building addresses by simply joining two tables from the SQL schema. The task becomes considerably more difficult when an additional data source is integrated such as OSM. For example, we provide below the SQL translation of Query 10 formulated previously. We note that this SQL query is automatically generated by Ontop (slightly simplified for readability), but this would be rather close to what a human expert could produce. It would be extremely difficult and laborious for a human user to construct such a complex query. ``` SELECTST_ASTEXT(ST_TRANSFORM(v8."geometry1m27",4326))AS"v1", ST_ASTEXT(ST_TRANSFORM(v8."geometry1m60",4326))AS"v4" FROM(SELECTDISTINCTv1."cityobject_id"AS"cityobject_idim12", v1."geometry"AS"geometry1m27",v2."geometry"AS"geometry1m60", v1."id"AS"idim11",v2."id"AS"idim44",v3."osm_id"AS"osm_idim451", ST_ASTEXT(v4."geom")AS"v0",CAST(v5."building_id"ASTEXT)AS"v2", CAST(v5."cityobject_id"ASTEXT)AS"v3" FROM"surface_geometry"v1,"surface_geometry"v2,"public"."classes"v3, "public"."classes"v4, (SELECTv1."building_id"AS"building_id",v2."cityobject_id"AS"cityobject_id" Figure 14: Query time performance by Q1–Q10 FROM "thematic_surface" v1 LEFT JOIN "surface_geometry" v2 ON v1."lod2_multi_surface_id" = v2."root_id" v5, (SELECT v1."id" AS "id" FROM "cityobject" v1 LEFT JOIN "objectclass" v2 ON v1."objectclass_id"=v2."id" WHERE v2."classname"="BuildingGroundSurface') v6 WHERE (ST_INTERSECTS(ST_BUFFER(CAST(ST_ASTEXT(v4."geom") AS GEOGRAPHY),'20'), CAST(ST_TRANSFORM(v1."geometry",4326) AS GEOGRAPHY)) AND v5."cityobject_id" = v1."cityobject_id" AND v5."cityobject_id" = v6."id" AND v1."cityobject_id" = v2."cityobject_id" AND v3."osm_id" AND ('W' = v3."osm_type" AND 'SecondaryHighway' = v3."class") AND 'W' = v4."osm_type") ) v8 ``` ## 6 Discussion and Future Work This paper presents a comprehensive framework to analyze 3D City data via KGs. It provides a methodology to integrate CityGML data with other geospatial data sources and utilize the resulting KG to answer user queries. 
The experiments confirm that the expressive queries can be formulated over the KGs, and can be efficiently evaluated with state-of-the-art KG systems. While the obtained results are promising, below we discuss some limitations that arose while running the experiments, and possible future improvements. _Support for Complex Geometries._ Many KG systems have limitations with respect to the handling of more complex geometries such as polyhedral surfaces and geometry collections, which could not be parsed correctly and missed support of respective geospatial index. Specifically, this meant that our MKG approach (GraphDB and Apache Jena) suffered with respect to the execution of Q10 that involves computation with complex geometries. Instead, the VKG system Ontop can leverage mature and well-established relational spatial databases such as PostGIS, and thus avoided such issues. _CityGML ontology._ While the CityGML ontology developed by University of Geneva was selected as the most renowned choice available, its future adoption as the central ontology for 3D building analysis displays conspicuous limitations. The most evident limitation is caused by its inconsistencies in the duplication of the definition of data and object properties which were detailed in section 4 and in [17]. _CityGML 3.0._ The CityGML ontology in this work supports only the CityGML specification up to version 2. The latest version of CityGML, version 3, not only introduces new concepts such as time-dependent features but also revises the existing specification _e.g.,_ dropping LoD4 [15]. A new ontology will need to be designed or selected for future semantic analysis of CityGML data. Moreover, the version of 3DCityDB system for CityGML 3.0 is still under development at the time of writing. In order to support CityGML 3.0 in our framework, the VKG mapping also need to be revised with respect to the new ontology and 3DCityDB schema. _CityGML heterogeneity._ The CityGML data produced by and with the specifications of the government of Bavaria, Germany was used for this analysis. However, during the study, we found that different countries might have different standards for encoding their CityGML data which makes issues such as SRID differences arise. For example, Estonia also provides a geometry element with its address, and it links each building not to the corresponding 3D solid but to individual surface geometries (failing to provide any solid geometries)17. Hence, the design of VKG mappings to link the ontology with the same 3DCityDB physical storage is not guaranteed to be robust for analyses across countries. The mapping should be tested for robustness by repeating these experiments with datasets from as many countries as possible to strive to reach a unified design. Footnote 17: [https://geoportaal.maaamet.ee/eng/Download-3D-data-p837.html](https://geoportaal.maaamet.ee/eng/Download-3D-data-p837.html) _CityGML data paucity_. Although a significant degree of expressiveness was successfully tested by leveraging the data on CityGML buildings, there is data paucity for both higher levels of detail such as LOD3, and other non-building items such as vegetation, waterbodies, bridges, etc. The lack of this data makes a significant portion of the 3DCityDB SQL schema redundant for almost all publicly available CityGML datasets. _Integrating further data source_. Our paradigm and evaluation sought to measure CityGML and OSM data integration. 
While OSM is a popular source of geospatial data, our analysis might not be generalized to more unique geospatial domains. Expressiveness and the respective overall performance might diminish with the introduction of additional types and combinations of geospatial data. Tasks such as the complexity of matching objects might correspondingly become more complex and have an impact on both ontology integration and query design. These possible risks can be only addressed through further experimental research. ## Acknowledgements This research has been partially supported by German Research Foundation (DFG) and the Autonomous Province of Bolzano-Bozen through its Joint Project - Dense and Deep Geographic Virtual Knowledge Graphs for Visual Analysis - D2G2. ## Appendix A Appendix A Query 1Find the addresses of buildings with height above 30 meters. ``` SELECT?address_label {?buildingbldg:address?address_id.?address_idrdfs:label?address_label.?buildingbldg:measuredHeight?buildingHeight. FILTER(?buildingHeight>30). } ``` **Query 2**Find buildings with the address "Stephansplatz" SELECT?building?address_label { ?building bldg:address?address_id. ?address_id rdfs:label?address_label. FILTER(CONTAINS(?address_label, "Stephansplatz")). } **Query 3**Find 10 buildings that have the maximum number of roof surfaces SELECT?building (COUNT(?surface) AS?totalsurface) { ?building a bldg:Building. ?building bldg:boundedBy?surface. ?surface a bldg:RoofSurface. } GROUP BY?building ORDER BY DESC(?totalsurface) LIMIT 10 **Query 4**Find roof surfaces of buildings over 30 meters SELECT?citygmlGeom { ?citygmlentity bldg:measuredHeight?citygmlBuildingHeight. FILTER(?citygmlBuildingHeight > 30). ?citygmlentity bldg:boundedBy?citygmlsurface. ?citygmlsurface a bldg:RoofSurface. ?citygmlsurface geo:hasGeometry/geo:asWKT?citygmlGeom. BIND("chlorophyll,0.5" AS?citygmlGeomColor) # Green } **Query 5**Find 3D geometries of buildings over 30 meters SELECT?citygmlGeom { ?citygmlentity bldg:measuredHeight?citygmlBuildingHeight. FILTER(?citygmlBuildingHeight > 30). ?citygmlentity bldg:lod2Solid?solid. ?solid geo:asWKT?citygmlGeom. BIND("chlorophyll,0.5" AS?citygmlGeomColor) # Green **Query 6**Find CityGML ground surfaces and OSM building polygons for all residential buildings SELECT?citygmlGeom?osmGeom { ?linkage a :Association_CityGML_OSM. ?linkage :matchOSM?osmentity. ?linkage :matchCityGML/:mapSurface/bldg:bounds?citygmlentity. ?citygmlentity bldg:measuredHeight?citygmlBuildingHeight. FILTER(?citygmlBuildingHeight > 30). ?citygmlentity bldg:boundedBy?citygmlsurface. ?citygmlsurface a bldg:groundSurface. ?citygmlsurface geo:hasGeometry/geo:asWKT?citygmlGeom. BIND("chlorophyll,0.5" AS?citygmlGeomColor) # Green ?osmentity geo:hasGeometry/geo:asWKT?osmGeom. BIND("jet,0.8" AS?osmGeomColor) # Red } **Query 7**Find hotels over 30 meters high SELECT?citygmlentity?buildingHeight?hotelname?citygmlGeom { ?linkage a :Association_CityGML_OSM. ?linkage :matchOSM?osmentity. ?linkage :matchCityGML/:mapSurface/bldg:bounds?citygmlentity. ?osmlinkage a :Association_OSM_Class. ?osmlinkage :hasosmid?osmentity. ?osmlinkage :hasosmclassid?osmclassname. ?osmclassname a ldgo:Hotel. OPTIONAL {?osmentity rdfs:label?hotelname.} ?citygmlentity bldg:measuredHeight?buildingHeight. FILTER(?buildingHeight > 30). ?citygmlentity bldg:lod2Solid?solid. ?solid geo:asWKT?citygmlGeom. } **Query 8**Find residential buildings over 30m high SELECT?citygmlentity?citygmlGeom { ?linkage a :Association_CityGML_OSM. ?linkage :matchOSM?osmentity. 
?linkage :matchCityGML/:mapSurface/bldg:bounds ?citygmlentity.
?osmlinkage a :Association_OSM_Class.
?osmlinkage :hasosmid ?osmentity.
?osmlinkage :hasosmclassid ?osmclassname.
# Known residential categorization
VALUES ?residentialclasses { lgdo:Residential lgdo:ResidentialHome
    lgdo:BuildingResidential lgdo:ApartmentBuilding lgdo:House }.
?osmclassname rdf:type ?residentialclasses.
?citygmlentity bldg:measuredHeight ?buildingHeight.
FILTER(?buildingHeight > 30).
?citygmlentity bldg:boundedBy ?citygmlsurface.
?citygmlsurface a bldg:GroundSurface.
?citygmlsurface geo:hasGeometry/geo:asWKT ?citygmlGeom.
BIND("chlorophyll,0.5" AS ?citygmlGeomColor) # Green
}
2305.15585
Chromatic number is not tournament-local
Scott and Seymour conjectured the existence of a function $f \colon \mathbb{N} \to \mathbb{N}$ such that, for every graph $G$ and tournament $T$ on the same vertex set, $\chi(G) \geqslant f(k)$ implies that $\chi(G[N_T^+(v)]) \geqslant k$ for some vertex $v$. In this note we disprove this conjecture even if $v$ is replaced by a vertex set of size $\mathcal{O}(\log{\lvert V(G)\rvert})$. As a consequence, we answer in the negative a question of Harutyunyan, Le, Thomass\'{e}, and Wu concerning the corresponding statement where the graph $G$ is replaced by another tournament, and disprove a related conjecture of Nguyen, Scott, and Seymour. We also show that the setting where chromatic number is replaced by degeneracy exhibits a quite different behaviour.
António Girão, Kevin Hendrey, Freddie Illingworth, Florian Lehner, Lukas Michel, Michael Savery, Raphael Steiner
2023-05-24T21:41:18Z
http://arxiv.org/abs/2305.15585v2
# Chromatic number is not tournament-local ###### Abstract. Scott and Seymour conjectured the existence of a function \(f\colon\mathbb{N}\to\mathbb{N}\) such that, for every graph \(G\) and tournament \(T\) on the same vertex set, \(\chi(G)\geqslant f(k)\) implies that \(\chi(G[N_{T}^{+}(v)])\geqslant k\) for some vertex \(v\). In this note we disprove this conjecture even if \(v\) is replaced by a vertex set of size \(\mathcal{O}(\log|V(G)|)\). As a consequence, we answer in the negative a question of Harutyunyan, Le, Thomasse, and Wu concerning the corresponding statement where the graph \(G\) is replaced by another tournament, and disprove a related conjecture of Nguyen, Scott, and Seymour. We also show that the setting where chromatic number is replaced by degeneracy exhibits a quite different behaviour. A.G. and F.I. were supported by EPSRC grant EP/V007327/1. R.S. was supported by an ETH Zurich Postdoctoral Fellowship. K.H. was supported by the Institute for Basic Science (IBS-R029-C1). ## 1. Introduction The question of what structures must appear in graphs of large chromatic number is one of the most fundamental in modern graph theory. One obvious reason for a graph to have high chromatic number is the presence of a large clique, but constructions from the 1940s and 50s of, for example, Tutte [10] and Zykov [11] demonstrate the existence of triangle-free graphs of arbitrarily large chromatic number. In particular, there are graphs with arbitrarily large chromatic number in which every neighbourhood is independent (and hence 1-colourable). Berger, Choromanski, Chudnovsky, Fox, Loebl, Scott, Seymour, and Thomasse [1] conjectured that the analogous phenomenon does not occur in tournaments. This was confirmed recently in a beautiful paper of Harutyunyan, Le, Thomasse, and Wu [1] in which they showed that for every \(k\) there exists an \(f(k)\) such that every tournament with chromatic number1 at least \(f(k)\) contains a vertex \(v\) such that \(\chi(T[N^{+}(v)])\geqslant k\). Footnote 1: The _chromatic number_, \(\chi(T)\), of a tournament \(T\) is the least \(k\) for which there is a partition of \(V(T)\) into \(k\) parts each of which induces a transitive (acyclic) subtournament of \(T\). Separately, Scott and Seymour [14] (see also [1, 15]) conjectured a similar result for a graph and a tournament on the same vertex set. **Conjecture 1** (Scott and Seymour).: _For every positive integer \(k\) there exists a \(\chi\) such that, for every graph \(G\) with \(\chi(G)\geqslant\chi\) and every tournament \(T\) on the same vertex set, there is a vertex \(v\) such that \(\chi(G[N_{T}^{+}(v)])\geqslant k\)._ This conjecture is supported by the observation [13] that the statement holds when chromatic number is replaced by fractional chromatic number (see Section 4 for more details). The main result of this note is a disproof of Conjecture 1 for \(k\geqslant 3\). In fact, we prove something stronger: \(G\) and \(T\) may be chosen such that the out-neighbourhood2 of any set of size at most \(\frac{\log|V(T)|}{2\chi^{2}}\) is bipartite. Footnote 2: The _out-neighbourhood_, \(N^{+}(S)\), of a set \(S\) is \(\bigcup_{v\in S}N^{+}(v)\). This might contain vertices of \(S\). 
**Theorem 2**.: _For every positive integer \(\chi\) there are arbitrarily large \(N\) for which there is a graph \(G\) and a tournament \(T\) on the same \(N\)-vertex set such that \(\chi(G)=\chi\) and, for every set \(U\) of at most \(\frac{\log N}{2\chi^{2}}\) vertices, \(\chi(G[N_{T}^{+}(U)])\leqslant 2\)._ We will show that \(G\) can in fact be taken to be triangle-free which will be useful for our proof of Corollary 3. We make two remarks concerning the optimality of Theorem 2. * It is not possible to replace \(2\) by \(1\) in the bound on the chromatic number of the out-neighbourhood, even when \(U\) consists of a single vertex. Indeed, suppose that \(G[N_{T}^{+}(v)]\) is independent for every vertex \(v\). Let \(xy\) be an edge of \(G\). No out-neighbourhood of a vertex of \(T\) can contain both \(x\) and \(y\), so \(\{x,y\}\) dominates \(T\). But then \(G\) is \(3\)-colourable: one colour for each of \(N_{T}^{+}(x)\) and \(N_{T}^{+}(y)\), and a final colour for whichever of \(x\) and \(y\) has not been coloured. * The bound on the size of \(U\) is very close to being best possible. Let \(S\) be a dominating set of \(T\) of size at most \(\lceil\log_{2}N\rceil\) (such a set can be constructed greedily). Then \(N^{+}(S)\) contains all vertices of \(G\) except perhaps one and so, for any \(0\leqslant\ell\leqslant\chi-2\), there is some \(U\subseteq S\) of size at most \(\lceil\log_{2}(N)/\lfloor\frac{\chi-2}{\ell}\rfloor\rceil\) with \(\chi(G[N_{T}^{+}(U)])>\ell\). Theorem 2 has the following corollary, which resolves in a strong sense a question of Harutyunyan, Le, Thomasse, and Wu [10] concerning the analogous problem for two tournaments on the same vertex set. **Corollary 3**.: _For every positive integer \(\chi\) there are arbitrarily large \(N\) for which there are tournaments \(T_{1}\) and \(T_{2}\) on the same \(N\)-vertex set such that \(\chi(T_{1})=\chi\) and, for every set \(U\) of at most \(\frac{\log N}{8\chi^{2}}\) vertices, \(\chi(T_{1}[N_{T_{2}}^{+}(U)])\leqslant 2\)._ In turn, Corollary 3 has the following immediate consequence which disproves a conjecture of Nguyen, Scott, and Seymour [20]. **Corollary 4**.: _For every positive integer \(\chi\) there are arbitrarily large \(N\) for which there is an \(N\)-vertex tournament \(T\) and disjoint subsets \(A,B\subseteq V(T)\) such that \(\chi(T[A]),\chi(T[B])\geqslant\chi\) and the following holds. For all \(A^{\prime}\subseteq A\) and \(B^{\prime}\subseteq B\) of size at most \(\frac{\log N}{32\chi^{2}}\), both \(\chi(A\cap N^{+}(B^{\prime}))\) and \(\chi(B\cap N^{+}(A^{\prime}))\) are at most \(2\)._ Finally, we include two results for the setting where chromatic number is replaced by degeneracy (or equivalently maximum average degree). Since every graph of high chromatic number has high degeneracy, Theorem 2 shows that for every positive integer \(d\) there is a graph \(G\) and a tournament \(T\) on the same vertex set such that the degeneracy of \(G\) is at least \(d\), but the subgraph of \(G\) induced on each out-neighbourhood of \(T\) is bipartite. Our next result strengthens this statement by ensuring that the graph induced on the out-neighbourhood is \(1\)-degenerate. 
**Proposition 5**.: _For every positive integer \(k\), there is a \(k\)-regular graph \(G\) and a tournament \(T\) on the same vertex set such that \(G[N_{T}^{+}(v)]\) is a forest for every vertex \(v\)._ Despite this result, and in contrast to Theorem 2, if \(G\) has high degeneracy and \(T\) is a tournament on the same vertex set, then there is a two-vertex set whose out-neighbourhood has high degeneracy. **Theorem 6**.: _For every positive integer \(k\), every graph \(G\) with degeneracy at least \(12k\), and every tournament \(T\) on the same vertex set, there exist vertices \(x,y\) such that \(G[N^{+}(\{x,y\})]\) has degeneracy at least \(k-1\)._ ## 2. Proofs of the main theorems In this section we present the proof of Theorem 2. Our construction is based on the classical _Schrijver graphs_[12]. **Definition 7**.: _Let \(k\geqslant 1\) and \(n\geqslant 2k\) be integers. The Kneser graph \(\mathsf{KG}(n,k)\) is the graph whose vertex set is \(\binom{[n]}{k}\) and in which two distinct sets \(S_{1},S_{2}\in\binom{[n]}{k}\) are adjacent if and only if \(S_{1}\cap S_{2}=\varnothing\). The Schrijver graph \(\mathsf{SG}(n,k)\) is the induced subgraph of \(\mathsf{KG}(n,k)\) whose vertex set consists of all stable sets in \(\binom{[n]}{k}\). Here, a set \(S\in\binom{[n]}{k}\) is called stable if it does not include two cyclically consecutive3 elements of \([n]\)._ Footnote 3: By this we mean a pair \(i,i+1\) where \(1\leqslant i<n\) or the pair \(n,1\). Kneser [10] conjectured that the chromatic number of \(\mathsf{KG}(n,k)\) is \(n-2k+2\). This conjecture remained open for two decades and was first proved by Lovasz [11] using homotopy theory (see also Barany [12] and Greene [13] for very short proofs). Shortly afterwards, Schrijver [14] introduced the graphs \(\mathsf{SG}(n,k)\) and proved that \(\mathsf{SG}(n,k)\) is vertex-critical with chromatic number \(\chi(\mathsf{SG}(n,k))=\chi(\mathsf{KG}(n,k))=n-2k+2\). To prove Theorem 2, we will show that for every integer \(\chi\geqslant 3\) and every sufficiently large integer \(k\) there exists a tournament \(T\) on the same vertex set as \(\mathsf{SG}(2k+\chi-2,k)\) such that for every \(U\subseteq V(T)\) which is sufficiently small, the out-neighbourhood of \(U\) in \(T\) induces a bipartite subgraph of \(\mathsf{SG}(2k+\chi-2,k)\). As \(\chi(\mathsf{SG}(2k+\chi-2,k))=\chi\), this will prove Theorem 2. In constructing our tournament, we rely on the following combinatorial statement which follows directly from the existence of tournaments with high domination number. **Lemma 8**.: _For every positive integer \(t\) there is some \(n_{0}\) such that for all integers \(n\geqslant n_{0}\) there exists a function \(f\colon\binom{[n]}{t}\to 2^{[n]}\) with the following two properties:_ * _for every_ \(A,B\in\binom{[n]}{t}\)_, at least one of_ \(A\cap f(B)\) _and_ \(B\cap f(A)\) _is empty, and_ * _for every collection_ \((A_{i})_{i\in I}\) _of at most_ \(\frac{\log n}{2t}\) _sets from_ \(\binom{[n]}{t}\)_,_ \[\bigcap_{i\in I}f(A_{i})\neq\varnothing.\] Proof.: By a classical result of Erdos [1] (see [1] for an explicit construction), for every sufficiently large \(n\) there is an \(n\)-vertex tournament in which every set of at most \(\log(n)/2\) vertices is dominated by a vertex outside the set. Let \(n\) be large enough that this result holds and that \(\log(n)/2\geqslant t\), and let \(T\) be the corresponding tournament. 
Identify \(V(T)\) with \([n]\) and, for \(A\in\binom{[n]}{t}\), define \(f(A)\) as \[f(A)\coloneqq\{v\in[n]\setminus A\colon v\text{ dominates }A\}.\] We claim \(f\) satisfies the two properties of the lemma statement. Firstly, let \(A,B\in\binom{[n]}{t}\) and suppose for a contradiction that \(A\cap f(B)\) and \(B\cap f(A)\) are both non-empty. Then there is some \(a\in A\setminus B\) that dominates \(B\) and some \(b\in B\setminus A\) that dominates \(A\). This implies that \(a\) and \(b\) are distinct, and the edge between them is oriented in both directions, which is a contradiction. Next, let \((A_{i})_{i\in I}\) be a collection of at most \(\frac{\log n}{2t}\) sets from \(\binom{[n]}{t}\). Let \(A=\bigcup_{i\in I}A_{i}\) which is a set of size at most \(\log(n)/2\). By the definition of \(T\) some vertex \(x\not\in A\) dominates \(A\), but then \(x\in\bigcap_{i\in I}f(A_{i})\), as required. Before giving the proof of Theorem 2, let us fix the following notation: for a set \(S\in\binom{[n]}{k}\), we denote by \(\mathsf{gap}(S)\) the set of "left-elements" of cyclically consecutive pairs of \([n]\) that are disjoint from \(S\). Concretely, \(r\in\mathsf{gap}(S)\) if and only if \(\{r,r+1\}\cap S=\varnothing\), where addition is to be understood modulo \(n\) (that is, \(n+1\) is identified with \(1\)). Pause to note that every stable set \(S\subseteq[n]\) of size \(k\) (that is, every vertex of the Schrijver graph \(\mathsf{SG}(n,k)\)) satisfies \(|\mathsf{gap}(S)|=n-2k\). Every \(S\in\binom{[n]}{k}\) can be recovered from \(\mathsf{gap}(S)\) and so \(|V(\mathsf{SG}(n,k))|\leqslant\binom{n}{n-2k}\). Proof of Theorem 2.: The result is trivial for \(\chi\leqslant 2\), so let \(\chi\geqslant 3\) be an integer, \(t\coloneqq\chi-2\), and \(n_{0}\) be as given by Lemma 8. Pick some positive integer \(k>t\) such that \(2k+t\geqslant n_{0}\), set \(n\coloneqq 2k+t\), and set \(G\coloneqq\mathsf{SG}(n,k)\). Note that \(G\) is triangle-free, has chromatic number \(\chi\) and, for any \(S\in V(\mathsf{SG}(n,k))\), \(\mathsf{gap}(S)\in\binom{[n]}{t}\). Hence, \(N\coloneqq|V(\mathsf{SG}(n,k))|\leqslant\binom{n}{t}\leqslant n^{t}\). Let \(f\colon\binom{[n]}{t}\to 2^{[n]}\) be the function from Lemma 8. Define a directed graph \(D\) on the same vertex set as \(G\) that has a directed edge from a vertex \(S_{1}\) to a vertex \(S_{2}\) if and only if \(f(\mathsf{gap}(S_{1}))\cap\mathsf{gap}(S_{2})=\varnothing\). Note, by the first property of \(f\) guaranteed by Lemma 8, that any two distinct vertices of \(D\) are connected by an arc in at least one of the two possible directions. Hence, there exists a spanning subdigraph \(T\) of \(D\) which is a tournament. Let \(U\) be any set of at most \(\frac{\log N}{2\chi^{2}}\leqslant\frac{\log N}{2t^{2}}\leqslant\frac{\log n}{2t}\) vertices. To finish the proof we will show that the out-neighbourhood \(N_{D}^{+}(U)\) induces a bipartite subgraph of \(G\) (and hence the same is true for the out-neighbourhood \(N_{T}^{+}(U)\subseteq N_{D}^{+}(U)\) in \(T\)). Write \(U=\{S_{1},\ldots,S_{|U|}\}\). By the second property of \(f\) guaranteed by Lemma 8, there is some \(r\in[n]\) common to all the \(f(\mathsf{gap}(S_{i}))\). By the definition of \(D\), any \(S\in N_{D}^{+}(U)\) satisfies \(r\notin\mathsf{gap}(S)\) and so \(S\cap\{r,r+1\}\neq\varnothing\). 
Colouring all the vertices in the out-neighbourhood that include the element \(r\) with one colour and all the remaining vertices (which necessarily contain \(r+1\)) with another colour provides a proper \(2\)-colouring of \(G[N_{D}^{+}(S)]\). This concludes the proof of the theorem. We can convert the graph \(G\) from Theorem 2 to a tournament: pick any linear order on the vertices of \(G\) and construct a tournament \(T_{1}\) whose back-edge graph is \(G\). We will show that \(\chi(G)\) and \(\chi(T_{1})\) are closely related, and thus prove Corollary 3. Proof of Corollary 3.: Let \(K\coloneqq 2\chi\) and \(n\) be sufficiently large. By Theorem 2 there is a triangle-free graph \(G\) with chromatic number \(K\) and a tournament \(T\) on the same \(N\)-vertex set such that, for every set \(U\) of at most \(\frac{\log N}{8\chi^{2}}\) vertices, \(\chi(G[N_{T}^{+}(U)])\leqslant 2\). Let \((V(G),\prec)\) be a linear order and define a tournament \(T_{1}\) with vertex set \(V(G)\) as follows: there is an arc from vertex \(u\) to vertex \(v\) in \(T_{1}\) if either \(v\prec u\) and \(uv\in E(G)\) or \(u\prec v\) and \(uv\notin E(G)\). We further set \(T_{2}\coloneqq T\) and claim that the pair \((T_{1},T_{2})\) of tournaments satisfies the statement of the corollary. Let \(W\subseteq V(G)\) be any set of vertices where \(T_{1}[W]\) is transitive. Note that if \(v_{1}v_{2}v_{3}\) is a path in \(G\) (so \(v_{1}v_{3}\notin E(G)\) by triangle-freeness) and \(v_{1}\prec v_{2}\prec v_{3}\), then \(v_{1}v_{2}v_{3}\) is a cyclic triangle in \(T_{1}\) and so \(v_{1}\), \(v_{2}\), \(v_{3}\) are not all in \(W\). In particular, the partition \(W=W_{1}\cup W_{2}\) where \[W_{1} \coloneqq\{w\in W\colon\text{there is $w^{\prime}\in W$ such that $w^{\prime}\prec w$ and $w^{\prime}w\in E(G)$}\},\] \[W_{2} \coloneqq\{w\in W\colon\text{there is no $w^{\prime}\in W$ such that $w^{\prime}\prec w$ and $w^{\prime}w\in E(G)$}\},\] gives a proper \(2\)-colouring of the vertices of \(G[W]\). Since this holds for any \(W\) where \(T_{1}[W]\) is transitive, we have \(\chi(T_{1})\geqslant\chi(G)/2=\chi\). To finish the proof, consider any set \(U\) of at most \(\frac{\log N}{8\chi^{2}}=\frac{\log N}{2K^{2}}\) vertices. Note that \(G[N_{T}^{+}(U)]=G[N_{T_{2}}^{+}(U)]\) is bipartite. Let \(I_{1}\), \(I_{2}\) be two disjoint independent sets in \(G\) such that \(I_{1}\cup I_{2}=N_{T_{2}}^{+}(U)\). Now consider any two vertices \(u,v\in I_{j}\) for some \(j\in\{1,2\}\) and note that since \(uv\notin E(G)\), there is an arc from \(u\) to \(v\) in \(T_{1}\) if and only if \(u\prec v\). Hence \(T_{1}[I_{1}]\) and \(T_{1}[I_{2}]\) are transitive tournaments and so \(\chi(T_{1}[N_{T_{2}}^{+}(U)])\leqslant 2\). To prove Corollary 4, we can now take the two tournaments \(T_{1}\) and \(T_{2}\) from Corollary 3 and combine them appropriately: we simply orient the edges within \(A\) and \(B\) according to \(T_{1}\), and the edges between \(A\) and \(B\) according to \(T_{2}\). Proof of Corollary 4.: Let \(\chi\) be a positive integer. By Corollary 3, for arbitrarily large \(N\) there exist tournaments \(T_{1}\) and \(T_{2}\) on the same \(N\)-vertex set \(V\) with \(\chi(T_{1})=2\chi\) and \(\chi(T_{1}[N_{T_{2}}^{+}(U)])\leqslant 2\) for every \(U\subseteq V\) of size at most \(\frac{\log N}{32\chi^{2}}\). 
Partition \(V\) into sets \(A\) and \(B\) such that \(\chi(T_{1}[A]),\chi(T_{1}[B])\geqslant\chi\), then construct a new tournament \(T\) on \(V\) by orienting the edge between \(u,v\in V\) to agree with \(T_{1}\) if \(u,v\in A\) or \(u,v\in B\), and orienting it to agree with \(T_{2}\) otherwise. It is not difficult to see that \(T\) satisfies the conditions of the corollary. ## 3. Degeneracy In this section we consider the setting in which degeneracy replaces chromatic number. We first show that there is a tournament on the vertex set of the \(k\)-dimensional hypercube such that each out-neighbourhood induces a forest in the hypercube, proving Proposition 5. Therefore, having high degeneracy does not imply that some out-neighbourhood has high degeneracy. Proof of Proposition 5.: For each \(k\), let \(G_{k}\) be the hypercube on \(2^{k}\) vertices. We will actually prove something stronger than Proposition 5, namely that the _closed_ in- and out-neighbourhoods4\(G_{k}[N_{T}^{-}[v]]\) and \(G_{k}[N_{T}^{+}[v]]\) are both forests for every vertex \(v\in V(G_{k})\). We proceed by induction on \(k\). For \(k=1\) the result is immediate, so given \(k\geqslant 1\) let \(T_{k}\) be a tournament on \(V(G_{k})\) with the desired property. We will view \(G_{k+1}\) as the union of two copies of \(G_{k}\), say \(G_{k}^{1}\) and \(G_{k}^{2}\), connected via the matching consisting of all edges of the form \(x^{1}x^{2}\), where \(x^{1}\in V(G_{k}^{1})\) and \(x^{2}\in V(G_{k}^{2})\) denote the copies of a vertex \(x\in V(G_{k})\). For each \(S\subseteq V(G_{k})\), we will write \(S^{(1)}\) and \(S^{(2)}\) for the corresponding sets of vertices in \(G_{k}^{1}\) and \(G_{k}^{2}\) respectively. Footnote 4: The _closed in-neighbourhood_ of a vertex \(v\) in tournament \(T\) is \(N_{T}^{-}[v]=\{v\}\cup N_{T}^{-}(v)\). The closed out-neighbourhood is defined analogously. Now define a tournament \(T_{k+1}\) on vertex set \(V(G_{k+1})\) as follows. First orient the edges within each of \(V(G_{k}^{1})\) and \(V(G_{k}^{2})\) according to \(T_{k}\), in the canonical way. Then for each \(x\in V(G_{k})\), orient every edge between \(x^{1}\) and \(N_{T_{k}}^{-}[x]^{(2)}\) away from \(x^{1}\) and every edge between \(x^{1}\) and \(N_{T_{k}}^{+}(x)^{(2)}\) towards \(x^{1}\). This completes the construction of \(T_{k+1}\). Observe that for each \(x\in V(G_{k})\), the edges between \(x^{2}\) and \(N_{T_{k}}^{-}(x)^{(2)}\) are oriented away from \(x^{2}\) and the edges between \(x^{2}\) and \(N_{T_{k}}^{+}[x]^{(2)}\) are oriented towards \(x^{2}\). Let \(x\in V(G_{k})\) and note that \(N_{T_{k+1}}^{+}[x^{1}]=N_{T_{k}}^{+}[x]^{(1)}\cup N_{T_{k}}^{-}[x]^{(2)}\). By the induction hypothesis, \(N_{T_{k}}^{+}[x]\) and \(N_{T_{k}}^{-}[x]\) both induce forests in \(G_{k}\), so \(N_{T_{k}}^{+}[x]^{(1)}\) and \(N_{T_{k}}^{-}[x]^{(2)}\) do the same in \(G_{k+1}\). Since there is exactly one edge in \(G_{k+1}\) between these two sets, namely \(x^{1}x^{2}\), the graph \(G_{k+1}[N_{T_{k+1}}^{+}[x^{1}]]\) is acyclic. Analogous arguments show that \(G_{k+1}[N_{T_{k+1}}^{-}[x^{1}]]\), \(G_{k+1}[N_{T_{k+1}}^{+}[x^{2}]]\), and \(G_{k+1}[N_{T_{k+1}}^{-}[x^{2}]]\) are all acyclic too. Since every vertex of \(G_{k+1}\) is of the form \(x^{1}\) or \(x^{2}\) for some \(x\in V(G_{k})\), this completes the proof. However, we will now show that, unlike with chromatic number, having high degeneracy implies that there are two vertices \(x\) and \(y\) such that the out-neighbourhood of \(\{x,y\}\) has high degeneracy. 
Proof of Theorem 6.: Let \(H\) be a bipartite subgraph of \(G\) with \(\delta(H)\geqslant 6k\) and let \(A\cup B\) be a bipartition of \(H\) with \(|A|\geqslant|B|\). Define \(T_{1}=T[A]\) and \(T_{2}=T[B]\). Pick \(x\in A\) satisfying \(|N_{T_{1}}^{+}[x]|\geqslant|A|/2\) and define \(A^{\prime}=N_{T_{1}}^{+}[x]\). Now let \(H_{1}=H[A^{\prime},B]\). It can be shown using linear programming duality that every tournament has a probability distribution on its vertex set which assigns weight at least \(1/2\) to every closed in-neighbourhood (see [1, Sec. 1.2]). Let \(w\) be such a probability distribution for \(T_{2}\). Take a random vertex \(y\in B\) according to \(w\) and note that \(\mathbb{P}(u\in N_{T_{2}}^{+}[y])\geqslant 1/2\) for every \(u\in B\). Let \(H_{2}=H_{1}[A^{\prime},N_{T_{2}}^{+}[y]]\) so that for every \(e\in E(H_{1})\), \(\mathbb{P}[e\in E(H_{2})]\geqslant 1/2\). We have \(\mathbb{E}[e(H_{2})]\geqslant e(H_{1})/2\geqslant 3k|A^{\prime}|\geqslant k(|A^{ \prime}|+|B|)\), from which it follows, since \(|N_{T_{2}}^{+}[y]|\leqslant|B|\), that there exists \(y\in B\) such that \(e(H_{2})\geqslant k|V(H_{2})|\). Removing \(x\) and \(y\) from \(H_{2}\), we obtain a subgraph \(G^{\prime}\) of \(G[N_{T}^{+}(\{x,y\})]\) with \(e(G^{\prime})\geqslant(k-2)|V(G^{\prime})|\). Thus \(G^{\prime}\), and therefore also \(G[N^{+}(\{x,y\})]\), has degeneracy greater than \(k-2\). ## 4. Fractional chromatic number We remind the reader that a graph \(G\) has fractional chromatic number \(\chi_{f}(G)\leqslant r\) if and only if there is a probability distribution on the independent sets of \(G\) such that the random independent set \(I\) obtained and every vertex \(v\) satisfy \(\mathbb{P}(v\in I)\geqslant 1/r\). In this section we demonstrate that the modified version of Conjecture 1 in which chromatic number is replaced by fractional chromatic number is true, as observed by Scott and Seymour [13] without proof. **Theorem 9**.: _For \(c\geqslant 1\), let \(G\) be a graph and \(T\) be a tournament on the same vertex set such that \(\chi_{f}(G[N_{T}^{+}(v)])\leqslant c\) for every vertex \(v\). Then \(\chi_{f}(G)\leqslant 2(c+1)\)._ Proof.: Let \(w\) be a probability distribution on the vertex set of \(T\) that assigns weight at least \(1/2\) to every closed in-neighbourhood. For each vertex \(v\), since \(\chi_{f}(G[N_{T}^{+}(v)])\leqslant c\), there is a random independent set \(\boldsymbol{I}_{v}\) of \(G[N_{T}^{+}(v)]\) such that \(\mathbb{P}(u\in\boldsymbol{I}_{v})\geqslant 1/c\) for every \(u\in N_{T}^{+}(v)\). We sample a random independent set \(\boldsymbol{I}\) of \(G\) as follows. First pick a vertex \(\boldsymbol{v}\) according to \(w\). Then with probability \(1/(c+1)\) take \(\boldsymbol{I}=\{\boldsymbol{v}\}\) and with probability \(c/(c+1)\) take \(\boldsymbol{I}=\boldsymbol{I}_{\boldsymbol{v}}\). Note that, for any vertex \(u\), if \(\boldsymbol{v}\in N^{-}[u]\), then \(u\in\boldsymbol{I}\) with probability at least \(1/(c+1)\). Hence, by the defining property of \(w\), \(\mathbb{P}(u\in\boldsymbol{I})\geqslant 1/(2c+2)\) and so \(\chi_{f}(G)\leqslant 2(c+1)\). ## 5. Closing remarks We have been unable to determine whether high chromatic number forces an out-neighbourhood with high degeneracy, and we would be interested to know if this is the case. 
**Question 10**.: _Does there exist, for each integer \(d\), an integer \(\chi\) such that for every graph \(G\) with \(\chi(G)\geqslant\chi\) and every tournament \(T\) on the same vertex set, there is a vertex \(v\) for which \(G[N_{T}^{+}(v)]\) has degeneracy at least \(d\)?_ We do, however, suspect that this is true for \(d=2\), that is, it should be possible to force some out-neighbourhood to contain a cycle. **Conjecture 11**.: _For every graph \(G\) with sufficiently large chromatic number, and every tournament \(T\) on the same vertex set, there exists a vertex \(v\) such that \(G[N_{T}^{+}(v)]\) contains a cycle._ We have shown that for certain very structured tournaments \(T\) there are graphs on the same vertex set with large chromatic number, in which every out-neighbourhood of \(T\) induces a bipartite subgraph. We conjecture that (with high probability) we cannot replace \(T\) with a random tournament. **Conjecture 12**.: _For every positive integer \(k\), there exists a \(\chi\) such that if \(T\) is the uniformly random tournament on vertex set \([N]\), then with high probability (as \(N\to\infty\)), for every graph \(G\) on \([N]\) with \(\chi(G)\geqslant\chi\) there is a vertex \(v\in[N]\) for which \(\chi(G[N_{T}^{+}(v)])\geqslant k\)._ Finally, as remarked after the statement of Theorem 2, if \(\chi(G)\geqslant\chi\), then there is a collection of at most \(\lceil\log_{2}(N)/\lfloor\chi/2-1\rfloor\rceil\) out-neighbourhoods whose union induces a subgraph of chromatic number at least \(3\). It would be interesting to know if \(o(\log(N)/\chi)\) (as \(\chi\to\infty\)) out-neighbourhoods suffice here. In particular, we conjecture the following. **Conjecture 13**.: _There exists \(f(N)\) satisfying \(f(N)=o(\log N)\) such that for every \(N\)-vertex graph \(G\) with \(\chi(G)\geqslant f(N)\), and every tournament \(T\) on the same vertex set, there is a vertex \(v\) for which \(\chi(G[N_{T}^{+}(v)])\geqslant 3\)._ **Acknowledgements.** We would like to thank Sang-il Oum, Alex Scott, David Wood, and Liana Yepremyan for organising the April 2023 MATRIX-IBS Structural Graph Theory Downunder III workshop where we began this work. Our thanks to Paul Seymour for helpful comments on the paper.
2303.09222
The Dunnett procedure with possibly heterogeneous variances
Most comparisons of treatments or doses against a control are performed by the original Dunnett single step procedure \cite{Dunnett1955} providing both adjusted p-values and simultaneous confidence intervals for differences to the control. Motivated by power arguments, unbalanced designs with higher sample size in the control are recommended. When a higher variance occurs in the treatment of interest or in the control, the related per-pairs power is reduced, as expected. However, if the variance is increased in a non-affected treatment group, e.g. in the highest dose (which is highly significant), the per-pairs power is also reduced in the remaining treatment groups of interest. That is, decisions about the significance of certain comparisons may be seriously distorted. To avoid this nasty property, three modifications for heterogeneous variances are compared by a simulation study with the original Dunnett procedure. For small and medium sample sizes, a Welch-type modification can be recommended. For medium to high sample sizes, the use of a sandwich estimator instead of the common mean square estimator is useful. Related CRAN packages are provided. Summarizing, we recommend not using the original Dunnett procedure routinely but replacing it with a robust modification. Particular care is needed in small sample size studies.
Ludwig A. Hothorn, Mario Hasler
2023-03-16T10:56:24Z
http://arxiv.org/abs/2303.09222v1
# The Dunnett procedure with possibly heterogeneous variances ###### Abstract Most comparisons of treatments or doses against a control are performed by the original Dunnett single step procedure [1] providing both adjusted \(p\)-values and simultaneous confidence intervals for differences to the control. Motivated by power arguments, unbalanced designs with higher sample size in the control are recommended. When a higher variance occurs in the treatment of interest or in the control, the related per-pairs power is reduced, as expected. However, if the variance is increased in a non-affected treatment group, e.g. in the highest dose (which is highly significant), the per-pairs power is also reduced in the remaining treatment groups of interest. That is, decisions about the significance of certain comparisons may be seriously distorted. To avoid this nasty property, three modifications for heterogeneous variances are compared by a simulation study with the original Dunnett procedure. For small and medium sample sizes, a Welch-type modification can be recommended. For medium to high sample sizes, the use of a sandwich estimator instead of the common mean square estimator is useful. Related CRAN packages are provided. Summarizing, we recommend not using the original Dunnett procedure routinely but replacing it with a robust modification. Particular care is needed in small sample size studies. ## 1 Introduction Both clinical multi-arm trials, e.g. dose finding phase IIb studies, and non-clinical bioassays commonly use a placebo or zero-dose control for the comparisons against treatment or dose groups. Commonly, the original Dunnett single step procedure [1] is used. The question arises how robust this procedure is in the case of variance heterogeneity with still normally distributed errors. Several modifications are available, where primarily the summarizing concept of the any-pairs power (i.e., per-pairs power under \(H_{0}\)) was used to characterize the different power losses, and primarily the control of the familywise error rate (FWER) was considered. Summarizing, Dunnett's original test is conservative when low variances occur in groups with large sample size (with a related power loss), but it is unacceptably liberal when high variances occur in treatments with small sample size (with a seeming, but unacceptable, power increase). Appropriate modifications control the FWER at the price of a power loss compared to the unacceptable power of the original under these conditions.
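The modifications compared in this paper are available through the CRAN packages referred to above. As a purely illustrative aside (not the authors' implementation, and in Python rather than R), the following sketch computes per-comparison Welch-type \(t\) statistics with Satterthwaite degrees of freedom for each treatment against the common control and applies a simple Sidak correction as a stand-in for the multivariate-\(t\) adjustment of the Dunnett procedure; all function names and parameter values are our own assumptions.

```python
import numpy as np
from scipy import stats

def welch_vs_control(control, treatments):
    """Welch-type comparisons of each treatment against a common control.

    Illustrative sketch only: per-comparison Welch t statistics with
    Satterthwaite degrees of freedom and a Sidak-type correction as a
    simple stand-in for the Dunnett multivariate-t adjustment.
    """
    x0 = np.asarray(control, float)
    n0, m0, v0 = len(x0), x0.mean(), x0.var(ddof=1)
    k = len(treatments)
    results = []
    for x in map(np.asarray, treatments):
        n, m, v = len(x), x.mean(), x.var(ddof=1)
        se2 = v / n + v0 / n0
        t = (m - m0) / np.sqrt(se2)
        # Satterthwaite approximation to the degrees of freedom
        df = se2**2 / ((v / n)**2 / (n - 1) + (v0 / n0)**2 / (n0 - 1))
        p_raw = 2 * stats.t.sf(abs(t), df)
        p_adj = min(1.0, 1 - (1 - p_raw)**k)   # Sidak-type adjustment
        results.append((m - m0, t, df, p_adj))
    return results

rng = np.random.default_rng(1)
control = rng.normal(0.0, 1.0, size=20)
treats = [rng.normal(0.5, sd, size=10) for sd in (1.0, 1.0, 3.0)]  # unequal variances
for diff, t, df, p in welch_vs_control(control, treats):
    print(f"diff={diff:+.2f}  t={t:+.2f}  df={df:4.1f}  p_adj={p:.3f}")
```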
2302.06238
From Small to Large: Clos Network for Scaling All-Optical Switching
To cater to the demands of our rapidly growing Internet traffic, backbone networks need high-degree reconfigurable optical add/drop multiplexers (ROADMs) to simultaneously support multiple pairs of bi-directional fibers on each link. However, the traditional ROADM architecture based on the Spanke network is too complex to be directly scaled up to construct high-degree ROADMs. In addition, the widely deployed Spine-Leaf datacenter networks (DCNs) based on electrical switches consume too much power and exhibit high packet latency. Because of these issues, Clos networks are considered as promising alternatives for constructing large-scale ROADMs and all-optical DCNs. In this article, we look at a next-generation Clos-based ROADM architecture and show that it indeed provides better blocking performance with lower element and fiber complexities compared with a traditional Spanke-based ROADM architecture. We also discuss the application of a Clos network in all-optical DCNs to show that it can be used to effectively construct large-scale DCNs with significantly greater flexibility in supporting a variety of multicast services and in combining different network topologies.
Jiemin Lin, Zeshan Chang, Liangjia Zong, Sanjay K. Bose, Tianhai Chang, Gangxiang Shen
2023-02-13T10:23:37Z
http://arxiv.org/abs/2302.06238v1
# From Small to Large: Clos Network for Scaling All-Optical Switching ###### Abstract To cater to the demands of our rapidly growing Internet traffic, backbone networks need high-degree reconfigurable optical add/drop multiplexers (ROADMs) to simultaneously support multiple pairs of bi-directional fibers on each link. However, the traditional ROADM architecture based on the Spanke network is too complex to be directly scaled up to construct high-degree ROADMs. In addition, the widely deployed Spine-Leaf datacenter networks (DCNs) based on electrical switches consume too much power and exhibit high packet latency. Because of these issues, Clos networks are considered as promising alternatives for constructing large-scale ROADMs and all-optical DCNs. In this article, we look at a next-generation Clos-based ROADM architecture and show that it indeed provides better blocking performance with lower element and fiber complexities compared with a traditional Spanke-based ROADM architecture. We also discuss the application of a Clos network in all-optical DCNs to show that it can be used to effectively construct large-scale DCNs with significantly greater flexibility in supporting a variety of multicast services and in combining different network topologies. Reconfigurable Optical Add/Drop Multiplexer (ROADM), Wavelength Selective Switch (WSS), Spanke Network, Clos Network, Spine-Leaf Network ## I Introduction With the advent of the Fifth Generation (5G) era, and the development of several key enabling technologies, applications such as Mobile Edge Computing (MEC), Artificial Intelligence (AI), Internet of Things (IoT) and Internet of Vehicles (IoV) are becoming increasingly popular. This has not only led to the rapid growth of Internet traffic, but has also fueled an increasing demand for high Quality of Service (QoS) with low latency (e.g., 0.5-1 ms latency [1]), high reliability, low power consumption and ubiquitous service. Supporting these features not only require the backbone/backhaul networks to provide stable, high-capacity pipes, but also require datacenter networks (DCNs) with computing resources which operate with sufficiently high speed and high reliability and have low power consumption. In the backbone networks, the Dense Wavelength-Division Multiplexing (DWDM) technology has already been extensively employed. To efficiently leverage the DWDM technology, Reconfigurable Optical Add/Drop Multiplexers (ROADMs) are also widely deployed to enable flexible all-optical switching. Meanwhile, with the rapid growth of Internet traffic, more fiber pairs (instead of a single pair of bi-directional fibers) are lit on each link in today's networks. This leads to the requirement of ROADMs with higher fiber degrees even though their nodal degrees may be unchanged. Moreover, it is anticipated that this trend will continue over time with increasing traffic and more fibers being deployed. This therefore raises the important question of _how the traditional ROADM architecture should evolve to support higher fiber degrees to sustain this rapid growth of Internet traffic_. On the other hand, in data center networks (DCNs), the (folded) Clos (Spine-Leaf) networks have been employed as the switching architecture for decades. However, today's DCNs significantly rely on electrical switches, which leads to several disadvantages, like small capacity, high power consumption, and long latency. 
To overcome these disadvantages, an optical switching technology may be a promising alternative to replace electrical switches or co-work with them in next-generation DCNs. However, when using all-optical switching technology, an open question for all-optical DCNs will be _whether the Clos switching architecture will remain competitive for these all-optical DCNs_. This paper tries to answer the above two questions. Specifically, we first elaborate on the traditional ROADM architecture and its associated features. Based on this, we further discuss new ROADM architectures that are being evolved based on the Clos network. We consider various Clos-based ROADMs with different optical switching elements and compare their costs and performance. Finally, we discuss and evaluate the potential of applying the Clos network to optical DCNs. ## II Clos Network in Backbone Networks: Clos-based ROADM Architecture ### _ROADM Features_ ROADM is a key switching component in today's backbone optical networks. It consists of two main parts, i.e., line side and add/drop side (A/D side) [2]. Each _line side_ consists of a pair of ingress and egress modules. An ingress module distributes optical connections (i.e., wavelengths) to different egress/drop modules, and an egress module aggregates optical connections (i.e., wavelengths) from different ingress/add modules. Each _add/drop side_ consists of a pair of add/drop modules. An add module relays optical connections from local terminals to different egress modules, and a drop module distributes optical connections from different ingress modules to different local terminals. Here, each local terminal carries one wavelength. ROADMs are expected to support the three key features of being _colorless_, _directionless_, and _contentionless_, which are defined as follows [3]. _Colorless_ means that each _add/drop_ port of a ROADM should not be wavelength-selective, and any wavelength can be added/dropped at an _add/drop_ port. _Directionless_ means that each _add/drop_ port is not nodal degree selective, and any optical connection added on a port can be directed to any egress module, and vice versa. _Contentionless_ means that, in a ROADM, establishing optical connections between add/drop ports and ingress/egress modules will not prevent other optical connections from being set up, and that if there is a free add/drop port and a free wavelength on an ingress/egress module, an optical connection can always be set up between them. ### _Spanke-based ROADMs_ A ROADM supporting the _colorless_, _directionless_, and _contentionless_ features is called a CDC ROADM. Fig. 1 shows the basic architecture of a CDC ROADM (see the left-hand side) [2-3], which is made up of switching components, i.e., Wavelength Selective Switches (WSSs). On the line side, \(1\times K\) WSSs and \(K\times 1\) WSSs are deployed as the ingress/egress modules, respectively. On the add/drop side, \(M\times N\) WSSs are employed to add/drop wavelengths. The ingress/egress modules and add/drop modules are fully connected by short-reach fibers in the backplane of the ROADM. Although a CDC ROADM is often displayed in the format as shown on the left-hand side of Fig. 1, it is essentially a Spanke ROADM as shown in the middle of Fig. 1 if a transformation is made for its backplane [4]. 
In the new Spanke format, the line side of a ROADM is related to degrees, based on which there are two different types of degrees, i.e., _directional_ degree and _fiber degree_. Each directional degree corresponds to a geographic degree of a ROADM node in a network topology, while the fiber degree corresponds to a pair of bi-directional fibers contained on a directional degree. Since there can be multiple pairs of bi-directional fibers on a directional degree, multiple fiber degrees can share a common directional degree. We define \(s(D,L)\) as a Spanke-ROADM containing \(D\) directional degrees with \(L\) fiber degrees on each directional degree. The Spanke-ROADM can strictly ensure internal non-blocking through its fully-connected backplane. Nonetheless, it would require a huge number of short-reach fibers and will have a low scalability when a large-scale ROADM is constructed. This disadvantage would become even more severe when higher-degree ROADMs are required. ### _Clos-based ROADM Architecture_ Seventy years ago, C. Clos designed a useful switching network, called a Clos network [5], for the telephone switching network. It allows the use of small-scale (strictly non-blocking) switching elements to construct a large-scale switching network, while still guaranteeing the strictly non-blocking feature. The Clos network consists of three switch stages, i.e., input, middle, and output stages. It offers better scalability when constructing a large-scale switch compared with the Spanke network. 
However, to construct a Clos network, \(M\times N\) switching elements are required. In the past, \(M\times N\) WSS technologies were immature and there were no commercial \(M\times N\) WSSs. Today, as \(M\times N\) WSSs are gradually becoming mature [6-7], we can consider replacing \(1\times K\) WSSs with \(M\times N\) WSSs for constructing larger-scale ROADMs.
Fig. 1: Architectures and complexities of CDC-ROADM, Spanke-ROADM, and Clos-ROADM.
The right-hand side of Fig. 1 illustrates a ROADM based on the Clos network (Clos-ROADM). The Clos-ROADM consists of an ingress stage, a middle stage, and an egress stage. Two neighboring stages are interconnected by a fully connected network using short-reach fibers. Switching elements in the ingress and egress stages constitute the line and A/D sides of the ROADM, and switching elements in the middle stage relay the ingress and egress switch stages and provide different routes for connections established between the two stages. For a Clos-ROADM with \(D\) directional degrees and \(L\) fiber degrees, the ingress and egress stages require arrays of \(L\times M\) (or \(M\times L\)) WSSs and the middle stage requires an array of \(D\times D\) WSSs. As in [9], we represent a Clos-ROADM containing \(M\) middle-stage switching elements and \(D\) directional degrees with \(L\) fiber degrees as \(\nu(M,L,D)\). Recently, there has been increasing interest in how to construct ROADMs based on the Clos network in both academia [8-9] and industry [10-11]. In [8], an initial Clos-ROADM was proposed. In [9], strictly non-blocking conditions for WSS-based Clos-ROADMs were derived, which provides a theoretical foundation for the Clos-ROADMs. In [10-11], top industrial vendors paid special attention to the potential of the Clos-ROADM and verified its performance based on simulations. ### _Performance Comparison between Spanke-ROADM and Clos-ROADM_ We compare Clos-ROADMs with Spanke-ROADMs in terms of their respective element complexity, fiber complexity, and blocking performance. **Element and Fiber Complexities**: We first consider the aspects of element and fiber complexities. The element complexity refers to the number of elements required in a ROADM, and the fiber complexity refers to the number of fibers required in a ROADM. In a \(s(D,L)\) Spanke-ROADM, \(2\cdot L\) WSSs are required for the \(L\) pairs of bi-directional fibers in each directional degree, and therefore, the total number of WSSs required is \(2\cdot L\cdot D\) for a Spanke-ROADM with \(D\) directional degrees. In contrast, in a \(\nu(M,L,D)\) Clos-ROADM, \(2\cdot D\) WSSs are required in the ingress and egress stages when there are \(D\) directional degrees, and \(M\) WSSs are required in the middle stage. Thus, the total number of WSSs required is \(2\cdot D+M\). Note that here we only count the number of switching elements, but do not consider the difference between \(1\times K\) and \(M\times N\) switching elements, though the latter can be more expensive than the former. In a \(s(D,L)\) Spanke-ROADM, one fiber degree requires two \(1\times(D-1)\cdot L\) WSSs for incoming connections and outgoing connections (note that WSSs on the same directional degree are not inter-connected to each other). Thus, each fiber degree requires \((D-1)\cdot L\) fibers. Since there are \(D\cdot L\) fiber degrees, the total number of fibers required in this architecture is \((D^{2}-D)\cdot L^{2}\). 
In contrast, in a \(\nu(M,L,D)\) Clos-ROADM, one ingress switching element needs \(M\) fibers to connect with the middle stage, and so does one egress switching element. Therefore, the total number of fibers required in this architecture is \(2\cdot L\cdot M\). For a Clos-ROADM, its element and fiber complexities are both related to \(M\), i.e., the number of middle-stage switching elements. Since the Spanke-ROADM is strictly non-blocking for each wavelength (i.e., spatially strictly non-blocking), for a fair comparison we consider a Clos-ROADM that is also strictly non-blocking for each wavelength. The spatially strictly non-blocking condition for the Clos-ROADM is \(M>2\cdot L-1\), so we take \(M=2\cdot L\) for the following comparison as a matter of convenience. As shown in the bottom of Fig. 1, the fiber complexity of a Clos-ROADM is \(O(L^{2})\), while the fiber complexity of a Spanke-ROADM is \(O(D^{2}\cdot L^{2})\). Thus, the fiber complexity of a Spanke-ROADM is much higher (\(D^{2}\) times) than that of a Clos-ROADM. Similarly, the element complexity of a Clos-ROADM is \(O(D+L)\), while that of a Spanke-ROADM is \(O(D\cdot L)\), which is therefore much higher than the former. In conclusion, a Clos-ROADM demonstrates significantly greater scalability than a Spanke-ROADM for constructing high-degree ROADMs. **Blocking Performance**: We also compare the blocking performance of the two types of ROADM architecture. As an example, consider a ROADM with 10 directional degrees and 10 fiber degrees, supporting 5 wavelengths in each fiber. We evaluate the connection blocking performance of the ROADMs under a dynamic traffic load (in Erlang). Specifically, under the dynamic traffic load, connection requests arrive following a Poisson distribution and the holding time of each established service connection follows a negative exponential distribution. Service connections are established between any pair of fiber degrees in different directional degrees. A total of \(10^{6}\) arrived connection requests are simulated, and the connection blocking probability is found as the ratio of the total number of blocked connection requests to the total number of arrivals of connection requests. The offered traffic load for the simulation is 2 Erlang per fiber degree. To construct such a ROADM, we need a \(s(10,10)\) Spanke-ROADM or a \(\nu(M,10,10)\) Clos-ROADM. Based on Table I, the Spanke-ROADM would need 200 \(1\times 90\) WSSs and 9,000 fibers for this. In comparison, a Clos-ROADM would need 20 \(10\times M\) (or \(M\times 10\)) WSSs, \(M\)\(10\times 10\) WSSs, and \(20\times M\) fibers. Here, \(M\) is a variable that affects the blocking performance of the Clos-ROADM. In [10], \(M\) is recommended to be in the range of \([L,1.3\cdot L]\). In this study, we set \(M=L\) as a performance bound, since it corresponds to the condition of a reconfigurable non-blocking Clos-ROADM [8]. 
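As a quick, purely illustrative check of the element and fiber counts used above (it is not part of the original study), the following short script evaluates the formulas \(2\cdot L\cdot D\) and \((D^{2}-D)\cdot L^{2}\) for the Spanke-ROADM and \(2\cdot D+M\) and \(2\cdot L\cdot M\) for the Clos-ROADM; the function names are our own.

```python
# Element and fiber counts for the two architectures, following the formulas above
# (Spanke: 2*L*D WSSs, (D^2 - D)*L^2 fibers; Clos: 2*D + M WSSs, 2*L*M fibers).
def spanke_counts(D, L):
    return {"wss": 2 * L * D, "fibers": (D * D - D) * L * L}

def clos_counts(D, L, M):
    return {"wss": 2 * D + M, "fibers": 2 * L * M}

D, L = 10, 10
print(spanke_counts(D, L))          # {'wss': 200, 'fibers': 9000}
print(clos_counts(D, L, M=6))       # {'wss': 26, 'fibers': 120}
print(clos_counts(D, L, M=2 * L))   # spatially strictly non-blocking case M = 2*L
```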
Fig. 2 shows the simulation results of the two ROADM architectures, where the dashed line corresponds to the performance of the Clos-ROADM and the solid line corresponds to the performance of the Spanke-ROADM. The blue line indicates the blocking probability, corresponding to the left y-axis of the figure, and the red line indicates the number of fibers, corresponding to the right y-axis of the figure. It is noted that, as \(M\) increases, the blocking performance of the Clos-ROADM improves rapidly, and the blocking probability of the Clos-ROADM is already very close to that of the Spanke-ROADM at \(M=6\), which corresponds to 20 \(10\times 6\) WSSs, 6 \(10\times 10\) WSSs, and 120 fibers. This demonstrates that, in addition to significantly saving on the WSSs used, the Clos-ROADM provides large savings on the number of fibers needed (more than 98%) compared with a Spanke-ROADM. This supports our observation that using a Clos-ROADM is not only highly desirable because of its much lower element and fiber complexity than a Spanke-ROADM, but also that this advantage becomes even more significant when a larger ROADM needs to be constructed. ## III Clos-ROADM with Different Middle-Stage Switches The previous section showed the overall performance benefit of a Clos-ROADM over a Spanke-ROADM using WSSs as key switching elements. In this section, we consider other available optical switching elements which may be considered as alternatives for the middle-stage switches of a Clos-ROADM. These may further improve the performance of a Clos-ROADM and also reduce its overall cost. ### _Tunable Wavelength Converter-based Clos-ROADM_ For simplicity, we will call the Clos-ROADM studied in the previous section a _WSS Clos-ROADM_. This is subject to the wavelength continuity constraint, which requires that a connection between a pair of input and output ports must use the same wavelength on each passed fiber. The wavelength continuity constraint is critical to a backbone optical network since such a network is typically topologically sparse. In [9], it is proved that, to satisfy the wavelength continuity constraint and achieve the strictly non-blocking condition, the total number of wavelengths supported by the Clos network needs to be at least twice the number of wavelengths on each line-side port of a ROADM. This therefore poses considerable pressure on the WSS element to support many wavelengths as, otherwise, the number of wavelengths supported on each line-side port would become very limited. To reduce the number of wavelengths required to be supported by a Clos network, we may additionally incorporate wavelength conversion capability in the middle stage of the Clos-ROADM as shown in the left top of Fig. 3 (the "TWC-WSS" module). We call this architecture a _TWC-WSS Clos-ROADM_. This is implemented by adding tunable wavelength convertor (TWC) modules at the input ports of the middle-stage WSSs. Here, each TWC module contains a \(1\times K\) de-multiplexer to demultiplex wavelengths, a \(1\times K\) coupler to multiplex wavelengths, and multiple tunable wavelength convertors, each of which can convert wavelengths independently. The detail of these TWCs can also be found in [12]. By introducing these TWC modules, the wavelength continuity constraint in the Clos network can be fully relaxed. Moreover, if an optical network is deployed with this type of ROADM, the wavelength continuity constraint itself can be relaxed in the network. This would significantly improve wavelength assignment flexibility and spectrum resource utilization in the overall network. ### _Arrayed Waveguide Grating-based Clos-ROADM_ Since WSSs are expensive, a Clos-ROADM with many WSSs would have a high system cost. To reduce the system cost while maintaining switching flexibility, we may employ low-cost switching elements to implement the middle stage of the Clos-ROADM. 
An Arrayed Waveguide Grating (AWG) may be a good candidate for this as it has a good potential to fulfill the middle-stage switching functionality at lower cost while guaranteeing flexible wavelength-routing capability. The right top of Fig. 3 shows three potential modules for a Clos-ROADM with AWGs and TWCs deployed in the middle stage. The "AWG" module in Fig. 3 uses AWGs to form the middle switching stage (called an _AWG Clos-ROADM_). However, this architecture cannot achieve a blocking performance close to the WSS Clos-ROADM since AWGs are passive and are not able to switch wavelengths on demand. To improve the wavelength switching flexibility of the middle stage, another option is to use AWGs with the TWC modules. This is expected to improve the blocking performance since TWC modules can change wavelengths. The "TWC-AWG" module in Fig. 3 adds TWC modules before the input ports of the AWGs; we call this a _TWC-AWG Clos-ROADM_. The "TWC-AWG-TWC" module in Fig. 3 further adds TWC modules after the output ports of the AWGs and is called a _TWC-AWG-TWC Clos-ROADM_. Comparing these three AWG-based architectures, there is a tradeoff between system cost and wavelength-switching flexibility. An AWG Clos-ROADM is the cheapest, but the least flexible, while a TWC-AWG-TWC Clos-ROADM is the most flexible, but also the most expensive.
Fig. 2: \(s(10,10)\) vs. \(\nu(M,10,10)\).
Fig. 3: Clos-ROADMs with different middle switching elements.
### _Blocking Performance_ We evaluate the blocking performance of the proposed Clos-ROADM architectures, including the two WSS-based Clos-ROADM (i.e., WSS Clos-ROADM and TWC-WSS Clos-ROADM) and three AWG-based Clos-ROADM (i.e., AWG Clos-ROADM, TWC-AWG Clos-ROADM, and TWC-AWG-TWC Clos-ROADM) architectures. The simulation assumptions are as follows. We use \(\nu(5,5,5)\) as the basic Clos-ROADM architecture. The Clos-ROADM supports 5 wavelengths in each fiber. The offered traffic load follows the Erlang assumption, i.e., the connection request arrivals between each pair of input-output fibers follow a Poisson process and the holding time of each established connection follows a negative exponential distribution. The offered traffic load between any input-output port pair is the same. In addition, two special scenarios are considered as benchmarks. One is the traditional CDC-ROADM (Spanke-ROADM), which achieves the best performance among today's ROADMs. The other is a _theoretical limit_, which is calculated based on the following formulae. \[E_{B}(\rho,w)=\left(\rho^{w}/w!\right)/\left(\sum_{k=0}^{w}\rho^{k}/k!\right) \tag{1}\] \[\begin{cases}B_{i}=E_{B}(\rho,w)\\ B_{o}=E_{B}(\rho(1-B_{i}),w)\end{cases} \tag{2}\] \[B=1-(1-B_{i})(1-B_{o}) \tag{3}\] Here, since the incoming traffic is assumed to follow the Erlang distribution, we use the well-known Erlang-B formula (1) to calculate the blocking probability with traffic load \(\rho\) and number of available wavelengths \(w\). The best performance that a ROADM can achieve is when connection blocking is only due to the lack of free ports, but not due to the internal blocking of the ROADM switching fabric. For this, we can use (2) to calculate the blocking probabilities of the input and output ports (\(B_{i}\) and \(B_{o}\)), and finally find the theoretical blocking probability limit of the ROADM using (3). 
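Formulas (1)-(3) are straightforward to evaluate numerically; the following short script is our own illustrative sketch (not the simulation code used in this article) computing the Erlang-B blocking probability and the resulting theoretical limit.

```python
from math import factorial

def erlang_b(rho, w):
    """Erlang-B blocking probability E_B(rho, w), as in Eq. (1)."""
    num = rho**w / factorial(w)
    den = sum(rho**k / factorial(k) for k in range(w + 1))
    return num / den

def theoretical_limit(rho, w):
    """Port-blocking-only limit combining Eqs. (2) and (3)."""
    b_i = erlang_b(rho, w)
    b_o = erlang_b(rho * (1 - b_i), w)
    return 1 - (1 - b_i) * (1 - b_o)

print(theoretical_limit(rho=2.0, w=5))  # e.g. 2 Erlang offered load, 5 wavelengths
```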
Fig. 4 shows the blocking performance of the two WSS-based Clos-ROADMs (see the blue lines). It is noted that the WSS Clos-ROADM can achieve the same blocking performance as the traditional CDC-ROADM, while the TWC-WSS Clos-ROADM can reach the full theoretical limit of blocking performance. This is achieved because of the additional flexibility provided by the TWC modules. Fig. 4 also shows the blocking performance of the AWG-based Clos-ROADMs (see the black lines). It is noted that the blocking performance of the AWG Clos-ROADM is not as good as that of the WSS Clos-ROADM. This is because AWGs are inherently less flexible than WSSs. Moreover, the performance improvement by adding TWCs only at the input stage of the AWGs is fairly small. This is still attributed to the bottleneck of the middle-stage AWG. However, the blocking performance of the AWG-based Clos-ROADM can be significantly improved when TWC modules are added at both the input and output ports of the AWGs. This is because the configuration of TWC-AWG-TWC is essentially the same as TWC-WSS with a full wavelength conversion capability. With this, the TWC-AWG-TWC Clos-ROADM can approach the theoretical limit of blocking performance.
Fig. 4: Blocking performance of WSS and AWG-based Clos-ROADMs.
## IV Clos Network in Datacenter Networks: Folded Clos Architecture Today's DCNs are typically constructed as Spine-Leaf networks using large-scale electrical switches [13]. The disadvantages of these architectures have been widely discussed, mainly including high power consumption and latency due to electrical switching. Optical switching is considered promising to resolve the above issues and is being gradually implemented in DCNs. ### _Clos Network in All-Optical DCNs_ The Spine-Leaf network has many advantages, including a small network diameter and a fixed number of route hops. These advantages are important for all-optical DCNs because the quality of the optical signals can be accurately estimated in this type of network. Thus, the Spine-Leaf network is often employed as a good choice for all-optical DCNs. A Spine-Leaf network includes a Spine layer and a Leaf layer (see the left-hand side of Fig. 5), which is essentially a (folded) Clos network. The Clos network is an excellent candidate for building a large-scale optical switch using small-scale optical switching elements [14]. The Leaf layer, which corresponds to the stacked ingress and egress stages in a Clos network, interconnects the network devices. The Spine layer provides multiple routes to the Leaf layer, and corresponds exactly to the middle stage of the Clos network. The only difference between Clos and Spine-Leaf networks is the scale of the switching elements in the Leaf layer. Specifically, to fold a \(\nu(M,L,D)\) Clos network to a Spine-Leaf network, the size of each switching element in the Leaf layer should be increased from \(L\times M\) to \((L+M)\times(L+M)\), since each switching element in the Leaf layer then has \(L\) additional input and output ports. ### _Variants of the Clos Network_ There are two typical types of services in DCNs, i.e., unicast and multicast services. For example, publish-subscribe services for data dissemination are typical multicast services. An optical switch-based network is good at provisioning unicast services, but it is not efficient for multicast services since multiple wavelengths are required for each multicast service. To tackle this issue, we consider employing optical splitters (the diamond module in Fig. 6) to replace spine switches in the Spine layer. 
Since an optical splitter passively splits an optical signal equally to all output ports, this enables an all-optical Spine-Leaf network with splitters to support multicast services. Another variant of the Spine-Leaf network is to combine it with other topologies, e.g., the Torus topology, to efficiently support different types of services, e.g., general datacenter services and High-Performance Computing (HPC) services, in a common DCN. Many HPC systems employ Mesh/Torus topologies because of their high scalability and high performance-to-cost ratio [15]. Fig. 6 shows a 1-D Torus scenario, which replaces a Spine switch with direct fiber connections (the red curves in Fig. 6) to form a Torus topology. By transforming the Spine-Leaf network to an unfolded Clos network, it is interesting to see that the Torus topology essentially employs a round-robin direct connection pattern to replace a middle switch. ## V Conclusion Increasing traffic demands require large-scale ROADMs, for which the Clos network is considered promising. In this article, we first compared the traditional Spanke-based ROADM and the Clos-based ROADM from the perspectives of element and fiber complexities. Based on the Clos-based ROADM, we further discussed other architectures for improving the blocking performance and reducing the system cost. We demonstrated the tradeoff between blocking performance and system cost for these architectures through simulations. Finally, we also discussed the application of the Clos network in all-optical datacenter networks using the Spine-Leaf architecture. Several variations of the basic architecture for supporting different types of datacenter services were also presented.
2308.13108
The Dusty Rossby Wave Instability (DRWI): Linear Analysis and Simulations of Turbulent Dust-Trapping Rings in Protoplanetary Discs
Recent numerical simulations have revealed that dust clumping and planetesimal formation likely proceed in ring-like disc substructures, where dust gets trapped in weakly turbulent pressure maxima. The streaming instability has difficulty operating in such rings with external turbulence and no pressure gradient. To explore potential paths to planetesimal formation in this context, we analyse the stability of turbulent dust-trapping rings under the shearing sheet framework. We self-consistently establish the pressure maximum and the dust ring in equilibrium, the former via a balance of external forcing versus viscosity and the latter via dust drift versus turbulent diffusion. We find two types of $\gtrsim H$-scale instabilities ($H$ being the pressure scale height), which we term the dusty Rossby wave instability (DRWI). Type I is generalised from the standard RWI, which is stationary at the pressure maximum and dominates in relatively sharp pressure bumps. Type II is a newly identified travelling mode that requires the presence of dust. It can operate in relatively mild bumps, including many that are stable to the standard RWI, and its growth rate is largely determined by the equilibrium gas and dust density gradients. We further conduct two-fluid simulations that verify the two types of the DRWI. While Type I leads to strong dust concentration into a large gas vortex similar to the standard RWI, the dust ring is preserved in Type II, while exhibiting additional clumping within the ring. The DRWI suggests a promising path towards formation of planetesimals/planetary embryos and azimuthally asymmetric dust structure from turbulent dust-trapping rings.
Hanpu Liu, Xue-Ning Bai
2023-08-24T22:47:28Z
http://arxiv.org/abs/2308.13108v1
# The Dusty Rossby Wave Instability (DRWI): Linear Analysis and Simulations of Turbulent Dust-Trapping Rings in Protoplanetary Discs ###### Abstract Recent numerical simulations have revealed that dust clumping and planetesimal formation likely proceed in ring-like disc substructures, where dust gets trapped in weakly turbulent pressure maxima. The streaming instability has difficulty operating in such rings with external turbulence and no pressure gradient. To explore potential paths to planetesimal formation in this context, we analyse the stability of turbulent dust-trapping rings under the shearing sheet framework. We self-consistently establish the pressure maximum and the dust ring in equilibrium, the former via a balance of external forcing versus viscosity and the latter via dust drift versus turbulent diffusion. We find two types of \(\gtrsim H\)-scale instabilities (\(H\) being the pressure scale height), which we term the dusty Rossby wave instability (DRWI). Type I is generalised from the standard RWI, which is stationary at the pressure maximum and dominates in relatively sharp pressure bumps. Type II is a newly identified travelling mode that requires the presence of dust. It can operate in relatively mild bumps, including many that are stable to the standard RWI, and its growth rate is largely determined by the equilibrium gas and dust density gradients. We further conduct two-fluid simulations that verify the two types of the DRWI. While Type I leads to strong dust concentration into a large gas vortex similar to the standard RWI, the dust ring is preserved in Type II, while exhibiting additional clumping within the ring. The DRWI suggests a promising path towards formation of planetesimals/planetary embryos and azimuthally asymmetric dust structure from turbulent dust-trapping rings. keywords: protoplanetary discs - instabilities - hydrodynamics - planets and satellites: formation - methods: analytical - methods: numerical ## 1 Introduction It has recently been established that ring-like substructures are ubiquitous among extended protoplanetary discs (PPDs), as revealed by ALMA (ALMA Partnership et al., 2015; Andrews et al., 2018; for a review, see Andrews, 2020). While the formation mechanisms of such ring-like substructures are debated (see, e.g., Bae et al., 2022, for a review), they are believed to reflect dust trapping in turbulent pressure bumps (e.g., Dullemond et al., 2018; Rosotti et al., 2020). Such dust-trapping sites not only retain the dust by preventing or slowing down radial drift (e.g., Pinilla et al., 2012), but also allow dust density to build up, and they have been speculated to be preferred sites for planetesimal formation (e.g., Pinilla and Youdin, 2017; Dullemond et al., 2018). Conventionally, planetesimal formation is believed to be triggered by the streaming instability (SI; Youdin and Goodman, 2005) between gas and marginally or weakly coupled dust as a result of reciprocal dust-gas aerodynamic drag. The source of free energy behind the SI arises from the background radial pressure gradient, which induces relative drift between gas and dust. 
Once the dust abundance (vertically-integrated dust-to-gas mass ratio) exceeds a certain threshold (depending on dust size, typically \(\gtrsim 0.02\), Bai and Stone, 2010; but see Li and Youdin, 2021), the SI is found in simulations to lead to efficient dust clumping, with clumps dense enough to form planetesimals directly by gravitational collapse (e.g., Johansen et al., 2009; Carrera et al., 2015; Yang et al., 2017). However, if turbulent dust-trapping ring-like substructures are common as found in observations, the streaming instability paradigm for planetesimal formation in such dust rings faces two challenges. First, most existing simulations did not include external turbulence, but studies have found that modest turbulence of viscous parameter \(\alpha\sim 10^{-3}\) suffices to impede the development of the SI (Chen and Lin, 2020; Umurhan et al., 2020) and SI-induced clumping (Umurhan et al., 2020). Second, the SI does not operate at the pressure maxima where most of the dust is concentrated (but see Auffinger and Laibe, 2018; Hsu and Lin, 2022), although SI-induced dust clumping remains efficient in low-pressure-gradient regions near pressure bumps (Carrera et al., 2021, without external turbulence). More realistic models of dust rings should incorporate turbulence as well as a certain driving mechanism that leads to ring formation, and recent simulations along this line suggested instabilities beyond the SI paradigm. For instance, Huang et al. (2020) found a "meso-scale" instability triggered by dust feedback in a pressure bump where the disc transitions between low and high viscosity mimicking the dead zone outer boundary, which leads to the formation of dust clumps. A similar instability has also been found at planetary gap edges (Surville et al., 2020; Yang & Zhu, 2020). In the presence of turbulence due to the vertical shear instability (VSI), Lehmann & Lin (2022) found strongly enhanced dust-trapping into VSI-induced vortices when there is an initial pressure bump. Moreover, Xu & Bai (2022a,b) conducted hybrid particle-gas non-ideal magnetohydrodynamic (MHD) simulations in outer PPDs and found efficient dust clumping in the presence of a ring-like pressure maximum, which was formed by zonal flows or external forcing. The dust rings are also observed to split into finer-scale filaments. With dust clumping found in environments unfavorable to the SI, these results imply additional mechanisms could be responsible for triggering instabilities that potentially lead to dust clumping. A closely related physical process in the context of the dust ring is the Rossby wave instability (RWI; Lovelace et al., 1999). Planet-induced radial pressure variations are found to give rise to the RWI (de Val-Borro et al., 2007; Lyra et al., 2009; Lin, 2014; Bae et al., 2016; Cimerman & Rafikov, 2023), which requires the presence of a local vortensity (also known as potential vorticity) minimum (Li et al., 2000). Ono et al. (2016) conducted detailed linear parametric studies in a global 2D barotropic disc, showing the necessary and sufficient condition for the onset and a physical interpretation of the RWI. The linear behaviour of this instability in the presence of turbulence and dust, however, has not been rigorously explored. In this work, we analyse the stability of turbulent dust-trapping rings. Our analysis generalises the work of Ono et al. (2016) in a local shearing-sheet setting, and considers several additional physical ingredients for a self-consistent and realistic dusty ring model. 
Specifically, similar to Xu & Bai (2022b), we simultaneously introduce external forcing and gas viscosity (to mimic turbulence), the balance of which sustains a pressure bump that models an axisymmetric ring. We include an additional dust fluid, particularly incorporating the two-way drag between dust and gas, and the new formulation of dust concentration diffusion that properly ensures momentum conservation and Galilean invariance. Although limited to a local shearing sheet, our analysis represents a major first step towards a comprehensive understanding of dust-trapping rings. This work is organised as follows. In Section 2, we formulate and assemble the physical ingredients of our analysis. We obtain equilibrium solutions of the dust-trapping rings in Section 3, which serves as the basis of the linear perturbation developed in Section 4. In Section 5, we show the two types of Rossby wave-like instabilities emerging from our linear analysis, describing their phenomenology, parametric dependence and important physical ingredients. We name them the "dusty Rossby wave instability" (DRWI). The DRWI is then numerically tested in Section 6, in which we also briefly explore its nonlinear evolution. We summarise our findings and discuss implications, caveats and future work in Section 7. ## 2 Theory ### Formation of a Pressure Bump from Forcing We take a shearing-sheet formulation, which is constructed by following fluid motion around a reference radius \(R_{0}\) from the central object, and writing down equations in the corotating frame with respect to this radius in Cartesian coordinates. By doing so, it ignores curvature and only applies to regions around \(R_{0}\pm\Delta L\) with \(\Delta L\ll R\). This is applicable for thin discs whose pressure scale height \(H\ll R\), and has been widely used for local models of accretion discs. In this radially narrow region, we also ignore the disc background pressure gradient, assuming that the local bump forms a pressure maximum on which our local sheet is centered. This assumption also implies no net dust radial flux through the bump region (an "isolated" bump), suitable if another dust trap resides outside the bump in question. We will return to these assumptions in Section 7.1. We customarily choose the \(x\) axis along the radial, \(y\) axis along the azimuthal, and \(z\) axis along the vertical directions. In particular, at the reference radius \(R_{0}\), we set \(x=0\), where the angular velocity is denoted by \(\Omega_{0}\). For simplicity, we assume an isothermal equation of state with isothermal sound speed \(c_{s}\). The pressure is given by \(P=\rho_{g}c_{s}^{2}\), and the pressure scale height \(H=c_{s}/\Omega_{0}\). The fluid equations including viscosity now read \[\frac{\partial\rho_{g}}{\partial t}+\nabla\cdot(\rho_{g}\mathbf{v}_{g})=0\, \tag{1}\] \[\frac{\partial\mathbf{v}_{g}}{\partial t}+(\mathbf{v}_{g}\cdot\nabla)\mathbf{v}_{g}=-\frac{\nabla P}{\rho_{g}}+[2\mathbf{v}_{g}\times\mathbf{\Omega}_{0}+3\Omega_{0}^{2}x\,\mathbf{e}_{x}]+\nu\nabla^{2}\mathbf{v}_{g}+f_{0}(x)\mathbf{e}_{y}\, \tag{2}\] where we have also included a forcing term \(f_{0}(x)\), to be discussed later. We use the subscript "\(g\)" to denote gas quantities, to be distinguished later from dust and combined one-fluid quantities. Note that the form of viscosity adopted here differs from the standard Navier-Stokes viscosity; it captures the essence of viscosity without complicating the analysis. 
Here we consider a 2D system and ignore the vertical dimension (i.e., being vertically-integrated), and \(\rho_{g}\) essentially represents a surface density. We consider a unit system such that time is normalised to \(\Omega_{0}^{-1}\) and velocity is normalised to \(c_{s}\). Then, the natural unit for length is \(H\). We simply choose \(\Omega_{0}=c_{s}=H=1\). The standard \(\alpha\)-prescription for viscosity takes the form \(\nu=\alpha c_{s}H\), where \(\alpha\) is taken to be a constant; for protoplanetary discs, it is expected that \(\alpha\sim 10^{-4}\) to \(10^{-3}\) (see, e.g., Lesur et al., 2022, for a review), but there is also evidence for stronger turbulence in some systems (e.g., Flaherty et al., 2020). We further subtract background Keplerian shear from the velocity, or \(\mathbf{v}=\mathbf{v}^{\prime}-(3/2)\Omega_{0}x\,\mathbf{e}_{y}\) (known as orbital advection, or the FARGO algorithm, Masset, 2000; Stone & Gardiner, 2010). The equations then become \[\frac{\partial\rho_{g}}{\partial t}+\nabla\cdot(\rho_{g}\mathbf{v}_{g}^{\prime})-\frac{3}{2}\Omega_{0}x\frac{\partial\rho_{g}}{\partial y}=0\, \tag{3}\] \[\frac{\partial\mathbf{v}_{g}^{\prime}}{\partial t}+(\mathbf{v}_{g}^{\prime}\cdot\nabla)\mathbf{v}_{g}^{\prime}-\frac{3}{2}\Omega_{0}x\frac{\partial\mathbf{v}_{g}^{\prime}}{\partial y}=-\frac{\nabla P}{\rho_{g}}+\nu\nabla^{2}\mathbf{v}_{g}^{\prime}+\] \[[2\Omega_{0}v_{gy}^{\prime}\mathbf{e}_{x}-\frac{1}{2}\Omega_{0}v_{gx}^{\prime}\mathbf{e}_{y}]+f_{0}(x)\mathbf{e}_{y}. \tag{4}\] In equilibrium without forcing, i.e., \(f_{0}(x)=0\) for all \(x\), we simply have \(\rho_{g}=\text{const.}\), \(\mathbf{v}_{g}^{\prime}=0\). Now consider adding forcing by imposing a positive torque at \(x<0\) and a negative torque at \(x>0\), which is achieved by applying a force \(f_{0}(x)\) in the \(y\) direction, being an odd function about \(x=0\). This would modify the equilibrium state to create a pressure bump in the center of the box. In reality, it mimics the effect of zonal flows or the presence of a planet, both of which will drive a density bump in the disc. The new equilibrium state is the background state that we shall consider for linear stability analysis, and for this state we include a subscript "0". Clearly, there is no radial flow, thus \(v_{0x}^{\prime}=0\). The solution for \(\rho_{g}\) and \(v_{y}^{\prime}\) is determined by the forcing profile according to \[c_{s}^{2}\frac{\partial}{\partial x}\ln\rho_{g0}=2\Omega_{0}v^{\prime}_{0y}\, \tag{5}\] \[f_{0}(x)+\nu\frac{\partial^{2}}{\partial x^{2}}v^{\prime}_{0y}=0. \tag{6}\] It is straightforward to see that the relation between forcing and the resulting density profile is given by \[f_{0}(x)=-\frac{\nu c_{s}^{2}}{2\Omega_{0}}\frac{\partial^{3}}{\partial x^{3}}\ln\rho_{g0}. \tag{7}\] Assuming pressure varies on scales of \(H\), an order-of-magnitude estimate shows that the forcing term is \(f_{0}\sim\alpha c_{s}\Omega_{0}\), while the pressure gradient term is on the order of \(\nabla P/\rho_{g}\sim c_{s}\Omega_{0}\). Therefore, for \(\alpha\ll 1\), which is expected to apply in protoplanetary discs, very modest forcing can drive substantial pressure variation. In practice, we consider a Gaussian bump, given by \[\rho_{g0}=\rho_{b}\left[1+A\exp(-x^{2}/2\Delta w^{2})\right]\, \tag{8}\] where \(\rho_{b}\) is the background density. From here one can determine the forcing profile. However, after taking the logarithm, evaluation of the third-order derivative results in substantial complication. 
Instead, we may consider the limit where \(A\) is relatively small (\(A\lesssim 1\)) and assert \[\rho_{g0}=\rho_{b}\exp[A\exp(-x^{2}/2\Delta w^{2})]\, \tag{9}\] and this will yield \[f_{0}(x)=\frac{A}{2}\alpha c_{s}\Omega_{0}\left(\frac{H}{\Delta w}\right)^{3}\left(\frac{x^{3}}{\Delta w^{3}}-\frac{3x}{\Delta w}\right)\exp(-x^{2}/2\Delta w^{2}). \tag{10}\] With this setup, the only parameters are \(A\) and \(\Delta w\) for the bump profile, and \(\alpha\) for viscosity. ### Dust Diffusion and Concentration Dust is considered as a pressureless fluid, subject to gas drag and turbulent diffusion. The gas drag is characterised by a stopping time \(t_{s}\), which depends on dust size. This is usually non-dimensionalised by defining a Stokes number \(St\equiv\Omega_{0}t_{s}\). The equations of dust fluid motion read \[\frac{\partial\rho_{d}}{\partial t}+\nabla\cdot(\rho_{d}\mathbf{v}^{\prime}_{d})-\frac{3}{2}\Omega_{0}x\frac{\partial\rho_{d}}{\partial y}=0\, \tag{11}\] \[\frac{\partial\mathbf{v}^{\prime}_{d}}{\partial t}+(\mathbf{v}^{\prime}_{d}\cdot\nabla)\mathbf{v}^{\prime}_{d}-\frac{3}{2}\Omega_{0}x\frac{\partial\mathbf{v}^{\prime}_{d}}{\partial y}=\frac{1}{\rho_{d}}\nabla\cdot(\rho_{d}\mathbf{v}_{\rm dif}\mathbf{v}_{\rm dif})+\] \[2\Omega_{0}v^{\prime}_{dy}\mathbf{e}_{x}-\frac{1}{2}\Omega_{0}v^{\prime}_{dx}\mathbf{e}_{y}-\frac{\mathbf{v}^{\prime}_{d}-\mathbf{v}^{\prime}_{g}}{t_{s}}\, \tag{12}\] with the dust diffusion velocity \(\mathbf{v}_{\rm dif}\) defined by \[\mathbf{v}_{\rm dif}=-\frac{\rho_{g}}{\rho_{d}}D\nabla\left(\frac{\rho_{d}}{\rho_{g}}\right)=\frac{D}{f_{d}}\nabla\ln f_{g}\, \tag{13}\] where \(D\) denotes the dust diffusion coefficient, and \[f_{g}=1-f_{d}\equiv\rho_{g}/(\rho_{g}+\rho_{d}) \tag{14}\] is the gas mass fraction. Note that what is being diffused is dust concentration, rather than dust density, so that diffusion drives the dust to achieve a constant dust-to-gas density ratio. Different from usual treatments, we represent dust concentration diffusion in the momentum equation while keeping the dust density conserved. This formulation is motivated by Tominaga et al. (2019); Huang and Bai (2022), who pointed out the inconsistencies in the conventional treatment of adding the concentration diffusion term in the continuity equation, which violates momentum conservation and Galilean invariance. Our treatment closely follows that of Huang and Bai (2022), exemplified in their Equation (A1), but with one difference in that our dust velocity term includes \(\mathbf{v}_{\rm dif}\) in itself, i.e., our \(\mathbf{v}_{d}\) now represents their \(\mathbf{v}_{d}+\mathbf{v}_{\rm dif}\) on the left hand side. Note that the drag term is proportional to \(\mathbf{v}^{\prime}_{d}-\mathbf{v}^{\prime}_{g}\), which involves the dust concentration diffusion velocity. The concentration diffusion coefficient \(D\) is generally closely related to the turbulent gas kinematic viscosity (on the same order), at least for tightly coupled particles. While our formulation leaves flexibility for the specific expression of \(D\), for the rest of the paper we assume that the coefficient is simply proportional to the gas mass fraction (mimicking the reduction of turbulence strength in the presence of strong dust mass loading, cf., Xu and Bai 2022b), or \[D(f_{g})=\nu f_{g}=\alpha c_{s}Hf_{g}. \tag{15}\]
We also experimented with an alternative expression, \(D=\nu\) (const.), which yields qualitatively the same results for the equilibrium solutions and linear perturbation behaviours. As the aerodynamic drag affects the dust, the gas must feel the backreaction (i.e., feedback) from the dust as well, and the momentum equation of the gas is modified to \[\frac{\partial\mathbf{v}^{\prime}_{g}}{\partial t}+(\mathbf{v}^{\prime}_{g}\cdot\nabla)\mathbf{v}^{\prime}_{g}-\frac{3}{2}\Omega_{0}x\frac{\partial\mathbf{v}^{\prime}_{g}}{\partial y}=-\frac{\nabla P}{\rho_{g}}+\nu\nabla^{2}\mathbf{v}^{\prime}_{g}+\] \[2\Omega_{0}v^{\prime}_{gy}\mathbf{e}_{x}-\frac{1}{2}\Omega_{0}v^{\prime}_{gx}\mathbf{e}_{y}+\frac{\rho_{d}}{\rho_{g}}\frac{\mathbf{v}^{\prime}_{d}-\mathbf{v}^{\prime}_{g}}{t_{s}}+f_{0}(x)\mathbf{e}_{y}. \tag{16}\] The force difference between gas and dust mainly results from the fact that gas is subject to its own pressure whereas dust is not. It can be shown that when the dust is strongly coupled to the gas, meaning \(St\ll 1\), the dust reaches a terminal velocity given by (Jacquet et al. 2011; Laibe and Price 2014) \[\mathbf{v}_{d}=\mathbf{v}_{g}+\mathbf{v}_{\rm dif}+t_{s}\frac{\nabla P}{\rho_{g}}\, \tag{17}\] which shows that dust always drifts towards pressure maxima. Note that the original derivation ignores dust diffusion and external forcing. We supplement the right-hand side with \(\mathbf{v}_{\rm dif}\) in response to our implicit incorporation of this term in \(\mathbf{v}_{d}\), while our earlier analysis shows that the forcing term should be negligible compared with the pressure gradient term; see the argument following Equation (7). Therefore, this expression is valid for our applications. In the presence of a pressure maximum in the gas, dust would drift indefinitely into the pressure bump, leading to infinite concentration. However, this is prevented by turbulent diffusion. If feedback is ignored, then gas and dust dynamics are decoupled, and the dust distribution simply achieves an equilibrium profile whose width is set by a balance between concentration and diffusion. When considering feedback, however, the situation is much more involved. As a first investigation, we reduce the mathematical complexity by considering a single-fluid formalism below.

### Single-fluid Formalism

When assuming dust particles are strongly coupled with \(St\ll 1\), the problem can be cast into a one-fluid framework (Laibe and Price 2014; Lin and Youdin 2017), where the single-fluid density and velocity are defined as \[\rho=\rho_{g}+\rho_{d}\,\quad\mathbf{v}=\frac{\rho_{g}\mathbf{v}_{g}+\rho_{d}\mathbf{v}_{d}}{\rho_{g}+\rho_{d}}. \tag{18}\] Since dust is pressureless, the total pressure is still the gas pressure \[P=\rho_{g}c_{s}^{2}=\rho c_{s}^{2}f_{g}. \tag{19}\] This equation relates the gas fraction \(f_{g}\) to the equation of state. We derive the equations of the single-fluid system as follows. Firstly, the addition of Equations (3) and (11) gives the continuity equation: \[\frac{\partial\rho}{\partial t}+\nabla\cdot(\rho\mathbf{v}^{\prime})-\frac{3}{2}\Omega_{0}x\frac{\partial\rho}{\partial y}=0. \tag{20}\] As for the momentum equation, we simplify the derivation by assuming \(\mathbf{v}_{g}^{\prime}\sim\mathbf{v}_{d}^{\prime}\).
We multiply both sides of Equations (16) and (12) by \(\rho_{g}\) and \(\rho_{d}\) respectively and then directly add them up, finally arriving at \[\frac{\partial\mathbf{v}^{\prime}}{\partial t}+(\mathbf{v}^{\prime}\cdot\nabla)\mathbf{v}^{\prime}-\frac{3}{2}\Omega_{0}x\frac{\partial\mathbf{v}^{\prime}}{\partial y}=-\frac{1}{\rho}\nabla P+\nu f_{g}\nabla^{2}\mathbf{v}^{\prime}+\] \[\frac{1}{\rho}\nabla\cdot(\rho_{d}\mathbf{v}_{\rm dif}\mathbf{v}_{\rm dif})+2\Omega_{0}v_{y}^{\prime}\mathbf{e}_{x}-\frac{1}{2}\Omega_{0}v_{x}^{\prime}\mathbf{e}_{y}+f_{g}f_{0}(x)\mathbf{e}_{y}. \tag{21}\] Despite the simplification, we maintain Equation (17) to account for the dust-gas drag. We further represent the equation of state in the form of a pressure diffusion equation, derived from Equations (3), (13), (17) and (19): \[\frac{\partial P}{\partial t}+\nabla\cdot(P\mathbf{v}^{\prime})-\frac{3}{2}\Omega_{0}x\frac{\partial P}{\partial y}=c_{s}^{2}\nabla\cdot(t_{s}f_{d}\nabla P)+\nabla\cdot(DP\nabla\ln f_{g}). \tag{22}\] Here, dust drift behaves as nonlinear thermal conduction (first term on the right hand side), as pointed out in Lin & Youdin (2017). The additional dust concentration diffusion flux gives rise to the last (nonlinear) term.

## 3 Equilibrium states of the single-fluid system

In this section, we numerically solve the single-fluid equations above for an equilibrium state of the system, on top of which we will further conduct linear perturbation analysis. From now on, we add a subscript "0" to quantities in the steady state. We have introduced this subscript in Section 2.1, and the notation in the two places is consistent.

### Derivation of steady-state equations

In equilibrium, we expect axisymmetry and therefore no dependence on \(y\). The radial velocity of the single fluid is also zero: the gas stays at rest, while the bulk velocity of the dust, which drifts towards pressure maxima, is balanced by outward diffusion, giving an overall effect of \(v_{d0}^{\prime}=0\). The absence of any bulk radial motion reflects the equilibrium state of an isolated dust trap, which is unlike the conventional scenario with dust/gas drifting inward/outward in the presence of a background pressure gradient. Therefore, after separating all vectors into \(x\) and \(y\) components, we drop the partial derivatives in \(t\) and \(y\) as well as terms involving \(v_{0x}^{\prime}\) from Equations (20), (21) and (22) to obtain \[v_{0x}^{\prime}=0\, \tag{23}\] \[c_{s}^{2}t_{s}f_{d0}\frac{\partial P_{0}}{\partial x}+D_{0}P_{0}\frac{\partial\ln f_{g0}}{\partial x}=0\, \tag{24}\] \[-\frac{1}{\rho_{0}}\frac{\partial P_{0}}{\partial x}+2\Omega_{0}v_{0y}^{\prime}+\frac{1}{\rho_{0}}\frac{\partial}{\partial x}\left[D_{0}^{2}\frac{\rho_{0}^{2}}{\rho_{d0}}\left(\frac{\partial\ln f_{g0}}{\partial x}\right)^{2}\right]=0\, \tag{25}\] \[\nu f_{g0}\frac{\partial^{2}v_{0y}^{\prime}}{\partial x^{2}}+f_{g0}f_{0}(x)=0\quad\Rightarrow\quad v_{0y}^{\prime}=-\frac{c_{s}^{2}A}{2\Omega_{0}(\Delta w)^{2}}x\,\mathrm{e}^{-x^{2}/2(\Delta w)^{2}}\, \tag{26}\] where we use the fact that \(P_{0}(x)\) and \(f_{g0}(x)\) have no spatial gradient far from the pressure bump in obtaining Equation (24), which is derived from Equation (22). Note that if there were no forcing, the equilibrium solution would become trivial, where all velocities would vanish and the dust-to-gas ratio would become uniform.

### Numerical solution

We solve the equilibrium equations as an initial value problem by specifying conditions at \(x=0\).
Due to the symmetry of our setting, \(\rho_{0}(x)\) and \(P_{0}(x)\) are even functions with respect to \(x=0\), and thus we only solve the equations for \(x>0\). Our methods are described in detail in Appendix B. We obtain equilibrium solutions with five dimensionless physical parameters, whose definitions, fiducial values and the ranges explored in this work are summarised in Table 1. Among the parameters, \(St=0.1\), corresponding to mm- to cm-sized dust for typical disc models in the outer disc, would be an upper bound for our single-fluid formalism to remain approximately valid, which assumes strongly coupled dust and gas. The minimum gas fraction \(f_{\rm gmin}\) is the gas mass fraction at the center of the bump. We choose this parameter rather than a global dust-to-gas ratio because \(f_{\rm gmin}\) is numerically easier to control. Our lower bound of \(f_{\rm gmin}=0.5\) is chosen to correspond to the extreme situation with a \(1:1\) gas-to-dust mass ratio in the bump center, which may be the case in some systems such as HD 142527 (where the ratio is about 1.7; Boehler et al. 2017). We show in Figure 1 the equilibrium solution in terms of the radial profiles of \(P_{0}(x)\), \(f_{g0}(x)\) and \(v_{\rm dif0}(x)\) for a bump with \(A=0.8\) and \(\Delta w/H=1.5\). In all combinations of parameters below, the density and pressure form a bump close to \(x=0\) and quickly approach the background values as \(x\) exceeds \(\Delta w\). The density excess close to \(x=0\) and the minimum in the gas fraction profile indicate a significant increase of dust concentration at the pressure maximum. We call this region a "dust bump" as opposed to the wider gas bump. As can be seen from either \(\rho_{0}\) or \(f_{g0}\), the width of the equilibrium dust bump depends heavily on \(St\) and \(\alpha\): the dust bump is considerably narrower than the gas bump if the gas and dust are not well-coupled and/or if the concentration diffusion that balances the dust drift is weak. Tightly coupled systems give a slightly higher maximal pressure, which reflects the dust feedback to the gas. It is of interest to translate \(f_{\rm gmin}\) to an averaged gas fraction through a ring. We define the mean gas and dust mass fractions \(\overline{f_{g}}\) and \(\overline{f_{d}}\) as \[\overline{f_{g}}=1-\overline{f_{d}}=\frac{\int_{-x_{B}}^{x_{B}}P(x)dx}{c_{s}^{2}\int_{-x_{B}}^{x_{B}}\rho(x)dx}\, \tag{27}\] where \(x_{B}\) specifies the radial range of interest. A minimum gas fraction of \(0.7\) in our fiducial setting corresponds to \(\overline{f_{g}}=0.980\) integrated from \(x/\Delta w=-4\) to \(+4\). In other words, if a bump quasi-statically evolved from a uniform mixture of gas and dust with \(\overline{f_{g}}=0.98\) (\(\overline{f_{d}}=0.02\)), which is a reasonable condition, and if the bump could attract all the dust in a range of \(\pm 4\Delta w=\pm 6H\), the equilibrium \(f_{\rm gmin}\) would be equal to our fiducial value. For \(f_{\rm gmin}=0.7\), a combination of large dust particles and low viscosity (\(St=0.1,\alpha=3\times 10^{-5}\)) gives \(\overline{f_{g}}=0.996\), while \(St=0.003,\alpha=1\times 10^{-3}\) gives a rather low \(\overline{f_{g}}=0.89\) (we find no "interesting" instability anyway with this configuration or a more realistic \(\overline{f_{g}}\)). More values of \(\overline{f_{g}}\) are annotated later in Figures 7, 8 with notes in Section 5.3.
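To illustrate the nature of this integration, the sketch below integrates a reduced version of Equations (24)-(26) outward from the bump centre with scipy, in code units (\(\Omega_{0}=c_{s}=H=1\), so \(t_{s}=St\) and \(\nu=\alpha\)). It neglects the quadratic diffusion-stress term in Equation (25) and sets the central pressure by hand; the full procedure, which retains these ingredients, is described in Appendix B.

```python
import numpy as np
from scipy.integrate import solve_ivp

St, alpha, A, dw, fg_min = 0.03, 3e-4, 0.8, 1.5, 0.7   # illustrative values

def v0y(x):
    # Equation (26): azimuthal velocity of the forced equilibrium
    return -0.5 * A / dw**2 * x * np.exp(-x**2 / (2 * dw**2))

def rhs(x, y):
    P, fg = y
    rho = P / fg                                    # Eq. (19) with c_s = 1
    dPdx = 2.0 * rho * v0y(x)                       # Eq. (25), diffusion stress dropped
    D = alpha * fg                                  # Eq. (15)
    dfgdx = -fg * St * (1.0 - fg) * dPdx / (D * P)  # Eq. (24)
    return [dPdx, dfgdx]

# Outward integration from x = 0; the central pressure is a hand-picked guess.
sol = solve_ivp(rhs, (0.0, 6.0), [np.exp(A), fg_min], rtol=1e-8, dense_output=True)
x = np.linspace(0.0, 6.0, 200)
P0, fg0 = sol.sol(x)
print(f"far-field values: P0 = {P0[-1]:.3f}, f_g0 = {fg0[-1]:.3f}")
```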
## 4 Formulation of the perturbation equations

### Linearised system of equations

Based on the equilibrium results, we now proceed to obtain the perturbation equations to investigate potential instabilities. Using the subscript "\({}_{1}\)" to denote perturbation variables, we consider a plane wave perturbation of the form \[\rho_{1}(x,y,t) = {\rm Re}[\rho_{1}(x){\rm e}^{i(ky-\omega t)}]\,\] \[P_{1}(x,y,t) = {\rm Re}[P_{1}(x){\rm e}^{i(ky-\omega t)}]\,\] \[\mathbf{v}_{1}(x,y,t) = {\rm Re}[\mathbf{v}_{1}(x){\rm e}^{i(ky-\omega t)}]\, \tag{28}\] where \(k\) (a real number) is the \(y\)-direction wavenumber, \(\omega\) is the complex frequency, \(i^{2}=-1\), and \({\rm Re}[\cdot]\) takes the real part. The perturbation variables \(\rho_{1}(x)\), \(P_{1}(x)\) and \(\mathbf{v}_{1}(x)\) throughout this paper represent the complex 1D functions on the right hand side of Equation (28) unless otherwise stated to denote the real 2D waveform. Note that we do not impose (anti-)symmetry here but solve the perturbation equations over the full domain of \(x\). The real part of \(\omega\) represents oscillation and the imaginary part implies temporal growth or damping of the perturbation magnitude. We write \(\omega=\omega_{r}+i\gamma\), where \(\omega_{r}\) and \(\gamma\) are real. An unstable perturbation with its magnitude growing with time has \(\gamma>0\). We introduce the perturbation ratios \(\mathfrak{p}_{1}(x)\equiv P_{1}(x)/P_{0}(x)\) and \(\mathfrak{f}_{g1}(x)\equiv f_{g1}(x)/f_{g0}(x)\). We substitute the perturbations into Equations (13), (20), (21) and (22) to obtain the linearised system of equations. The derivation and detailed form of the system are lengthy and involve considerable algebra, which we outline in Appendix A.1. We only show the compact form here, expressed as a matrix of linear operators acting on the perturbation variables: \[\begin{bmatrix}\mathcal{M}_{00}&\mathcal{M}_{01}&\mathcal{M}_{02}&\mathcal{M}_{03}\\ \mathcal{M}_{10}&\mathcal{M}_{11}&\mathcal{M}_{12}&\mathcal{M}_{13}\\ \mathcal{M}_{20}&\mathcal{M}_{21}&\mathcal{M}_{22}&\mathcal{M}_{23}\\ \mathcal{M}_{30}&\mathcal{M}_{31}&\mathcal{M}_{32}&\mathcal{M}_{33}\end{bmatrix}\begin{bmatrix}\mathfrak{p}_{1}(x)\\ \mathfrak{f}_{g1}(x)\\ v^{\prime}_{1x}(x)\\ v^{\prime}_{1y}(x)\end{bmatrix}=0. \tag{29}\]

\begin{table} \begin{tabular}{c c c c} \hline Parameter & Symbol & Fiducial Val. & Range \\ \hline Gas bump magnitude & \(A\) & 1.2 & 0.4–1.8 \\ Gas bump width & \(\Delta w/H\) & 1.5 & 1.0–2.0 \\ Stokes number & \(St\) & 0.03 & 0.003–0.1 \\ Viscous parameter & \(\alpha\) & \(3\times 10^{-4}\) & \(3\times 10^{-5}\)–\(1\times 10^{-3}\) \\ Minimum gas fraction & \(f_{\rm gmin}\) & 0.7 & 0.5–0.99 \\ \hline \end{tabular} \end{table} Table 1: Parameters of the dust-trapping ring.

Figure 1: Equilibrium solutions for different combinations of parameters. Only the \(x>0\) region is plotted; \(\rho_{0}\), \(P_{0}\) and \(f_{g0}\) are even functions in \(x\) whereas \(v_{\rm dif0}\) is odd. Each row presents \(\rho_{0}\), \(P_{0}\), \(v_{\rm dif0}\) and \(f_{g0}\) respectively. For all plots here, \(A=0.8\) and \(\Delta w/H=1.5\). The three columns correspond to different \(\alpha\), as noted on the top. Colors represent different \(f_{\rm gmin}\) while line styles represent different \(St\) values.

This is a system of four second-order linear ordinary differential
equations in four functional variables, \(\mathfrak{p}_{1}(x)\), \(\mathfrak{f}_{g1}(x)\), \(v^{\prime}_{1x}(x)\), and \(v^{\prime}_{1y}(x)\). The matrix \(\mathcal{M}(x,\omega,k)\) consists of block coefficients \(\mathcal{M}_{ij}\) (\(i,j=0,1,2,3\)), which are differential operators of order at most two and may be functions of \(x\), the already known equilibrium variables, and the yet-undetermined perturbation parameters \(\omega\) and \(k\). For a given \(k\), we view the system of equations as an eigenproblem and solve for the eigenvalue \(\omega=\omega_{m}\) with the corresponding eigenfunction. The system allows for numerous modes, but only a handful of them are unstable, and we will focus on these.

### Boundary conditions

Boundary conditions are required for a complete eigenproblem. As can be observed from Figure 1, \(\rho_{0},P_{0}\) and \(f_{g0}\) quickly approach background values as \(|x|\) increases, while \(v_{\rm dif0}\) shows a slower decay. Therefore, we set \(\rho_{0}=\rho_{b},P_{0}=P_{b}=c_{s}^{2}\rho_{b},f_{g0}=1\), and \(v^{\prime}_{0y}=0\) at the boundary, while still using the nonzero \(v_{\rm dif0}(x)\) from the equilibrium solution. Since dust is depleted here, \(\mathfrak{f}_{g1}=0\). The perturbation equations at the domain boundaries can therefore be reduced to three equations in three variables \(\mathfrak{p}_{1},v^{\prime}_{1x}\), and \(v^{\prime}_{1y}\) (the second perturbation equation becomes trivial). Now, we apply the WKBJ approximation, i.e., we take \(\mathfrak{p}_{1},v^{\prime}_{1x}\), and \(v^{\prime}_{1y}\) as a plane wave proportional to \(\exp\left(ik_{x}x\right)\), where \(k_{x}\), the asymptotic radial wavenumber shared by the three perturbation variables, is yet to be determined. This form is motivated by the fact that physical quantities in equilibrium vary slowly with \(x\) near the boundaries; similar methods have been used by Li et al. (2000); Ono et al. (2016). We stress that \(k_{x}\) is only used to specify the asymptotic relation at the boundaries, i.e., we do not assume that the perturbation variables constitute a plane wave everywhere. The boundary perturbation equations are therefore reduced to a linear system, whose coefficient matrix must have a vanishing determinant for a nontrivial solution. The boundary perturbation equations before and after the WKBJ approximation, as well as the form of the determinant, can be found in Appendix A.2. The zero determinant condition yields a dispersion relation \(k_{x}=k_{x}(\omega,k,x)\), which is a polynomial equation of fourth degree in \(k_{x}\). Two of the four complex solutions unphysically go to infinity in both real and imaginary parts as \(\nu\to 0\). Of the remaining two, one and only one has a positive real part if \(k\) is not too close to zero. We obtain this root \(k_{x}\) numerically at the outer and inner boundaries respectively and adopt it as the outgoing boundary condition: \(d\mathfrak{p}_{1}/dx=ik_{x}\mathfrak{p}_{1},d\mathfrak{f}_{g1}/dx=ik_{x}\mathfrak{f}_{g1},dv^{\prime}_{1x}/dx=ik_{x}v^{\prime}_{1x}\), and \(dv^{\prime}_{1y}/dx=ik_{x}v^{\prime}_{1y}\).

### Numerical treatment

We solve the eigenproblem numerically by discretising the differential equations in \(x\) and representing all coefficients with matrix elements, similar to Ono et al. (2016). We perform calculations over a range of \(-4\Delta w<x<4\Delta w\), which sufficiently covers the pressure bump region, over a uniform grid of \(N=1001\) nodes.
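The root selection for the outgoing boundary condition can be illustrated schematically as follows; the quartic coefficients below are placeholders built from made-up roots, standing in for the expressions of Appendix A.2.

```python
import numpy as np

def outgoing_kx(poly_coeffs):
    """Pick the outgoing radial wavenumber from the quartic boundary
    dispersion relation (coefficients ordered from the highest power of k_x)."""
    roots = np.roots(poly_coeffs)
    # Drop the two roots of largest modulus (they diverge as nu -> 0),
    # then keep the remaining root with a positive real part.
    physical = sorted(roots, key=abs)[:2]
    outgoing = [r for r in physical if r.real > 0]
    if len(outgoing) != 1:
        raise ValueError("no unique outgoing root; k may be too close to zero")
    return outgoing[0]

# Placeholder quartic: two spurious large roots plus two physical ones.
fake_roots = [200 + 300j, -150 + 250j, 0.8 + 0.3j, -0.6 + 0.2j]
print(outgoing_kx(np.poly(fake_roots)))   # -> approximately (0.8+0.3j)
```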
Doubling the node number would give eigenvalues that agree with our fiducial resolution to within three to four digits. To construct the matrix, we use findiff (Baer, 2018), a Python package for finite difference numerical derivatives and partial differential equations in any number of dimensions. We then solve for \(\omega\) with a positive imaginary part such that the determinant of the matrix goes to zero. Once the desired eigenvalue \(\omega_{m}=\omega_{rm}+i\gamma_{m}\) is found, we substitute it for \(\omega\) in the matrix and calculate the eigenfunction, which we denote as a vector function in \(x\) with parameter \(\omega=\omega_{m}\), namely, \(\vec{u}_{1m}(x,\omega_{m})=(\mathfrak{p}_{1m}(x),\mathfrak{f}_{g1m}(x),v^{\prime}_{1xm}(x),v^{\prime}_{1ym}(x))^{\top}|_{\omega=\omega_{m}}\). We use the subscript "\({}_{m}\)" to denote eigenmodal quantities. We normalise the eigenfunction in magnitude and phase such that \(\vec{u}_{1m}(x,\omega_{m})\) has length unity and \(\mathfrak{p}_{1m}(0)|_{\omega=\omega_{m}}\) is real. The original perturbation variables \(P_{1m}(x)\) and \(f_{g1m}(x)\) are then recovered from \(\mathfrak{p}_{1m}(x)\) and \(\mathfrak{f}_{g1m}(x)\). Finally, the physically meaningful waveform in 2D, as later displayed in Figures 3-5, is obtained from Equation (28), where we arbitrarily take \(t=0\). The phases of these 2D waveforms depend on both \(x\) and \(y\) as the 1D perturbation variables are complex. We describe details of the matrix construction and the determination of \(\omega_{m}\) and \(\vec{u}_{1m}(x,\omega_{m})\) in Appendix C.

## 5 Results of the linear analysis

We identify solutions of the eigenproblem by a broad search on the \(\gamma\)-\(\omega_{r}\) plane (Appendix C). Only two modes are unstable among the numerous solutions. We term them the Type I and Type II DRWI. For reasons to be discussed below, we believe that Type I is a direct generalisation of the classical RWI, whereas Type II is first identified in this work and its origin is closely related to the presence of dust. The two types are distinguished by the value of \(\omega_{rm}\). Type I features \(\omega_{rm}=0\), i.e., the mode has no phase velocity at \(x=0\) in the rotating frame and therefore is stationary at the pressure maximum. Its eigenfunctions are symmetric or antisymmetric about \(x=0\). In contrast, Type II has nonzero \(\omega_{rm}\), indicating a co-rotation radius off the peak, and the perturbation profiles do not have the (anti-)symmetry that Type I does.

### Two types of the DRWI: dispersion relation and eigenmodes

Now, for the purpose of illustration, we demonstrate the main properties of the two DRWI types by a representative result in Figure 2. With fiducial settings of \(A=1.2,\Delta w/H=1.5,St=0.03\), and \(\alpha=3\times 10^{-4}\), various values of \(f_{\rm gmin}=0.5,0.7,0.85,0.99\) are chosen, with the corresponding \(\overline{f_{d}}\) measured to be \(0.038,0.020,0.009\), and \(6\times 10^{-4}\).

Figure 2: Dispersion relation of a sharp bump, showing \(\gamma_{m}\) as a function of \(k\), for different dust content measured by \(f_{\rm gmin}\). The two types of DRWI are plotted using solid and dashed lines respectively. The Type II curve for \(f_{\rm gmin}=0.99\) is below the lower limit of this figure. Parameters are set as \(A=1.2,\Delta w/H=1.5,\)\(St=0.03,\)\(\alpha=3\times 10^{-4}\). The value of \(\overline{f_{d}}\) corresponding to each level of \(f_{\rm gmin}\) is measured as \(0.038,\)\(0.020,\)\(0.009\), and \(6\times 10^{-4}\) respectively.
Our calculation starts from \(k=0.1H^{-1}\), which corresponds to a wavelength equal to the disc circumference if the local pressure scale height satisfies \(H/R_{0}=0.1\). We see that as dust concentration increases, the dispersion relation for the Type I DRWI extends to larger \(k\) (shorter wavelength), while the fastest growth rate decreases. On the other hand, the fastest growth rate of the Type II DRWI increases with dust concentration. Next, we show the eigenfunctions of the two modes to further examine the underlying physics. Apart from \(P_{1},f_{g1},v^{\prime}_{1x}\), and \(v^{\prime}_{1y}\), the vortensity \(q\) is known to be vital for the mechanism of the RWI and is also of interest here. The vortensity is defined by \[q\equiv\frac{(2-3/2)\Omega_{0}+(\nabla\times\mathbf{v}^{\prime})_{z}}{\rho}=\frac{1}{\rho}\left(\frac{1}{2}\Omega_{0}+\frac{\partial v^{\prime}_{y}}{\partial x}-\frac{\partial v^{\prime}_{x}}{\partial y}\right) \tag{30}\] as proper for pure gas in a Keplerian-rotating shearing sheet (see Appendix A.3). Its linear perturbation is then \[q_{1}=-\frac{1}{\rho_{0}^{2}}\left(\frac{1}{2}\Omega_{0}+\frac{dv^{\prime}_{0y}}{dx}\right)\rho_{1}+\frac{1}{\rho_{0}}\left(\frac{dv^{\prime}_{1y}}{dx}-ikv^{\prime}_{1x}\right). \tag{31}\] Starting from the Type I DRWI, we first look at the case with \(f_{\rm gmin}=0.99\), which is close to the dust-free scenario, and the corresponding eigenfunctions of perturbed pressure, density and vortensity are shown in Figure 3. In this limit, Type I is strongly unstable with a maximal \(\gamma_{m}\) on the order of \(10^{-2}\) to \(10^{-1}\Omega_{0}\), in agreement with the RWI investigated in Ono et al. (2016) with similar gas bump profiles. The pressure and density perturbations show alternate peaks and troughs along the \(\hat{y}\) direction accompanied respectively by anti-cyclonic and cyclonic velocity perturbations. The vortensity perturbations show patterns of two Rossby waves along \(\hat{y}\) on the two sides of the background vortensity minimum (\(x=0\)), with a phase difference. The growth of this instability, as explained in Ono et al. (2016), can be ascribed to \(v^{\prime}_{1x}\) advecting large background vortensity towards a positive vortensity perturbation and vice versa (e.g., along the horizontal line of phase \(0.5\pi\) in Figure 3). On the other hand, for a higher \(k\) such that \(\gamma_{m}<0\), the vortensity begins to show an opposite phase difference that suppresses the perturbation. Based on these reasons, we recognise Type I as the RWI loaded with dust.1

Footnote 1: Although vortensity is no longer strictly conserved due to the presence of dust, the deviation is expected to be small with only mild dust mass loading (\(f_{\rm gmin}=0.99\), or \(\overline{f_{d}}=6\times 10^{-4}\)).

In the eigenfunction described above, the magnitude of the density perturbation is strongly enhanced within the dust bump. The phenomenon becomes more pronounced for a system with higher total dust content. In Figure 4, where \(f_{\rm gmin}=0.7\) or \(\overline{f_{d}}=0.020\), both the density and the vortensity perturbations are mainly concentrated in the dust bump. Vortensity sources are no longer negligible here, while the pattern remains similar to the \(f_{\rm gmin}=0.99\) case outside the dust bump, where the vortensity-flow explanation of the instability still applies. We will further discuss the instability mechanism in Section 5.4.
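The vortensity maps in Figures 3 and 4 follow directly from Equation (31); the helper below is a minimal sketch of that evaluation, with arbitrary smooth profiles standing in for the actual equilibrium and eigenfunction data.

```python
import numpy as np

def vortensity_perturbation(x, k, rho0, v0y, rho1, v1x, v1y, Omega0=1.0):
    """Equation (31): linear vortensity perturbation q_1(x) from the
    equilibrium profiles and the complex 1D perturbation variables."""
    dv0y = np.gradient(v0y, x)
    dv1y = np.gradient(v1y, x)
    return (-(0.5 * Omega0 + dv0y) * rho1 / rho0**2
            + (dv1y - 1j * k * v1x) / rho0)

# Placeholder inputs (not an actual eigenmode), just to show the call:
x = np.linspace(-6.0, 6.0, 1001)
q1 = vortensity_perturbation(
    x, k=0.2,
    rho0=1.0 + 0.8 * np.exp(-x**2 / 4.5),
    v0y=-0.18 * x * np.exp(-x**2 / 4.5),
    rho1=0.01 * np.exp(-x**2),
    v1x=0.005j * np.exp(-x**2),
    v1y=0.005 * np.exp(-x**2))
```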
The Type II DRWI shows essentially different eigenfunctions (Figure 5). The non-zero \(\omega_{rm}\) implies a \(y\)-direction phase velocity in the co-rotating frame at \(x=0\). Therefore, the patterns in Figure 5, where \(\omega_{rm}>0\), should be understood as travelling up along the \(y\)-axis with time.

Figure 3: Eigenfunctions of the Type I DRWI for a system with low dust content (\(f_{\rm gmin}=0.99\)). The top panels present the \(y\)-independent equilibrium profiles \(P_{0}\), \(\rho_{0}\), and \(q_{0}\), and the bottom panels show the perturbation functions \(P_{1},\rho_{1},\) and \(q_{1}\) respectively. The eigenfunctions are drawn for one and a half wavelengths along the \(y\)-axis, indicated by the phase. Arrows in the bottom panels denote the perturbed velocity field \(\mathbf{v}^{\prime}_{1}\), with the arrow length proportional to the perturbed speed magnitude. In all panels, a grey dashed line marks the \(x\)-location of the co-rotation radius. Here \(k=0.2H^{-1}\) and \(\omega_{m}=(0+0.0577i)\Omega_{0}\), and other parameters are the same as in Figure 2.

Figure 4: Eigenfunction of the Type I DRWI for a system with moderate dust content (\(f_{\rm gmin}=0.7\)). Here \(k=0.2H^{-1}\) and \(\omega_{m}=(0+0.0096i)\Omega_{0}\). Other details are the same as Figure 3.

Figure 5: Eigenfunction of the Type II DRWI for a system with moderate dust content (\(f_{\rm gmin}=0.7\)). Here \(k=0.2H^{-1}\) and \(\omega_{m}=(0.1244+0.0407i)\Omega_{0}\). Other details are the same as Figure 3.

Another viewpoint is that the co-rotation radius \(x_{c}\), defined implicitly by \(v_{0y}(x_{c})=v_{0y}^{\prime}(x_{c})-(3/2)\Omega_{0}x_{c}=\omega_{rm}/k\), deviates from the pressure maximum towards approximately the edge of the dust bump. For \(\omega_{rm}>0\), we have \(x_{c}<0\). The pressure perturbation appears distorted across \(x_{c}\) and reaches its maximum/minimum at \(x<0\). The density perturbation forms periodic patterns along \(\hat{y}\), with a positive patch on one side of \(x=0\) matched with a negative one on the other side and vice versa. The perturbed vortensity patterns outside the dust bump still resemble two Rossby waves, but now the vortensity advection does not effectively contribute to the growth of the instability in the interval \(x_{c}<x<0\). We will elaborate on this observation quantitatively in Section 5.4, where we point out that the Type II DRWI requires vortensity sources in the dust bump to be unstable at all. As expected from the symmetry of our formulation, Type II DRWI modes always come in pairs: if \(\omega_{rm}+i\gamma_{m}\) is an eigenvalue, then so is \(-\omega_{rm}+i\gamma_{m}\). The 2D waveform of \(-\omega_{rm}+i\gamma_{m}\) can be obtained from that of \(\omega_{rm}+i\gamma_{m}\) by mapping \(x\mapsto-x\) and \(y\mapsto-y\), i.e., by reflection over the origin. The pair of modes likely coexist in real bumps, which implies a complicated mixture of their travelling patterns. Still, one may expect the \(P_{1}\) patterns to appear to travel up along the \(\hat{y}\) direction on the inner side of the bump (\(x<0\)), where a mode with positive \(\omega_{rm}\) has a much stronger pressure perturbation than its negative-\(\omega_{rm}\) counterpart; the opposite is expected on the outer side (\(x>0\)). Our simulation in Section 6.1 confirms this prediction. While the two types of instabilities follow distinct trends in Figure 2, their relation becomes more complicated when the dust bump is sharper. We observe bifurcation phenomena, where Type II merges into or forks from Type I, which we describe in further detail in Appendix D.
While the discussion above on eigenfunctions remains valid, the bifurcation implies a smooth transition between the two types of the DRWI and hence between symmetric and asymmetric perturbation patterns in strongly unstable regions of the parameter space.

### Effect of viscosity on the RWI

We have shown that the dispersion relation and eigenfunction patterns of the Type I DRWI approach those of the classical RWI in the limit of pure gas. However, the DRWI incorporates the turbulent viscosity, a physical process neglected in most previous studies of the RWI. As a short digression, our formulation can naturally be used to calculate the linear behaviour of the RWI in the presence of gas turbulence. In Figure 6, we compare a wide range of \(\alpha\) for an approximately dust-free bump (\(f_{\rm gmin}=0.99\)). While high viscosity suppresses the instability, \(\alpha\leq 3\times 10^{-3}\) hardly influences the dispersion relation. The \(y\)-direction wavenumber corresponding to the maximal \(\gamma_{m}\) also stays almost invariant. Our linear analysis here is consistent with simulations showing that the RWI in the linear regime is largely unaffected by realistic disc viscosity settings (Lin, 2014). Analytical work by Gholipour & Nejad-Asghar (2014) gave similar results, on which we improve by properly setting up the background equilibrium state.

Figure 6: Dispersion relation of the classical RWI with different viscosity. A low dust content (\(f_{\rm gmin}=0.99\)) is considered to approximate the pure-gas limit. Other parameters are set as \(A=1.2\), \(\Delta w/H=1.5\), \(St=0.03\).

### Parameter space of the most unstable DRWI

To understand the parametric dependence of the two types of the DRWI, we perform a grid search in the parameter range listed in Table 1, except that only two levels of dust content, \(f_{\rm gmin}=0.7\) and 0.5, are selected. The maximum growth rates \(\gamma_{m*}\) over different \(k\) and the corresponding wavenumbers \(k_{*}\) are shown in Figures 7, 8 and Figures 9, 10 respectively. For all these figures, each panel represents one particular \((St,\alpha)\), while each pixel in each panel gives one \((A,\Delta w)\). White dots denote pixels where Type I is more unstable than Type II. Black dotted curves denote where the dust-free bump is marginally stable to the standard RWI.2 We also calculate the mean gas mass fraction \(\overline{f_{g}}\) for each pixel. As the bump sharpens, it concentrates dust more vigorously, but the total amount of gas also increases, making \(\overline{f_{g}}\) barely dependent on \(A\) or \(\Delta w\). We show the panel-wise averaged results at the bottom right of each panel, where the significant digits reflect the magnitude of the pixel-wise deviation.

Footnote 2: We only calculate the black dotted curve accurate to the \((\Delta w,A)\) pixel size for \(f_{\rm gmin}=1-10^{-8}\), \(St=0.03\) and \(\alpha=3\times 10^{-4}\) and duplicate it to all panels. Section 5.2 and additional tests on different \(St\) verify that the line does not change location significantly across panels.

#### 5.3.1 Maximum growth rate \(\gamma_{m*}\)

We first focus on the maximum growth rate (among different \(k\)). Three levels of inspection reveal the effect of different parameters: between pixels within one panel for \(A\) and \(\Delta w\), between the sixteen panels within one figure for \(St\) and \(\alpha\), and between Figures 7 and 8 for \(f_{\rm gmin}\). In the following, we will start from the first and third levels, where the trends are relatively straightforward, before elaborating on the second level of comparison. Each panel shows similar pixel-level trends: sharper pressure bumps (large \(A\), small \(\Delta w\)) induce faster growth rates for both types of the DRWI.
While the Type I DRWI shows a steep slope with respect to \(A\) or \(\Delta w\) and prevails for very sharp bumps, Type II dominates in a broad range of realistic parameters and even renders the pressure bump unstable when it is stable to the classical (dust-free) RWI (i.e., colored pixels below the black dotted curve). Regarding the comparison between Figures 7 and 8, a higher dust concentration tends to stabilise the Type I but destabilise the Type II DRWI, consistent with Section 5.1. For example, the pixel with \(St=0.03,\alpha=3\times 10^{-4},A=1.8,\Delta w/H=1.4\) has a darker color in Figure 7 than in Figure 8 (a Type I-dominant case, \(\gamma_{m*}=0.16\Omega_{0}\) versus \(0.12\Omega_{0}\)), whereas the opposite is true for the pixel with \(St=0.01\), \(\alpha=1\times 10^{-4}\), \(A=0.5\), \(\Delta w/H=1.5\) (Type II-dominant). Now, we compare different panels within Figure 7 to explain how \(\gamma_{m*}\) changes with \(St\) and \(\alpha\). The comparison also applies to Figure 8. The similar colors in the upper left corner of each panel demonstrate that the Type I DRWI, when dominant, is insensitive to \(St\) or \(\alpha\). Conversely, the trend of the Type II DRWI is most clearly seen in the lower right region of each panel. For most panels (those above the blue dashed line), a combination of small \(\alpha\) and large \(St\) shows the broadest range of colored pixels. For example, a bump with \(A=0.8\) and \(\Delta w/H=2.0\) is unstable to the Type II DRWI for \(St=0.03\) and \(\alpha=1\times 10^{-4}\), which is not true for \(St\leq 0.01,\alpha=1\times 10^{-4}\) or for \(St=0.03\), \(\alpha\geq 3\times 10^{-4}\). The total area of colored pixels on the panel \((St,\alpha)=(0.03,1\times 10^{-4})\) or \((0.01,3\times 10^{-5})\) is larger than that on the panel to its upper right, i.e., \((St,\alpha)=(0.03,3\times 10^{-4})\), \((0.01,1\times 10^{-4})\), or \((0.003,3\times 10^{-5})\). In other words, the susceptibility of the system to the Type II DRWI largely varies along the diagonal of the figure from moderately high \(St\) and low \(\alpha\) (most unstable) to low \(St\) and high \(\alpha\) (least unstable). We have seen that a sharp pressure bump promotes both types of the DRWI; the correlation here likely similarly points to the Type II DRWI favoring a sharp dust bump in addition to a sharp gas bump (see Figure 1 for how the dust bump profile changes with \(St,\alpha\) and \(f_{\rm gmin}\)). Notably, here we believe that \(St\) and \(\alpha\) only indirectly influence the Type II DRWI by modifying the equilibrium bump profile instead of being directly involved in the mechanism of the instability, a point we will argue more rigorously after describing the \(k_{*}\) trends. However, the four panels below the blue dashed line deviate from the diagonal trend as they show a shrinkage of the unstable range compared to adjacent panels on their upper right. This corner corresponds to very low \(\alpha\) and relatively large \(St\), leading to a very sharp dust bump.

Figure 7: Grid search of \(\gamma_{m*}\) for \(f_{\rm gmin}=0.7\). Each panel is covered by a linearly uniform grid of step size 0.1 along both axes. The values of \(St\) and \(\alpha\) for each panel are annotated on the axes of the entire figure.
Colors represent the value of \(\gamma_{m*}\) for a given set of parameters, listed in Table 1, with darker shades denoting higher \(\gamma_{m*}\) and white denoting \(\gamma_{m*}<0\). In each panel, white dots denote pixels where Type I is more unstable than Type II, the black dotted curve denotes where the dust-free bump is marginally stable to the standard RWI, and the number at the bottom right denotes \(\overline{f_{g}}\) averaged over all pixels on the panel. The blue dashed line separates the four bottom left panels from the rest, as the former deviate from the trend related to \(St\) and \(\alpha\) (see discussion in Section 5.3.1). All panels here and in Figure 8 are colored in one single scale for comparison.

We confirm that our resolution is adequate for resolving the dust bump, and speculate on the potential causes that reverse the trend. First, the sharp dust bump implies a very low average dust mass fraction \(\overline{f_{d}}\), where dust feedback likely becomes too spatially restricted for the Type II DRWI to operate. Also, weak dust-gas coupling with \(St\gtrsim 0.1\) may be subject to two-fluid effects not fully captured in our one-fluid formalism. Later we find that the mechanism of the Type II DRWI does not necessitate streaming motion and thus refrain from further analysis of the marginally coupled system.

#### 5.3.2 Most unstable wavenumber \(k_{*}\)

We show in Figures 9, 10 the most unstable wavenumber \(k_{*}\) in the sense that \(\gamma_{m}(k)\) reaches its maximum at \(k=k_{*}\). The apparent discontinuous transition in \(k_{*}\) in most of the panels reflects a switch from Type I (with white dots) to Type II (no white dots) regimes. Generally, for both types of the DRWI, higher maximum growth rates correspond to higher \(k_{*}\), as is also seen from Figure 2. For \(f_{\rm gmin}=0.5\), a few Type I cases have abnormally large \(k_{*}\) (e.g., the deep blue pixel at \(St=0.003,\alpha=1\times 10^{-4},A=1.0,\Delta w/H=1.1\)). This is related to the fact that \(\gamma_{m}(k)\) is quite flat in the full dispersion relation of the Type I DRWI when \(\gamma_{m*}\) is low (cf. Figure 2) and thus \(k_{*}\) can be parameter-sensitive. Interestingly, typical unstable wavelengths in \(\hat{y}\) are comparable to the disc size and much longer than typical length scales of the SI, the latter being only a fraction of the disc scale height. We find no unstable mode at high \(k\gtrsim 1H^{-1}\) (Appendix C), except when we reduce the turbulence level to \(\alpha<10^{-6}\). This is likely due to the turbulent diffusion that strongly suppresses small-scale instabilities. Also, we find that the most unstable wavenumber of the Type II DRWI is insensitive to \(St\). The important implication is that this instability is unlikely related to the dust streaming motion, the mechanism used to explain the SI and more generally the resonant drag instabilities (RDI; Squire and Hopkins 2018), where the outcome sensitively depends on the Stokes number. To further verify this, we conducted another series of calculations, gradually reducing \(St\) and \(\alpha\) simultaneously until \(St\sim 10^{-5}\) and \(\alpha\sim 10^{-7}\), so that the dust bump remains similar to that in our fiducial setting (Table 1) but the dust is tightly coupled with the gas. We find that the dispersion relation remains similar for \(k\lesssim 1H^{-1}\), pointing to the fact that it is mainly the dust mass loading, rather than the dust-gas streaming, that shapes the properties of this instability.

Figure 8: Grid search of \(\gamma_{m*}\) for \(f_{\rm gmin}=0.5\). This figure is similar to Figure 7. All panels here and in Figure 7 are colored in one single scale for comparison.
This conclusion is further strengthened by examining the perturbed relative kinetic energy of the dust and the gas with regard to the center of mass (Appendix A.4). In the fiducial eigenfunction with \(k=0.2H^{-1}\), it is found to be only 0.06% of the perturbed kinetic energy of the single fluid. This finding sets the stage for our understanding of the Type II DRWI as tightly coupled motion of gas and dust in the following section.

### Physical ingredients of the DRWI

We have identified Rossby waves in the morphology of both types of the DRWI. A Rossby wave is characterised by periodic vortensity perturbation patterns with a background vortensity gradient normal to its travelling direction. The vortensity perturbations imply velocity perturbations and hence vortensity advection along the gradient, which solely governs the vortensity budget if the vortensity is conserved (e.g., pure isothermal gas). The RWI, then, is a result of two Rossby waves on each side of a background vortensity minimum positively feeding back to each other (for a detailed interpretation of the Rossby wave and the RWI, see Ono et al., 2016). The physical picture behind the Rossby waves requires conservation of the vortensity. However, the dust bump introduces vortensity sources, which modifies the RWI to become the Type I DRWI and brings the Type II DRWI into existence. In the following, we analyse the vortensity budget in the dust-trapping ring in Section 5.4.1, which is followed by a discussion in Section 5.4.2 on the evolution of vortensity in and outside the dust bump and the properties of the vortensity sources. We aim at identifying the governing physical ingredients of the DRWI, where we demonstrate that the Rossby waves still play important roles in both types of the DRWI, with dust playing a damping/driving role in Type I/Type II. We further tentatively explain in Appendix E the propagation process of the Type II DRWI, but a comprehensive investigation of the instability mechanism is beyond the scope of this paper.

Figure 9: Grid search of the most unstable wavenumber \(k_{*}\) for \(f_{\rm gmin}=0.7\). Darker shades denote higher \(k_{*}\) and white pixels have no unstable mode. We take \(k_{*}=0.1H^{-1}\) for monotonically decreasing \(\gamma_{m}(k)\) curves. All panels here and in Figure 10 are colored in one single scale for comparison. For other details, see the caption of Figure 7.

#### 5.4.1 The vortensity budget

The DRWI involves a mixture of gas and dust that violates the conservation of vortensity. Specifically, the vortensity equation derived from Equations (20) and (21) takes the following form: \[\left(\frac{\partial}{\partial t}+\mathbf{v}^{\prime}\cdot\nabla-\frac{3}{2}\Omega_{0}x\frac{\partial}{\partial y}\right)q=S\,, \tag{32}\] where the source \(S\) satisfies \[S\mathbf{e}_{z}=\frac{1}{\rho}\nabla P\times\nabla\left(\frac{1}{\rho}\right)+\nu\frac{1}{\rho}\nabla\times(f_{g}\nabla^{2}\mathbf{v}^{\prime})+\] \[\frac{1}{\rho}\nabla\times\left[\frac{1}{\rho}\nabla\cdot(\rho_{d}\mathbf{v}_{\rm dif}\mathbf{v}_{\rm dif})\right]+\frac{1}{\rho}\frac{\partial(f_{g}f_{0}(x))}{\partial x}\mathbf{e}_{z}\,. \tag{33}\] Derivation of Equation (32) can be found in Appendix A.3.
The first term on the right-hand side of Equation (33) is usually known as the "baroclinic" term, which arises when the fluid is not barotropic (i.e., \(\rho\) is no longer a function of \(P\) alone). In our system, the dependence of \(\rho\) on \(f_{g}\) implies that vortensity may be created or consumed by any misalignment between the density and pressure gradients. The second and third terms might be crudely understood as vortensity diffusion due to gas viscosity and dust concentration diffusion respectively, and the fourth term emerges from the external forcing. The source terms have zero net contribution in equilibrium: the baroclinic and dust diffusion terms vanish, while the viscous term is balanced by the external torque. In perturbation, though, the vortensity equation becomes \[(-i\omega+ikv_{0y}^{\prime}-\frac{3}{2}ik\Omega_{0}x)q_{1}+\frac{dq_{0}}{dx}v_{1x}^{\prime}=S_{1}=S_{\rm bar1}+S_{\rm v1}+S_{\rm diff1}+S_{\rm ext1}\,, \tag{34}\] where we express the perturbed source as a sum of the baroclinic \(S_{\rm bar1}\), viscous \(S_{\rm v1}\), dust diffusion \(S_{\rm diff1}\), and external forcing \(S_{\rm ext1}\) terms, in parallel with the four terms in Equation (33). In particular, \[S_{\rm bar1}=\frac{ik}{\rho_{0}^{3}}\left(\frac{d\rho_{0}}{dx}P_{1}-\frac{dP_{0}}{dx}\rho_{1}\right)=-\frac{ikP_{0}v_{\rm dif0}}{\rho_{0}^{2}}\left(\frac{1}{c_{s}^{2}t_{s}}\mathfrak{f}_{g1}+\frac{f_{d0}}{D_{0}}\mathfrak{p}_{1}\right)\,. \tag{35}\] This term is dominant among the sources and plays significant roles in the two types of the DRWI, as shown in the following sub-subsection.

Figure 10: Grid search of the most unstable wavenumber \(k_{*}\) for \(f_{\rm gmin}=0.5\). This figure is similar to Figure 9. All panels here and in Figure 9 are colored in one single scale for comparison.

#### 5.4.2 Vortensity analysis

We first analyse the vortensity budget of the Type II DRWI, shown in Figure 11. Here, we compare the time derivative of the vortensity perturbation, \(-i\omega q_{1}\), the baroclinic source, \(S_{\rm bar1}\), and the advection term, \(-[ikv_{0y}^{\prime}-(3/2)ik\Omega_{0}x]q_{1}-(dq_{0}/dx)v_{1x}^{\prime}\). We find that the combination of the middle and right panels in Figure 11, representing the latter two terms, largely accounts for the time derivative of the vortensity perturbation shown in the left panel. We have also examined the contributions from other terms, primarily from gas and dust diffusion; they are relatively minor and only yield certain small-amplitude fine-scale features. The baroclinity barely appears in the interval \(x<x_{c}\) and is relatively weak at \(x>0\). Advection dominates the evolution of \(q_{1}\) in these regions, supporting our interpretation of classical Rossby waves based on the conservation of vortensity. However, in the narrow interval in between, \(S_{\rm bar1}\) is stronger and one observes a discrepancy between \(-i\omega q_{1}\) and the advection. To quantify the effects of the baroclinity and the advection, we select the region \(x_{c}\leq x\leq 0,0\leq ky<2\pi\) and calculate the cross-correlation between \(q_{1}\) and the three terms shown in Figure 11 along the \(y\)-axis with circular boundary conditions. The results are all sinusoidal as expected.
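The phase measurement itself can be done in several equivalent ways; the helper below is our own illustrative sketch (not necessarily the exact routine used here), reading the relative phase off the fundamental azimuthal Fourier mode of two real waveforms, which is closely equivalent to fitting the sinusoidal cross-correlation.

```python
import numpy as np

def phase_lag_deg(a, b):
    """Relative phase (in degrees) between two real 2D waveforms a(x, y)
    and b(x, y) sampled over one azimuthal period, read off the fundamental
    Fourier mode along y and averaged over x.  The sign convention should be
    calibrated against a known case (e.g. dq1/dt versus q1) before use."""
    ca = np.fft.fft(a, axis=1)[:, 1]   # fundamental azimuthal mode per row
    cb = np.fft.fft(b, axis=1)[:, 1]
    return np.rad2deg(np.angle(np.mean(ca * np.conj(cb))))

# Synthetic check: b shifted by 30 degrees relative to a.
y = np.linspace(0.0, 2.0 * np.pi, 256, endpoint=False)
a = np.tile(np.cos(y), (5, 1))
b = np.tile(np.cos(y - np.deg2rad(30.0)), (5, 1))
print(phase_lag_deg(a, b))   # ~ 30
```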
Measuring the phase of the sinusoids, we find that the time derivative of \(q_{1}\) has a phase lead of \(71.9^{\circ}\) over \(q_{1}\) itself, which is plainly equal to \(-\mathrm{arg}(-i\omega_{m})\). The angle is less than \(90^{\circ}\) (a positive imaginary part of \(\omega_{m}\)), indicating instability. \(S_{\rm bar1}\) lags behind \(q_{1}\) by \(24.3^{\circ}\), a small angle compared to \(90^{\circ}\), thus significantly enhancing \(q_{1}\). In contrast, the advection term leads \(q_{1}\) by \(89.7^{\circ}\). The instability in the interval \(x_{c}<x<0\), then, may be interpreted as the baroclinic source driving the growth of the vortensity perturbation, whereas the advection only serves to propagate the \(q_{1}\) patterns. We also perform similar calculations on the Type I DRWI exemplified in Figure 4. In the region \(-0.3H\leq x\leq 0,0\leq ky<2\pi\) (roughly the left half of the dust bump), the advection is completely in phase with \(q_{1}\) whereas the baroclinic source lags behind by \(162.2^{\circ}\). Now, the advection encourages the growth of \(q_{1}\) even in the dust bump, but the baroclinic source still works against the advection. This explains why the dust tends to suppress the Type I DRWI: the dust decouples the perturbed vortensity in the bump from the positive feedback loop involving the Rossby waves. In this sense, the same mechanism of instability underlies the Type I DRWI and the classical RWI. This concludes our analysis of the interaction between the dust bump and the gaseous Rossby waves in the linear regime of the DRWIs.

Figure 11: Terms related to the evolution of vortensity of the Type II DRWI for a system with moderate dust content (\(f_{\rm gmin}=0.7\)). The eigenmode is identical to that shown in Figure 5. The three panels show the time derivative, the baroclinic source, and the advection term of the vortensity perturbation. All panels are colored in one single scale for comparison. The \(y\) range, the perturbed velocity field and the co-rotation radius are plotted as in Figure 3.

## 6 Numerical test and the nonlinear regime

In this section, we qualitatively verify the two types of DRWI and investigate their evolution in the nonlinear regime. We use the multifluid dust module in Athena++ (Stone et al., 2008; Huang & Bai, 2022). Our numerical setup keeps the formulation in Sections 2.1 and 2.2, treating the gas and dust as two fluids in a local shearing sheet and establishing the external forcing to maintain the pressure bump. Differently, though, we adopt the standard Navier-Stokes viscosity in Athena++.

Figure 12: Snapshots in the "mild bump" run of the dust density, perturbed gas density and perturbed one-fluid vortensity (Equation (34)) after inserting the perturbation. Note that the aspect ratio is not drawn to scale. The time is annotated on the top left of each panel. Panels in the top row are colored in the same power-law scale. Panels in the middle and bottom rows are colored linearly and not in the same scale: the maximum \(|\rho_{g1}|\) or \(|q_{1}|\) of each is annotated on the top right. Red arrows in the last column point to dust density maxima.

The external forcing is then modified to satisfy the new equilibrium equation in place of Equation (6): \[f_{0}(x)+\frac{1}{\rho_{g0}}\frac{\partial}{\partial x}\left[\rho_{g0}\nu\frac{\partial}{\partial x}\left(v_{0y}^{\prime}-\frac{3}{2}\Omega_{0}x\right)\right]=0\, \tag{36}\]
which gives the form of \(f_{0}(x)\) implemented in the simulation: \[f_{0}(x)=\frac{A}{2}\alpha c_{s}\Omega_{0}\left(\frac{H}{\Delta w}\right)^{3}\left(\frac{x^{3}}{\Delta w^{3}}-\frac{3x}{\Delta w}\right)\exp\left(-\frac{x^{2}}{2\Delta w^{2}}\right)+\] \[\frac{A^{2}}{2}\alpha c_{s}\Omega_{0}\left(\frac{H}{\Delta w}\right)^{3}\left(\frac{x^{3}}{\Delta w^{3}}-\frac{x}{\Delta w}\right)\exp\left(-\frac{x^{2}}{\Delta w^{2}}\right)-\] \[\frac{3A}{2}\alpha c_{s}\Omega_{0}\left(\frac{H}{\Delta w}\right)\left(\frac{x}{\Delta w}\right)\exp\left(-\frac{x^{2}}{2\Delta w^{2}}\right). \tag{37}\] Also, we use the conventional treatment that includes the dust concentration diffusion in the continuity equation and does not absorb \(\mathbf{v}_{\rm dif}\) into \(\mathbf{v}_{d}\) (Huang & Bai, 2022, Equation (A1)). We expect no substantial deviation in terms of linear evolution, where viscosity and dust diffusion processes are unimportant. However, the equilibrium profile is slightly influenced by the different setups in the simulation compared to our analytical derivation (mainly due to the use of the two-fluid instead of the single-fluid formalism), and we reach the steady state by a preliminary axisymmetric run. For a given set of parameters, we use a sheet size of \(x\times y=8\Delta w\times 0.078125\pi H\) with \(1024\times 12\) cells. Initially, we set \(\rho_{g}\) as in Equation (8), \(v_{gy}^{\prime}\) and \(v_{dy}^{\prime}\) as in Equation (26), and \(v_{gx}^{\prime}=v_{dx}^{\prime}=0\). The initial dust density is set as a Gaussian whose height satisfies \(f_{\rm gmin}\) and whose width ensures that the total dust mass equals that calculated in Section 3. After the equilibrium is reached, we scale up the simulation to a sheet size of \(x\times y=8\Delta w\times 20\pi H\) with \(1024\times 3072\) cells, which has the same resolution as the preliminary run and is enough to capture a linear wave of \(k=0.1H^{-1}\). To verify the Type II DRWI, we use the parameters \(A=0.8,\Delta w/H=1.5,St=0.03,\alpha=1\times 10^{-4}\), and \(\overline{f_{g}}=0.980\) (or equivalently \(f_{\rm gmin}=0.536\)). Our linear calculations predict that this system is stable to the Type I DRWI while \(\gamma_{m*}=0.03\Omega_{0}\) for Type II. We preliminarily run this system for \(10000\Omega_{0}^{-1}\), after which the time derivative of the dust density is below \(10^{-6}\rho_{b}\Omega_{0}\). Then, we insert random noise of amplitude \(0.01c_{s}\) into the gas velocity and run the full-scale simulation. We term it the "mild bump" run. This run with the dust turned off is tested to be stable to the RWI. We also study a "sharp bump" run where the Type I DRWI dominates. The parameters are \(A=1.2,\Delta w/H=1.2,St=0.03,\alpha=1\times 10^{-4}\), and \(\overline{f_{g}}=0.980\) (or equivalently \(f_{\rm gmin}=0.541\)), for which our linear calculations predict \(\gamma_{m*}=0.12\Omega_{0}\) and \(0.10\Omega_{0}\) for the Type I and II DRWI respectively. This run follows the same procedures as described above. We will describe the two runs separately in the following subsections.

### The mild bump run: development of the Type II DRWI

The evolution of the "mild bump" run (\(A=0.8,\Delta w/H=1.5\)) is shown in Figures 12 and 13. The dust-gas instability starts to evolve into the nonlinear regime when \(t\gtrsim 100\Omega_{0}^{-1}\). Before that, the dominant azimuthal wavenumber is approximately \(k=0.3H^{-1}\). The gas and dust density perturbations are anti-correlated.
Moreover, Figure 13 shows azimuthally travelling gas density perturbation patterns with \(v_{y}\simeq\pm 0.9c_{s}\). These are characteristic of the Type II DRWI. The upward-moving patterns at \(x<0\) correspond to the Type II DRWI mode with \(\omega_{rm}>0\), while the downward-moving patterns at \(x>0\) correspond to the mode with \(\omega_{rm}<0\). Upon entering the nonlinear regime after \(t\gtrsim 200\Omega_{0}^{-1}\), the dust ring is deformed into anticyclonic vortices, where dust becomes more and more concentrated into scales \(\lesssim H\), with the maximum \(\rho_{d}\) constantly increasing. These dust-gathering vortices correspond to the negative one-fluid vortensity perturbations seen in the bottom panels of Figure 12. Interestingly, their locations seem unrelated to the sign of the pressure perturbation: they may stay in either positive or negative pressure extrema or no extremum at all. More precisely, whereas the \(\rho_{g1}\) patterns are still travelling at an azimuthal velocity comparable to the sound speed3, the dust vortices become almost stationary. Moreover, the gas density perturbation gradually decays in magnitude (compare the maximum \(|\rho_{g1}|\) at \(t=200\Omega_{0}^{-1}\) and \(t=485\Omega_{0}^{-1}\)), reaching a characteristic level of \(\rho_{g1}/\rho_{b}\lesssim 10\%\), in contrast to the still-concentrating dust vortices. In the meantime, the system is accompanied by numerous fine-scale density waves, presumably triggered by local dust concentrations.

Footnote 3: The pattern of \(\rho_{g1}\) at \(t=485\Omega_{0}^{-1}\) that appears as an extended density maximum in fact consists of two travelling waves, with the left and right halves about to separate.

The dust is continuously gathered and dusty vortices merge into each other. Several hundred \(\Omega_{0}^{-1}\) after we insert the perturbation, one or two dust-loaded vortices are left with maximum dust density \(\rho_{d\rm max}/\rho_{b}\) ranging from several tens to more than one hundred.

Figure 13: Same as the middle row in Figure 12, but on a shorter time-scale. Red and blue circles (\(v_{y}=\pm 0.9c_{s}\)) indicate azimuthally travelling patterns.

Although this slightly falls short of the density threshold for gravitational collapse (e.g., \(\rho_{d\rm max}/\rho_{b}\gtrsim 200\) in typical outer disc conditions; see Equation (16) and Section 5.1 in Xu & Bai 2022), dust vertical settling is not included in this work. The equilibrium dust scale height can be estimated by \(H_{d}/H=\sqrt{\alpha/St}=0.06\) (Dubrulle et al. 1995). Under the assumption that the vertical dimension does not adversely impact the dust concentration found in 2D, this will suffice to lead to planetesimal formation by clumping even if dust mass loading itself does not further promote settling (which could be observed in 2D axisymmetric and 3D simulations; see Lin 2019; Xu & Bai 2022). Moreover, the dust in each of these vortices is likely massive enough for the self-gravity to overcome turbulent diffusion, for which Klahr & Schreiber (2020) derived a critical minimum dust cloud radius \(l_{c}/H=\sqrt{\delta/(9St)}\), where \(\delta\equiv D/c_{s}H\) is the dimensionless diffusivity. In our problem (\(\delta<\alpha=1\times 10^{-4},St=0.03\)), \(l_{c}<0.02H\). In comparison, the typical length scale of the dust vortices at \(t=485\Omega_{0}^{-1}\), measured in regions with \(\rho_{d}\geq 12\rho_{b}\) (so that the actual dust-to-gas density ratio reaches \(\sim 200\) after accounting for dust settling), reaches \(\sim 0.06H\) in \(x\) and \(\sim 0.6H\) in \(y\).
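For reference, these two scales can be checked directly from the quoted parameters of the mild bump run (with \(\delta\) bounded by \(\alpha\)); this is only a restatement of the numbers above.

```python
import numpy as np

alpha, St = 1e-4, 0.03
Hd_over_H = np.sqrt(alpha / St)          # Dubrulle et al. (1995)
lc_over_H = np.sqrt(alpha / (9.0 * St))  # Klahr & Schreiber (2020), with delta <= alpha
print(f"H_d/H ~ {Hd_over_H:.2f}")        # ~ 0.06
print(f"l_c/H < {lc_over_H:.3f}")        # < 0.02
```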
The total dust mass in each vortex that is gravitationally bound is estimated as \[m_{d,\rm vortex}\simeq 0.2\left(\frac{R_{0}}{30\,\rm au}\right)^{2}\left(\frac{H/R_{0}}{0.1}\right)\left(\frac{\rho_{b}}{1\,\rm g/cm^{2}}\right)\rm M_{\oplus}. \tag{38}\] This is larger than the mass of typical planetesimals and may already be considered a planetary embryo if it collapses into a single object. On the other hand, we caution that our study lacks the vertical dimension and does not include self-gravity, and thus the fate of such dust clumps remains to be revealed. In the absence of self-gravity they are quickly dissipated after several tens of \(\Omega_{0}^{-1}\), although they re-emerge \(\sim 400\Omega_{0}^{-1}\) later when the dust is spread back into the ring and then triggers a new round of the Type II DRWI. Moreover, the nonlinear outcome of the DRWI also likely depends on the nature of disc turbulence, where our treatment is highly simplified. We speculate that the system may instead form multiple planetesimals (as suggested in Xu & Bai 2022), especially as there is no strong gas vortex that may tend to gather all nearby dust towards a common collapse site at its center.

Figure 14: Snapshots in the "sharp bump" run (the Type I DRWI being unstable) of the dust and gas density after inserting the perturbation. The time is annotated on the top left of each panel. The top row is colored in the same logarithmic scale and the bottom in the same linear scale.

One important characteristic of dust clumping in the Type II DRWI is that the dust ring is retained. This is primarily because of the weak density perturbations in the gas (as opposed to the Type I case to be discussed next). As a result, dust concentration may not be easily identified observationally, especially when the dust ring is optically thick. On the other hand, the Type II DRWI does induce a certain level of azimuthal asymmetry in the form of non-uniform dust distribution and/or corrugation. For example, in the last column in Figure 12, the large-scale dust mass azimuthal contrast (estimated as the dust density within the most massive quarter of the \(y\) range divided by that within its opposite quarter) is \(\sim 3\). Azimuthal asymmetries in dust rings up to similar levels of contrast have been seen in a number of systems such as DM Tau (Hashimoto et al., 2021) and LkCa 15 (Facchini et al., 2020; Long et al., 2022), and they are suggested to be common (van der Marel et al., 2021). Such azimuthal asymmetries could serve as indirect evidence for the Type II DRWI and hence dust clumping. Our results further suggest that peaks in the azimuthal brightness profile of a dust ring are not necessarily co-spatial with the azimuthal gas pressure maxima.

### The sharp bump run: dominance of the Type I DRWI

In the "sharp bump" run (\(A=1.2\), \(\Delta w/H=1.2\)), shown in Figure 14, we observe that dust and gas vortices develop and merge, forming one single anticyclonic gas vortex at a saturated state approximately \(200\Omega_{0}^{-1}\) after the initial perturbation. The overall evolution process closely resembles the development of the standard RWI in dusty discs (Meheut et al., 2012; Zhu et al., 2014), and as a result, all the dust in the ring concentrates towards the gas vortex center. This is clearly distinct from the mild bump run where the dust ring is retained thanks to low levels of gas perturbation while developing dust concentration and clumping within the ring.
In our sharp bump run, the contraction of the dust in the vortex continues to develop (while the gas vortex has already saturated), eventually saturating at \(t\simeq 600\Omega_{0}^{-1}\) with \(\rho_{d\rm max}/\rho_{b}\simeq 2\times 10^{3}\). This peak dust density is significantly higher than in the Type II case, and we can also estimate the total dust mass gathered in the vortex to be \[m_{d,\rm vortex}\simeq 4\pi R_{0}\Delta w\overline{\rho_{d}}=4\left(\frac{R_{0}}{30\rm au}\right)^{2}\left(\frac{H/R_{0}}{0.1}\right)\left(\frac{\rho_{b}}{1\rm g/cm^{2}}\right)\rm M_{\oplus}\, \tag{39}\] where \(\rm M_{\oplus}\) is the Earth mass. This is also much higher than the Type II counterpart, and is in the mass range of planetary embryos. Again, future 3D studies including self-gravity are needed to reveal the fate of such dust clumps. Also, we observe that the gas vortex has a long lifetime of at least \(1000\Omega_{0}^{-1}\), although the peak dust density fluctuates between \(10^{1}\rho_{b}\) and \(10^{3}\rho_{b}\) after saturation. The lifetime of dust-laden vortices in 2D has been studied extensively (Chang & Oishi, 2010; Fu et al., 2014; Crnkovic-Rubsamen et al., 2015; Lovascio et al., 2022) and depends on factors such as the initial dust-to-gas mass ratio, viscosity, dust feedback and dust grain size. The vortex in our sharp bump run is consistent with that of Lovascio et al. (2022), which has a similar spatial scale, dust size and total dust-to-gas mass ratio (lifetime \(\sim 10^{3}\Omega_{0}^{-1}\) there). Works on planet-induced (Li et al., 2020; Hammer et al., 2021) or 3D (Lyra et al., 2018; Hammer & Lin, 2023) vortices with dust feedback suggested similar longevity, although dust settling could disturb the midplane vortex structure. Since the dust is well confined in the long-lived gas vortex, we speculate that the stronger dust clump resulting from the Type I DRWI is more likely to form massive planetesimals/planetary embryos. One uncertainty in our scenario is that the maximum dust density in dusty vortices is very sensitive to the prescription of the dust diffusivity \(D\), which is currently given as a function of the local dust and gas density. If we assume no weakening of turbulent diffusion due to the dust mass loading, \(\rho_{d\rm max}\) will reduce by approximately one order of magnitude for both the mild and sharp bump runs. In previous works, enhanced dust mass loading with dust feedback is found to reduce turbulent diffusion in magneto-rotational instability (MRI) turbulence (Xu & Bai, 2022). SI-induced turbulent diffusivity was also found to be sensitive to the dust-to-gas ratio (Schreiber & Klahr, 2018). Further investigations of the dust diffusivity within dust clumps, incorporating more realistic background gas turbulence, are needed. ## 7 Summary and discussion We introduce a physically-motivated local shearing sheet model of turbulent dust-trapping rings in PPDs. We establish a pressure bump by implementing a forcing term that mimics torques that drive ring formation (e.g., by planets, or magnetic flux concentration), balanced by viscosity that mimics disc turbulence. The dust is modeled as a fluid including backreaction, which also evolves into an equilibrium dust bump profile by balancing radial drift towards pressure maxima and turbulent diffusion. We aim to identify linear instabilities that operate and potentially lead to planetesimal formation in this realistic setting. We find two types of instabilities, which we term the DRWI.
Type I is generalised from the standard RWI while Type II is first identified here. The Type I DRWI, characterised by a vanishing phase velocity and (anti-)symmetric eigenfunction patterns, dominates in relatively sharp pressure bumps and/or bumps with low dust content. In contrast, the Type II DRWI travels along the \(y\) axis, has different perturbation magnitudes on either side of the pressure bump, and operates in relatively mild and dusty bumps. Its maximum growth rate is largely determined by the equilibrium gradients of the gas and dust density. The standard RWI is understood in terms of conservation of vortensity. However, our vortensity source analysis highlights the effective baroclinity in the dust bump, which only consumes the vortensity budget in the Type I DRWI but mainly contributes to the vortensity growth in Type II. Therefore, we believe that vortensity advection, the incentive of the classical RWI, also accounts for the growth of the Type I DRWI, while both the advection and baroclinity drive the Type II DRWI. The two types of DRWI are qualitatively verified in simulations, and they show distinct nonlinear outcomes with major observational implications. In general, Type I DRWI dominates in the presence of a sharp bump. This yields a standard gas vortex characterized by a pressure maximum in the center, and it traps and concentrates all the dust originally in the ring. On the other hand, in a mild bump, the Type II DRWI operates and develops into sub-\(H\)-sized dust anticyclones, whereas the gas density only shows weak perturbations. This allows the dust ring to be largely preserved, while exhibiting azimuthal asymmetries. In both cases, the non-linear evolution of the DRWI triggers significant dust mass loading in the form of dust vortices, which hold potential for dust clumping and hence planetesimal formation or direct formation of planetary embryos. ### Discussion The DRWIs are likely closely related to certain instability phenomena in previous simulations of pressure bumps with dust feedback. For example, they provide a potential explanation to simulations in Xu and Bai (2022b), where the ring could be broken into dusty non-axisymmetric filaments, qualitatively similar to the nonlinear patterns of the Type II DRWI. Also, at the outer edge of a dead zone, the steep increase of turbulent viscosity leads to radial local gas overdensity. While a sufficiently narrow transition width induces formation of large-scale gas vortices with dust concentrating inside (ascribed to the RWI, Miranda et al., 2017, with the dust content found to impede vortex formation and dust concentration), a smoother transition produces no large-scale gas vortices but dust clumps of scales \(\lesssim H\)(Huang et al., 2020). The edge of a gap opened by a massive planet is subject to similar instabilities, with large-scale gas vortices emerging only in the absence of dust backreaction while non-negligible dust concentration instead encourages formation of small dust vortices (Yang and Zhu, 2020). 3D simulations in VSI-turbulent pressure bumps also found a tendency of dusty vortex formation towards axisymmetric rings for increasing average dust-to-gas mass ratio or the Stokes number (Lehmann and Lin, 2022). While further investigation is needed, the Type II DRWI offers a viable physical explanation of these findings. 
It is worth considering how our local analyses and simulations of the DRWI fit in realistic global disc structures, which has geometric curvature as well as a background pressure gradient. We expect the instability to be qualitatively robust in the presence of the disc curvature since we recover the classical RWI. Pan and Yu (2020) noted that sustaining Rossby waves requires the presence of the second derivative of the background shear (of order \(\Omega_{0}/R_{0}\)), which the standard shearing sheet does not capture in background equilibrium. This is resolved as we form a pressure bump that provides strong radial structure (\(\partial^{2}v^{\prime}_{0y}/\partial x^{2}\sim\Omega_{0}/H\gg\Omega_{0}/R_{0}\)). Further, our physical ingredient analysis suggests that the DRWI is probably insensitive to the particular bump shape with or without a background pressure gradient, as long as the pressure maximum concentrates dust to serve as the vortensity source and the two bump flanks provide equilibrium vortensity slopes. On the other hand, a background pressure gradient can induce a net dust radial flux if no other dust trap exists outside the bump in question. The dust drift could trigger the two-dimensional SI in small scales (Pan and Yu, 2020) that may coexist and/or interact with the DRWI. The ubiquity of dust-trapping rings and the relatively rare occurrence of high-contrast asymmetries such as arcs and crescents (Andrews, 2020) suggest that most of the rings are likely moderate in sharpness: they must trap dust effectively in the presence of background radial drift while still stable to the vortex-forming Type I DRWI. This implication is related to the recent global study by Chang et al. (2023), which showed that isothermal axisymmetric pressure maxima remain (classical-) RWI-stable for a reasonably large range of bump widths. On the other hand, weak-to-modest level of azimuthal asymmetry in dust rings appears to be common (van der Marel et al., 2021). This is suggestive that the Type II DRWI likely operates and leads to dust clumping while preserving the overall morphology of the dust rings. Another possibility is that dust-laden vortices do form but quickly die out, although the exact lifetime is model-sensitive (e.g., Fu et al., 2014, 2014; Rometsch et al., 2021; Hammer and Lin, 2023). The dust-trapping ring rests in a broader context of spatial and size evolution of solids in PPDs. The Type II DRWI favors relatively large particles with \(\dot{M}=10^{-2}-10^{-1}\), consistent with upper bounds of drift-limited dust size in typical conditions in outer PPDs (Birnstiel et al., 2012). The ring is found to further enhance the average dust size by alleviating drift and fragmentation barriers (Li et al., 2019; Laine et al., 2020), thus likely encouraging the onset of the DRWI. It is conceivable that the pressure bump gathers and nurses the dust progressively over drift and coagulation time-scales until mature for the instability. The formation of planetesimals/embryos in the pressure bump bears on their later evolution paths. For instance, formation models built out of a self-interacting planetesimal ring (regardless of their origin) can be compatible with the formation scenario of terrestrial planets and super-Earths (Woo et al., 2023; Batygin and Morbidelli, 2023). A dust-trapping ring also likely allows pebble accretion to operate efficiently that leads to rapid planet assembly (Jiang and Ormel, 2023). 
The fact that the Type II DRWI leaves the pressure bump largely intact likely favors the production of a planetesimal ring and/or direct formation of embryos which fit into the scenarios above. Our study bears a number of simplifications and caveats that deserve future study. Among them is the local treatment of the isolated pressure bump, as discussed above. Moreover, our 2D study also neglects vital 3D processes such as dust settling and vertical gas flow, which may alter the linear DRWI and its non-linear evolution. We approximate the dust-gas mixture with a single fluid, although two-fluid simulations largely agree with the calculations. Self-gravity is ignored throughout this work, and thus planetesimal/embryo formation is only inferred. Also, our treatment of the MRI turbulence as a diffusive process and of the dust diffusivity as a simplistic function calls for first-principles insights into modelling the MRI and/or other forms of turbulence. We intend to generalise our work to 3D in the future, with a more realistic and thorough consideration of physical processes. Despite current limitations, our work pioneers a rigorous effort to uncover fundamental dynamical scenarios that bridge widespread observed dust structures to the crucial evolutionary stage of solid material towards future planets. ## Acknowledgements We thank the anonymous referee for detailed comments and suggestions that helped improve the clarity of this paper. We thank Pinghui Huang for instructions on the multi-fluid dust module in Athena++, and Cong Yu, Min-Kai Lin and Marius Lehmann for useful discussions. We also acknowledge the Chinese Center of Advanced Science and Technology for hosting the Protoplanetary Disk and Planet Formation Summer School in 2022 where part of this work was conducted. This work is supported by the National Science Foundation of China under grant No. 12233004, and the China Manned Space Project, with No. CMS-CSST-2021-B09. We acknowledge the Tsinghua Astrophysics High-Performance Computing platform at Tsinghua University for providing computational and data storage resources that have contributed to the research results reported within this paper. Software: NumPy (Harris et al., 2020), SciPy (Virtanen et al., 2020), Matplotlib (Hunter, 2007), Findiff (Baer, 2018), Athena++ (Stone et al., 2020; Huang and Bai, 2022) ## Data Availability Data of the linear analyses and simulations in this paper are available upon request to the authors.
2309.03032
Angle between two random segments
The study of "random segments" is a classic issue in geometrical probability, whose complexity depends on how they are defined. Even in apparently simple models, the random behavior is not immediate. In the present manuscript the following setting is considered. Four independent random points are drawn from a uniform distribution on the unit disk. Two random segments are built from them, which always lie inside the disk. We compute the density function of the angle between these two random segments when they intersect each other. This type of problem tends to be complex due to the high stochastic dependency between the elements that form them. The expression obtained is in terms of integrals; however, it allows us to understand the behavior of the distribution of the random angle between the two random segments.
Paulo Manrique-Mirón
2023-09-06T14:20:06Z
http://arxiv.org/abs/2309.03032v1
# Angle between two random segments ###### Abstract. The study of _random segments_ is a classic issue in geometrical probability, whose complexity depends on how they are defined. Even in apparently simple models, the random behavior is not immediate. In the present manuscript the following setting is considered. Four independent random points are drawn from a uniform distribution on the unit disk. Two random segments are built from them, which always lie inside the disk. We compute the density function of the angle between these two random segments when they intersect each other. This type of problem tends to be complex due to the high stochastic dependency between the elements that form them. The expression obtained is in terms of integrals; however, it allows us to understand the behavior of the distribution of the random angle between the two random segments. ## 1. **Introduction** Geometrical probability deals with the study of classical geometry objects, points, segments, lines, planes, circles, spheres, etc., which are generated through some random mechanism [3, 4]. The development of this area dates back at least to 1733 when Georges-Louis Leclerc, Comte de Buffon, wrote "Memoire sur le jeu de franc-carreau", where he proposed and solved (not always correctly) three problems formulated as mathematical games. These are the clean tiles problem, the needle problem, and the mesh problem [4]. Among them, possibly the best known is the needle problem, which consists of randomly throwing a needle of length \(l\) on a set of equidistant parallel lines, with separation \(h\), on the plane. The question is to determine the probability that the needle cuts a line. In fact, the value of this probability is \(\frac{2l}{\pi h}\) (for \(l\leq h\)). This game allows us to set up a method of simulation to determine the value of \(\pi\). The analysis of a random geometric object depends on the way in which it is generated by a certain random mechanism. For example, Kendall and Moran [3] illustrate how to solve a problem proposed by Bertrand which consists of finding the probability that a _random chord_ of a circle is longer than the side of the equilateral triangle inscribed in it. To do this, three different ways of understanding what a _random chord_ is are proposed. First, the chord is formed by joining two points generated independently and uniformly on the circumference. The second model is to consider a chord which is perpendicular to the diameter and whose point of intersection is uniformly distributed over the diameter. In the third, a point uniformly distributed on the disk is chosen and the chord is the segment perpendicular to the radius which passes through this point. The probability of the considered event is \(\frac{1}{3}\), \(\frac{1}{2}\), \(\frac{1}{4}\), respectively [1]. Garwood and Holroyd [1] interpret a _random chord_ as the segment passing through two independent and uniformly distributed points \(P,Q\) on the disk of radius one. They computed the density function of the distance \(L\) of the chord to the center of the circle, \[f_{L}(l)=\frac{16}{3\pi}(1-l^{2})^{3/2}\mathds{1}_{\{l\in[0,1]\}},\] since this distance determines the length of the chord.
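As an illustration of the Garwood–Holroyd density quoted above, the following Python sketch (our own, not part of either paper) draws pairs of independent points uniformly in the unit disk, computes the distance from the centre to the line through each pair, and compares the empirical histogram with \(f_{L}(l)=\frac{16}{3\pi}(1-l^{2})^{3/2}\).

```python
import numpy as np

rng = np.random.default_rng(0)

def uniform_disk(n, rng):
    """n independent points, uniform on the unit disk (polar sampling)."""
    r = np.sqrt(rng.random(n))
    phi = 2.0 * np.pi * rng.random(n)
    return np.column_stack([r * np.cos(phi), r * np.sin(phi)])

n = 200_000
P = uniform_disk(n, rng)
Q = uniform_disk(n, rng)

# Distance from the origin to the line through P and Q:
# |x_P y_Q - y_P x_Q| / |P - Q|   (2D cross product over the chord length).
cross = np.abs(P[:, 0] * Q[:, 1] - P[:, 1] * Q[:, 0])
L = cross / np.linalg.norm(P - Q, axis=1)

# Compare with f_L(l) = (16 / 3 pi) (1 - l^2)^{3/2} on [0, 1].
hist, edges = np.histogram(L, bins=50, range=(0.0, 1.0), density=True)
mid = 0.5 * (edges[:-1] + edges[1:])
f_L = 16.0 / (3.0 * np.pi) * (1.0 - mid**2) ** 1.5
print("max abs deviation:", np.max(np.abs(hist - f_L)))  # statistical noise, ~1e-2
```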
Previously, Garwood and Tanner [2] found the density of the distance \(D\) between \(P\) and \(Q\), \[f_{D}(d)=\frac{2d}{\pi}\left(2\arccos\left(\frac{d}{2}\right)-\sin\left(2\arccos\left(\frac{d}{2}\right)\right)\right)\mathds{1}_{\{d\in[0,2]\}}.\] In both works the _infinitesimal strategy_ is used to determine the densities of the considered lengths, which consists of the following idea: if \(f(w)\) is the density of the random variable \(W\) then, intuitively, \(f(w)dw\) is the probability that \(W\in[w,w+dw]\). In this manuscript, the _random segments_ are defined with the same mechanism proposed by Garwood and Holroyd: two independent points uniformly distributed on the unit disk are joined to form a segment. Generating two independent segments in this way, we compute the density of the angle between them when they intersect, see Figure 1.a. Formally, the following framework is considered. Let \(\mathds{D}=\{x\in\mathbb{R}^{2}:||x||\leq 1\}\) and let \(X,Y\) be two independent random points which are uniformly distributed on \(\mathds{D}\). From \(X\) and \(Y\) we define a _random segment_ as \[S_{XY}:=\left\{w\in\mathds{D}:w=(1-\alpha)X+\alpha Y\;\;\text{for}\;\;\alpha\in [0,1]\right\}.\] We consider four independent random points \(A,B,C,D\), all uniformly distributed on \(\mathds{D}\), and let \(S_{AB},S_{CD}\) be the random segments associated with \((A,B)\) and \((C,D)\), respectively. Note \(S_{AB}\cap S_{CD}\) could be empty. Our objective is to compute the distribution of the angle between \(S_{AB},S_{CD}\) when they intersect, i.e., if \(\Theta:=\angle\left(S_{AB},S_{CD}\right)\), \[\mathbb{P}\left(\Theta\leq\theta|S_{AB}\cap S_{CD}\neq\O\right), \tag{1.1}\] with \(\theta\in[0,\pi]\). The angle \(\Theta\) is measured counterclockwise from \(S_{AB}\) to \(S_{CD}\), see Figure 1.b. In order to compute 1.1, we perform a change of variables which permits us to find an expression for it and, at the same time, to recover the results of Garwood and Holroyd and of Garwood and Tanner. The manuscript consists of two more sections. In the Main Result Section, the density of \(\mathbb{P}\left(\Theta\leq\theta|S_{AB}\cap S_{CD}\neq\O\right)\) is presented, as well as some consequences of the proposed change of variable. The last section gives the proof of this result. Finally, we are grateful for the comments of Victor Perez Abreu and the support of Alberto Saucedo Lara. ## 2. **Main Result** The main result of this manuscript is presented below. **Theorem 2.1**.: _The density function of the expression 1.1 is given by \(g(\theta)=\frac{1}{c}g^{*}(\theta)\mathds{1}_{\{\theta\in[0,\pi]\}}\), where \(c=\int_{0}^{\pi}g^{*}(\theta)d\theta\),_ \[g^{*}(\theta):=\int_{0}^{1}\int_{0}^{1}\frac{1}{\pi}g_{1}^{*}(\rho_{AB},\rho_{CD},\theta)\mathds{1}_{\left\{\sqrt{1-\rho_{AB}^{2}}|\sin(\theta)|\geq|\rho_{AB}\cos(\theta)+\rho_{CD}|\right\}}\mathrm{d}\rho_{AB}\mathrm{d}\rho_{CD},\] _and_ \[g_{1}^{*}(\rho_{AB},\rho_{CD},\theta):=\left(\frac{8}{\pi}\right)^{2}\sqrt{1-\rho_{AB}^{2}}\sqrt{1-\rho_{CD}^{2}}\left[1-\frac{\rho_{AB}^{2}+\rho_{CD}^{2}+2\rho_{AB}\rho_{CD}\cos\left(\theta\right)}{\sin^{2}\left(\theta\right)}\right]^{2}\mathds{1}_{\rho_{AB}\in[0,1]}\mathds{1}_{\rho_{CD}\in[0,1]}.\] Figure 2 shows the graph of \(g(\theta)\). The proof of Theorem 2.1 is based on the following change of variable.
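Before going through the change of variables, it may be helpful to see the quantities involved estimated by direct simulation. The sketch below is our own illustration, not taken from the manuscript: it samples the four points, tests whether the two segments intersect with the standard orientation test, and records the counterclockwise angle from \(S_{AB}\) to \(S_{CD}\) when they do, which gives empirical estimates of the intersection probability and of the conditional law of \(\Theta\).

```python
import numpy as np

rng = np.random.default_rng(1)

def uniform_disk(n, rng):
    """n independent points, uniform on the unit disk."""
    r = np.sqrt(rng.random(n))
    phi = 2.0 * np.pi * rng.random(n)
    return np.column_stack([r * np.cos(phi), r * np.sin(phi)])

def orient(o, a, b):
    """Signed orientation (twice the signed area) of the triangle (o, a, b)."""
    return (a[:, 0] - o[:, 0]) * (b[:, 1] - o[:, 1]) - \
           (a[:, 1] - o[:, 1]) * (b[:, 0] - o[:, 0])

n = 500_000
A, B, C, D = (uniform_disk(n, rng) for _ in range(4))

# S_AB and S_CD intersect iff C, D lie on opposite sides of the line AB and
# A, B lie on opposite sides of the line CD (degenerate cases have probability 0).
hit = (orient(A, B, C) * orient(A, B, D) < 0) & (orient(C, D, A) * orient(C, D, B) < 0)
print("empirical P(S_AB and S_CD intersect):", hit.mean())

# Counterclockwise angle from S_AB to S_CD, reduced modulo pi because the
# segments are undirected; theta[hit] is an empirical sample of Theta given intersection.
u, v = B - A, D - C
theta = np.mod(np.arctan2(v[:, 1], v[:, 0]) - np.arctan2(u[:, 1], u[:, 0]), np.pi)
print("empirical mean of Theta given intersection:", theta[hit].mean())
```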
Observe that \(X=\sqrt{R_{X}}(\cos(\Gamma_{X}),\sin(\Gamma_{X}))^{T}\), where \(v^{T}\) is the transpose of \(v\), has uniform distribution on \(\mathds{D}\) where \(R_{X},\Gamma_{X}\) are independent random variables such that \(R_{X}\sim\mathrm{Unif}[0,1]\) and \(\Gamma_{X}\sim\mathrm{Unif}[0,2\pi]\). Thus, we can express \(S_{AB}\) as \[S_{AB}=\left\{w\in\mathbb{R}^{2}:w=(1-\alpha)\sqrt{R_{A}}\begin{pmatrix}\cos( \Gamma_{A})\\ \sin(\Gamma_{A})\end{pmatrix}+\alpha\sqrt{R_{B}}\begin{pmatrix}\cos(\Gamma_{B })\\ \sin(\Gamma_{B})\end{pmatrix},\;\alpha\in[0,1]\right\},\] and \(R_{A},R_{B},\Gamma_{A},\Gamma_{B}\) are independent random variables with \(R_{A},R_{B}\sim\mathrm{Unif}[0,1]\) and \(\Gamma_{A},\Gamma_{B}\sim\mathrm{Unif}[0,2\pi]\). Consider the perpendicular \(OF\) from the origin \(O\) to the segment \(S_{AB}\) (or its prolongation). Let \(\Gamma_{AB}\) the angle this perpendicular makes with the \(x\)-axis and \(R_{AB}\) the perpendicular distance of the segment \(S_{AB}\) from the origin. The points \(A,B\) are determined by \(R_{AB},\Gamma_{AB},T_{A},T_{B}\) where \(|T_{A}|\) and \(|T_{A}|\) are the distance of \(A\) and \(B\) from \(F\), respectively. See Figure 3. We denote particular values of \(R_{AB},\Gamma_{AB},T_{A},T_{B}\) by \(\rho_{AB},\gamma_{AB},t_{A},t_{B}\), respectively. Note that \[\sqrt{\rho_{j}}\cos\gamma_{j} =\rho_{AB}\cos\gamma_{AB}-t_{j}\sin\gamma_{AB},\] \[\sqrt{\rho_{j}}\sin\gamma_{j} =\rho_{AB}\sin\gamma_{AB}+t_{j}\cos\gamma_{AB}, \tag{2.1}\] for \(j\in\{A,B\}\). Figure 2. Thus, the join density of \((R_{A},\Gamma_{A},R_{B},\Gamma_{B})\) is \[f(\rho_{A},\gamma_{A},\rho_{B},\gamma_{B})=\frac{1}{(2\pi)^{2}}\mathds{1}_{\{\rho_ {A}\in[0,1]\}}\mathds{1}_{\{\gamma_{A}\in[0,2\pi]\}}\mathds{1}_{\{\rho_{B}\in[ 0,1]\}}\mathds{1}_{\{\gamma_{B}\in[0,2\pi]\}},\] and it can be expressed in terms of \(\rho_{AB},\gamma_{AB},t_{A},t_{B}\) as \[\left(\frac{2}{2\pi}\right)^{2}|t_{A}-t_{B}|\,\mathds{1}_{\{\rho_{AB}\in[0,1] \}}\mathds{1}_{\{\gamma_{AB}\in[0,2\pi]\}}\mathds{1}_{\left\{t_{A}\in\left[- \sqrt{1-\rho_{AB}^{2}}\sqrt{1-\rho_{AB}^{2}}\right]\right\}}\mathds{1}_{\left\{ t_{B}\in\left[-\sqrt{1-\rho_{AB}^{2}},\sqrt{1-\rho_{AB}^{2}}\right]\right\}}. \tag{2.2}\] This change of variable allows us to obtain more information of the random segment that does not seem clear from its original definition. For example, the results of Garwood and Holroyd and Garwood and Tanner can be deduced directly from this. 
The marginal density at \(\rho_{AB}\) retrieves the result from Garwood and Holroyd, \[f(\rho_{AB}) =\mathds{1}_{\{\rho_{AB}\in[0,1]\}}\frac{2}{\pi}\int\int|t_{A}-t_{ B}|\,\mathds{1}_{\left\{t_{A}\in\left[-\sqrt{1-\rho_{AB}^{2}},\sqrt{1-\rho_{AB}^{2}} \right]\right\}}\mathds{1}_{\left\{t_{B}\in\left[-\sqrt{1-\rho_{AB}^{2}},\sqrt {1-\rho_{AB}^{2}}\right]\right\}}\mathrm{d}t_{A}\mathrm{d}t_{B}\] \[=\frac{8}{3\pi}(1-\rho_{AB}^{2})^{3/2}\mathds{1}_{\{\rho_{AB}\in [0,1]\}}.\] Meanwhile, the marginal density at \((t_{A},t_{B})\), \[f(t_{A},t_{B})=\frac{2}{\pi}\,|t_{A}-t_{B}|\min\left\{\sqrt{1-t_{A}^{2}},\sqrt {1-t_{B}^{2}}\right\}\mathds{1}_{\{t_{A}\in[-1,1]\}}\mathds{1}_{\{t_{B}\in[-1,1]\}},\] allows to retrieve the result of Garwood and Tanner, \[\mathbb{P}\left(|S_{AB}|\leq d\right) =\mathbb{P}\left(|T_{A}-T_{B}|\leq d\right)\] \[=\int_{\{|t_{A}-t_{B}|\leq d\}}f(t_{A},t_{B})\mathrm{d}(t_{A},t_{ B})\] \[=4\times\frac{2}{\pi}\left[\int_{0}^{d/2}\int_{-t_{B}}^{t_{B}}(t_ {B}-t_{A})\sqrt{1-t_{B}^{2}}\mathrm{d}t_{A}\mathrm{d}t_{B}+\int_{d/2}^{1}\int _{t_{B}-d}^{t_{B}}(t_{B}-t_{A})\sqrt{1-t_{B}^{2}}\mathrm{d}t_{A}\mathrm{d}t_{ B}\right]\] \[=\int_{0}^{d}\frac{s}{\pi}\frac{-4s+s^{3}+8\sqrt{4-s^{2}}\arccos \left(\frac{2+s}{\sqrt{4-s^{2}}}\right)}{\sqrt{4-s^{2}}}\mathds{1}_{\{s\in[0,2 ]\}}\mathrm{d}s,\] with a little extra algebraic work. Figure 3. In the context of the main result of this work, the _interaction_ between the independent random variables \(\Gamma_{AB},\Gamma_{CD},R_{AB},R_{CD}\) induced by the condition that the segments \(S_{AB}\) and \(S_{CD}\) intersect makes the expression for the density \(g(\theta)\) be hard to reduce, as can be seen in the proof of Theorem 2.1. However, it allows to clearly build a simulation scheme to approximate its form, see Figure 2. Additionally, we are capable to estimate the probability of the event \(\{S_{AB}\cap S_{CD}\neq\O\}\), i.e., \[\mathbb{P}\left(S_{AB}\cap S_{CD}\neq\O\right)=\int_{0}^{\pi}g^{*}(\theta)d \theta\approx 0.9393598,\] which means that the random segments intersect quite often. An adequate change of variable can be allowed us to do a clearer analysis of a random geometric object. However, despite the apparently simplicity of the events, the complexity of the expressions continue due to the strong dependency that should be existed between the elements which conform the geometric object. ## 3. **Proof** In this section the proof of Theorem 2.1 is presented. From the expressions 2.1 we have the following relationships: \[\rho_{j} =\rho_{AB}^{2}+t_{j}^{2},\] \[\cos\gamma_{j} =\frac{\rho_{AB}\cos\gamma_{AB}-t_{j}\sin\gamma_{AB}}{\sqrt{\rho _{AB}^{2}+t_{j}^{2}}},\] \[\sin\gamma_{j} =\frac{\rho_{AB}\sin\gamma_{AB}+t_{j}\cos\gamma_{AB}}{\sqrt{\rho _{AB}^{2}+t_{j}^{2}}},\] \[\tan\gamma_{j} =\frac{\sin\gamma_{j}}{\cos\gamma_{j}}=\frac{\rho_{AB}\tan\gamma _{AB}+t_{j}}{\rho_{AB}-t_{j}\tan\gamma_{AB}},\] \[\gamma_{j} =\arctan\left(\frac{\rho_{AB}\tan\gamma_{AB}+t_{j}}{\rho_{AB}-t_ {j}\tan\gamma_{AB}}\right), \tag{3.1}\] for \(j\in\{A,B\}\). 
The join density of \((R_{A},\Gamma_{A},R_{B},\Gamma_{B})\) \[f(\rho_{A},\gamma_{A},\rho_{B},\gamma_{B})=\frac{1}{(2\pi)^{2}}\mathds{1}_{\{ \rho_{A}\in[0,1]\}}\mathds{1}_{\{\gamma_{A}\in[0,2\pi]\}}\mathds{1}_{\{\rho_{ B}\in[0,1]\}}\mathds{1}_{\{\gamma_{B}\in[0,2\pi]\}}\] is written in terms of \(\rho_{AB},\gamma_{AB},t_{A},t_{B}\), i.e., \[f(\rho_{AB},\gamma_{AB},t_{A},t_{B})\] \[\quad=\frac{1}{(2\pi)^{2}}\left|J\right|\mathds{1}_{\{\rho_{AB} \in[0,1]\}}\mathds{1}_{\{\gamma_{AB}\in[0,2\pi]\}}\mathds{1}_{\left\{t_{A} \in\left[-\sqrt{1-\rho_{AB}^{2}},\sqrt{1-\rho_{AB}^{2}}\right]\right\}} \mathds{1}_{\left\{t_{B}\in\left[-\sqrt{1-\rho_{AB}^{2}},\sqrt{1-\rho_{AB}^{ 2}}\right]\right\}},\] where \(\left|J\right|\) is absolute value of the determine of the Jacobian matrix \(J\), which is \[J=\begin{pmatrix}\frac{\partial\rho_{A}}{\partial t_{A}}&\frac{\partial\rho_{A }}{\partial t_{B}}&\frac{\partial\rho_{A}}{\partial\rho_{AB}}&\frac{\partial \rho_{A}}{\partial\gamma_{AB}}\\ \frac{\partial\gamma_{A}}{\partial t_{A}}&\frac{\partial\gamma_{A}}{\partial t _{B}}&\frac{\partial\gamma_{A}}{\partial\rho_{AB}}&\frac{\partial\gamma_{A}} {\partial\gamma_{AB}}\\ \frac{\partial\rho_{B}}{\partial t_{A}}&\frac{\partial\rho_{B}}{\partial t_{B}} &\frac{\partial\rho_{B}}{\partial\rho_{AB}}&\frac{\partial\rho_{B}}{\partial \gamma_{AB}}\\ \frac{\partial\gamma_{B}}{\partial t_{A}}&\frac{\partial\gamma_{B}}{\partial t_{B }}&\frac{\partial\gamma_{B}}{\partial\rho_{AB}}&\frac{\partial\gamma_{B}}{ \partial\gamma_{AB}}\\ \end{pmatrix}=\begin{pmatrix}2t_{A}&0&2\rho_{AB}&0\\ \frac{\rho_{AB}}{\rho_{AB}^{2}+t_{A}^{2}}&0&-\frac{t_{A}}{\rho_{AB}^{2}+t_{A}^{2}} &1\\ 0&2t_{B}&2\rho_{AB}&0\\ 0&\frac{\rho_{AB}}{\rho_{AB}^{2}+t_{B}^{2}}&-\frac{t_{B}}{\rho_{AB}^{2}+t_{B}^ {2}}&1\\ \end{pmatrix}.\] Then \(\left|J\right|=4\left|t_{A}-t_{B}\right|\). Thus, the joint density \(f(\rho_{AB},\gamma_{AB},t_{A},t_{B})\) is \[\left(\frac{2}{2\pi}\right)^{2}\left|t_{A}-t_{B}\right|\mathds{1}_{\left\{\rho_{ AB}\in[0,1]\right\}}\mathds{1}_{\left\{\gamma_{AB}\in[0,2\pi]\right\}}\mathds{1}_{ \left\{t_{A}\in\left[-\sqrt{1-\rho_{AB}^{2}},\sqrt{1-\rho_{AB}^{2}}\right] \right\}}\mathds{1}_{\left\{t_{B}\in\left[-\sqrt{1-\rho_{AB}^{2}},\sqrt{1- \rho_{AB}^{2}}\right]\right\}}. \tag{3.2}\] From here, we consider \(S_{AB}\) and \(S_{CD}\) in terms of \(\rho_{AB},\gamma_{AB},t_{A},t_{B}\). Note that the cardinality of \(S_{AB}\cap S_{CD}\) is such that \(\left|S_{AB}\cap S_{CD}\right|\in\left\{0,1\right\}\) with probability \(1\). If \(S_{AB}\) is fixed, then the probability that \(C\) and \(D\) remain in the chord induced by \(S_{AB}\) is zero, since \(C\) and \(D\) have a continuous distribution. Let \(l_{AB},l_{CD}\) be the associated lines induce by \(S_{AB},S_{CD}\), respectively. Let \[F_{AB}=\rho_{AB}\begin{pmatrix}\cos\gamma_{AB}\\ \sin\gamma_{AB}\end{pmatrix},\ \ F_{CD}=\rho_{CD}\begin{pmatrix}\cos\gamma_{ CD}\\ \sin\gamma_{CD}\end{pmatrix}.\] Note that the following system of equations 3.3 has always a unique solution \(z=(z_{x},z_{y})^{T}\) due to the considered random variables have continuous distributions. \[y+\frac{1}{\tan\gamma_{AB}}x =\rho_{AB}\sin\gamma_{AB}+\frac{1}{\tan\gamma_{AB}}\rho_{AB}\cos \gamma_{AB},\] \[y+\frac{1}{\tan\gamma_{CD}}x =\rho_{CD}\sin\gamma_{CD}+\frac{1}{\tan\gamma_{CD}}\rho_{CD}\cos \gamma_{CD}. \tag{3.3}\] The point \(z\) is the intersected point between \(l_{AB}\) and \(l_{CD}\), see Figure 4. 
From the system 3.3, we have that \[z=\frac{1}{\sin\gamma_{AB}\cos\gamma_{CD}-\cos\gamma_{AB}\sin\gamma_{CD}} \begin{pmatrix}\rho_{CD}\sin\gamma_{AB}-\rho_{AB}\sin\gamma_{CD}\\ \rho_{AB}\cos\gamma_{CD}-\rho_{CD}\cos\gamma_{AB}\end{pmatrix}, \tag{3.4}\] and the norm of \(z\) is \[\left|\left|z\right|\right|^{2}=\frac{\rho_{AB}^{2}+\rho_{CD}^{2}-2\rho_{AB} \rho_{CD}\cos\left(\gamma_{A}-\gamma_{B}\right)}{\sin^{2}\left(\gamma_{A}- \gamma_{B}\right)}. \tag{3.5}\] In order that \(S_{AB}\cap S_{CD}\neq\mathcal{O}\), we need that the point \(z\) satisfies the condition \(\left|\left|z\right|\right|^{2}\leq 1\), so that it is inside of \(\mathds{D}\), and to assure that there exist \(\alpha_{AB},\alpha_{CD}\in[0,1]\) such that \[\left(1-\alpha_{AB}\right)A+\alpha_{AB}B =z,\] \[\left(1-\alpha_{CD}\right)C+\alpha_{CD}D =z.\] Figure 4. But note, \[z =\left(1-\alpha_{AB}\right)A+\alpha_{AB}B\] \[=\left(1-\alpha_{AB}\right)\begin{pmatrix}\cos\gamma_{AB}&-\sin \gamma_{AB}\\ \sin\gamma_{AB}&\cos\gamma_{AB}\end{pmatrix}\begin{pmatrix}\rho_{AB}\\ t_{A}\end{pmatrix}+\alpha_{AB}\begin{pmatrix}\cos\gamma_{AB}&-\sin\gamma_{AB} \\ \sin\gamma_{AB}&\cos\gamma_{AB}\end{pmatrix}\begin{pmatrix}\rho_{AB}\\ t_{B}\end{pmatrix}\] which means \[\left|\left|z\right|\right|^{2}=\rho_{AB}^{2}+\left[(1-\alpha_{AB})t_{A}+ \alpha_{AB}t_{B}\right]^{2}.\] Observe that \(\left|\left|z\right|\right|^{2}-\rho_{AB}^{2}\geq 0\) is always satisfied, then \[\alpha_{AB}^{(1)}=\frac{t_{A}(t_{A}-t_{B})-\left|t_{A}-t_{B}\right|\sqrt{ \left|\left|z\right|\right|^{2}-\rho_{AB}^{2}}}{(t_{A}-t_{B})^{2}},\ \ \alpha_{AB}^{(2)}=\frac{t_{A}(t_{A}-t_{B})+\left|t_{A}-t_{B}\right|\sqrt{ \left|\left|z\right|\right|^{2}-\rho_{AB}^{2}}}{(t_{A}-t_{B})^{2}}. \tag{3.6}\] Similarly, for the pair \((C,D)\) we obtain \(\left|\left|z\right|\right|^{2}-\rho_{CD}^{2}\geq 0\) and \[\alpha_{CD}^{(1)}=\frac{t_{C}(t_{C}-t_{D})-\left|t_{C}-t_{D}\right|\sqrt{ \left|\left|z\right|\right|^{2}-\rho_{CD}^{2}}}{(t_{C}-t_{D})^{2}},\ \ \alpha_{CD}^{(2)}=\frac{t_{C}(t_{C}-t_{D})+\left|t_{C}-t_{D}\right|\sqrt{ \left|\left|z\right|\right|^{2}-\rho_{CD}^{2}}}{(t_{C}-t_{D})^{2}}. \tag{3.7}\] In order to compute the value of 1.1, we only need to consider the event \[\mathcal{E}:=\left\{\Theta\leq\theta,S_{AB}\cap S_{CD}\neq\O\right\}.\] From the expressions 3.6, 3.7 and Figure 4, we have that the event \(\mathcal{E}\) can be expressed in the following way: \[\mathcal{E} =\left\{\Theta\leq\theta,\exists s_{1},s_{2}\in[0,1]:(1-s_{1})A+ s_{1}B=(1-s_{2})C+s_{2}D\right\}\] \[=\cup_{i,j\in\{1,2\}}\left\{\left|\left|\gamma_{AB}-\gamma_{CD} \right|-\pi\right|\leq\theta,\left|\left|z\right|\right|^{2}\leq 1,\alpha_{AB} ^{(i)},\alpha_{CD}^{(j)}\in[0,1]\right\}\] \[=\mathcal{E}_{0}\cap\mathcal{E}_{1}\cap\mathcal{E}_{2},\] where \[\mathcal{E}_{0}:=\left\{\left|\left|\gamma_{AB}-\gamma_{CD}\right|-\pi\right| \leq\theta\right\},\,\mathcal{E}_{1}:=\left\{\left|\left|z\right|\right|^{2} \leq 1\right\},\text{ and }\mathcal{E}_{2}:=\cup_{i,j\in\{1,2\}}\left\{\alpha_{AB}^{(i)}, \alpha_{CD}^{(j)}\in[0,1]\right\}.\] Let \(f_{1}:=f(\rho_{AB},\gamma_{AB},t_{A},t_{B})\) and \(f_{2}:=f(\rho_{CD},\gamma_{CD},t_{C},t_{D})\) be the densities associated to \((A,B)\) and \((C,D)\), respectively. 
Thus, the probability of the event \(\mathcal{E}\) is \[\mathbb{P}\left(\mathcal{E}\right) =\int_{\mathcal{E}}f_{1}f_{2}d(\rho_{AB},\gamma_{AB},t_{A},t_{B}, \rho_{CD},\gamma_{CD},t_{C},t_{D})\] \[=\int f_{1}f_{2}\mathds{1}_{\mathcal{E}_{0}}\mathds{1}_{\mathcal{ E}_{1}}\mathds{1}_{\mathcal{E}_{2}}d(\rho_{AB},\gamma_{AB},t_{A},t_{B},\rho_{CD}, \gamma_{CD},t_{C},t_{D}). \tag{3.8}\] In order to compute the integral 3.8, we assume that \(\left|\left|z\right|\right|^{2}\), \(\rho_{AB}\), \(\rho_{CD}\), \(\gamma_{AB}\), and \(\gamma_{CD}\) are fixed values satisfying the conditions described by \(\mathcal{E}_{0}\) and \(\mathcal{E}_{1}\). We note that \[f_{1}f_{2}\mathds{1}_{\mathcal{E}_{2}}=\sum_{i,j\in\{1,2\}}f_{1}f_{2}\mathds{1 }_{\left\{\alpha_{AB}^{(i)},\alpha_{CD}^{(j)}\in[0,1]\right\}}=\sum_{i,j\in\{1, 2\}}f_{1}\mathds{1}_{\left\{\alpha_{AB}^{(i)}\in[0,1]\right\}}f_{2}\mathds{1}_ {\left\{\alpha_{CD}^{(j)}\in[0,1]\right\}}.\] As \(f_{1}\mathds{1}_{\left\{\alpha_{AB}^{(i)}\in[0,1]\right\}}\) does not sharing variables with \(f_{2}\mathds{1}_{\left\{\alpha_{CD}^{(j)}\in[0,1]\right\}}\), under the previous assumption, we have \[\int\int\int f_{1}f_{2}\mathds{1}_{\mathcal{E}_{2}}\mathrm{d}t_{A }\mathrm{d}t_{B}\mathrm{d}t_{C}\mathrm{d}t_{D}\] \[=\sum_{i,j\in\{1,2\}}\int\int\int f_{1}\mathds{1}_{\left\{\alpha_{ AB}^{(i)}\in[0,1]\right\}}f_{2}\mathds{1}_{\left\{\alpha_{CD}^{(j)}\in[0,1] \right\}}\mathrm{d}t_{A}\mathrm{d}t_{B}\mathrm{d}t_{C}\mathrm{d}t_{D}\] \[=\sum_{i,j\in\{1,2\}}\left[\int\int f_{1}\mathds{1}_{\left\{ \alpha_{AB}^{(i)}\in[0,1]\right\}}\mathrm{d}t_{A}\mathrm{d}t_{B}\right]\left[ \int\int f_{2}\mathds{1}_{\left\{\alpha_{CD}^{(j)}\in[0,1]\right\}}\mathrm{d} t_{C}\mathrm{d}t_{D}\right].\] We observe that \[\int\int f_{1}\mathds{1}_{\left\{\alpha_{AB}^{(i)}\in[0,1] \right\}}\mathrm{d}t_{A}\mathrm{d}t_{B}=\int\int f_{1}\mathds{1}_{\left\{ \alpha_{AB}^{(i)}\in[0,1]\right\}}\left(\mathds{1}_{\left\{t_{A}\geq t_{B} \right\}}+\mathds{1}_{\left\{t_{A}<t_{B}\right\}}\right)\mathrm{d}t_{A} \mathrm{d}t_{B}\] \[=\int\int f_{1}\mathds{1}_{\left\{\alpha_{AB}^{(i)}\in[0,1] \right\}}\mathds{1}_{\left\{t_{A}\geq t_{B}\right\}}\mathrm{d}t_{A}\mathrm{d}t _{B}+\int\int f_{1}\mathds{1}_{\left\{\alpha_{AB}^{(i)}\in[0,1]\right\}} \mathds{1}_{\left\{t_{A}<t_{B}\right\}}\mathrm{d}t_{A}\mathrm{d}t_{B}.\] For \(i=1\), \[\int\int f_{1}\mathds{1}_{\left\{\alpha_{AB}^{(1)}\in[0,1] \right\}}\mathrm{d}t_{A}\mathrm{d}t_{B}\] \[=\int\int f_{1}\mathds{1}_{\left\{t_{A}\geq\sqrt{\|z\|^{2}-\rho_{ AB}^{2}\geq t_{B}}\right\}}\mathrm{d}t_{A}\mathrm{d}t_{B}+\int\int f_{1} \mathds{1}_{\left\{t_{B}\geq-\sqrt{\|z\|^{2}-\rho_{AB}^{2}\geq t_{A}}\right\}} \mathrm{d}t_{A}\mathrm{d}t_{B},\] and \(i=2\), \[\int\int f_{1}\mathds{1}_{\left\{\alpha_{AB}^{(2)}\in[0,1] \right\}}\mathrm{d}t_{A}\mathrm{d}t_{B}\] \[=\int\int f_{1}\mathds{1}_{\left\{t_{A}\geq-\sqrt{\|z\|^{2}-\rho_{ AB}^{2}\geq t_{B}}\right\}}\mathrm{d}t_{A}\mathrm{d}t_{B}+\int\int f_{1} \mathds{1}_{\left\{t_{B}\geq\sqrt{\|z\|^{2}-\rho_{AB}^{2}\geq t_{A}}\right\}} \mathrm{d}t_{A}\mathrm{d}t_{B}.\] Similarly expressions are obtained for the case of \(f_{2}\). 
Carrying out the corresponding calculations, we have \[f_{3}^{**} :=\int f_{1}f_{2}\mathds{1}_{\mathcal{E}_{2}}d(t_{A},t_{B},t_{C},t_{D})\] \[=\left(\frac{2}{\pi}\right)^{4}\sqrt{1-\rho_{AB}^{2}}\sqrt{1- \rho_{CD}^{2}}\left(1-||z||^{2}\right)^{2}\mathds{1}_{\left\{\rho_{AB}\in[0,1 ]\right\}}\mathds{1}_{\left\{\rho_{CD}\in[0,1]\right\}}\mathds{1}_{\left\{ \gamma_{AB}\in[0,2\pi]\right\}}\mathds{1}_{\left\{\gamma_{CD}\in[0,2\pi] \right\}}.\] Observe that \(\gamma_{AB},\gamma_{CD}\) are independent variables, which are also independent of the others considered variables. As they are uniformly distributed on \([0,2\pi]\), we have that the density \(h(\gamma)\) of \(\Gamma:=\Gamma_{AB}-\Gamma_{CD}\) is \[h(\gamma)=\left\{\begin{array}{cl}\frac{1}{2\pi}-\frac{\gamma}{(2\pi)^{2}}& \gamma\in[0,2\pi]\\ \frac{1}{2\pi}+\frac{\gamma}{(2\pi)^{2}}&\gamma\in[-2\pi,0]\\ 0&\gamma\not\in[-2\pi,2\pi]\end{array}\right..\] From this observation, if we define \[f_{3}^{*}(\rho_{AB},\rho_{CD},\gamma) :=\left(\frac{8}{\pi}\right)^{2}\sqrt{1-\rho_{AB}^{2}}(\sqrt{1- \rho_{CD}^{2}}\] \[\qquad\times\left[1-\frac{\rho_{AB}^{2}+\rho_{CD}^{2}-2\rho_{AB} \rho_{CD}\cos{(\gamma)}}{\sin^{2}{(\gamma)}}\right]^{2}\mathds{1}_{\left\{ \rho_{AB}\in[0,1]\right\}}\mathds{1}_{\left\{\rho_{CD}\in[0,1]\right\}},\] \[f_{3}(\rho_{AB},\rho_{CD},\gamma) :=h(\gamma)f_{3}^{*}(\rho_{AB},\rho_{CD},\gamma),\] thus the probability of the event \(\mathcal{E}\) can be expressed as \[\mathbb{P}\left(\mathcal{E}\right) =\int f_{3}(\rho_{AB},\rho_{CD},\gamma)\mathds{1}_{\mathcal{E}_{0}} \mathds{1}_{\mathcal{E}_{1}}d(\gamma,\rho_{AB},\rho_{CD})=\int f_{3}(\rho_{AB}, \rho_{CD},\gamma)\mathds{1}_{\{|\gamma|-\pi|\leq\theta\}}\mathds{1}_{\mathcal{E }_{1}}d(\gamma,\rho_{AB},\rho_{CD})\] \[=2\int f_{3}(\rho_{AB},\rho_{CD},\gamma)\mathds{1}_{\{|\gamma-\pi| \leq\theta\}}\mathds{1}_{\{\gamma\in[0,2\pi]\}}\mathds{1}_{\mathcal{E}_{1}}d( \gamma,\rho_{AB},\rho_{CD})\] \[=2\int f_{3}(\rho_{AB},\rho_{CD},\gamma)\left[\mathds{1}_{\{\pi- \gamma\leq\theta\}}\mathds{1}_{\{\gamma\in[0,\pi]\}}+\mathds{1}_{\{\gamma-\pi \leq\theta\}}\mathds{1}_{\{\gamma\in(\pi,2\pi]\}}\right]\mathds{1}_{\mathcal{E }_{1}}d(\gamma,\rho_{AB},\rho_{CD}).\] Now, if we take the change of variable \(\beta=\pi-\gamma\), then \[2\int f_{3}(\rho_{AB},\rho_{CD},\gamma)\mathds{1}_{\{\pi-\gamma \leq\theta\}}\mathds{1}_{\{\gamma\in[0,\pi]\}}\mathds{1}_{\mathcal{E}_{1}}d( \gamma,\rho_{AB},\rho_{CD})\] \[\quad=2\int f_{3}(\rho_{AB},\rho_{CD},\pi-\beta)\mathds{1}_{\left\{ \sqrt{1-\rho_{AB}^{2}}|\sin(\pi-\beta)|\geq|\rho_{AB}\cos(\pi-\beta)-\rho_{CD} |\right\}}\mathds{1}_{\{\beta\leq\theta\}}\mathds{1}_{\{\beta\in[0,\pi]\}}d( \gamma,\rho_{AB},\rho_{CD})\] \[\quad=\int_{0}^{\theta}\left[2\int_{0}^{1}\int_{0}^{1}f_{3}(\rho_ {AB},\rho_{CD},\pi-\beta)\mathds{1}_{\left\{\sqrt{1-\rho_{AB}^{2}}|\sin(\beta) |\geq|\rho_{AB}\cos(\beta)+\rho_{CD}|\right\}}\mathrm{d}\rho_{AB}\mathrm{d} \rho_{CD}\right]\mathds{1}_{\{\beta\in[0,\pi]\}}\mathrm{d}\beta.\] Similarly, if we take \(\beta=\gamma-\pi\), then \[2\int f_{3}(\rho_{AB},\rho_{CD},\gamma)\mathds{1}_{\{\gamma-\pi \leq\theta\}}\mathds{1}_{\{\gamma\in(\pi,2\pi]\}}\mathds{1}_{\mathcal{E}_{1}}d (\gamma,\rho_{AB},\rho_{CD})\] \[\quad=2\int f_{3}(\rho_{AB},\rho_{CD},\pi+\beta)\mathds{1}_{\left\{ \sqrt{1-\rho_{AB}^{2}}|\sin(\pi+\beta)|\geq|\rho_{AB}\cos(\pi+\beta)-\rho_{CD} |\right\}}\mathds{1}_{\{\beta\leq\theta\}}\mathds{1}_{\{\beta\in(0,\pi]\}}d( \gamma,\rho_{AB},\rho_{CD})\] \[\quad=\int_{0}^{\theta}\left[2\int_{0}^{1}\int_{0}^{1}f_{3}(\rho_ 
{AB},\rho_{CD},\pi+\beta)\mathds{1}_{\left\{\sqrt{1-\rho_{AB}^{2}}|\sin(\beta) |\geq|\rho_{AB}\cos(\beta)+\rho_{CD}|\right\}}\mathrm{d}\rho_{AB}\mathrm{d} \rho_{CD}\right]\mathds{1}_{\{\beta\in(0,\pi]\}}\mathrm{d}\beta.\] Note \(f_{3}^{*}(\rho_{AB},\rho_{CD},\pi-\beta)=f_{3}^{*}(\rho_{AB},\rho_{CD},\pi+\beta)\). Thus, we have \[f_{3} (\rho_{AB},\rho_{CD},\pi-\beta)+f_{3}(\rho_{AB},\rho_{CD},\pi+\beta)\] \[=h(\pi-\beta)f_{3}^{*}(\rho_{AB},\rho_{CD},\pi-\beta)+h(\pi+\beta )f_{3}^{*}(\rho_{AB},\rho_{CD},\pi+\beta)\] \[=\frac{1}{2\pi}f_{3}^{*}(\rho_{AB},\rho_{CD},\pi-\beta).\] From the above, we can write the probability of the event \(\mathcal{E}\) as \[\mathbb{P}\left(\mathcal{E}\right)=\int_{0}^{\theta}g^{*}(\beta)\mathds{1}_{\{ \beta\in[0,\pi]\}}\mathrm{d}\beta, \tag{3.9}\] where \[g^{*}(\beta):=\int_{0}^{1}\int_{0}^{1}\frac{1}{\pi}f_{3}^{*}(\rho_{AB},\rho_{ CD},\pi-\beta)\mathds{1}_{\left\{\sqrt{1-\rho_{AB}^{2}}|\sin(\beta)|\geq|\rho_{AB} \cos(\beta)+\rho_{CD}|\right\}}\mathrm{d}\rho_{AB}\mathrm{d}\rho_{CD}. \tag{3.10}\] Let \(c:=\int_{0}^{\pi}g^{*}(\beta)\mathrm{d}\beta\) and \(g(\beta)=\frac{1}{c}g^{*}(\beta)\). Then \[\mathbb{P}\left(\Theta\leq\theta|S_{AB}\cap S_{CD}\neq\O\right)=\int_{0}^{ \theta}g(\beta)\mathds{1}_{\{\beta\in[0,\pi]\}}\mathrm{d}\beta,\] i.e., \(g(\beta)\) is the density of the angle between the random segments \(S_{AB}\) and \(S_{CD}\) when they intersect.
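As a numerical companion to the proof, the following sketch (ours; a plain midpoint rule is used, but any quadrature routine would do) evaluates the double integral defining \(g^{*}\) in Equation (3.10), the constant \(c\), and hence the normalized density \(g\). The value of \(c\) obtained in this way can be compared with the figure \(\approx 0.9393598\) quoted in Section 2.

```python
import numpy as np

def g1_star(a, b, theta):
    """Integrand g_1^*(rho_AB, rho_CD, theta) from Theorem 2.1."""
    s2 = np.sin(theta) ** 2
    bracket = 1.0 - (a**2 + b**2 + 2.0 * a * b * np.cos(theta)) / s2
    return (8.0 / np.pi) ** 2 * np.sqrt(1 - a**2) * np.sqrt(1 - b**2) * bracket**2

def g_star(theta, n=400):
    """Midpoint-rule evaluation of g^*(theta) over (rho_AB, rho_CD) in [0,1]^2."""
    x = (np.arange(n) + 0.5) / n                  # midpoints of an n x n grid
    a, b = np.meshgrid(x, x, indexing="ij")
    ok = np.sqrt(1 - a**2) * np.abs(np.sin(theta)) >= np.abs(a * np.cos(theta) + b)
    vals = np.where(ok, g1_star(a, b, theta), 0.0) / np.pi
    return vals.mean()                             # each cell has area 1/n^2

thetas = np.linspace(1e-3, np.pi - 1e-3, 600)
gs = np.array([g_star(t) for t in thetas])
c = np.trapz(gs, thetas)    # normalising constant c of Theorem 2.1
g = gs / c                  # density g(theta), to be compared with Figure 2
print("c =", c)             # compare with ~0.9393598 quoted in Section 2
```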
2301.00319
Quantum Hairy Black Hole Formation and Horizon Quantum Mechanics
After introducing the gravitational decoupling method and the hairy black hole recently derived from it, we investigate the formation of quantum hairy black holes by applying the horizon quantum mechanics formalism. It enables us to determine how external fields, characterized by hairy parameters, affect the probability of spherically symmetric black hole formation and the generalized uncertainty principle.
R. T. Cavalcanti, J. M. Hoff da Silva
2023-01-01T01:33:04Z
http://arxiv.org/abs/2301.00319v1
# Quantum Hairy Black Hole Formation and Horizon Quantum Mechanics ###### Abstract After introducing the gravitational decoupling method and the hairy black hole recently derived from it, we investigate the formation of quantum hairy black holes by applying the horizon quantum mechanics formalism. It enables us to determine how external fields, characterized by hairy parameters, affect the probability of spherically symmetric black hole formation and the generalized uncertainty principle. ## I Introduction Given their intrinsic connection with intense gravitational fields, solid theoretical basis [1; 2; 3], and several observational results corroborating their existences, black holes play a central role in contemporary high-energy physics and astrophysics [4; 5; 6; 7]. Despite the characterization of the horizon of stationary black hole solutions being well-known within general relativity [3; 8], the nature of the horizons of non-stationary or stationary solutions beyond general relativity is still a source of extensive research [9; 10; 11; 12]. The investigation of black holes is not restricted to astrophysical objects; they are also expected to be formed whenever a high concentration of energy is confined to a small region of spacetime, producing so-called quantum black holes [13; 14; 15; 16; 17]. However, the precise formation mechanism of classical and quantum black holes is still unknown. Although we do not have a theory of quantum gravity, phenomenology suggests that some features of quantum black holes are expected to be model-independent [7]. From a certain scale, candidate theories should modify the results of general relativity, giving birth to some alternatives to Einstein's theory of gravity [18; 19]. Examples could allow for the presence of non-minimal coupled fundamental fields or higher derivative terms during the action, which directly affects the uniqueness theorems of black holes in general relativity. The famous no-hair theorem is not preserved outside the general relativity realm. These solutions lead to effects that are potentially detectable near the horizon of astrophysical black holes [20; 21; 22], or in quantum black holes' formation [23; 24], and may provide hints for the quantum path. One of the major challenges in general relativity is finding physically relevant solutions to Einstein's field equations. On the other hand, deriving new solutions from other previously known ones is a widespread technique. This approach is precisely what the so-called gravitational decoupling (GD) method intends to achieve. It has recently commanded the community's attention due to its simplicity and effectiveness [25; 26; 27] in generating new, exact analytical solutions by considering additional sources to the stress-energy tensor. The recent description of anisotropic stellar distributions [28; 29], whose predictions might be tested in astrophysical observations [30; 31; 32; 33], as well as the hairy black hole solutions by gravitational decoupling, are particularly interesting. The latter describes a black hole with hair sourced by generic fields, possibly of quantum nature, surrounding the vacuum Schwarzschild solution [27]. Exciting results have been found during investigation of this solution [34; 35; 36]. From the quantum side, one of the key features of quantum gravity phenomenology is the generalized uncertainty principle (GUP), which modifies the Heisenberg uncertainty principle accordingly \[\Delta x\Delta p\gtrsim\hbar\left(1+\epsilon(\Delta p)^{2}\right). 
\tag{1}\] This expression of the GUP, which stems from different approaches to quantum gravity [37; 38; 39; 40; 41; 42; 43; 44; 45; 46], characterizes a minimum scale length \(\Delta x\). This feature emerges quite naturally in the horizon quantum mechanics formalism (HQM) [16; 47]. In addition to the GUP, HQM also provides an estimation of the probability of quantum black hole formation. In a scenario of extra-dimensional spacetimes, the HQM gave an explanation for the null results of quantum black hole formation in current colliders [23; 24]. Could it also tell us something about a mechanism for decreasing the fundamental scale to something near the scale of current colliders? Our aim is to investigate the quantitative and qualitative effects of black hole hair, regarding the probability of black hole formation and the GUP by applying the horizon quantum mechanics formalism. This paper is organized as follows: Section II is dedicated to reviewing the gravitational decoupling procedure, the metric for GD hairy black holes, and an approximation for the horizon radius. In Section III, we apply the horizon quantum mechanics formalism to the hairy black hole solution of the previous section. We compare the probability of quantum black hole formation and the GUPs of hairy black holes for a range of hair parameters, unveiling the effects of the hair fields. Finally, Section IV is dedicated to conclusions and discussion. ## II Hairy black holes and horizon radius Starting from Einstein's field equations \[G_{\mu\nu}=8\pi\,\ddot{T}_{\mu\nu}, \tag{2}\] where \(G_{\mu\nu}=R_{\mu\nu}-\frac{1}{2}Rg_{\mu\nu}\) denotes the Einstein tensor, the gravitational decoupling (GD) [25] method takes the energy-momentum tensor decomposed as \[\bar{T}_{\mu\nu}=T_{\mu\nu}+\Theta_{\mu\nu}. \tag{3}\] Here, \(T_{\mu\nu}\) is the source of a known solution to general relativity, while \(\Theta_{\mu\nu}\) introduces a new field or extension of the gravitational sector. From \(\nabla_{\mu}\,G^{\mu\nu}=0\), we also have \(\nabla_{\mu}\,\bar{T}^{\mu\nu}=0\). The effective density and the tangential and radial pressures can be determined by examining the field equations \[\ddot{\rho} = \rho+\Theta_{0}^{\ 0}, \tag{4a}\] \[\ddot{p}_{t} = p-\Theta_{2}^{\ 2},\] (4b) \[\ddot{p}_{r} = p-\Theta_{1}^{\ 1}. \tag{4c}\] The idea is to deform a known solution to split the field equations in a sector containing the known solution with source \(T_{\mu\nu}\) and a decoupled one governing the deformation, encompassing \(\Theta_{\mu\nu}\). 
In fact, assuming a known spherically symmetric metric, \[ds^{2}=-e^{\kappa(r)}dt^{2}+e^{\zeta(r)}dr^{2}+r^{2}d\Omega^{2}, \tag{5}\] and deforming \(\kappa(r)\) and \(\zeta(r)\) as \[\kappa(r) \mapsto \kappa(r)+\alpha f_{2}(r), \tag{6a}\] \[e^{-\zeta(r)} \mapsto e^{-\zeta(r)}+\alpha f_{1}(r), \tag{6b}\] the resulting decoupled field equations read \[8\pi\,\Theta_{0}^{\ 0} = \alpha\left(\frac{f_{1}}{r^{2}}+\frac{f_{1}^{\prime}}{r}\right), \tag{7a}\] \[8\pi\,\Theta_{1}^{\ 1}-\alpha\,\frac{e^{-\zeta}\,f_{2}^{\prime}}{r} = \alpha\,f_{1}\left(\frac{1}{r^{2}}+\frac{\kappa^{\prime}(r)+ \alpha f_{2}^{\prime}(r)}{r}\right),\] (7b) \[8\pi\Theta_{2}^{\ 2}\!-\!\alpha f_{1}Z_{1}(r)= \alpha\frac{f_{1}^{\prime}}{4}\left(\kappa^{\prime}(r)+\alpha f_{ 2}^{\prime}(r)\!+\!\frac{2}{r}\right)\!+\!\alpha Z_{2}(r), \tag{7c}\] where [25] \[Z_{1}(r) = \alpha^{2}f_{2}^{\prime}\left(r\right)^{2}+2\,\alpha\!\left(f_{2} ^{\prime}\left(r\right)\kappa^{\prime}\left(r\right)+\frac{f_{2}^{\prime}\left( r\right)}{r}+f_{2}^{\prime\prime}\left(r\right)\right)+\kappa^{\prime}\left(r \right)^{2}+\frac{2\,\kappa^{\prime}\left(r\right)}{r}+2\,\kappa^{\prime\prime }\left(r\right), \tag{8a}\] \[Z_{2}(r) = \alpha e^{-\zeta}\left(2f_{2}^{\prime\prime}+f_{2}^{\prime 2}+\frac{2f_{2}^{ \prime}}{r}+2\kappa^{\prime}f_{2}^{\prime}-\zeta^{\prime}f_{2}^{\prime} \right). \tag{8b}\] The above equations state that if the deformation parameter \(\alpha\) goes to zero, then \(\Theta_{\mu\nu}\) must go to zero. It is worth mentioning that for extended geometric deformation, that is, for \(f_{2}\neq 0\), the sources are not individually conserved in general. However, as discussed in [26], in this case, the decoupling of the field equations without an exchange of energy is allowed in two scenarios: (a) when \(T_{\mu\nu}\) is a barotropic fluid whose equation of state is \(T_{0}^{\ 0}=T_{1}^{\ 1}\) or (b) for vacuum regions of the first system \(T_{\mu\nu}=0\). When minimal geometric deformation is applied, on the other hand, the sources are shown to be individually conserved [25; 26]. Assuming the Schwarzschild solution to be the known one and requiring a well-defined horizon structure [27], from \(g_{rr}=-\frac{1}{g_{tt}}\) follows \[\left(1-\frac{2M}{r}\right)\left(e^{\alpha f_{2}(r)}-1\right)=\alpha f_{1}(r). \tag{9}\] Therefore, one is able to write \[ds^{2} = -\left(1-\frac{2M}{r}\right)e^{\alpha f_{2}(r)}dt^{2}+\left(1-\frac{ 2M}{r}\right)^{-1}e^{-\alpha\,f_{2}(r)}dr^{2}+r^{2}\,d\Omega^{2}. \tag{10}\] Further, assuming strong energy conditions, \[\tilde{\rho}+\tilde{p}_{r}+2\,\tilde{p}_{t} \geq 0, \tag{11a}\] \[\tilde{\rho}+\tilde{p}_{r} \geq 0,\] (11b) \[\tilde{\rho}+\tilde{p}_{t} \geq 0, \tag{11c}\] and managing the field equations, a new hairy black hole solution was found [27] \[ds^{2}=-f(r)dt^{2}+\frac{1}{f(r)}dr^{2}+r^{2}d\Omega^{2}, \tag{12}\] where \[f(r)=1-\frac{2GM+\alpha\ell}{r}+\alpha e^{-\frac{r}{GM}}. \tag{13}\] The dimensionless parameter \(0\leq\alpha\leq 1\) tracks the deformation of the Schwarzschild black hole, \(e\) is the Euler constant, and \(\ell\) is the direct effect of the nonvanishing additional font \(\Theta_{\mu\nu}\). Notice that by taking \(\alpha=0\), the Schwarzschild solution is restored. Further, the \(\ell\) parameter is limited to \(2GM/e^{2}\leq\ell\leq 1\) due to the assumption of a strong energy condition. In extreme cases, \(\ell=2GM/e^{2}\) and \[f_{e}(r)=1-\frac{2GM}{r}+\alpha\left(e^{-\frac{r}{GM}}-\frac{2GM}{e^{2}\,r} \right). 
\tag{14}\] The hairy black hole has a single horizon, located at \(r=r_{H}\), such that \[\left(1+\alpha e^{-\frac{r_{H}}{GM}}\right)r_{H}=2GM+\alpha\ell. \tag{15}\] Such an equation has no analytical solution. Nevertheless, a very accurate analytical approximation is found by Taylor expanding it around the Schwarzschild horizon radius \(r_{S}=2GM\), \[\frac{r_{H}}{GM}\approx\frac{4\left(\alpha\ell e^{2}/GM-3\,\alpha+e^{2}\right) }{\alpha\ell e^{2}/GM-4\,\alpha+2\,e^{2}}. \tag{16}\] Figure 1 shows a comparison between the exact and approximated horizon radii for different values of the hairy parameters. In the following section, we are going to use Equation (16) for the analytical expression of the hairy black hole's horizon radius. Figure 1: The radius of the hairy black hole horizon \(r_{H}\) as a function of \(\ell\) for different values of the parameter \(\alpha\). The colored dashed lines represent the approximated radius, and the gray lines are the exact ones. It shows how the hairy horizon deviates from the Schwarzschild horizon for an increasing \(\alpha\) and \(\ell\). The ranges for \(\alpha\) and \(\ell\) were fixed due to the assumption of a strong energy condition [27]. ## III The horizon quantum mechanics formalism Horizon quantum mechanics (also known as horizon wave function formalism) is an effective approach capable of providing the signatures of black hole physics to the Planck scale [48; 49; 50; 51] (see [47] for a comprehensive review). The main idea is to extend quantum mechanics and gravity further than the current experimental limits. In such an approach, we face the conceptual challenge of consistently describing classical and quantum mechanical objects, such as horizons and particles. This is achieved by assigning wave functions to the quantum black hole horizon. This association allows the use of quantum mechanical machinery to distinguish between particles and quantum black holes and to estimate the GUPs. Nevertheless, first, we must choose a model describing the particle wave function to derive the results. Due to the previous results' simplicity and efficiency, we shall use the Gaussian model. From classical general relativity, we know that the horizons of black holes are described by trapping surfaces, whose locations are determined by \[g^{ij}\nabla_{i}r\nabla_{j}r\,=\,0\, \tag{17}\] where \(\nabla_{i}r\) is orthogonal to the surfaces of the constant area \(\mathcal{A}=4\pi r^{2}\). A trapping surface then exists if there are values of \(r\) and \(t\) such that the gravitational radius \(R_{\rm H}\) satisfies \[R_{\rm H}(r,t)\,\geq\,r. \tag{18}\] Considering a spinless point-particle of mass \(m\), an uncertainty in the spatial particle localization of the same order of the Compton scale \(\lambda_{m}\simeq\hbar/m=l_{p}\,m_{p}/m\) follows from the uncertainty principle, where \(l_{p}\) and \(m_{p}\) are the Planck length and mass, respectively. Arguing that quantum mechanics gives a more precise description of physics, \(R_{\rm H}\) makes sense only if it is larger than the Compton wavelength associated with the same mass, namely \(R_{\rm H}\,\gtrsim\,\lambda_{m}\). Thus, for the Schwarzschild radius \(R_{S}=2Gm=2\frac{l_{p}}{m_{p}}m\), \[l_{p}\,m/m_{p}\,\gtrsim\,l_{p}\,m_{p}/m\quad\Longrightarrow\quad m\,\gtrsim \,m_{p}. \tag{19}\] This suggests that the Planck mass is the minimum mass such that the Schwarzchild radius can be defined. 
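The horizon condition (15) and its approximation (16) discussed above are easy to verify numerically. The sketch below is our own check, in units with \(G=M=1\); it solves Equation (15) with a standard root finder and compares the result with Equation (16) for a few values of \(\alpha\) and \(\ell\), in the spirit of Figure 1.

```python
import numpy as np
from scipy.optimize import brentq

def r_exact(alpha, ell, GM=1.0):
    """Solve (1 + alpha*exp(-r/GM)) r = 2 GM + alpha*ell for r (Eq. 15)."""
    f = lambda r: (1.0 + alpha * np.exp(-r / GM)) * r - 2.0 * GM - alpha * ell
    return brentq(f, 1e-6, 10.0 * GM)

def r_approx(alpha, ell, GM=1.0):
    """Taylor approximation of the horizon radius around r = 2GM (Eq. 16)."""
    x = alpha * ell * np.e**2 / GM
    return GM * 4.0 * (x - 3.0 * alpha + np.e**2) / (x - 4.0 * alpha + 2.0 * np.e**2)

for alpha in (0.25, 0.5, 1.0):
    # ell ranges between its extremal value 2GM/e^2 and 1 (strong energy condition).
    for ell in (2.0 / np.e**2, 0.5, 1.0):
        print(f"alpha={alpha:4.2f} ell={ell:5.3f}  "
              f"exact={r_exact(alpha, ell):.4f}  approx={r_approx(alpha, ell):.4f}")
```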
From quantum mechanics, the spectral decomposition of a spherically symmetric matter distribution is given by the expression \[\left|\psi_{\rm S}\right\rangle=\sum_{E}C(E)\left|\psi_{E}\right\rangle\, \tag{20}\] with the usual eigenfunction equation \[\hat{H}\left|\psi_{E}\right\rangle=E\left|\psi_{E}\right\rangle\, \tag{21}\] regardless of the specific form of the actual Hamiltonian operator \(\hat{H}\). Using the energy spectrum and inverting the expression of the Schwarzschild radius, we have \[E=m_{p}\frac{r_{\rm H}}{2l_{p}}. \tag{22}\] Putting it back into the wave function, one can define the (unnormalized) horizon wave function as \[\psi_{H}(r_{\rm H})=C\left(m_{p}\frac{r_{\rm H}}{2l_{p}}\right) \tag{23}\] whose normalization is fixed, as usual, by the inner product \[\left\langle\psi_{H}\,|\,\phi_{H}\right\rangle=4\pi\int_{0}^{\infty}\psi_{H}^{ \ast}(r_{\rm H})\phi_{H}(r_{\rm H})r_{\rm H}^{2}dr_{\rm H}. \tag{24}\] However, the classical radius \(R_{H}\) is thus replaced by the expected value of the operator \(\hat{R}_{H}\). From the uncertainty of the expectation value, it follows that the radius will necessarily be "fuzzy", similar to the position of the source itself. The next aspect one has to approach to establish a criterion for deciding if a mass distribution does or does not form a black hole is if it lies inside its horizon of radius \(r=r_{\rm H}\). From quantum mechanics, one finds that it is given by the product \[\mathcal{P}_{<}(r<r_{\rm H})=P_{S}(r<r_{\rm H})\mathcal{P}_{H}(r_{\rm H}), \tag{25}\] where the first term, \[P_{S}(r<r_{\rm H})=4\pi\int_{0}^{r_{\rm H}}|\psi_{S}(r)|^{2}r^{2}dr, \tag{26}\] is the probability that the particle resides inside the sphere of radius \(r=r_{\rm H}\), while the second term, \[\mathcal{P}_{H}(r_{\rm H})=4\pi r_{\rm H}^{2}|\psi_{H}(r_{\rm H})|^{2} \tag{27}\] is the probability density that the value of the gravitational radius is \(r_{\rm H}\). Finally, the probability that the particle described by the wave function \(\psi_{S}\) is a BH will be given by the integral of (25) over all possible values of the horizon radius \(r_{\rm H}\). Namely, \[P_{BH}=\int_{0}^{\infty}\mathcal{P}_{<}(r<r_{\rm H})dr_{\rm H}, \tag{28}\] which is one of the main outcomes of the formalism. ### Gaussian Sources The previous construction can be made explicit by applying the Gaussian model for the wave function. To implement this idea, let us recall that spectral decomposition is also assumed to be valid for momentum. Therefore, from (20), \(\langle p\,|\psi_{\rm S}\rangle=C(p)\equiv\psi_{H}(p)\). The Gaussian wave function for \(\psi_{\rm S}\) scales as \(r^{2}\) in the position space and leads to a Gaussian wave function in the momentum space, scaling as \(p^{2}\), naturally. Finally, since the dispersion relation relates \(p^{2}\) with energy, we are able to have \(\langle p\,|\psi_{\rm S}\rangle=\psi_{H}(r_{H})\) via (22). Hence, starting with a Gaussian wave function, we can describe a spherically symmetric massive particle at rest, such as \[\psi_{\rm S}(r)=\frac{e^{-\frac{r^{2}}{2\,l^{2}}}}{(l\,\sqrt{\pi})^{3/2}}. 
\tag{29}\] The corresponding function in momentum space is thus given by \[\tilde{\psi}_{\rm S}(p) =4\pi\int_{0}^{\infty}\frac{\sin(rp)}{\sqrt{8\pi^{3}}rp}\frac{e^{-\frac{r^{2}}{2\,l^{2}}}}{(l\,\sqrt{\pi})^{3/2}}r^{2}dr\] \[=\frac{e^{-\frac{p^{2}}{2\,\Delta^{2}}}}{(\Delta\,\sqrt{\pi})^{3/2}}\, \tag{30}\] where \(\Delta=m_{p}\,l_{p}/l\) is the spread of the wave packet in momentum space, and the width \(l\) is bounded from below by the Compton length of the particle, \[l\geq\lambda_{m}\sim\frac{m_{p}\,l_{p}}{m}. \tag{31}\] In addition to the straightforward handling of a Gaussian wave packet, it is also relevant to recall that the Gaussian wave function leads to a minimal uncertainty for the expected values computed with it. Any other choice of wave function would imply a larger uncertainty, eventually leading to unnecessary extra difficulties when relating the HQM to the GUP (see next section). Back to our problem, assuming the relativistic mass-shell relation in flat space [48] \[p^{2}=E^{2}-m^{2}\, \tag{32}\] the energy \(E\) of the particle is expressed in terms of the related horizon radius \(r_{\rm H}=R_{\rm H}(E)\), following from Equation (16), \[E=\frac{\alpha m_{p}\ell e^{2}+\big{(}\alpha-e^{2}\big{)}m_{p}r_{H}}{2\,(2\,\alpha-e^{2})l_{p}}. \tag{33}\] Thus, from Equations (30) and (33), one finds the horizon wave function of the hairy black hole \[\psi_{\rm H}(r_{\rm H})=\mathcal{N}_{\rm H}\Theta(r_{\rm H}-R_{\rm H})\,e^{\left(C_{2}r_{H}^{2}+C_{1}r_{H}+C_{0}\right)},\] where \[C_{0}=-\frac{\alpha^{2}l^{2}m_{p}^{2}\ell^{2}e^{4}}{8\left(2\,\alpha-e^{2}\right)^{2}l_{p}^{2}},\quad C_{1}=-\frac{\left(\alpha-e^{2}\right)\alpha l^{2}m_{p}^{2}\ell e^{2}}{4\left(2\,\alpha-e^{2}\right)^{2}l_{p}^{2}},\quad C_{2}=-\frac{\left(\alpha-e^{2}\right)^{2}l^{2}m_{p}^{2}}{8\left(2\,\alpha-e^{2}\right)^{2}l_{p}^{2}}. \tag{34}\] The Heaviside step function \(\Theta\) appears above due to the imposition \(E\geq m\). The normalisation factor \(\mathcal{N}_{\rm H}\) is fixed according to \[\mathcal{N}_{\rm H}^{-2}=4\pi\int_{0}^{\infty}|\psi_{\rm H}(r_{\rm H})|^{2}\,r_{\rm H}^{2}\,dr_{\rm H}.\] The normalised horizon wave function is thus given by \[\psi_{\rm H}(r_{\rm H}) =-\frac{2\,C_{2}^{\frac{3}{2}}\,e^{\frac{A(r_{H})}{2}}}{\sqrt{\pi}\sqrt{4\,C_{1}C_{2}e^{A(R_{H})}-\left(2\,\sqrt{2}C_{2}\Gamma\left(\frac{3}{2},-A(R_{H})\right)+\sqrt{2\pi}C_{1}^{2}\left(\mathrm{erf}\left(\frac{\sqrt{2}\left(2\,C_{2}R_{H}+C_{1}\right)}{2\,\sqrt{-C_{2}}}\right)-1\right)\right)\sqrt{-C_{2}}}}, \tag{35}\] \[A(x) =\frac{4\,C_{2}^{2}x^{2}+4\,C_{1}C_{2}x+C_{1}^{2}}{2\,C_{2}}.\] Here, \(\Gamma(s,x)\) denotes the upper incomplete Euler-Gamma function and \(\mathrm{erf}(x)\) the error function. The expression above contains two classes of parameters. Two of them, \(\alpha\) and \(\ell\), characterise the hairy black hole, while the other two are not fixed _a priori_: the particle mass \(m\), encoded in \(R_{H}\), and the Gaussian width \(l\). The resulting probability \(P_{BH}=P_{BH}(l,m,\ell,\alpha)\) will therefore depend on the same parameters. According to the previous discussion, before finding the probability distribution, we first have to find the probability that the particle resides inside a sphere of radius \(r=r_{\rm H}\). From Equations (26) and (29), one obtains \[P_{S}(r<r_{\rm H})=4\pi\int_{0}^{r_{\rm H}}|\psi_{S}(r)|^{2}r^{2}dr=\frac{2}{\sqrt{\pi}}\gamma\left(\frac{3}{2},\frac{r_{\rm H}^{2}}{l^{2}}\right),\] with \(\gamma(s,x)=\Gamma(s)-\Gamma(s,x)\) the lower incomplete Gamma function.
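In practice, the normalisation of \(\psi_{\rm H}\) and the probability \(P_{BH}\) of Equation (28) can also be obtained by straightforward quadrature, bypassing the closed form (35). The sketch below is our own illustration, not the computation used for the figures: it works in Planck units \(l_{p}=m_{p}=1\), interprets the factor \(e^{2}\) in Equations (33)-(34) as Euler's number squared, and uses purely illustrative parameter values.

```python
# A sketch: numerical construction of the horizon wave function (Eqs. 33-34)
# and of P_BH (Eqs. 26-28) in Planck units (l_p = m_p = 1).
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq
from scipy.special import gammainc

E2 = np.e ** 2   # the factor e^2 of Eqs. (16), (33), (34), taken as Euler's number squared

def energy(r_H, alpha, ell):
    """E(r_H) from Eq. (33)."""
    return (alpha * ell * E2 + (alpha - E2) * r_H) / (2.0 * (2.0 * alpha - E2))

def coefficients(l, alpha, ell):
    """C_0, C_1, C_2 from Eq. (34)."""
    d = (2.0 * alpha - E2) ** 2
    C0 = -(alpha ** 2 * l ** 2 * ell ** 2 * E2 ** 2) / (8.0 * d)
    C1 = -((alpha - E2) * alpha * l ** 2 * ell * E2) / (4.0 * d)
    C2 = -((alpha - E2) ** 2 * l ** 2) / (8.0 * d)
    return C0, C1, C2

def horizon_wavefunction(l, m, alpha, ell):
    """Normalised psi_H(r_H) and the threshold radius R_H = R_H(E = m)."""
    C0, C1, C2 = coefficients(l, alpha, ell)
    RH = brentq(lambda r: energy(r, alpha, ell) - m, 1e-8, 1e4)   # inverts Eq. (33)
    unnorm = lambda r: np.exp(C2 * r ** 2 + C1 * r + C0)
    norm2, _ = quad(lambda r: 4.0 * np.pi * unnorm(r) ** 2 * r ** 2, RH, np.inf)
    psi_H = lambda r: unnorm(r) / np.sqrt(norm2) * (r >= RH)
    return psi_H, RH

def P_S_inside(r_H, l):
    """Eq. (26): (2/sqrt(pi)) * gamma(3/2, r_H^2/l^2), i.e. the regularised gammainc."""
    return gammainc(1.5, (r_H / l) ** 2)

def P_BH(l, m, alpha, ell):
    """Eq. (28): integral of P_S(r < r_H) * P_H(r_H) over r_H."""
    psi_H, RH = horizon_wavefunction(l, m, alpha, ell)
    integrand = lambda r: P_S_inside(r, l) * 4.0 * np.pi * r ** 2 * psi_H(r) ** 2
    return quad(integrand, RH, np.inf)[0]

for l in (0.5, 1.0, 2.0):
    print(f"l = {l:3.1f} l_p :  P_BH(l, m = 1/l) = {P_BH(l, 1.0 / l, 0.5, 0.5):.3f}")
```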
Equations (27) and (35) yield \(\mathcal{P}_{H}(r_{\rm H})\), as depicted in Figure 2. Combining the previous results, one finds the probability density for the particle to reside within its own gravitational radius, \[\mathcal{P}_{<}(r<r_{\rm H})=8\sqrt{\pi}\gamma\left(\frac{3}{2},\frac{r_{\rm H}^{2}}{l^{2}}\right)r_{\rm H}^{2}|\psi_{H}(r_{\rm H})|^{2}.\] Figure 2: The probability density \(\mathcal{P}_{H}(r_{\rm H})\) for the gravitational radius to take the value \(r_{\rm H}\), for \(\alpha=\ell/(GM)=0.5\) and different values of the Gaussian width. The probability of the particle described by the Gaussian to be a black hole is finally given by \[P_{BH}(l,m,\ell,\alpha)=8\sqrt{\pi}\int_{R_{\rm H}}^{\infty}\gamma\left(\frac{3}{2},\frac{r_{\rm H}^{2}}{l^{2}}\right)r_{\rm H}^{2}|\psi_{H}(r_{\rm H})|^{2}\,dr_{\rm H}, \tag{36}\] which has to be calculated numerically. Assuming the Gaussian width has the same order as the particle Compton length, we could set \(l\sim m^{-1}\) in Equation (36) and find the probability as a function of either \(l\) or \(m\). On the other hand, starting again from Equation (31), we may set values of \(m\) in terms of the Planck mass and find the probability in this scenario. Applying \(l\sim m^{-1}\) yields \[P_{BH}(l,\ell,\alpha)=8\sqrt{\pi}\int_{R_{\rm H}}^{\infty}\gamma\left(\frac{3}{2},\frac{r_{\rm H}^{2}}{l^{2}}\right)r_{\rm H}^{2}|\psi_{H}(r_{\rm H})|^{2}\,dr_{\rm H}, \tag{37}\] or \[P_{BH}(m,\ell,\alpha)=8\sqrt{\pi}\int_{R_{\rm H}}^{\infty}\gamma\left(\frac{3}{2},r_{\rm H}^{2}m^{2}\right)r_{\rm H}^{2}|\psi_{H}(r_{\rm H})|^{2}\,dr_{\rm H}. \tag{38}\] The resulting probabilities are shown in Figure 3 below. Figure 4 displays the probability for \(m\) given as a fraction of the Planck mass. Figure 4: The probability of a "particle" being a black hole as a function of the Gaussian width, for masses given as a fraction of the Planck mass: \(m=m_{p}\) (solid), \(m=3m_{p}/4\) (dashed), and \(m=m_{p}/2\) (dotted). Figure 3: The probability of a "particle" being a black hole as a function of the Gaussian width or mass, assuming \(l\sim m^{-1}\). ### HQM and GUP Since the horizon quantum mechanics formalism applies the standard wave function description to particles, a natural question is whether it affects the Heisenberg uncertainty principle. As mentioned, it produces a GUP similar to that of Equation (1). In quantum mechanics, the uncertainty principle may be derived by calculating the uncertainty associated with the wave function. Here, we start from the same point. From the Gaussian wave function (29), the particle size uncertainty is given by \[\Delta r_{0}^{2} =\langle r^{2}\rangle-\langle r\rangle^{2}\] \[=4\pi\int_{0}^{\infty}|\psi_{S}(r)|^{2}r^{4}dr-\left(4\pi\int_{0}^{\infty}|\psi_{S}(r)|^{2}r^{3}dr\right)^{2}\] \[=\frac{3\pi-8}{2\pi}l^{2}. \tag{39}\] The uncertainty of the horizon radius can be found in an analogous way,1 Footnote 1: The analytical expression of \(\Delta r_{\rm H}^{2}\) is lengthy and not particularly enlightening. \[\Delta r_{\rm H}^{2}=\langle r_{\rm H}^{2}\rangle-\langle r_{\rm H}\rangle^{2}. \tag{40}\] The total radial uncertainty can now be taken as a linear combination of the quantities calculated above, \(\Delta r=\Delta r_{0}+\epsilon\Delta r_{\rm H}\). For the uncertainty in momentum, we have \[\Delta p^{2}=\langle p^{2}\rangle-\langle p\rangle^{2}=\frac{3\pi-8}{2\pi}\frac{m_{p}^{2}l_{p}^{2}}{l^{2}}.\] Note that the momentum uncertainty and the width \(l\) are related such that \(\Delta p\sim 1/l\).
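The three uncertainties just defined can also be evaluated by quadrature. The short sketch below (again in Planck units, with illustrative parameters) reuses the `horizon_wavefunction` helper from the previous sketch and checks \(\Delta r_{0}\) against its analytic value.

```python
# A sketch of the uncertainties of Eqs. (39)-(40), computed numerically
# (Planck units, illustrative parameters).  Reuses horizon_wavefunction
# from the previous sketch.
import numpy as np
from scipy.integrate import quad

def moment_S(n, l):
    """<r^n> for the Gaussian source of Eq. (29)."""
    prob = lambda r: 4.0 * np.pi * r ** 2 * np.exp(-r ** 2 / l ** 2) / (np.pi ** 1.5 * l ** 3)
    return quad(lambda r: prob(r) * r ** n, 0.0, np.inf)[0]

def moment_H(n, psi_H, RH):
    """<r_H^n> for the horizon wave function."""
    return quad(lambda r: 4.0 * np.pi * r ** 2 * psi_H(r) ** 2 * r ** n, RH, np.inf)[0]

l, alpha, ell, eps = 1.0, 0.5, 0.5, 1.0
psi_H, RH = horizon_wavefunction(l, 1.0 / l, alpha, ell)

dr0 = np.sqrt(moment_S(2, l) - moment_S(1, l) ** 2)                   # Eq. (39)
drH = np.sqrt(moment_H(2, psi_H, RH) - moment_H(1, psi_H, RH) ** 2)   # Eq. (40)
dp = np.sqrt((3.0 * np.pi - 8.0) / (2.0 * np.pi)) / l                 # Gaussian in p-space

print(f"Delta r_0 = {dr0:.3f}  (analytic: {np.sqrt((3*np.pi - 8)/(2*np.pi)) * l:.3f})")
print(f"Delta r_H = {drH:.3f},  Delta p = {dp:.3f}")
print(f"Delta r   = Delta r_0 + eps * Delta r_H = {dr0 + eps * drH:.3f}")
```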
Using the fact that \(\Delta p\sim 1/l\) in \(\Delta r=\Delta r_{0}+\epsilon\Delta r_{\rm H}\), one finds \[\frac{\Delta r}{l_{p}}=\frac{3\pi-8}{2\pi}\frac{m_{p}}{\Delta p}+\epsilon\Delta_{\rm H}\left(\frac{\Delta p}{m_{p}}\right), \tag{41}\] which is similar to the GUP discussed previously. The function \(\Delta_{\rm H}\) also depends on the wave function and hairy black hole parameters. Figure 5 shows the behavior of the GUP as a function of the momentum uncertainty, taking \(\epsilon=1\). There, we can see a minimum \(\Delta r\) located around the Planck scale. From the GUP expression, it is straightforward to see that a larger \(\epsilon\) implies a larger correction to the standard quantum-mechanical uncertainty. The hairy parameters, however, have only a small effect on the location of the minimum scale. As shown in Figure 5, their effects become prominent for large \(\Delta p\). Figure 5: GUP profile emerging from the horizon wave function formalism for \(\epsilon=1\). The dotted line represents the particle size uncertainty \(\Delta r_{0}\), the dashed line represents the uncertainty of the horizon radius \(\Delta r_{\rm H}\), and the solid lines describe the GUP. ## IV Discussion A few years ago, effective theories suggested that the scale of quantum black hole formation could be lowered to the TeV range and thus, in principle, become experimentally accessible. Although no quantum black holes have been detected, solid theoretical results indicate that such objects should exist in nature [7; 14]. They could give us valuable hints about the features of quantum gravity [13; 7; 14]. One of the motivating questions of this paper was whether a generic black hole hair could significantly change the scale of quantum black hole formation. According to the analysis carried out here, however, the hairy black holes look qualitatively similar to the Schwarzschild one, with a probability \(P_{BH}\) of similar shape and a related GUP, leading to the existence of a minimum length scale. Nevertheless, one of the main results of the present paper is that the existence of hair increases the probability \(P_{BH}\); this point deserves to be stressed. Its explanation rests upon the fact that the hairy black hole radius is slightly larger than the Schwarzschild one. This implies that, although the scale of quantum black hole formation is still beyond the current experimental reach, additional fields may lower that scale. These results might impact estimates of quantum black hole production at future colliders derived from alternative theories of gravity, and potentially stimulate investigations of specific models of quantum hairy black holes [17]. ## Acknowledgements R.T.C. thanks Unesp--AGRUP for the financial support. J.M.H.d.S. thanks CNPq (grant No. 303561/2018-1) for the financial support.
2305.05796
1100 days in the life of the supernova 2018ibb -- The best pair-instability supernova candidate, to date
Abridged - Stars with ZAMS masses between 140 and $260 M_\odot$ are thought to explode as pair-instability supernovae (PISNe). During their thermonuclear runaway, PISNe can produce up to several tens of solar masses of radioactive nickel, resulting in luminous transients similar to some superluminous supernovae (SLSNe). Yet, no unambiguous PISN has been discovered so far. SN2018ibb is a H-poor SLSN at $z=0.166$ that evolves extremely slowly compared to the hundreds of known SLSNe. Between mid 2018 and early 2022, we monitored its photometric and spectroscopic evolution from the UV to the NIR with 2-10m class telescopes. SN2018ibb radiated $>3\times10^{51} \rm erg$ during its evolution, and its bolometric light curve reached $>2\times10^{44} \rm erg\,s^{-1}$ at peak. The long-lasting rise of $>93$ rest-frame days implies a long diffusion time, which requires a very high total ejected mass. The PISN mechanism naturally provides both the energy source ($^{56}$Ni) and the long diffusion time. Theoretical models of PISNe make clear predictions for their photometric and spectroscopic properties. SN2018ibb complies with most tests on the light curves, nebular spectra and host galaxy, potentially all tests with the interpretation we propose. Both the light curve and the spectra require 25-44 $M_\odot$ of freshly nucleosynthesised $^{56}$Ni, pointing to the explosion of a metal-poor star with a He-core mass of 120-130 $M_\odot$ at the time of death. This interpretation is also supported by the tentative detection of [Co II]$\lambda$1.025$\mu$m, which has never been observed in any other PISN candidate or SLSN before. Powering by a central engine, such as a magnetar or a black hole, can be excluded with high confidence. This makes SN2018ibb by far the best candidate for being a PISN, to date.
Steve Schulze, Claes Fransson, Alexandra Kozyreva, Ting-Wan Chen, Ofer Yaron, Anders Jerkstrand, Avishay Gal-Yam, Jesper Sollerman, Lin Yan, Tuomas Kangas, Giorgos Leloudas, Conor M. B. Omand, Stephen J. Smartt, Yi Yang, Matt Nicholl, Nikhil Sarin, Yuhan Yao, Thomas G. Brink, Amir Sharon, Andrea Rossi, Ping Chen, Zhihao Chen, Aleksandar Cikota, Kishalay De, Andrew J. Drake, Alexei V. Filippenko, Christoffer Fremling, Laurane Freour, Johan P. U. Fynbo, Anna Y. Q. Ho, Cosimo Inserra, Ido Irani, Hanindyo Kuncarayakti, Ragnhild Lunnan, Paolo Mazzali, Eran O. Ofek, Eliana Palazzi, Daniel A. Perley, Miika Pursiainen, Barry Rothberg, Luke J. Shingles, Ken Smith, Kirsty Taggart, Leonardo Tartaglia, WeiKang Zheng, Joseph P. Anderson, Letizia Cassara, Eric Christensen, S. George Djorgovski, Lluis Galbany, Anamaria Gkini, Matthew J. Graham, Mariusz Gromadzki, Steven L. Groom, Daichi Hiramatsu, D. Andrew Howell, Mansi M. Kasliwal, Curtis McCully, Tomas E. Müller-Bravo, Simona Paiano, Emmanouela Paraskeva, Priscila J. Pessi, David Polishook, Arne Rau, Mickael Rigault, Ben Rusholme
2023-05-09T23:01:02Z
http://arxiv.org/abs/2305.05796v2
# 1100 Days in the Life of the Supernova 2018ibb -- ###### Abstract Stars with zero age main sequence masses between 140 and 260 \(M_{\odot}\) are thought to explode as pair-instability supernovae (PISNe). During their thermonuclear runaway, PISNe can produce up to several tens of solar masses of radioactive nickel, resulting in luminous transients similar to some superluminous supernovae (SLSNe). Yet, no unambiguous PISN has been discovered so far. SN 2018ibb is a hydrogen-poor SLSN at \(z=0.166\) that evolves extremely slowly compared to the hundreds of known SLSNe. Between mid 2018 and early 2022, we monitored its photometric and spectroscopic evolution from the UV to the NIR with 2-10 m class telescopes. SN 2018ibb radiated \(>3\times 10^{51}\) erg during its evolution, and its bolometric light curve reached \(>2\times 10^{44}\) erg s\({}^{-1}\) at peak. The long-lasting rise of \(>93\) rest-frame days implies a long diffusion time, which requires a very high total ejected mass. The PISN mechanism naturally provides both the energy source (\({}^{56}\)Ni) and the long diffusion time. Theoretical models of PISNe make clear predictions for their photometric and spectroscopic properties. SN 2018ibb complies with most tests on the light curves, nebular spectra and host galaxy, potentially all tests with the interpretation we propose. Both the light curve and the spectra require 25-44 \(M_{\odot}\) of freshly nucleosynthesis \({}^{56}\)Ni, pointing to the explosion of a metal-poor star with a helium core mass of 120-130 \(M_{\odot}\) at the time of death. This interpretation is also supported by the tentative detection of [Co ii]\(\lambda\) 1.025\(\mu\)m, which has never been observed in any other PISN candidate or SLSN before. We observe a significant excess in the blue part of the optical spectrum during the nebular phase in tension with predictions of existing PISN models. However, we have compelling observational evidence for an eruptive mass-loss episode of the progenitor of SN 2018ibb shortly before the explosion, and our dataset reveals that the interaction of the SN ejecta with this oxygen-rich circumstellar material contributed to the observed emission. That may explain this specific discrepancy with PISN models. Powering by a central engine, such as a magnetar or a black hole, can be excluded with high confidence. This makes SN 2018ibb by far the best candidate for being a PISN, to date. ## 1 Introduction Observations of stellar nurseries (e.g., Krumholz et al. 2019), and massive stars (e.g., Crowther 2007) and their fates (e.g., Filippenko 1997; Gal-Yam 2017) have led to stellar evolution models of ever-increasing complexity (e.g., McKee & Ostriker 2007). These models also predict the existence of stars with \(\gtrsim 100\)\(M_{\odot}\)(e.g., Heger & Woosley 2002; Heger et al. 2003), which may have no analogues in the local Universe (Mackey et al. 2003; Bromm & Larson 2004; Langer et al. 2007, but see Brands et al. 2022), and exotic types of stellar explosions (Fowler & Hoyle 1964; Rakavy et al. 1967; Woosley et al. 2007; Sakstein et al. 2022). One of those predicted, yet not securely discovered object classes, is pair-instability supernovae (PISNe). This SN class is produced by the thermonuclear runaway of metal-poor stars with zero age main sequence (ZAMS) masses between 140 and 260 \(M_{\odot}\)(Fowler & Hoyle 1964; Barkat et al. 1967; Rakavy et al. 1967). When such a massive star dies, its helium core will have grown to 65-130 \(M_{\odot}\)(Heger & Woosley 2002). 
The combination of relatively low matter density and high temperature leads to the production of \(e^{-}e^{+}\) pairs, reducing the radiation pressure that supports the star against the gravitational collapse. As a result, implosive oxygen and silicon burning produce enough energy to revert the collapse and obliterate the entire star, leaving no remnant behind. During the past 15 years, PISNe have been a focus of fundamental physics and supernova science. Stars with helium-topped cores slightly less massive than \(\sim 65\)\(M_{\odot}\) presumably leave black holes behind, and stars whose helium-topped cores exceed \(\sim 130\)\(M_{\odot}\) are thought to collapse directly into black holes. In this paradigm, there should be a dearth of black holes with masses between \(\sim 50\) and \(\sim 120\)\(M_{\odot}\)(Farmer et al., 2019; Renzo et al., 2020). Observations by the LIGO and VIRGO gravitational wave detectors found tentative evidence for the existence of such a drop in the black-hole mass function (The LIGO Scientific Collaboration et al., 2020). A more recent study by the LIGO-VIRGO collaboration using the larger Gravitational-Wave Transient Catalog 3 shows that the evidence of a mass gap at \(\sim 50\)\(M_{\odot}\) is inconclusive (The LIGO Scientific Collaboration et al., 2021). However, this could be due to the inclusion of binary black holes formed through dynamical channels involving repeated mergers rather than evidence for the lack of a mass gap (e.g., Belczynski et al., 2020; Gerosa and Fishbach, 2021, and references therein). Finding PISNe is one of the main challenges in the SN field. PISN models predict that up to \(\sim 57\)\(M_{\odot}\) of radioactive \({}^{56}\)Ni are produced during the thermonuclear runaway (Heger and Woosley, 2002). Such high Ni-yield PISNe are thought to produce long-lived (rise times \(>80\) days), luminous (\(M_{\rm peak}<-21\) mag) transients (Kasen et al., 2011; Kozyreva et al., 2017) in the regime of superluminous supernovae (SLSNe; Quimby et al., 2011; Gal-Yam, 2012, 2019). Although the powering mechanism of SLSNe is debated (Gal-Yam et al., 2009; Blinnikov and Sorokina, 2010; Inserra et al., 2013), numerous studies of both H-poor and H-rich SLSNe have revealed that nickel is not the primary source of energy (e.g., Chatzopoulos and Wheeler, 2012; Chen et al., 2013; Inserra et al., 2013; Nicholl et al., 2017; Inserra et al., 2018; Moriya et al., 2018; Gal-Yam, 2019; Inserra, 2019; Kangas et al., 2022; Chen et al., 2023). Yet, a few SLSNe had markedly broad and luminous light curves similar to predictions of PISN models, e.g., SN 1999as, SN 2007bi, PTF12dam, PS1-14bj, and SN 2015bn (Hatano et al., 2001; Gal-Yam et al., 2009; Nicholl et al., 2013; Chen et al., 2015; Lunnan et al., 2016; Nicholl et al., 2016; Kozyreva et al., 2017). However, the published candidates either had incomplete datasets, not long enough rise times, too high ejecta velocities, too blue spectra, or exploded in galaxies with too high metallicity to conclusively argue for the discovery of a PISN (e.g., Nicholl et al., 2013; Jerkstrand et al., 2017). Starting from June 2018, the Zwicky Transient Facility (ZTF; Bellm et al., 2019; Graham et al., 2019) surveys the northern sky every 2-3 nights in two filters and detects thousands of supernovae every year (Fremling et al., 2020; Perley et al., 2020). Until autumn 2021, we carried out a systematic survey for SLSNe in ZTF (Chen et al., 2023, 2018). 
SN 2018ibb, the slowest evolving SLSN in our sample, has several properties that match predictions of PISN models. Between mid 2018 and early 2022, we built a comprehensive photometric and spectroscopic dataset covering the evolution from \(t_{\rm max}\)\(-\)93 to \(t_{\rm max}\)\(+\)1000 rest-frame days to scrutinise SLSN and PISN models. In this paper, we present this dataset along with our conclusions on SN 2018ibb's source of energy and progenitor. The paper is structured as follows: we report the SN discovery in Section 2 and describe the observations in Section 3. In Section 4, we derive the properties of SN 2018ibb's light curve, spectra and host galaxy, and in Section 5 we contrast SLSN and PISN models with our dataset. Finally, in Section 6 we summarise our findings and present our conclusions on the nature of SN 2018ibb and its connection to PISNe. Throughout the paper, we provide all uncertainties at \(1\sigma\) confidence. The photometry is reported in the AB system. We assume \(\Lambda\)CDM cosmology with \(H_{0}=67.8\)\(\rm km\,s^{-1}\,Mpc^{-1}\), \(\Omega_{\rm M}=0.308\), and \(\Omega_{\Lambda}=0.692\) (Planck Collaboration et al., 2016). Phase information is reported in the rest-frame with respect to the epoch of maximum light (\(t_{\rm max}\)) at MJD = 58455. ## 2 Discovery SN 2018ibb, located at \(\alpha=04\):38:56.950, \(\delta=-20\):39:44.10 (J2000), was discovered by the Asteroid Terrestrial-impact Last Alert System (ATLAS; Tonry, 2011; Smith et al., 2020) survey as ATLAS18unu on 10 September 2018 with an apparent magnitude of \(o=18.89\) mag (wavelength range 5600-8200 Å; Tonry et al., 2018). Later detections were reported by the public northern sky survey of the Zwicky Transient Facility (Bellm et al., 2019) on 16 November 2018 (internal name: ZTF18acenoto), the Pan-STARRS Survey for Transients (Huber et al., 2015) on 8 January 2019 (internal name: PS19crg) and the _Gaia_ Photometric Science Alerts transient survey (Hodgkin et al., 2021) on 4 July 2019 (internal name: Gaia19cvo). A false-colour image of the field when SN 2018ibb was bright and after it had faded is shown in Figure 1. Figure 1: A false-colour image of the field when SN 2018ibb was bright (left) and after it had faded below the host level (right). The SN position, marked by the crosshair, is located \(\sim 1\) kpc from the centre of its star-forming dwarf host galaxy (\(M_{\rm peak}^{\rm host}\sim-15.4\) mag, \(M_{\star}\sim 10^{7.6}\)\(M_{\odot}\)). For more information about the host, see Section 4.6. The false-colour image was built with STIFF version 2.4.0 (Bertin, 2012). Fremling et al. (2018a) initially classified SN 2018ibb as a Type Ia SN on 5 December 2018 but retracted this classification on 6 December 2018, setting a new classification of 'supernova' (Fremling et al., 2018b). Pursiainen et al. (2018) obtained a spectrum with the 3.58 m New Technology Telescope at La Silla Observatory (Chile) as a part of the Extended Public ESO Spectroscopic Survey of Transient Objects (ePESSTO; Smartt et al., 2015) on 14 December 2018 and classified SN 2018ibb as a H-poor SLSN at \(z=0.16\).
## 3 Observations and data reduction ### Supernova photometry Our imaging campaign had three tiers: _i_) all-sky surveys with sufficient depth and cadence to monitor the evolution from \(t_{\rm max}\)-93 to \(t_{\rm max}\)+306 days; _ii_) dedicated follow-up campaigns to expand the wavelength coverage to the UV and near-IR and to extend the light curve coverage to \(t_{\rm max}\)+1000 days; and _iii_) smaller targeted campaigns to mitigate data gaps, expand the wavelength coverage to the near-IR, and ensure a good flux calibration of the photometric and spectroscopic data. Owing to the large number of facilities involved in this effort, we present the details of each campaign and the data reduction in Appendix A. The ground-based photometry was calibrated with field stars from PanSTARRS1 (Chambers et al., 2016, PS1), the Dark Energy Survey (DES; The Dark Energy Survey Collaboration, 2005), the Dark Energy Spectroscopic Instrument (DESI) Legacy Imaging survey (LS; Dey et al., 2019), and the Two Micron All-Sky Survey (2MASS; Skrutskie et al., 2006). We applied known colour equations between PS1/DES and Bessel-l/GROND/SDSS/ZTF filters (Finkbeiner et al., 2016; Drlica-Wagner et al., 2018; Greiner et al., 2008; Medford et al., 2020) and Lupton1, to account for differences in the filter response function. We applied the offsets from Blanton & Roweis (2007) to convert all measurements to the AB photometric system. The _Swift_/UVOT data were calibrated with zeropoints from the _Swift_ pipeline and converted to the AB system following Breeveld et al. (2011). Footnote 1: [https://www.sdss.org/dr12/algorithms/sdssubvrittransform](https://www.sdss.org/dr12/algorithms/sdssubvrittransform) SN spectra are characterised by strong absorption and emission features that evolve with time. This can lead to time-dependent colour terms between similar but not identical filters (e.g., Stritzinger et al., 2002) and add a non-negligible systematic scatter to the light curves if these differences are not corrected. To illustrate this issue, we compute the synthetic magnitude in ZTF/\(g\), GROND/\(g\) and EFOSC2/\(g\) at \(t_{\rm max}\) and \(t_{\rm max}\)+210 days2. At \(t_{\rm max}\), the colour term between the EFOSC2/GROND and ZTF filters is \(-\)0.01 and +0.04 mag, respectively, but at \(t_{\rm max}\)+210 days the differences increased to \(-\)0.13 and +0.12 mag. Since the EFOSC2 and GROND data cover the late-time evolution, the differences in the filters would be well visible in the final light curve if they remained uncorrected. Footnote 2: The GROND, ZTF and EFOSC2 \(g\)-band filters have an effective wavelength of 4504, 4723, 5104 Å and width of 1373, 1282, 788 Å, respectively (retrieved from the Spanish Virtual; Rodrigo et al., 2012, and references therein). To calibrate the various datasets into the same photometric system, we defined a set of reference filters consisting of the _Swift_ filters, ZTF/\(gr\), GROND/\(izJH\) and 2MASS/\(K\). Then, we extracted synthetic photometry of all ground-based filters used in our campaign from the Keck and VLT spectra (Section 3.3), which were obtained in clear/photometric conditions, and measured the expected colours with respect to our reference filter system as a function of time. After applying this s-correction (Stritzinger et al., 2002), we merged the different datasets to build a photometric sequence of SN 2018ibb from \(t_{\rm max}\)-93 to \(t_{\rm max}\)+706 days. 
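For illustration, the synthetic-photometry step behind the s-correction can be sketched as follows. The top-hat filter curves below are placeholders built only from the effective wavelengths and widths quoted in footnote 2 (real transmission curves would come from the SVO Filter Profile Service; Rodrigo et al. 2012), and the input spectrum is a toy power law standing in for a flux-calibrated SN spectrum.

```python
# A minimal sketch of photon-weighted synthetic AB photometry used to derive
# colour terms between similar filters.  All inputs are placeholders.
import numpy as np
from scipy.integrate import trapezoid

def synthetic_ab_mag(wave, flux, fwave, ftrans):
    """Photon-weighted AB magnitude of a spectrum f_lambda [erg/s/cm^2/A]
    through a filter transmission curve."""
    trans = np.interp(wave, fwave, ftrans, left=0.0, right=0.0)
    c_AA = 2.99792458e18                        # speed of light [A/s]
    f_ab = 3631e-23 * c_AA / wave ** 2          # AB reference spectrum as f_lambda
    num = trapezoid(flux * trans * wave, wave)
    den = trapezoid(f_ab * trans * wave, wave)
    return -2.5 * np.log10(num / den)

wave = np.linspace(3500.0, 6500.0, 3000)
ztf_g    = (np.abs(wave - 4723.0) < 1282.0 / 2.0).astype(float)   # schematic ZTF/g
efosc2_g = (np.abs(wave - 5104.0) < 788.0 / 2.0).astype(float)    # schematic EFOSC2/g

flux = 1e-16 * (wave / 5000.0) ** -2.0          # toy spectrum

colour_term = (synthetic_ab_mag(wave, flux, wave, efosc2_g)
               - synthetic_ab_mag(wave, flux, wave, ztf_g))
print(f"EFOSC2/g - ZTF/g synthetic colour term: {colour_term:+.3f} mag")
```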
We omitted these corrections for the \(BVJHK\) data because most observations in these filters were done with the same instrument. Table 1 in Appendix A summarises the homogenised SN photometry. The measurements are not corrected for Galactic extinction along the line of sight [\(E(B-V)=0.03\) mag; Schlafly & Finkbeiner, 2011], but this correction is applied to all derived properties and photometric data presented in this paper. The photometry is available on WISeREP3(Yaron & Gal-Yam, 2012). It will also be available as a machine-readable table in the electronic version of this paper. Footnote 3: [https://www.wiserep.org](https://www.wiserep.org) ### Host galaxy photometry We obtained additional photometry with the ESO VLT, the 3.58 m New Technology Telescope and the _Hubble Space Telescope_ approximately 1000 days after maximum (Appendix A). The brightness of the host galaxy was measured with elliptical apertures encircling the entire host galaxy and calibrated in the same way as the SN photometry. The _HST_ photometry was done with a custom-made aperture photometry tool, based on the python package photutils(Bradley et al., 2020) version 1.5, using an aperture comparable in area to the ground-based images and calibrated against tabulated zeropoints in pysmphot version 2.0.0 (STSc1 Development Team, 2013). In the \(R\)-band, we measure a brightness of \(24.39\pm 0.05\) mag. The brightness in the other filters is reported in Table 1. ### Spectroscopy We collected a series of spectra spanning from the time of maximum to \(t_{\rm max}\)+989.2 days. Similarly to the imaging campaign, we utilised a large number of 2-10 m class telescopes. A brief summary of the observations is provided in Table 2. The details of the observations and data reduction are presented in Appendix B. All spectra were absolute-flux-calibrated with multi-band photometry. Since the photometry was not obtained contemporaneously with the spectroscopic observation, we linearly interpolated between adjacent observations. The spectra obtained after August 2021 have an increasing contribution from the host galaxy. The host contamination was removed with the FORS2 spectrum from January 2022 (\(t_{\rm max}\)+989.2 days). The slit did not cover the entire host galaxy. \begin{table} \begin{tabular}{l c c c} \hline \hline Telescope & Instrument & Filter & Brightness \\ & & & (mag) \\ \hline _HST_ & WFC3 & \(F336W\) & \(>26.04\) \\ NTT & EFOSC2 & \(B\) & \(24.94\pm 0.22\) \\ VLT & FORS2 & \(g\)\_HIGH & \(24.95\pm 0.05\) \\ VLT & FORS2 & \(R\)\_SPECIAL & \(24.39\pm 0.05\) \\ VLT & FORS2 & \(l\)\_BESSELL & \(24.32\pm 0.10\) \\ VLT & FORS2 & \(z\)\_SPECIAL & \(23.78\pm 0.14\) \\ \hline \end{tabular} 3 \end{table} Table 1: Photometry of the host galaxy We scaled the spectrum to the flux encircled by the slit. Note, to determine whether SN 2018ibb contributed to the observed spectrum from January 2022, we compared the observed spectrum to the fit of the spectral energy distribution (SED) of the entire host galaxy. The continuum level of the January 2022 spectrum is fully consistent with the best fit to the host galaxy SED (Figure 1). The only remaining SN feature is broad [O iii] \(\lambda\lambda\) 4959,5007 in emission, produced by the interaction of the SN ejecta with circumstellar material (Section 5.1). Owing to that, we masked the region and estimated the host galaxy flux with linear interpolation. To recover the host-subtracted spectrum of SN 2018ibb from the January 2022 epoch, we utilised the best fit to the galaxy SED. 
All data were also corrected for Milky-Way (MW) extinction. Note, a few spectra were affected by adverse weather conditions. The absolute-flux calibrated spectra _without_ MW extinction correction are available on WISeREP. ### Imaging polarimetry To measure the ejecta geometry, we acquired four epochs of imaging polarimetry in the v_HIGH filter with VLT/FORS2 between \(t_{\rm max}\)+31.9 and \(t_{\rm max}\)+94.4 days (Table 3). In addition, we got one epoch with the R_SPECIAL filter at \(t_{\rm max}\)+94.4 days. Each polarisation measurement required four exposures at four different retarder-plate angles: 0\({}^{\circ}\), 22\(\aas@@fstack{\circ}\)5, 45\({}^{\circ}\), and 67\(\aas@@fstack{\circ}\)5. The beam was split with a Wollaston prism into the ordinary (o) and the extraordinary (e) ray. The o-ray and the e-ray were placed at the 7th and the 8th multi-object spectroscopy (MOS) stripes, respectively. We reduced the data in a standard manner using IRAF (Tody 1993) tasks. The flux of the SN in the o-ray and e-ray were measured through aperture photometry at all four retarder-plate angles using the DAOPHOT.PHOT package (Stetson 1987). Stokes parameters and polarisation of the target were derived based on the FORS2 manual (Anderson 2018), and the polarisation degrees were corrected for polarisation bias, caused by the non-negativity nature of the polarisation degree, following Wang et al. (1997). The extracted, debiased polarisation properties are summarised in Table 3. These values need to be corrected for polarisation induced by dichroic extinction from non-spherical dust grains that aligned with the magnetic field of the interstellar medium of the Milky Way (MW) and the host galaxy. Following Serkowski et al. (1975), the polarisation level from the Milky Way can be as high as \(\leq\) 9% \(\times\)\(E(B-V)\). With a Galactic extinction of \(E(B-V)=0.03\) mag towards SN 2018ibb, the MW polarisation level could be up to 0.26%. The determination of the interstellar polarisation from SN 2018ibb's host galaxy is not feasible. 
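For reference, the dual-beam arithmetic behind the values in Table 3 amounts to the following sketch; the count values are placeholders, and the error estimate and debiasing step are simplified stand-ins for the FORS2-manual and Wang et al. (1997) recipes.

```python
# Stokes q, u from ordinary/extraordinary fluxes at the four retarder-plate
# angles, with a simple polarisation-bias correction.  Placeholder inputs.
import numpy as np

angles = np.deg2rad([0.0, 22.5, 45.0, 67.5])            # retarder-plate angles
f_o = np.array([10120.0, 10230.0, 10080.0, 10150.0])    # ordinary-ray counts (placeholders)
f_e = np.array([10090.0, 10160.0, 10110.0, 10100.0])    # extraordinary-ray counts (placeholders)

F = (f_o - f_e) / (f_o + f_e)                            # normalised flux differences
q = 2.0 / F.size * np.sum(F * np.cos(4.0 * angles))
u = 2.0 / F.size * np.sum(F * np.sin(4.0 * angles))

p = np.hypot(q, u)
theta = 0.5 * np.degrees(np.arctan2(u, q)) % 180.0       # polarisation angle [deg]

# rough photon-noise error on p and a simple debiasing prescription
sigma_F = 1.0 / np.sqrt(f_o + f_e)
sigma_p = np.mean(sigma_F) * np.sqrt(2.0 / F.size)
p_deb = np.sqrt(max(p ** 2 - sigma_p ** 2, 0.0))

print(f"q = {100*q:+.2f}%  u = {100*u:+.2f}%  p = {100*p:.2f}% "
      f"(debiased: {100*p_deb:.2f}%)  theta = {theta:.1f} deg")
```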
Note, the polarisation degree is only \(p\lesssim\)0.3% in v_HIGH between \(t_{\rm max}\)+32 \begin{table} \begin{tabular}{l l l l l l l l} \hline MJD & Phase & Telescope/Instrument & Disperser & Sit & Wavelength & Spectral & Exposure \\ & (day) & & & width (\({}^{\circ}\)) & range (\(\lambda\)) & resolution & time (\(\iota\)) \\ \hline 58453.349 & -1.4 & Keck-1/LRS & 400/3400 + 4008500 & 1.0 & 3076 - 9350 & 600/1200 & 300/300 \\ 58461.248 & 5.4 & P60/SEDm & & IFU & 4650 - 9200 & 100 & 2250 \\ 58462.228 & 7.9 & P60/SEDm & & IFU & 3950 - 9200 & 100 & 2250 \\ 58465.246 & 8.8 & P70/DDSP & 600/316 & 1.5 & 3500 - 10000 & 1000/1000 & 1200 \\ 58467.254 & 10.5 & NT/FOSPC2 & Gr13 & 1.0 & 3650 - 9250 & 350 & 3600 \\ 58482.127 & 21.6 & P60/SEDm & & IFU & 5500 - 8850 & 100 & 1200 \\ 58483.292 & 24.3 & Li/Kark & 600/4310 + 300/7500 & 2.0 & 3500 - 100.500 & 800 & 2400 \\ 58487.225 & 27.6 & Li/Kark & 600/4310 + 300/7500 & 2.0 & 3500 - 100.500 & 800 & 2400 \\ 58490.9349 & 30.8 & NOT/ALFOSCI & Gr13 & 1.3 & 3600 - 9600 & 280 & 1800 \\ 58491.226 & 31.1 & NT/FOSPC2 & Gr11 + 01/60530 & 1.0 & 3345 - 9995 & 460/460 & 1800/1800 \\ 58493.006 & 32.7 & VLT/FOSPC & & 1.0/0.9/0.9 & 3000 - 24.800 & 5409890/50600 & 1800 \\ 58590.904 & 47.1 & NOT/ALFOSCI & Gr84 & 1.3 & 3800 - 9450 & 280 & 600 \\ 58510.179 & 47.3 & Li/Kark & 600/4310 + 300/7500 & 2.0 & 3500 - 10.500 & 800 & 3600 \\ 58515.100 & 51.5 & NT/FOSPC2 & Gr11 + Gr81/60530 & 1.0 & 3345 - 9995 & 460/460 & 1800/17380 \\ 58525.048 & 60.1 & VLT/X-shooter & & 1.0/0.9/0.9 & 3000 - 24.800 & 5409890/5060 & 2400 \\ 58536.929 & 70.3 & NOT/ALFOSCI & Gr4 & 1.3 & 3900 - 9600 & 280 & 2400 \\ 58541.083 & 73.8 & NT/FOSPC2 & Gr11 + Gr1/60530 & 1.0 & 3145 - 9995 & 460/460 & 2200/2200 \\ 58550.007 & 81.5 & NT/FOSPC2 & & 1.0/0.9/0.9 & 3000 - 24.800 & 5409890/5060 & 3600 \\ 58550.007 & 81.5 & NT/FOSPC2 & & 1.0/0.9/0.9 & 3000 - 24.800 & 5409890/50600 & 3600 \\ 585915.153 & 89.3 & Li/Kark & 600/4310 + 300/7500 & 3.0 & 3500 - 100.500 & 800 & 2400 \\ 58565.000 & 94.3 & VLT/X-shooter & & 1.0/0.9/0.9 & 3000 - 24.800 & 5409890/50600 & 3600 \\ 58718.304 & 25.8 & NT/FOSPC2 & Gr13 & 1.0/0.9/0.9 & 3000 - 24.800 & 5409890/5060 & 3600 \\ 58724.585 & 25.312 & Keel-LRS & 400/3400 + 400/8500 & 1.0 & 3600 - 9250 & 350 & 5400 \\ 58776.290 & 275.6 & NT/FOSPC2 & Gr13 & 1.5 & 3650 - 9250 & 230 & 5400 \\ 5876.907 & 276.1 & L2/TMOB-LCCF\({}^{a}\) & 6400L/6670L/G200 & 1.2/1/2.0 & 3200 - 12.000 & 2925115/1100 & 3009/303/3800 \\ 58789.287 & 286.7 & VL/X-shooter\({}^{b}\) & & 1.0/1.0/0.9 & 3000 - 2000 & 5409890/50600 & 3600 \\ 58866.134 & 352.6 & VL/X-shooter\({}^{b}\) & & 1.0/1.0/0.9 & 3000 - 2000 & 75049890/50600 & 3600 \\ 58876.668 & 361.2 & L3/TMOB-LCCF\({}^{a}\) & 6400L/6670L/G200 & 1.2/1.0/1.0 & 3200 - 23500 & 925115/100 & 11000 /200/2003/4002800 \\ 5895.120 & 37.5 & VL/X-shooter\({}^{b}\) & & 1.0/1.0/0.9 & 3000 - 2000 & 5409890/5060 & 3600 \\ 59191.507 & 562.3 & Keck-LIRS & 600/4000 + 400/8500 & 1.0 & 3400 - 10275 & 1000/1200 & 4935/4935 \\ 5913.730 & 565.0 & VL/FORS2 & 300V & 1.0 & 3800 - 9600 & 440 & 14400 \\ 59607.089 & 988.1 & Gemini-S/CMOS & R1510/G455 & 1.0 & 5000 - 10.000 & 310 & 4800 \\ 59608.3 and \(t_{\rm max}\)+94 days (see Table 3). Such a low level of polarisation is very unlikely to be caused by a high intrinsic polarisation aligned and cancelled to a comparable level of significant interstellar polarisation. 
Therefore, without correcting for the polarisation from the host galaxy, the observations point to a high degree of spherical symmetry of SN 2018ibb during the phase of our polarisation measurement. ### X-ray Observations #### 3.5.1 Swift/XRT While monitoring SN 2018ibb with UVOT between \(t_{\rm max}\)+8.4 and \(t_{\rm max}\)+224 days, _Swift_ also observed the field with the X-ray telescope XRT between 0.3 and 10 keV in photon-counting mode (Burrows et al., 2005). We analysed these data with the online-tools of the UK _Swift_ team4 that use the software package HEASGerr version 6.26.1 and methods described in Evans et al. (2007, 2009). Footnote 4: [https://www.swift.ac.uk/user_objects](https://www.swift.ac.uk/user_objects) SN 2018ibb evaded detection in all epochs. The median \(3\sigma\) count-rate limit of all observing blocks is 0.005 count s\({}^{-1}\) (0.3-10 keV). Using the dynamic rebinning option in the _Swift_ online tools pushes the \(3\sigma\) count-rate limits to 0.002 count s\({}^{-1}\) (median value). A list of the limits from the stacking analysis is shown in Table 4. To convert the count-rate limits into a flux, we used WebPIMMS5 and assumed a power-law spectrum with a photon index6 of \(\Gamma=2\) and a Galactic neutral hydrogen column density of \(1.97\times 10^{20}\) cm\({}^{-2}\)(H14PI Collaboration et al., 2016). The average energy conversion factor for the unabsorbed flux is \(3.66\times 10^{-11}\left(\rm erg\,s^{-1}\,cm^{-2}\right)/\left(\rm ct\,s^{-1}\right)\). The median count-rate limit corresponds to an unabsorbed flux of \(<7.4\times 10^{-14}\) erg cm\({}^{-2}\) s\({}^{-1}\) between 0.3-10 keV and a luminosity of \(<4.9\times 10^{22}\) erg s\({}^{-1}\). The flux and luminosity limits of the individual bins are shown in Table 4. Footnote 5: [https://heasarc.gsfc.nasa.gov/cgi-bin/Tools/w3pimms/w3pimms.pl](https://heasarc.gsfc.nasa.gov/cgi-bin/Tools/w3pimms/w3pimms.pl) Footnote 6: The photon index is defined as the power-law index of the photon flux density (\(N(E)\propto E^{-1}\)). #### 3.5.2 XMM-Newton The field of SN 2018ibb was also observed by _XMM-Newton_(Jansen et al., 2001, Principal Investigator: R. Margutti, University of California, Berkeley, USA). Four epochs were taken with the European Photon Imaging Camera (EPIC) with the pn (Struder et al., 2001) and MOS1\(|2\) cameras (Turner et al., 2001) between 28 January 2019 and 28 August 2019 (\(t_{\rm max}\)+48.5 - \(t_{\rm max}\)+230.4). We reduced the _XMM-Newton_/EPIC pn data using the _XMM-Newton_ Science Analysis System7 (SAS) following standard procedures. We extracted the source using a circular region with a radius of \(32\arcsec\), and the background from a source-free region on the same CCD. The MOS data are shallower than the pn data, so we omit reporting them in this paper. Footnote 7: [https://www.cosmos.esa.int/web/xmm-newton/sas](https://www.cosmos.esa.int/web/xmm-newton/sas) All _XMM-Newton_ observations led to non-detections with count rate limits between 0.009 and 0.020 ct s\({}^{-1}\) between 0.3 and 10 keV. Using the same spectral model as for XRT and an energy conversion factor of \(1.88\times 10^{-12}\left(\rm erg\,s^{-1}\,cm^{-2}\right)/\left(\rm ct\,s^{-1}\right)\), these limits translate to unabsorbed flux limits between 1.6 and \(3.7\times 10^{-14}\) erg cm\({}^{-2}\) s\({}^{-1}\). Table 4 summarises the measurements. ### Radio observations The field was observed by the VLA Sky Survey (Lacy et al., 2020) between 2 and 4 GHz on 27 October 2020 (\(t_{\rm max}\)+595 days). 
No source was detected. The flux at the SN position is \(-47\pm 223\)\(\mu\)Jy, translating to a \(3\sigma\) flux limit of 622 \(\mu\)Jy and a luminosity of \(2\times 10^{37}\) erg s\({}^{-1}\). Eftekhari et al. (2021) presented sub-mm observations at 100 GHz obtained with the Atacama Large Millimeter Array on 24 December 2019 (\(t_{\rm max}\)+331 days). These authors also reported a non-detection with an r.m.s. of 19 \(\mu\)Jy, translating to a \(3\sigma\) flux limit of 58 \(\mu\)Jy and a luminosity of \(5\times 10^{39}\) erg s\({}^{-1}\). \begin{table} \begin{tabular}{c c c c c c} \hline \hline MJD & Phase & Count rate & \(F_{X}\) & \(L_{X}\) \\ & (day) & (\(10^{-2}\)\(\rm ct\,s^{-1}\)) & (\(10^{-3}\)\(\rm erg\,s^{-1}\,cm^{-2}\)) & (\(10^{2}\)\(\rm ct\,s^{-1}\)) \\ \hline \multicolumn{6}{c}{_Swift_} \\ \hline 58469.56 & \(12.5^{+6.5}_{-3.9}\) & \(<0.7\) & \(<27.0\) & \(<2.2\) \\ 58567.93 & \(96.9^{+3.2}_{-3.3}\) & \(<1.9\) & \(<68.6\) & \(<5.6\) \\ 58592.17 & \(117.7^{+0.9}_{-0.9}\) & \(<2.3\) & \(<82.7\) & \(<6.7\) \\ 58683.35 & \(200.1^{+2.4}_{-2.2}\) & \(<0.6\) & \(<22.9\) & \(<1.9\) \\ 58741.72 & \(245.9\pm 12.8\) & \(<1.8\) & \(<64.9\) & \(<5.3\) \\ \hline \multicolumn{6}{c}{_XMM-Newton_} \\ \hline 58511.22 & \(48.5\pm 0.3\) & \(<9.1\) & \(<17.1\) & \(<1.4\) \\ 58561.70 & \(91.8\pm 0.3\) & \(<10.2\) & \(<19.2\) & \(<1.6\) \\ 58694.68 & \(205.8\pm 0.3\) & \(<19.9\) & \(<37.4\) & \(<3.0\) \\ 58723.37 & \(20.4\pm 0.3\) & \(<8.7\) & \(<16.4\) & \(<1.3\) \\ \hline \end{tabular} 1 \end{table} Table 4: Log of X-ray observations \begin{table} \begin{tabular}{c c c c c|c c c c} \hline \hline MJD & Phase & Exposure & Mean airmass & Filter & \(q\) & \(u\) & \(p\) & \(\theta\) \\ & (day) & time (s) & & & (\%) & (\%) & (\%) & (\%) \\ \hline 58492.241 & 31.9 & \(4\times 100\) & 1.60 & \(v_{\rm HIGH}\) & \(0.14\pm 0.08\) & \(-0.24\pm 0.08\) & \(0.28\pm 0.08\) & \(150.2\pm 7.9\) \\ 58512.121 & 49.0 & \(4\times 100\) & 1.15 & \(v_{\rm HIGH}\) & \(0.10\pm 0.09\) & \(-0.31\pm 0.09\) & \(0.33\pm 0.09\) & \(140.0\pm 8.3\) \\ 58524.125 & 59.3 & \(4\times 100\) & 1.35 & \(v_{\rm HIGH}\) & \(0.11\pm 0.10\) & \(-0.25\pm 0.10\) & \(0.28\pm 0.10\) & \(146.8\pm 10.2\) \\ 58565.007 & 94.4 & \(4\times 250\) & 1.33 & \(v_{\rm HIGH}\) & \(0.21\pm 0.07\) & \(-0.09\pm 0.07\) & \(0.23\pm 0.07\) & \(168.5\pm 5.2\) \\ 58565.020 & 94.4 & \(4\times 250\) & 1.43 & \(R_{\rm SPECIAL}\) & \(0.45\pm 0.07\) & \(-0.16\pm 0.07\) & \(0.48\pm 0.07\) & \(170.4\pm 4.0\) \\ \hline \end{tabular} 1 \end{table} Table 3: Log of polarimetric observations ## 4 Results ### Redshift The X-shooter spectra between \(t_{\rm max}\)+32.7 and \(t_{\rm max}\)+94.3 days show narrow absorption lines of Mg ii \(\lambda\lambda\) 2852 and Mg ii \(\lambda\lambda\) 2796,2803 from the host galaxy at a common redshift of \(z=0.1660\) (Figure 2, top panel). The low-resolution FORS2 spectrum obtained at \(t_{\rm max}\)+565.3 days, shown in the bottom panel of Figure 2, reveals narrow emission lines from hydrogen and oxygen from the H ii regions in the host galaxy at the same redshift as the absorption-line redshift. This redshift translates to a luminosity distance of 822.6 Mpc using the cosmological parameters from Planck Collaboration et al. (2016). ### Light curve #### 4.2.1 General properties Figure 3 shows the evolution of SN 2018ibb from \(t_{\rm max}\)\(-\)93 to \(t_{\rm max}\)+706 rest-frame days. The early-time evolution was captured by the ATLAS, _Gaia_ and ZTF surveys. 
Human scanners discovered SN 2018ibb shortly before maximum light, which triggered our large monitoring campaign from UV to NIR wavelengths. The \(g\), \(r\) and \(o\) band light curves cover the evolution from early to late times. We use these datasets to infer the time of maximum light and the rise and decline time scales. Fitting the light curves with 3rd order polynomials between MJD = 58425 and MJD = 58485 returns the time of maximum light at MJD = 58458 \(\pm\) 2, 58454 \(\pm\) 2 and 58452 \(\pm\) 4 in \(g\), \(r\) and \(o\), respectively. Throughout the paper, we adopt the weighted mean MJD 58455 \(\pm\) 2 as the time of maximum light. At the time of peak, SN 2018ibb reached a brightness of \(17.54\pm 0.02\), \(17.65\pm 0.01\) and \(17.92\pm 0.04\) in \(g\), \(r\) and \(o\) band, respectively (all corrected for MW extinction; Table 5).8 Using the Keck spectrum at \(t_{\rm max}\)\(-\)1.4 days, we infer k-corrected absolute magnitudes of \(-21.79\pm 0.02\), \(-21.66\pm 0.01\) and \(-21.43\pm 0.04\) mag in the aforementioned bands (Table 5) and a k-corrected \(g-r\) colour of \(-0.12\pm 0.02\) mag at peak (corrected for MW extinction), a typical luminosity and colour for a H-poor SLSN (Nicholl et al., 2015; De Cia et al., 2018; Lunnan et al., 2018; Angus et al., 2019; Chen et al., 2023). Footnote 8: The host extinction is negligible (Section 4.6). Similar to Chen et al. (2023), we measure the rise and decline time scales from 10% and 50% peak flux to peak in all three bands. In the \(g\) band, we obtain \(\tau_{1/2,\rm{rise}}=52\pm 1\) days, \(\tau_{1/2,\rm{decline}}=88^{+1}_{-2}\) days, \(\tau_{1/10,\rm{rise}}>79.3\) days, and \(\tau_{1/10,\rm{decline}}=242\pm 1\) days, i.e., 1 mag (100 days)\({}^{-1}\) (all measured in the rest-frame). The light curve parameters in the other bands are summarised in Table 5. Although the _Gaia_ light curve is poorly sampled, the data are of sufficient quality to improve the lower limit on the rise timescale \(\tau_{1/10,\rm{rise}}\). The _Gaia_ alert database reports the first detection on MJD = 58346.11 (16 August 2018), 11.5 and 13.3 rest-frame days before the first ZTF and ATLAS9 detection, respectively. At the time of discovery, SN 2018ibb had a brightness of 19.8 mag; around the time of maximum light, the brightness reached 17.7 mag. This sets a lower limit of \(>93.4\) days on \(\tau_{1/10,\rm{rise}}\). Footnote 9: The last ATLAS non-detection is from 17 August 2018, i.e., 1.3 rest-frame days after the first _Gaia_ detection, reaching a limiting magnitude of \(o\approx 19.9\) mag at 3 sigma confidence. Between July 2018 and the date of the first _Gaia_ detection (16 August 2018), the field was visible to observing facilities in the southern hemisphere. We searched the data archives of the Australian Astronomical Observatory, the European Southern Observatory, the Gemini Observatory, and the Las Cumbres Observatory for serendipitous observations of this field but found no relevant data. We conclude that SN 2018ibb's progenitor exploded \(>93\) rest-frame days before the maximum light, but we have no firm constraint on the explosion date.10 Footnote 10: The _Gaia_ alert database reports an observation on 5 July 2018 but no measurement. This could either mean \(i\)) a non-detection (limiting magnitude \(G=20.7\) mag) and hence imposing an upper limit of \(<129\) days on \(\tau_{1/10,\rm{rise}}\), \(ii\)) the observation was not performed, or _iii)_ a problem in the data processing (Hodgkin et al. 2021). 
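The distance-related numbers above can be cross-checked with a few lines of Python using the adopted cosmology. The sketch below reproduces the 822.6 Mpc luminosity distance and, approximately, the peak absolute magnitudes; since it only applies the cosmological \(2.5\log(1+z)\) bandpass term rather than the full k-correction from the Keck spectrum, the magnitudes agree with the quoted values only to within \(\sim 0.1\) mag.

```python
# Cross-check of the luminosity distance and (approximately) of the peak
# absolute magnitudes, using the Planck Collaboration et al. (2016) cosmology
# adopted in this paper.  No proper k-correction is applied here.
import numpy as np
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=67.8, Om0=0.308)
z = 0.1660

print(f"d_L(z=0.1660) = {cosmo.luminosity_distance(z):.1f}")   # ~822.6 Mpc
dm = cosmo.distmod(z).value                                    # distance modulus [mag]

for band, m_peak in (("g", 17.54), ("r", 17.65), ("o", 17.92)):
    M = m_peak - dm + 2.5 * np.log10(1.0 + z)
    print(f"{band}-band: M_peak ~ {M:+.2f} mag")
```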
The light curve exhibits several peculiar properties. Figure 4 compares the absolute magnitude versus the rest-frame phase of SN 2018ibb to the light curves of the 78 H-poor SLSNe from ZTF-I presented in Chen et al. (2023). The absolute magnitude of all objects is computed with \(M=m-\rm{DM}(z)+2.5\log{(1+z)}\), where DM is the distance modulus and \(z\) the redshift. SN 2018ibb has the longest rise in the ZTF sample. The \(g\)-band rise time \(\tau_{1/10,\rm{rise}}\) exceeds the sample mean value (41.9 days) by a factor of 2.1 times the sample standard deviation [\(\sigma(\tau_{1/10,\rm{rise}})=17.8\) days; Chen et al. 2023]. This factor could increase to even \(4.9\sigma\) if the _Gaia_ data are a good proxy of the rise time in ZTF/\(g\). The light curve fades by 1.1 mag (100 days)\({}^{-1}\) for 500-600 days before the decline steepens to 1.5 mag (100 days)\({}^{-1}\). The decline time scale is slower than for any of the other H-poor SLSNe from the ZTF-I sample. The rise is even slower than any of the \(>100\) H-poor SLSNe found by other surveys (as queried from the Transient Name Server and ADS Abstract Service). Only the H-poor SLSN PS1-14bj (Lunnan et al., 2016) had a longer rise. We discuss this in more detail in Section 5.3. A number of SNe have shown a pre-bump with observed peak luminosities between \(M_{g}\sim-18\) and \(-23\) mag (Leloudas et al., 2012; Nicholl et al., 2015; Smith et al., 2016; Angus et al., 2019). Such a bump is not visible in the light curve of SN 2018ibb (Figures 3 and 4). However, the progenitor of SN 2018ibb exploded before the field came from behind the sun, precluding drawing a firm conclusion on the absence or presence of a pre-bump. Figure 2: Galaxy absorption and emission lines at a common redshift of \(z=0.166\) in the supernova spectra at \(t_{\rm max}\)+32.7 days (top) and at \(t_{\rm max}\)+565.3 days (bottom). The error spectrum of each epoch is shown in grey. #### 4.2.2 Bolometric light curve We compute the bolometric luminosity of SN 2018ibb over a wavelength range from \(\sim 1800\) to \(\sim 14,300\) A (rest-frame), which is defined by the wavelength coverage of our photometric dataset. However, our dataset does not have the same wavelength coverage throughout the entire duration of the observations. In the following, we describe how the bolometric light curve is constructed and discuss the bolometric corrections that we derived for time intervals with incomplete spectral information. The bolometric light curve is built as follows: _i_) correcting all photometric data for the MW extinction, _ii_) dividing the entire dataset into segments defined by the observing seasons, _iii_) interpolating the light curve in each band of each observing season with a Gaussian process with the python package George version 0.4.0 (Ambikasaran et al. 2015)1, _iv_) constructing the spectral energy distributions for every time step, _v_) calculating the bolometric flux by numerical integration of each SED, and _vi_) multiplying the bolometric flux by \(4\pi\,d_{L}^{2}\), where \(d_{L}\) is the luminosity distance, to obtain the bolometric luminosity. Footnote 1: We added a systematic error of 5% to all optical and NIR filters and 10% to all UV filters in quadrature to account for uncertainties in the flux calibration. Our dataset has the best spectral coverage between \(t_{\rm max}\) and \(t_{\rm max}\)+375 days: 1800 to 14,300 A between \(t_{\rm max}\) to \(t_{\rm max}\)+100 days and 3000 to 14,300 A between \(t_{\rm max}\)+200 to \(t_{\rm max}\)+375 days. 
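As an illustration of steps (iii)-(vi) above, the following sketch interpolates one band with the george package and integrates an SED; all data arrays are placeholders standing in for the measured photometry, and the Matérn-3/2 kernel is only one reasonable choice, since the text does not specify the kernel actually used.

```python
# A schematic stand-in for steps (iii)-(vi) of the bolometric light-curve
# construction.  Placeholder data; kernel choice is an assumption.
import numpy as np
import george
from george import kernels
from scipy.integrate import trapezoid

# --- step (iii): Gaussian-process interpolation of one band ------------------
t_obs = np.array([0.0, 10.0, 25.0, 40.0, 60.0, 80.0, 100.0])        # rest-frame days
f_obs = np.array([5.0, 4.8, 4.2, 3.6, 2.9, 2.3, 1.8]) * 1e-16        # erg/s/cm^2/A
f_err = 0.05 * f_obs

gp = george.GP(np.var(f_obs) * kernels.Matern32Kernel(30.0 ** 2))
gp.compute(t_obs, f_err)
t_grid = np.linspace(0.0, 100.0, 101)
f_interp, _ = gp.predict(f_obs, t_grid, return_var=True)

# --- steps (iv)-(vi): build an SED at one epoch and integrate it -------------
wave_eff = np.array([2100.0, 2600.0, 3475.0, 4723.0, 6340.0, 7829.0,
                     9600.0, 12350.0, 16620.0])                      # approx. effective wavelengths [A]
f_lambda = 1e-16 * (wave_eff / 5000.0) ** -2                         # placeholder SED

d_L = 822.6 * 3.0857e24                                              # luminosity distance [cm]
F_bol = trapezoid(f_lambda, wave_eff)                                # erg/s/cm^2
L_bol = 4.0 * np.pi * d_L ** 2 * F_bol
print(f"L_bol ~ {L_bol:.2e} erg/s (placeholder SED)")
```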
The bolometric light curves for these time intervals are shown as solid lines in Figure 5 and their 1\(\sigma\) confidence intervals as a shaded region. A tabulated version can be found in Table 6. Based on the blackbody fits to the data from \(u\) to \(H\) band, we estimate that \(\lesssim 3\%\) of the observed bolometric flux is emitted at longer wavelengths between \(t_{\rm max}\) and \(t_{\rm max}\)+100 days, and therefore we omit to correct the observed bolometric flux. At later Figure 3: Multi-band light curve of SN 2018ibb from 1800 to 18,500 Å (rest-frame) after correcting for the Galactic extinction. SN 2018ibb was first detected by _Gaia_. The last non-detections before the first detection by _Gaia_ and ATLAS are shown by the downward-pointing triangles. With a rise time of \(>93\) rest-frame days, SN 2018ibb is one of the slowest evolving SLSN known. The decline of 1.1 mag (100 days)\({}^{-1}\) is similar to the decay time of radioactive \({}^{56}\)Co. After \(t_{\rm max}\)+575 days, the decline steepened to 1.5 mag (100 days)\({}^{-1}\). The light curve shows undulations up to \(t_{\rm max}\)+100 days and a longer-lasting bump at \(\sim 300\) rest-frame days. Vertical bars represent the epochs of spectroscopy and imaging polarimetry. The absolute magnitude is computed with \(M=m-{\rm DM}(z)+2.5\,\log{(1+z)}\), where DM is the distance modulus and \(z\) the redshift. \begin{table} \begin{tabular}{l c c c} \hline Property & \(g\) & \(r\) & \(o\) \\ \hline Peak time (MJD) & \(58458\pm 2\) & \(58454\pm 2\) & \(58452\pm 4\) \\ \(m_{\rm peak}\) (mag) & \(17.54\pm 0.02\) & \(17.65\pm 0.01\) & \(17.72\pm 0.02\) \\ \(M_{\rm peak}\) (mag) & \(-21.80\pm 0.02\) & \(-21.66\pm 0.01\) & \(-21.62\pm 0.02\) \\ \(\tau_{1/2,\rm line}\) (day) & \(52\pm 1\) & \(60\pm 1\) & \(64\pm 1\) \\ \(\tau_{1/2,\rm line}\) (day) & \(88^{+}_{-1}\) & \(93\pm 1\) & \(95\pm 3\) \\ \(\tau_{1/2,\rm line}\) (day) & \(68.3\pm 0.4\) & \(72.5\pm 0.5\) & \(73.8\pm 0.5\) \\ \(\tau_{1/2,\rm line}\) (day) & \(102\pm 2\) & \(117\pm 1\) & \(>107\) \\ \(\tau_{1/10,\rm line}\) (day) & \(>79.3\) & \(>82\) & \(>80\) \\ \(\tau_{1/10,\rm line}\) (day) & \(242\pm 1\) & \(248\pm 1\) & \(235^{+1}_{-4}\) \\ \hline \end{tabular} * **Notes.** All magnitudes are corrected for Galactic extinction. The absolute magnitudes include a k-correction inferred from the Keck spectrum at \(t_{\rm max}\)\(-\)1.4 days. All time scales are reported in the rest-frame. The uncertainties reflect the 1\(\sigma\) statistical errors. \end{table} Table 5: Light curve properties phases, the spectrum does not resemble a blackbody anymore (Figure 6), and we cannot quantify the missing flux at longer wavelengths. For the other epochs, we used these time intervals (\(t_{\rm max}\) to \(t_{\rm max}\)+100 days and \(t_{\rm max}\)+200 to \(t_{\rm max}\)+375 days) to estimate bolometric corrections. The pre-max dataset consists of photometry in ZTF \(g\) and \(r\), and ATLAS \(c\) and \(o\) filters, and the _Gaia_ white band. We only use the ZTF data when computing the bolometric luminosity because the ATLAS and _Gaia_ filters are too broad for building SEDs. At the time of the first epoch with coverage from \(u2\) to \(H_{*}\sim 26\%\) of the bolometric flux was emitted in \(g+r\) band. We use this flux ratio as an estimate of the missing flux. Since SN ejecta cool with time, such a universal correction will progressively underestimate the bolometric flux towards earlier epochs. 
Between the first and second observing seasons, we continued the follow-up with _Swift_/UVOT in _ubv_ when SN 2018ibb was no longer visible from the ground. Similar to the pre-max data, we chose time intervals with data from \(w2\) to \(H\) or \(u\) to \(H\) band to correct for the missing flux. At phases later than \(t_{\rm max}\)+500 days, photometric data are only available from \(g\) to \(z\) band. We omit to apply any bolometric correction for this time interval because we have no good estimate of the missing bolometric flux. SN 2018ibb reached a peak luminosity of \(L_{\rm bol,\,peak}\geq 2\times 10^{44}\) erg s\({}^{-1}\). Integrating the light curve from \(t_{\rm max}\)\(-\)93 to \(t_{\rm max}\)+706 days yields \(\geq 3\times 10^{51}\) erg for the total radiated energy \(E_{\rm rad}\). We emphasise that both values are strict lower limits. Our multi-band campaign only started when SN 2018ibb peaked in the \(g\) and \(r\) bands, which was likely after the bolometric peak. Between \(t_{\rm max}\) and \(t_{\rm max}\)+100 days, the spectra of SN 2018ibb are characterised by a cooling photosphere (Figure 6), and the spectral energy distributions from the \(u\) to \(H\) band are adequately fitted with a Planck function. The red and blue curves in the inset of Figure 5 show the evolution of the blackbody temperature and radius (see also Table 3), respectively. The photosphere has a temperature of 12,000 K at the time of maximum light and cools by 3000 K in 100 rest-frame days. During the same time interval, the location of the photosphere hardly changes from its mean value of \(5\times 10^{15}\) cm. The values of the blackbody radius and temperature are comparable to regular SLSNe (Chen et al. 2023a) and the slow-evolving SLSN 2015bn (Nicholl et al. 2016b), which have observations in the UV. The blackbody temperature of SN 2018ibb evolves slower than for regular SLSNe (Chen et al. 2023a), mirroring its slowly evolving light curve. We remark that including data at shorter wavelengths would have led to lower temperatures (\(\approx 0.1\) dex at \(t_{\rm max}\)) and larger radii (\(\approx 0.12\) dex at \(t_{\rm max}\)) due to absorption lines in the UV (Yan et al. 2017; Lunnan et al. 2018a; Angus et al. 2019). Owing to this, we omit these data to infer the blackbody radius and temperature. ### Spectroscopy #### 4.3.1 Spectroscopic sequence Figure 6 shows the spectral evolution between \(\sim 2800\) A to \(\sim 10,000\) A from the time of maximum to \(t_{\rm max}\)+990 days (all rest-frame). The spectra up to \(t_{\rm max}\)+100 days capture the photospheric phase. To identify the elements and ions responsible for the most prominent features, we model the spectrum at \(t_{\rm max}\)+32.7 days with the spectrum synthesis code SYNOW (Branch et al. 2005). The SYNOW fit, shown in the top panel of Figure 7, was obtained for a photospheric expansion velocity of 8000 km s\({}^{-1}\) (Section 4.3.2) and for a blackbody temperature of 12,000 K (Section 4.2.2; a range in the order of \(\pm\)500 is applicable for both properties). The major ions that are securely identified and match the spectrum well are those of: O, Mg ii, Si ii, Ca ii, and Fe ii (the Mg ii mainly improves the match of the feature around 4400 A, together with the Fe ii line), in agreement with Konyves-Toth and Vinko (2021). Various additional iron group elements, such as Ti ii, clearly help to lower the model flux on the blue side (3000-4000 A). 
However, we do not include those in the final SYNOW fit because the overall fit was not convincingly improved. Absorption from O ii between 3700 A and 4700 A, as seen in many SLSN spectra around peak (Quimby et al. 2018), is not present. Owing to the limitations of the SYNOW approach, for instance, the simplifying underlying assumptions such as spher Figure 4: The \(g\)-band light curve of SN 2018ibb in the context of the homogeneous ZTF-I SLSN sample. SN 2018ibb has a typical peak absolute magnitude. The rise of \(>93\) rest-frame days is significantly longer than of the average ZTF SLSN. The long-lasting rise implies a long diffusion time, which requires a very high total ejected mass. The high peak luminosity requires a very energetic explosion. Both properties together hint to an explosion mechanism that might be different from that of regular SLSNe. Figure 5: The bolometric light curve of SN 2018ibb from 1800 to 14,300 Å (rest-frame). The dotted lines indicate time segments with partial wavelength coverage. At peak SN 2018ibb reached a luminosity of \(>2\times 10^{44}\) erg s\({}^{-1}\). Integrating over the light curve from \(t_{\rm max}\)\(-\)93 to \(t_{\rm max}\)+706 days yields a radiated energy of \(3>\times 10^{41}\) erg. Both values are conservative lower limits. The inset shows the evolution of the blackbody temperature and radius of the photospheric phase where photometry has been carried out from the \(u\) to \(H\) bands. The shaded regions indicate the \(1\sigma\) statistical uncertainties. ical, homologous expansion, resonant scattering line formation above a sharp blackbody spectrum-emitting photosphere, we perform this modelling only for the identification and verification of the major features. We avoid any fine-tuning of the different ion parameters and assessing the elemental abundances or relative mass fractions. A complementary analysis with the National Institute of Standards and Technology (NIST) Atomic Spectra Database (Kramida et al., 2018), following the methodology described in Gal-Yam (2019a), which includes the same elements as above for relative intensities \(\geq 0.5\) in the range 2000-10,000 A (and \(\geq 0.2\) in the range 3000-6000 A for the Fe ii lines), reveals additional possible identification of features that are not accounted for by the SYNOW fit. For instance, lines of Mg ii and/or Si ii may contribute to the small dip redward of the \(\sim 7773\) A (rest-frame) O i triplet. Also, numerous Fe ii lines may contribute to the valley around 3000-3200 A as well as additional Mg ii lines accounting for the dips around 4300 A. Remarkably, in addition to absorption lines from the SN ejecta, the first spectrum at \(t_{\rm max}\)\(-\)1.4 days shows conspicuous [Ca ii] \(\lambda\lambda\) 7291,7323 in emission. This is one of the strongest forbidden emission lines seen in nebular SN spectra (Filippenko, 1997; Gal-Yam, 2017). The only SLSNe that show [Ca ii] during the photospheric phase are slow-evolving SLSNe (e.g., SN 2007bi, LSQ14an and SN 2015bn; Gal-Yam et al., 2009; Nicholl et al., 2019; Inserra et al., 2017). During the first seasonal observing gap, the photosphere recedes and we start to see the core of the explosion. The nebular spectra (right panel in Figure 6) are dominated by emission lines with widths up to 10,000 \(\rm km\,s^{-1}\) and a blue pseudo-continuum, similar to that seen in SNe Ia-CSM, Ibn, and Icn and some SNe IIn (e.g., Silverman et al., 2013; Hosseinzadeh et al., 2017; Gal-Yam et al., 2022; Perley et al., 2022). 
Following previous observations of slow-evolving SLSNe (Nicholl et al., 2016; Lunnan et al., 2016) and theoretical models by Jerkstrand et al. (2016), we identify the most conspicuous emission lines as allowed and forbidden transitions from neutral and ionised calcium, iron, magnesium, and oxygen (Figure 7). Common to both the photospheric and the nebular phase is that the evolution is very slow with the exceptions of the regions at \(\sim 4360\), \(\sim 5000\) and 7300 A (highlighted in grey in Figure 6). At about 30 days after maximum, the region at \(\sim 5000\) A shows a rapidly growing emission feature. A weak emission line at \(\sim 4360\) A also emerges and reveals a similar trend to the \(\sim 5000\) A feature. Owing to this, we identify the Figure 6: Spectroscopic sequence from 2500 Å to 10,000 Å and from the time of maximum to \(t_{\rm max}\)+1000 days (rebinned to 5 Å and smoothed with a Savitzky-Golay filter). Spectra up to \(t_{\rm max}\)+100 days (left panel) are characterised by a blackbody continuum with superimposed absorption lines from the SN ejecta, expanding with a velocity of \(\sim 8500\)\(\rm km\,s^{-1}\). Between \(t_{\rm max}\)+100 and \(t_{\rm max}\)+225 days (while SN 2018ibb was behind the sun), the spectroscopic behaviour of SN 2018ibb evolved drastically. The late-time spectra (right panel) are characterised in the blue (\(<5000\) Å) by a pseudo-continuum and emission lines produced by the interaction of the SN ejecta with circumstellar material and in the red (\(>5000\) Å) by nebular emission lines from the \({}^{56}\)Ni-heated SN core. The regions with the fastest evolution are highlighted by the grey-shaded regions. Figure 7 shows the identification of the most prominent features of the photospheric and nebular phases. Regions affected by strong telluric absorption were clipped. Their locations are indicated in Figure 7. two features as [O iii] \(\lambda\) 4363 and [O iii] \(\lambda\lambda\) 4959,5007, respectively. Most remarkably, the [O iii] \(\lambda\lambda\) 4959,5007 emission lines are present throughout the entire post-max evolution, even in the spectrum at \(t_{\rm max}\)+989.2 days after all other SN features faded below the detection threshold of the 4-hour VLT spectrum. This has never been observed in any SLSN before. Simultaneous with the rise of [O iii], the centre of the 7300 A feature moves a few A to longer wavelengths, the line profile changes from roughly hot to bell-shaped, the width decreases and the peak flux increases by a factor \(\sim 2\) within \(<60\) days (Figure 6, left panel). This suggests that this line complex, commonly identified as [Ca ii] \(\lambda\lambda\) 7291,7324, gets dominated by [O ii] \(\lambda\lambda\) 7320,7330. [O ii] and even more so [O iii] are not common features for SNe. [O iii] was only observed in the slow-evolving H-poor SLSNe LSQ14an (Inserra et al., 2017) and PS1-14bj (Lunnan et al., 2016) during the photospheric phase and in SN 2015bn in the nebular phase (Nicholl et al., 2016; Jerkstrand et al., 2017). 
Occasionally, it is also seen in regular H-poor and H-rich SNe predominantly years after the explosion (e.g., SNe II 1979C and 1980K, Milisavljevic et al., 2009 and Fesen et al., 1999; SN IIb 1993J, Milisavljevic et al., 2012; SNe IIn 1995N, 1996T, 2010jl, Fransson et al., 2002; Bauer et al., 2008; Milisavljevic et al., 2012; Fransson et al., 2014; SN Ib 2012au, Milisavljevic et al., 2018), and even more rarely during the photospheric phase of regular SNe (e.g., Type Ic SN 2021ocs; Kuncarayakti et al., 2022). Possible mechanisms to produce [O ii] and [O iii] are _i_) excitation by CSM interaction (Chevalier & Fransson, 1994), _ii_) photoionisation by the interaction of the pulsar wind nebula with the SN ejecta (Chevalier & Fransson, 1992; Omand & Jerkstrand, 2022), and _iii_) radioactivity (for high ratios of deposited energy to O-density; Jerkstrand et al., 2017). In Section 5.1, we show that [O ii] and [O iii] are produced by the interaction of the SN ejecta with circumstellar material. Our series of NIR spectra (shown in Figure 8) covers the photospheric phase from \(t_{\rm max}\)+33 to \(t_{\rm max}\)+94 days, and the nebular phase from \(t_{\rm max}\)+276 to \(t_{\rm max}\)+378 days. The NIR spectra show a limited number of absorption and emission lines. The photospheric-phase spectra show two features at 1.093 and 1.13 \(\mu\)m. Following Jerkstrand et al. (2015), Hsiao et al. (2019) and Shahbandeh et al. (2022), we tentatively identify the former as an absorption line of Mg ii \(\lambda\) 1.092 \(\mu\)m blueshifted by \(\sim 8500\) km s\({}^{-1}\), and the latter as the recombination line O i \(\lambda\) 1.13 \(\mu\)m. These features can also be blended with emission lines from sulphur. The emission lines clearly stand out in the nebular-phase spectra. Our NIR spectra at \(t_{\rm max}\)+378 days show a prominent emission line at 1.025 \(\mu\)m that we tentatively identify as [Co ii] \(\lambda\) 1.025. This is the first time that a cobalt line has been detected in a SLSN spectrum. In Section 5.2.5, we examine this detection in more detail. Figure 7: Line identification of the photospheric-phase spectrum (top) and nebular-phase spectrum (bottom). **Top:** The photospheric phase spectrum was fitted with the parameterised spectral synthesis code SYNOW (red curve). Most of the spectral features can be attributed to O i, Mg ii, Si ii, Ca ii, and Fe ii as seen in other SLSNe during their cool photospheric phase (Gal-Yam, 2019b). In addition to the absorption lines in the SN ejecta, the photospheric phase spectrum shows conspicuous [Ca ii] \(\lambda\lambda\) 7291, 7324, a feature that gets dominated by [O ii] \(\lambda\lambda\)7320,7330 at about \(t_{\rm max}\)+30 days. **Bottom:** The spectrum of the nebular phase consists of a blue pseudo-continuum and a series of allowed and forbidden emission lines from singly and doubly ionised oxygen, calcium, magnesium and iron. Remarkable is the presence of [O ii] and [O iii] in emission (as early as \(t_{\rm max}\)+30 days), indicating ionising radiation from shock interactions (Section 5.1). SN absorption lines are indicated by dashed lines, and the locations mark the absorption trough minima (blueshifted by 8500 km s\({}^{-1}\) from their rest wavelengths). SN emission lines are indicated by solid lines; their line centres are at the velocity coordinate \(v=0\). Regions of strong atmospheric absorption are grey-shaded. 
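Figure 7 marks the SN absorption lines at their trough minima, blueshifted by 8500 km s\({}^{-1}\) from the rest wavelengths. For orientation, a minimal sketch of that conversion is given below; the rest wavelengths are standard laboratory values, the velocity is the photospheric value quoted in the caption, and the line list is illustrative rather than exhaustive:

```python
# Sketch: expected absorption-trough wavelengths for lines blueshifted by the
# photospheric velocity (non-relativistic Doppler shift, adequate at ~8500 km/s).
C_KMS = 299792.458  # speed of light in km/s

rest_wavelengths = {          # laboratory rest wavelengths in Angstrom
    "Mg II 2796": 2796.35,
    "Ca II 3934": 3933.66,
    "Fe II 4924": 4923.92,
    "Fe II 5018": 5018.44,
    "Fe II 5169": 5169.03,
    "O I 7773":  7773.0,      # blend of the O I triplet
}

v_phot = 8500.0               # photospheric velocity in km/s (Section 4.3.2)

for name, lam_rest in rest_wavelengths.items():
    lam_min = lam_rest * (1.0 - v_phot / C_KMS)   # expected trough position
    print(f"{name}: trough expected near {lam_min:.1f} A")
```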
#### 4.3.2 Ejecta velocity The photospheric-phase spectra of SN 2018ibb show a large number of narrow absorption lines, mirroring a low ejecta velocity and the slow light curve evolution. The ejecta velocities are commonly measured from Fe ii \(\lambda\) 5169. Owing to the high velocities of SLSNe (e.g., Liu et al. 2017; Chen et al. 2023b), this line is usually blended with Fe ii \(\lambda\) 4924 and Fe ii \(\lambda\) 5018, necessitating template matching techniques to extract the velocities (Modjaz et al. 2016; Liu et al. 2017). However, the ejecta velocity of SN 2018ibb is slow, and the Fe ii \(\lambda\) 5169 region is not blended and resolves into three absorption lines that we identify as Fe ii \(\lambda\lambda\) 4924, 5018 and 5169 (Figure 9). By measuring the minima of the three absorption lines, we extract a photospheric velocity of \(\approx 8500\) km s\({}^{-1}\) that remains constant between \(t_{\rm max}\) and \(t_{\rm max}\)+100 days as demonstrated in Figure 9 (all measurements are summarised in Table 6). The maximum ejecta velocity is best determined from the blue edge of the strong Mg ii \(\lambda\lambda\) 2796,2803 and Ca ii \(\lambda\lambda\) 3934, 3968 resonance lines. In Figure 10, we show the regions around the two features at \(t_{\rm max}\)+32.7 days, centred on the blue doublet components. Because of the complexity of line features, we omit to subtract any continuum. For illustration purposes, we normalise the spectral regions so that the peak intensity and maximum absorption approximately match both lines. The blue components of the doublets exhibit complex profiles at low velocities because of the superposition with the wings of the red doublet components. The highest velocities are less affected by this. The Ca ii \(\lambda\) 3934 line gives the best estimate for the maximum ejecta velocity, \(\sim 12,500\) km s\({}^{-1}\). This is consistent with the extent of the absorption component of Mg ii \(\lambda\) 2796, which, however, is more affected by other SN lines. The absorption minima of Mg ii \(\lambda\) 2796 and Ca ii \(\lambda\) 3934 are at \(\sim 8000\) km s\({}^{-1}\), but are affected by the doublet nature of the lines. Nonetheless, the locations of the absorption minima are consistent with the photospheric velocity determined from the absorption minima of Fe ii \(\lambda\lambda\) 4924,5018,5169. Figure 8: Spectroscopic sequence from 9500 to \(21,500\) Å. The spectral sequence covers the evolution of the photospheric (top) and nebular (bottom) phases. The NIR spectra at \(>1\)\(\mu\)m show only a few features in contrast to the optical spectra (Figure 6). The most prominent features are labelled. All spectra were rebinned to 5 Å and smoothed with a Savitzky-Golay filter, except the spectrum at \(t_{\rm max}\)+361.6 days that was rebinned to 10 Å. The grey scale at the bottom of each panel displays the strength of telluric features (white = transparent, black = opaque). In addition, regions of strong atmospheric absorption are grey-shaded. To put the measurements in the context of other SLSNe, we first compare the velocity of SN 2018ibb at maximum light to those of SLSNe in the ZTF-I sample (Chen et al., 2023b). The histogram in the top panel of Figure 11 shows a kernel density estimate of the velocity distribution of the 27 SLSNe from the ZTF-I sample with Fe ii velocities measured within \(\pm 20\) rest-frame days from maximum light. 
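The sample median and confidence region quoted in the next sentence are obtained by bootstrapping the sample while propagating the individual measurement errors. A minimal sketch of such an estimate is shown below; the velocity values and uncertainties are synthetic placeholders, not the ZTF-I measurements:

```python
# Sketch: bootstrap the sample median while propagating per-object measurement
# uncertainties with a Monte Carlo perturbation (synthetic numbers only).
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical (velocity, uncertainty) pairs in km/s, standing in for Fe II
# velocities measured near maximum light.
v     = np.array([15200., 12400., 18100., 9800., 14500., 16800., 11300., 20100.])
v_err = np.array([  800.,  1200.,   900., 1500.,   700.,  1000.,  1100.,  1300.])

n_boot = 10000
medians = np.empty(n_boot)
for i in range(n_boot):
    idx = rng.integers(0, v.size, v.size)          # resample objects with replacement
    perturbed = rng.normal(v[idx], v_err[idx])     # propagate measurement errors
    medians[i] = np.median(perturbed)

lo, med, hi = np.percentile(medians, [16, 50, 84]) # 1-sigma confidence region
print(f"median = {med:.0f} km/s (+{hi-med:.0f}/-{med-lo:.0f})")
```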
After bootstrapping the sample and propagating the measurement uncertainties with a Monte Carlo simulation, the median velocity of the ZTF-I sample is \(14,800\) km s\({}^{-1}\) and its \(1\sigma\) confidence region extends from \(10,500\) to \(19,000\) km s\({}^{-1}\). SN 2018ibb lies in the bottom 8% of this sample, but its velocity is not unparalleled. SN 2019aamu had a lower photospheric velocity at peak, but the measurement is poorly constrained (\(5876^{+510}_{-349}\) km s\({}^{-1}\); Chen et al., 2023b). In the bottom panel of Figure 11, we show the evolution of the Fe ii velocities of SN 2018ibb together with those of H-poor SLSNe from Liu et al. (2017) (in grey). Within 50 days after maximum, the ejecta usually decelerate from \(\sim 15,000\) km s\({}^{-1}\) to \(\lesssim 10,000\) km s\({}^{-1}\), whereas SN 2018ibb shows no evolution. #### 4.3.3 A CSM shell around the progenitor of SN 2018ibb The X-shooter spectra between \(t_{\rm max}\)+32.7 days and \(t_{\rm max}\)+94.3 days show two Mg ii absorption line systems (Figure 12). The narrow component is associated with the gas in the SLSN host galaxy (Section 4.6). The lines of the broader component have a full width at half maximum (FWHM) of 406 km s\({}^{-1}\) and are blueshifted by 2918 km s\({}^{-1}\) (not varying between \(t_{\rm max}\) and \(t_{\rm max}\)+90 days; upper panels in Figure 12). They are significantly broader than expected for the interstellar medium in the dwarf host galaxy or any intervening dwarf galaxy12 but also significantly narrower than the narrowest SN features (\(\sim 1900\) km s\({}^{-1}\); measured from Fe ii). The equivalent widths are \(2.00\pm 0.09\) and \(1.27\pm 0.08\) A for Mg ii \(\lambda\) 2796 and Mg ii \(\lambda\) 2803, respectively. The observed line ratio is \(1.57\pm 0.12\) in tension with the predicted value of 2 for unsaturated lines. Assuming that the Mg ii lines are unsaturated, we can convert their equivalent widths to a lower limit on the column density of singly ionised magnesium in the CSM shell. The rest-frame equivalent width EW\({}_{\rm r}\) is related to the column density \(N\), in units of atoms per cm\({}^{2}\), via \(N=1.13\times 10^{20}\) EW\({}_{\rm r}\) / \(\left(\lambda_{\rm r}^{2}\,f\right)\) where \(\lambda_{\rm r}\) is the rest-frame wavelength, in units of A, and \(f\) the oscillator strength (Ellison et al., 2004). Using the oscillator strengths from Theodosiou & Federman (1999) for Mg ii \(\lambda\) 2796 and Mg ii \(\lambda\) 2803, we derive a lower limit of \(N>5\times 10^{13}\) cm\({}^{-2}\). Footnote 12: Based on the correlations between the stellar mass of galaxies and the width of galaxy absorption and emission lines (Kruhler et al., 2015; Arabsalmani et al., 2018). The only other SLSN that showed such a blueshifted Mg ii component was the H-poor SLSN iPTF16eh (Lunnan et al., 2018b). For that SLSN, the Mg ii doublet was blueshifted by 3300 km s\({}^{-1}\). Lunnan et al. (2018b) also detected Mg ii in emission between 100 and 300 days after maximum light. Moreover, the line centre of the emission lines moved from -1600 to +2900 km s\({}^{-1}\) during that time interval. These authors attributed the blueshifted Mg ii absorption line system with a CSM shell expelled decades before the explosion and the time and frequency variable Mg ii emission lines with a light echo from that shell. 
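The column-density limit quoted above follows directly from the stated relation. A minimal sketch of the arithmetic is given below; it treats the quoted equivalent widths as rest-frame values and adopts approximate oscillator strengths (f ≈ 0.61 and 0.30 for Mg ii \(\lambda\) 2796 and \(\lambda\) 2803), whereas the analysis in the text uses the Theodosiou & Federman (1999) values:

```python
# Sketch: convert rest-frame equivalent widths of unsaturated lines into
# column densities via N = 1.13e20 * EW_r / (lambda_r^2 * f)  [atoms cm^-2].
def column_density(ew_rest_A, lam_rest_A, f_osc):
    return 1.13e20 * ew_rest_A / (lam_rest_A**2 * f_osc)

lines = [
    # (name, rest wavelength [A], EW_r [A], approximate oscillator strength)
    ("Mg II 2796", 2796.35, 2.00, 0.61),
    ("Mg II 2803", 2803.53, 1.27, 0.30),
]

for name, lam, ew, f in lines:
    N = column_density(ew, lam, f)
    print(f"{name}: N(Mg II) > {N:.1e} cm^-2")
# Both lines give values of order 5e13 cm^-2, in line with the quoted lower
# limit for the CSM shell.
```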
How such a light echo evolves depends mainly on its distance \begin{table} \begin{tabular}{c c c c} \hline Phase & Velocity & Phase & Velocity \\ (day) & \(\left(\rm km\,s^{-1}\right)\) & (day) & \(\left(\rm km\,s^{-1}\right)\) \\ \hline -1.4 & 8489 \(\pm\) 88 & 51.5 & 8371 \(\pm\) 126 \\ 8.8 & 8610 \(\pm\) 28 & 60.1 & 8382 \(\pm\) 211 \\ 10.5 & 8349 \(\pm\) 64 & 70.3 & 8303 \(\pm\) 198 \\ 30.8 & 8453 \(\pm\) 121 & 73.8 & 8313 \(\pm\) 198 \\ 31.1 & 8426 \(\pm\) 205 & 81.5 & 8417 \(\pm\) 200 \\ 32.7 & 8637 \(\pm\) 168 & 94.3 & 8431 \(\pm\) 201 \\ 47.1 & 8433 \(\pm\) 218 & & \\ \hline \end{tabular} \end{table} Table 6: Fe ii absorption line velocities during the photospheric phase Figure 10: The maximum ejecta velocity. The extent of the Ca ii \(\lambda\) 3934 (black) absorption on the blue side can be traced to \(\sim 12,500\) km s\({}^{-1}\) at \(t_{\rm max}\)+32.7 days. Mg ii \(\lambda\) 2796 (dark grey) has a comparable maximum velocity, albeit this region is affected by additional SN features. The location of the blue and red doublet components of Mg ii \(\lambda\lambda\) 2796, 2803 and Ca ii \(\lambda\lambda\) 3934, 3968 of the host galaxy ISM are indicated by brackets in a darker shade at the top of the figure. We also mark the position of the doublets of the CSM shell with brackets in a lighter shade. The CSM shell is detected through an additional Mg ii absorption-line system blueshifted by 2918 km s\({}^{-1}\). The CSM shell is not detected in Ca ii. Figure 9: Zoom-in onto the Fe ii absorption lines from the SN ejecta at selected epochs of the photospheric phase. The SN photosphere expands with a velocity of merely \(\approx 8500\) km s\({}^{-1}\). There are no signs of deceleration between \(t_{\rm max}\) and \(t_{\rm max}\)+100 days. Starting at about \(t_{\rm max}\)+30 days, emission from [O iii] \(\lambda\lambda\) 4959, 5007 (grey shaded region), produced by the interaction of the SN ejecta with circumstellar material, contaminates the blue wing of Fe ii \(\lambda\) 5169. to the progenitor star. With that in mind, we analyse the Keck and X-shooter spectra between \(t_{\rm max}\)+230 and \(t_{\rm max}\)+378 days to constrain the properties of the CSM shell. Rebinning the spectra reveals Mg ii in emission (Figures 6 and 7). However, due to heavy rebinning, the information about the variability of the line centre was lost. We can, therefore, not ascertain whether the Mg ii emission is connected with illuminated magnesium in the CSM shell or produced by the interaction of the SN ejecta with circumstellar material. Motivated by the discovery of a CSM shell around SN 2018ibb, we next search for corresponding Ca ii \(\lambda\lambda\) 3934,3969 absorption in the X-shooter spectrum from \(t_{\rm max}\)+32.7 (Figure 10). The search is aggravated by how the two Ca ii doublets (CSM shell and SN ejecta) overlap in contrast to the Mg ii doublets. Using the wavelength of Ca ii \(\lambda\) 3934 as the velocity reference, the blue doublet absorption of Ca ii should be at the same velocity as the Mg ii doublet (\(-2918\) km s\({}^{-1}\)). The red component of the Ca ii doublet will, however, be displaced by 34.8 A, or 2653 km s\({}^{-1}\), to \(\sim-265\) km s\({}^{-1}\). The position of a possible Ca ii CSM component is marked with the light grey bracket in Figure 10. We do indeed see a sharp drop in the Ca ii profile at zero velocity, which could be the result of a red CSM absorption. 
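The quoted displacement of the red Ca ii doublet component can be checked with a few lines of arithmetic. A minimal sketch, using standard rest wavelengths for Ca ii H&K and the Mg ii shell velocity from above, is:

```python
# Sketch: where would the red component of a Ca II H&K absorption system sit
# if the velocity scale is referenced to Ca II K (3934 A)?
C_KMS = 299792.458
CA_K, CA_H = 3933.66, 3968.47            # rest wavelengths in Angstrom

v_shell = -2918.0                        # CSM-shell velocity from Mg II (km/s)

# Doublet separation expressed as a velocity relative to Ca II K:
dv_doublet = (CA_H - CA_K) / CA_K * C_KMS
print(f"doublet separation: {CA_H - CA_K:.1f} A = {dv_doublet:.0f} km/s")

# Apparent velocity of the red (H) component of the CSM system on a K-referenced axis:
v_red_component = v_shell + dv_doublet
print(f"red CSM component expected near {v_red_component:.0f} km/s")
# -> roughly -265 km/s, i.e. close to zero velocity, matching the sharp drop
#    discussed in the text.
```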
For the blue component, it is more difficult because we do not know the line profile of the Ca ii absorption from the SN ejecta. Therefore, it is difficult to assess the significance of this. However, we conclude that there is no evidence for Ca ii absorption from the CSM shell. #### 4.3.4 Circumstellar interaction -- bumps and undulations in the light curve The multi-band light curves show a series of bumps and wiggles throughout the entire evolution of SN 2018ibb (Figures 3 and 5). Between \(t_{\rm max}\) and \(t_{\rm max}\)+100 days, the bumps are well visible from \(u\) to \(H\) band (luminosity increases by a few 0.1 mag). The amplitudes of the bumps in SN 2018ibb are comparable to the bumps seen in light curves of the other SLSNe (e.g., Nicholl et al. 2016; Inserra et al. 2017; Fiore et al. 2021; Hosseinzadeh et al. 2022; Chen et al. 2023b). Following the nomenclature in Chen et al. (2023b), these bumps fall in the 'weak' category. The bumps in SN 2018ibb also introduce wiggles in the evolution of its blackbody radius and temperature (Figure 5). These modulations are well within the measurement uncertainties of the long-term trends of these parameters, hindering a more in-depth analysis of these features. The late-time photometric evolution of SN 2018ibb reveals an increase in luminosity of 0.2 dex between \(t_{\rm max}\)+240 and \(t_{\rm max}\)+340 days (Figures 3 and 5) that is well isolated allowing for a more in-depth analysis. The bolometric light curve before and after this bump exhibits a decline rate of 1.18 mag (100 days)\({}^{-1}\). After subtracting the underlying fading light curve, we conclude that the light curve bump lasted for \(\sim 80\) days (measured between zero intensity) and reached its highest luminosity at \(t_{\rm max}\)+300 days. In total, \(6.7\pm 0.8\times 10^{48}\) erg are radiated in excess to the \(8.1\times 10^{49}\) erg that SN 2018ibb would have emitted without the bump during this time. Figure 13 presents the spectroscopic evolution of SN 2018ibb during the bump phase. Assuming that all spectral features fade on exponential timescales similar to the multi-band and bolometric light curves, we use the spectra Figure 11: Fe ii ejecta velocities of SN 2018ibb and general SLSN samples (grey) at the time of maximum (top panel) and as a function of time (bottom panel). SN 2018ibb has a remarkably low velocity at the time of maximum and an unprecedentedly flat velocity evolution, which is in stark contrast to known SLSNe. Figure 12: Normalised X-shooter spectra from \(t_{\rm max}\)+32.7 days to \(t_{\rm max}\)+94.3 days (top panels) and their inverse-variance weighted co-added spectrum (bottom panel). The individual and stacked spectra show barely resolved, narrow absorption lines from the host ISM (marked by the solid vertical lines). In addition, a blue-shifted (2918 km s\({}^{-1}\)) absorption line system is visible (marked by the dashed vertical lines). The FWHMs of the blue-shifted component are 406 km s\({}^{-1}\), significantly larger than the ISM lines but significantly smaller than the SED lines. This blue-shifted absorption-line system is connected with a shell of circumstellar material expelled by the progenitor star shortly before the explosion. No significant evolution in the position or shape of the absorption lines can be seen in the individual spectra (upper panels). The error spectrum is shown in grey. obtained before (blue) and after (yellow) the light curve bump to interpolate the spectrum at \(t_{\rm max}\)+286.7 days (black). 
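The 'bump-free' spectrum is estimated by assuming that each wavelength bin fades exponentially (i.e., linearly in magnitudes) between the bracketing epochs. A minimal sketch of that interpolation is given below; the spectra and the post-bump epoch are placeholders, not the observed data:

```python
# Sketch: interpolate a spectrum at time t between two epochs t1 < t < t2,
# assuming the flux in every wavelength bin declines exponentially with time.
import numpy as np

wave = np.linspace(4000.0, 9000.0, 501)            # common wavelength grid [A]

# Placeholder spectra (erg/s/cm^2/A) before and after the light-curve bump.
f_before = 1e-17 * np.exp(-(wave - 5000.0)**2 / 2e6) + 2e-18
f_after  = 0.5 * f_before

t1, t2 = 231.2, 343.0                              # bracketing epochs [day]; t2 is a placeholder
t      = 286.7                                     # epoch of the observed bump spectrum [day]

w = (t - t1) / (t2 - t1)                           # interpolation weight
f_interp = np.exp((1.0 - w) * np.log(f_before) + w * np.log(f_after))

# The bump contribution is then the difference between the observed spectrum
# (not simulated here) and f_interp.
print(f"interpolated flux at 5000 A: {np.interp(5000.0, wave, f_interp):.2e}")
```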
Such an approach estimates the spectroscopic behaviour of SN 2018ibb in the absence of the bump. The bottom panel of Figure 13 shows the observed spectrum at \(t_{\rm max}\)+286.7 days in black and the estimated spectrum without the bump in red. The difference spectrum (blue) reveals substantially enhanced line fluxes in [O ii] and [O iii] but no change in [O i]. The lightcurve bump might also have increased the flux of the continuum level blueward of 5000 A. Its shape is reminiscent of the blue pseudo-continuum seen in interaction-powered SNe (Silverman et al., 2013; Hosseinzadeh et al., 2017; Gal-Yam et al., 2022; Perley et al., 2022). Considering the similarity of the difference spectrum to the spectrum before and after the bump raises the question of whether a larger fraction of the emission blueward of 5000 A in all nebular spectra is due to CSM interaction. We investigate that further in Section 5.2.4. ### Radio and X-ray emission The interaction of the SN ejecta with circumstellar material and heating of the SN ejecta by a central engine (e.g., magnetar or a black hole) can produce thermal X-ray emission and non-thermal radio emission (Chevalier & Fransson, 1992, 1994). SN 2018ibb was observed in the X-rays and radio between \(t_{\rm max}\)+13 and \(t_{\rm max}\)+246 days (Sections 3.5 and 3.6). All observations led to non-detections with detection limits between 1 and \(6\times 10^{41}\) erg s\({}^{-1}\) in the X-rays and between \(10^{39}\) and \(10^{40}\) erg s\({}^{-1}\) in the radio. To put those measurements in the context of the UV-to-NIR bolometric light curve, we show the radio and X-ray measurements together with the bolometric light curve in Figure 14. From that, we conclude that \(<2\%\) and \(<10\%\) of the total emission are radiated in the radio and X-rays, respectively. The non-detection limits are in the observed range of other SLSNe with X-ray and radio observations (Levan et al., 2013; Coppejans et al., 2018; Marguti et al., 2018; Law et al., 2019; Eftekhari et al., 2021; Murase et al., 2021). Only four SLSNe were detected at X-ray or radio frequencies: PTF10bgi (radio; Eftekhari et al., 2019; Law et al., 2019), PTF12dam (X-ray; Margutti et al., 2018; Eftekhari et al., 2021), SCP06P6 (X-ray; Levan et al., 2013), and SN 2020tow (radio and X-ray; Coppejans et al., 2021; Matthews et al., 2021). Their measurements13, shown in Figure 14, are a factor of \(>50\) smaller than the detection limits of SN 2018ibb. Footnote 13: PTF10bgi was detected in the radio \(>7.5\) years after the SN explosion. Owing to this, we omit to show PTF10bgi in that figure. To put the radio and X-ray properties of SN 2018ibb in the context of interaction-powered SNe, we also show the light curves of the most luminous X-ray and radio SNe in Figure 14. The Type IIn SNe 2006jd and 2010jl are the most luminous X-ray SNe with absorption-corrected luminosities of \(\sim 10^{42}\) erg s\({}^{-1}\)(Chandra et al., 2012, 2015). The radio-loudest SNe (e.g, SN Ic-BL PTF11qcj Corsi et al., 2014) reached luminosities of \(\sim 10^{38}\) erg s\({}^{-1}\), i.e., \(\lesssim 10\) times fainter than the limits for SN 2018ibb. Their observed luminosities before correcting for host absorption can be significantly dimmer for hundreds of days (Chandra et al., 2015). In conclusion, the non-detection of SN 2018ibb neither rules out CSM interaction nor a central engine as the dominant powering mechanism. 
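The quoted percentages follow from comparing the luminosity limits with the contemporaneous bolometric luminosity. A minimal sketch is shown below; the luminosity values are placeholders within the quoted ranges, not the actual epoch-matched measurements:

```python
# Sketch: express X-ray and radio non-detection limits as fractions of the
# contemporaneous UV-to-NIR bolometric luminosity.
L_bol   = 2e42      # placeholder bolometric luminosity at the relevant epoch [erg/s]
L_x_lim = 2e41      # placeholder X-ray upper limit, within the quoted 1-6e41 erg/s range
L_r_lim = 1e40      # placeholder radio upper limit, within the quoted 1e39-1e40 erg/s range

print(f"X-ray / bolometric < {L_x_lim / L_bol:.0%}")
print(f"radio / bolometric < {L_r_lim / L_bol:.1%}")
```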
Furthermore, the non-detection of SN 2018ibb also agrees with theoretical models of magnetar- and interaction-powered SLSNe that predict no bright radio and X-ray emission for years after the SN explosion (Murase et al., 2016; Margalit et al., 2018; Omand et al., 2018). ### Imaging polarimetry Our polarimetric observations between \(t_{\rm max}\)+31.9 days and \(t_{\rm max}\)+94.4 days revealed a polarisation signal of \(0.27\pm 0.04\) % in \(V\) (weighted average of all epochs) and \(0.48\pm 0.07\) % in the \(R\) band (Table 3). Dust grains in the Milky Way and the host galaxy could introduce a polarisation signal. As detailed in Section 3.4, the polarisation level of the MW could be up to 0.26%. The level of polarisation from the SN host galaxy is unknown, meaning that all reported measurements are upper limits. Considering the observed low degree of polarisation and the consistent levels of Stokes parameters measured from SN 2018ibb (Table 3), we conclude that the continuum polarisation intrinsic to SN 2018ibb is \(\lesssim 0.3\%\) in \(V\) band between \(t_{\rm max}\)+31.9 days and \(t_{\rm max}\)+94.4 days. To convert this measurement into an asphericity of the ejecta, we assume an oblate ellipsoidal ejecta with a Thomson scattering atmosphere and a number density distribution of \(N(r)<r^{-n}\), where \(r\) is the ejecta radius and \(n\) is the power-law index. Adopting \(p\lesssim 0.3\%\), we infer an axis ratio B/A (minor axis vs. major axis) of \(\gtrsim 0.9\) for an optical depth of \(\tau=1\) and a power-law index of \(n=2\), and B/A of \(\gtrsim 0.8\) for \(\tau=5\) and \(n=3\)-5 (Hoflich, 1991). The degree of polarisation in the \(R\) band is slightly higher (\(p\approx 0.5\%\)). Therefore, we cannot exclude that the continuum polarisation is \(p>0.3\%\). A polarisation degree \(p\sim 0.5\%\) implies an axis ratio B/A of \(\sim\)0.88 for \(\tau=1\) and \(n=2\)(Hoflich, 1991). Figure 13: Impact of the light curve bump between \(t_{\rm max}\)+240 and \(t_{\rm max}\)+340 days on the SN spectrum at \(t_{\rm max}\)+286.7 days. **Top**: The spectra before, during and after the light curve bump. **Bottom**: The observed spectrum at \(t_{\rm max}\)+286.7 days (\(\approx 13\) days before the peak of the bump) is shown in black. We estimate the ‘bump-free’ spectrum of SN 2018ibb at \(t_{\rm max}\)+286.7 days (red) based on the spectra obtained before and after the bump. The difference between the observed (black) and interpolated (red) spectra at \(t_{\rm max}\)+286.7 days is shown in blue. It reveals a series of emission lines that can be attributed to [O ii] and [O iii]. An excess blueward of 5000 Å is also visible, while no apparent residual can be seen at the location of [O i]. Therefore, we suggest that SN 2018ibb's photosphere exhibits a high degree of spherical symmetry. Pursiainen et al. (2023) analysed the data of the 16 SLSNe-I with polarimetric observations, including SN 2018ibb. After correcting the phases of all objects for the diverse photometric decline rates, the properties of SN 2018ibb are well within the observed distribution. While some of the events exhibit a non-zero level of polarisation at similar phases to SN 2018ibb (e.g., SN 2015bn and SN 2021fpl; Leloudas et al. 2017; Inserra et al. 2016; Poidevin et al. 2023), most SLSNe show a consistently low polarisation degree at comparable normalised phases (see figure 6 in Pursiainen et al. 2023). 
The presence of any component in the atmosphere of SN 2018ibb significantly deviating from spherical symmetry is thus unlikely within the photospheric phases covered by VLT polarimetry observations. Although Thomson scattering is wavelength independent, broad emission lines (see spectra in Figure 6), which are in general not polarised, may dominate the polarisation spectrum in the \(V\) band and produce the apparent low polarisation values. Furthermore, iron-group elements in the ejecta (Figure 7) have a large number of bound-bound transitions in the blue and UV part of the spectrum, which can also depolarise the signal (e.g., Chornock and Filippenko 2008), accounting for the slightly different polarisation levels measured in \(V\) and \(R\) bands. ### Host galaxy SN 2018ibb's host galaxy was detected in several optical broad-band filters (\(m_{R}\sim 24.4\) mag; Table 1). A false colour image of the field is shown in Figure 1. The SN explosion site, marked by the crosshair, is \(\approx 1\) kpc from the centre of its host galaxy, a common offset for SLSNe (Lunman et al. 2014; Schulze et al. 2018, 2021). To infer the mass and star-formation rate of the host, we model the observed spectral energy distribution (black data points in Figure 15) with the software package Prospector version 1.1 (Johnson et al. 2021).14 We assume a Chabrier IMF (Chabrier 2003) and approximate the star formation history (SFH) by a linearly increasing SFH at early times followed by an exponential decline at late times [functional form \(t\times\exp{(-t/\tau)}\), where \(t\) is the age of the SFH episode and \(\tau\) is the \(e\)-folding timescale]. The model is attenuated with the Calzetti et al. (2000) model. The priors of the model parameters are set identical to those used by Schulze et al. (2021). The observed \begin{table} \begin{tabular}{l c c} \hline \hline Transition & EW\({}_{r}\) & Flux \\ & (Å) & \(\left(10^{-18}\,\mathrm{erg\,cm^{-2}\,s^{-1}}\right)\) \\ \hline \multicolumn{4}{c}{**Absorption lines**} \\ \hline Mn ii \(\lambda\) 2594 & \(0.18\pm 0.13\) & \(\cdots\) \\ & (\(<0.39\)) & \\ Fe ii \(\lambda\) 2600 & \(0.07\pm 0.13\) & \(\cdots\) \\ & (\(<0.39\)) & \\ Mn ii \(\lambda\) 2606 & \(-0.12\pm 0.15\) & \(\cdots\) \\ & (\(<0.45\)) & \\ Mg ii \(\lambda\) 2796 & \(0.51\pm 0.04\) & \(\cdots\) \\ Mg ii \(\lambda\) 2804 & \(0.46\pm 0.04\) & \(\cdots\) \\ Mg i \(\lambda\) 2852 & \(0.14\pm 0.04\) & \(\cdots\) \\ & (\(<0.16\)) & \\ Ca ii \(\lambda\) 3934 & \(0.03\pm 0.01\) & \(\cdots\) \\ Ca ii \(\lambda\) 3969 & \(0.01\pm 0.01\) & \(\cdots\) \\ & (\(<0.03\)) & \\ \hline \multicolumn{4}{c}{**Emission lines**} \\ \hline H\(\beta\) & \(\cdots\) & \(3.68\pm 0.78\) \\ \([\mathrm{O\,m\,l}]\) \(\lambda\) 4363 & \(\cdots\) & \(0.25\pm 0.16\) \\ & \(\cdots\) & (\(<0.48\)) \\ \([\mathrm{O\,m\,l}]\) \(\lambda\) 4959 & \(\cdots\) & \(1.96\pm 0.81\) \\ & \(\cdots\) & (\(<2.43\)) \\ \([\mathrm{O\,m\,l}]\) \(\lambda\) 5007 & \(\cdots\) & \(12.96\pm 1.11\) \\ H\(\alpha\) & \(\cdots\) & \(10.43\pm 0.76\) \\ \([\mathrm{N\,ln\,}]\) \(\lambda\) 6584 & \(\cdots\) & \(0.12\pm 0.54\) \\ & \(\cdots\) & (\(<1.62\)) \\ \hline \end{tabular} \end{table} Table 7: Properties of the interstellar medium in the host galaxy Figure 14: Thermal and non-thermal emission of SN 2018ibb. Less than a few percent of the total radiated energy is emitted in the radio and X-rays. 
The luminosity limits lie in the ballpark of non-detections of other SLSNe, and they are a factor of 50 larger than the luminosity of the four H-poor SLSNe with either radio or X-ray detection. The limits of SN 2018ibb are larger than the most luminous radio and X-ray SNe. SED is adequately described by a galaxy model with a stellar mass of \(\log M_{\star}/M_{\odot}=7.60^{+0.19}_{-0.22}\) and star-formation rate of \(0.02^{+0.04}_{-0.01}\)\(M_{\odot}\) yr\({}^{-1}\) (grey curve in Figure 15). The mass and the star-formation rate of the host of SN 2018ibb agree with the expected values of SLSNe-I host galaxies at \(z<0.3\)(Leloudas et al., 2015; Perley et al., 2016; Chen et al., 2017; Schulze et al., 2018, 2021), although both fall in the lower half of the distributions. The specific star-formation rate (SFR normalised by the stellar mass of the host) is comparable to a common star-forming galaxy of that stellar mass (grey band in Figure 16; Elbaz et al., 2007) but in the lower half of the observed distribution of SLSN host galaxies (Schulze et al., 2021). We caution that specific SFRs are notoriously difficult to measure (e.g., see figure 3 in Schulze et al., 2021) as they rely on well-sampled SEDs from the UV to the NIR. The X-shooter spectra up until \(T_{\rm max}+80\) days reveal narrow absorption lines from Mg i and Mg ii from the interstellar medium in the host galaxy but no absorption features from Ca ii, Fe ii, and Mn ii, which have prominent features in the wavelength range accessible with X-shooter and are typically seen in low-mass star-forming galaxies, e.g., Prochaska et al. (2007) and Fynbo et al. (2009). The equivalent widths of the detected lines and the upper limits of the strongest expected absorption features are reported in Table 7. The measurements of Mg i\(\lambda\) 2852 and Mg ii\(\lambda\lambda\) 2796, 2804 are comparable to those of the SLSN host galaxies reported in Vreeswijk et al. (2014). Following the methodology of de Ugarte Postigo et al. (2012), we infer an absorption-line strength parameter of \(\sim-3.5\) from Ca ii, Mg i and Mg ii, putting the host of SN 2018ibb at the low-metallicity end of the distribution (albeit the diagnostic is tailored to host galaxies of long-duration gamma-ray bursts, which are also connected with the death of very massive stars but which prefer galaxies with slightly higher metallicities and slightly older stellar populations than SLSNe-I; Hjorth & Bloom, 2012; Leloudas et al., 2015; Vergani et al., 2015; Perley et al., 2016; Schulze et al., 2018). SN 2018ibb's nebular spectra exhibit emission lines from hydrogen and oxygen from H ii regions in the host galaxy. We measure their intensities by integrating over their line profiles. To apply emission-line diagnostics for measuring the oxygen abundance, we also need the flux of [N ii] \(\lambda\) 6584, which evaded detection. Using the H\(\alpha\) line profile as a template of the [N ii] \(\lambda\) 6584 line profile, we measure the nominal flux and its uncertainty. Table 7 summarises all measurements. Using the O3N2 metallicity indicator with the calibration from Marino et al. (2013) yields a low oxygen abundance of \(12+\log\,({\rm O/H})=8.06^{+0.07}_{-0.11}\) in accordance with the low value from the absorption-line strength parameter. The oxygen abundance is comparable to the mean of SLSN host galaxies at similar redshifts (Leloudas et al., 2015; Perley et al., 2016; Chen et al., 2017). 
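The oxygen abundance quoted above can be reproduced from the line fluxes in Table 7. A minimal sketch using the O3N2 indicator is given below; the Marino et al. (2013) calibration is written here as 12 + log(O/H) = 8.533 − 0.214 × O3N2, and the small offset from the quoted value reflects rounding and the treatment of the [N ii] uncertainty:

```python
# Sketch: gas-phase oxygen abundance from the O3N2 strong-line indicator,
# using the emission-line fluxes listed in Table 7 (units cancel in the ratios).
import math

f_Hbeta    = 3.68
f_OIII5007 = 12.96
f_Halpha   = 10.43
f_NII6584  = 0.12     # nominal flux; formally an upper limit

o3n2 = math.log10((f_OIII5007 / f_Hbeta) / (f_NII6584 / f_Halpha))
oh   = 8.533 - 0.214 * o3n2        # Marino et al. (2013) calibration
print(f"O3N2 = {o3n2:.2f},  12 + log(O/H) = {oh:.2f}")
# Gives 12 + log(O/H) ~ 8.0, i.e. a low oxygen abundance, consistent with the
# value of 8.06 quoted in the text within the uncertainties.
```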
The flux ratio between H\(\alpha\) and H\(\beta\) is \(2.76\pm 0.62\), which is consistent within \(1\sigma\) with the theoretically expected value of 2.86 for no extinction (assuming a temperature of \(10^{4}\) K and an electron density of \(10^{2}\) cm\({}^{-3}\) for Case B recombination; Osterbrock, 1989). We conclude that the host attenuation is negligible. The H\(\alpha\) flux translates to a star-formation rate of \({\rm SFR}=4.4\pm 0.3\times 10^{-3}\)\(M_{\odot}\) yr\({}^{-1}\) using Kennicutt (1998) and the relation from Madau & Dickinson (2014) to convert from the Salpeter to the Chabrier IMF in the Kennicutt (1998) relation. This value is lower than the SFR estimated from the host SED fitting but consistent within \(2\sigma\). ## 5 Discussion ### SN ejecta emission vs. CSM interaction In Section 4.3.3, we have shown that the progenitor of SN 2018ibb is embedded in circumstellar material ejected shortly before the explosion. In this section, we examine the line profiles and evolution of selected oxygen and metal lines to infer the physical conditions of the SN ejecta and the CSM. Figure 16: The star-formation rate and stellar mass of the host galaxy of SN 2018ibb in the context of SLSN-I host galaxies from the PTF survey (Schulze et al., 2021). The host galaxy of SN 2018ibb lies in the expected parameter space of SLSN host galaxies but in the lower half of the mass and SFR distributions (kernel density estimates of the observed distributions are shown at the top and to the right of the figure). Its specific star-formation rate (SFR / mass) is comparable to the typical star-forming galaxies (grey band) but lower than for an average SLSN host galaxy. Figure 15: Spectral energy distribution of the host galaxy from 1000 to 60,000 Å (black dots). The solid line displays the best-fitting model of the SED. The red squares represent the model-predicted magnitudes. The fitting parameters are shown in the upper-left corner. The abbreviation ’n.o.f.’ stands for the number of filters. The line profiles are most clear in the nebular phase. Figure 17 shows the continuum-subtracted Mg i] \(\lambda\) 4571 line with the [O i] \(\lambda\lambda\) 6300, 6364 doublet at \(t_{\rm max}\)+286.7 days. Both lines extend to \(\sim 10,000\) km s\({}^{-1}\). Their maximum velocity hardly changes up to the last well-observed epoch at \(t_{\rm max}\)+637.3 days. Its similarity to the maximum velocity of Ca ii \(\lambda\) 3934 absorption line (Figure 10) suggests Mg i] and [O i] are produced in the high-velocity ejecta. The Mg i] line is well fitted with a parabolic line profile with similar maximum velocity, shown by the dark-blue line in Figure 17. This indicates emission from an optically thick shell with constant velocity (e.g., Fransson, 1984). A similar parabolic line profile is consistent with the red side of the [O i] \(\lambda\) 6364 doublet component. However, the blue side of the doublet, dominated by the 6300 A component, lacks most of the emission compared to the Mg i] line. By \(t_{\rm max}\)+637.3 days (Figure 18), the blue doublet component has grown and is now the stronger of the two lines. The evolution of the [O i] line profile may be explained if both doublet components are optically thick to at least \(t_{\rm max}\)+286.7 days. In that case, the blue component will be scattered by the red component, which extends over most of the blue component (the velocity difference between the two components is 3016 km s\({}^{-1}\)). 
These photons will either be thermalised or emerge on the red side of the 6364 Å doublet component. Emission from the front side of the ejecta with velocities \(\lesssim-(v-3016\) km s\({}^{-1}\)) is only partially scattered, and some of this emission may leak out, explaining the 'bump' at \(\sim-7500\) km s\({}^{-1}\). At \(t_{\rm max}\)+565.0 days, the blue doublet component has grown, and the blue wing is equally bright, or somewhat brighter, compared to the 6364 Å component. This trend continues at \(t_{\rm max}\)+637.3 days. The expected 3:1 ratio is still not reached, indicating that the ejecta are not yet optically thin. Using the Sobolev (1957) theory for the line formation, we can estimate the optical depth \(\tau\) for a given O i density \(n\)(O i) in the ejecta (e.g., Li & McCray, 1992). Assuming LTE among the \({}^{3}\)P ground-state levels, the optical depth of each line of the [O i] doublet is given by \[\tau=\frac{A\left({}^{1}D_{2},{}^{3}P_{J}\right)\,\lambda\left({}^{1}D_{2},{}^{3}P_{J}\right)^{3}\,g\left({}^{1}D_{2}\right)}{8\pi\,g_{\rm tot}}\;n({\rm O}\,{\rm i})\,t\] where \(A({}^{1}D_{2},{}^{3}P_{J})\) with \(J=2,1\) are the transition probabilities for the 6300 Å and 6364 Å lines, respectively, \(\lambda({}^{1}D_{2},{}^{3}P_{J})\) is the wavelength of the blue and red doublet component, respectively, \(g_{\rm tot}=9\) is the total statistical weight of the ground multiplet, and \(t\) is the time since the explosion, in units of day. Putting in the atomic constants, we get an optical depth for the 6364 Å line of \[\tau=2.7\,\left(\frac{n({\rm O}\,{\rm i})}{10^{10}\,{\rm cm}^{-3}}\right)\left(\frac{t}{300\,{\rm day}}\right)\] and a depth that is a factor of 2.9 larger for the 6300 Å line. The typical O i density needed to get an optically thick line is, therefore, \(\gtrsim 10^{9}\) cm\({}^{-3}\). This can be compared to the mean oxygen density of the core. Assuming O i is the dominant species of oxygen in the core, the number density is \[n({\rm O})\approx 3\times 10^{7}\,f^{-1}\,\left(\frac{M({\rm O})}{30\,M_{\odot}}\right)\left(\frac{v_{\rm ej}}{10^{4}\,{\rm km\,s^{-1}}}\right)^{-3}\,\left(\frac{t}{300\,{\rm day}}\right)^{-3}\,{\rm cm}^{-3} \tag{1}\] where \(M({\rm O})\) is the mass of oxygen in the core, \(f\) is the filling factor, and \(v_{\rm ej}\) the ejecta velocity. To get an optically thick 6364 Å line at \(t_{\rm max}\)+286.7 days, i.e., a density \(\gtrsim 10^{10}\) cm\({}^{-3}\), requires a very small oxygen filling factor, \(\lesssim 10^{-3}\) (i.e., a highly clumped medium), or an unphysically large oxygen mass of \(\gtrsim 10^{3}M_{\odot}\). In a CSM/PPISN interaction scenario, the continued high optical depth at late times could point to a highly compressed cool dense shell (CDS). A CDS will form from the compression behind the shock, which results from the interaction between the ejecta and the CSM (for a discussion see, e.g., Chevalier & Fransson, 2017). If the density of the CSM is large, the forward shock will be radiative and dominate the emission, which will then have the composition of the CSM. In the opposite case, the reverse shock dominates with a composition typical of the outer
Figure 17: The [O i] \(\lambda\lambda\) 6300,6364 and Mg i] \(\lambda\) 4571 lines at \(t_{\rm max}\)+286.7 days. 
The Mg i] line is well fitted with a parabolic shape (dark blue), expected from an optically thick expanding shell (e.g., swept up CSM and unshocked SN ejecta), while the [O i] lines show a strong blue deficit because the line-forming region is still optically thick. The [O i] doublet is centred on the 6364 Å doublet component. Figure 18: The late-time evolution of the [O i] \(\lambda\lambda\) 6300,6364 doublet. Note the strong evolution on the blue side, while the red side of the lines is evolving slower. This indicates a transition from optically thick to optically thin [O i] lines, implying that the scattering in the absorption part of the P-Cygni profile is decreasing. Regions of strong atmospheric absorption are grey-shaded. ejecta. The latter case is more relevant for lower mass loss rates. In both cases, the density enhancement behind the cooling shock will be very large. Assuming an approximate pressure balance behind the shock, the density enhancement will be of the order of \(T_{\rm shock}/T_{\rm ps}\approx 3/16\,\mu\,m_{\rm s}\,v_{\rm rel}^{2}/(k\,T_{\rm ps})\), where \(\mu\) is the mean molecular weight (\(\sim 1.7\) for a fully ionised oxygen gas), \(m_{\rm s}\) the atomic mass unit, \(v_{\rm rel}\) the relative velocity between the CSM shell and the ejecta, \(T_{\rm shock}\) the temperature immediately behind the shock, and \(T_{\rm ps}\) the post-shock temperature in the CDS (\(\sim 10^{4}\) K). With \(v_{\rm rel}\approx 5000\) km s\({}^{-115}\), the compression is of the order of \(10^{5}\). Both \(v_{\rm rel}\) and \(T_{\rm ps}\) are uncertain and magnetic pressure could limit the compression. The CDS is also most likely unstable (Chevalier & Blondin 1995), leading to clumping of the shell and limiting of the compression. However, the estimate shows that a very large density could result in the CDS, making the line optically thick, equivalent to a low filling factor. In the PISN scenario, strong clumping in the ejecta is needed. This is, however, not indicated from simulations of PISN models without CSM by Chen et al. (2020). We now turn to the origin of the higher ionisation [O ii] and [O iii] lines. Figure 19 shows the line profiles of the [O i], [O ii] and [O iii] lines after subtracting the continuum. Owing to the doublet nature of the lines, we centre the line profiles on the blue component in the left panel and on the red component in the right panel. These line widths can be compared to the velocity of the CSM shell, the photospheric velocity, and the maximum velocity of the ejecta (vertical lines in Figure 19). It is clear that the [O ii] and [O iii] line widths are closer to the velocity of the CSM shell than to the photospheric velocity of the SN ejecta; in contrast to the [O i] line, which extends to the maximum velocity of the SN ejecta. The differences in the origin of the forbidden oxygen lines are corroborated by the O i-O iii line profiles (Figures 19, 20). The asymmetric [O iii] \(\lambda\lambda\) 4363 and [O iii] \(\lambda\lambda\) 4959,5007 lines have little emission in the red wings indicative of emission from a thin shell, where most of the red emission is absorbed by the photosphere. Examples of this can be seen in figure 4b in Fransson (1984). That scenario is also consistent with the evolution of the [O ii] \(\lambda\lambda\) 7320,7330 doublet. The 7300 A line, which may be a blend of the [O ii] \(\lambda\lambda\) 7320,7330 lines and [Ca ii] \(\lambda\lambda\) 7291,7324, is shown in Figure 21, centred on the [O ii] \(\lambda\) 7320 line. 
Focusing on the [O ii] lines (left panel), the blue wing has a nearly constant line profile between \(t_{\rm max}\)+231.2 and \(t_{\rm max}\)+565.0 days. The red wing of the [O ii] \(\lambda\) 7330 doublet component gets considerably narrower during the same time interval. At \(t_{\rm max}\)+637.3 days (right panel), the entire 7300 A line profile changes quite dramatically, becoming flat-topped and broader. This is a result of the [Ca ii] \(\lambda\lambda\) 7291,7324 lines becoming strong, while the [O ii] lines get weaker. The increasing asymmetry of the [O ii] doublet may be qualitatively understood by the CSM being occulted by the SN ejecta. Assuming that the optically thick SN ejecta with the velocity \(v_{\rm ej}\) slams into the CSM shell with a low velocity (ideally \(v\approx 0\)) located at a distance \(R_{\rm s}\) from the progenitor star, the maximum velocity of the red wing \(v_{\rm red}\) is (a pure geometric effect) \[v_{\rm red} =v_{\rm s}\,\left(1-\left(v_{\rm ej}\,t/R_{\rm s}\right)^{2} \right)\approx v_{\rm s}\,\left(1-\left(v_{\rm ej}\,t/v_{\rm s}\,(t+\tau) \right)^{2}\right)\] \[\approx v_{\rm s}\,\left(1-\left(v_{\rm ej}\,t/v_{\rm s}\tau\right)^{2}\right)\] where \(\tau\) is the time between the shell ejection and the explosion and \(t\) the time since explosion. (We have assumed that \(t\ll\tau\).) Because the ejecta with the CDS, which may define the photosphere, expands with a much higher velocity (\(\sim 8,500\) km s\({}^{-1}\) vs. \(\sim 3000\) km s\({}^{-1}\)) a progressively increasing portion of the dense CSM will be occulted by the photosphere and less of the 'backside' of the CSM will be seen. This would lead to the red side getting narrower with time. At the same time, an increasing por Figure 19: The [O i] \(\lambda\lambda\) 6300,6364, [O ii] \(\lambda\lambda\) 7320,7330, and [O iii] \(\lambda\lambda\) 4959,5007 line profiles at \(t_{\rm max}\)+565.0 days. The velocity scale is centred on the blue doublet component in the left panel and on the red component in the right panel. [O ii] and [O iii] only reach out to approximately the velocity of the CSM shell (\(v_{\rm CSM}\)), much less than the photospheric velocity (\(v_{\rm FeII}\)) and the maximum velocity of the ejecta (\(v_{\rm max}\)). In contrast to that, the [O i] profile extends to \(\sim 12,500\) km s\({}^{-1}\). This points to [O ii] and [O iii] being produced close to the CSM shell whereas [O i] is produced in the SN ejecta. Figure 20: Comparison of the [O iii] \(\lambda\lambda\) 4959,5007 lines with the [O iii] \(\lambda\lambda\) 4363 line. These lines are produced by the interaction of the SN ejecta with circumstellar material. Due to the occulation of the CSM by an optically thick SN ejecta, less of the ‘backside’ of the CSM is seen. The velocities of the CSM shell (\(v_{\rm CSM}\)), the photospheric velocity (\(v_{\rm FeII}\)) and the maximum velocity of the ejecta (\(v_{\rm max}\)) are indicated. tion of the dense CSM will be shocked, leading to a decreasing luminosity from the dense CSM, including the [O ii-iii] emission. The fact that the forbidden [O ii] and [O iii] lines are seen at about \(t_{\rm max}\)+30 days adds additional constraints on the physical conditions where they originate. The critical densities, above which collisional de-excitation becomes important, are less than \(\sim 2\times 10^{6}\) cm\({}^{-3}\) for [O ii] and [O iii] (Osterbrock & Ferland 2006). This is much lower than the densities expected in the ejecta (Equation 1). 
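The density argument of the last two paragraphs can be condensed into a few lines. A minimal sketch is given below; the time since explosion is an assumed value of roughly 380 rest-frame days (the \(\geq 93\)-day rise plus the phase of +286.7 days relative to maximum), and the oxygen mass and ejecta velocity are the fiducial values of Equation (1):

```python
# Sketch: mean O I density of the core (Equation 1), Sobolev optical depth of the
# [O I] 6364 line, and comparison with the critical densities of [O II]/[O III].
def n_oxygen(M_O_msun, v_ej_kms, t_day, filling_factor):
    """Equation (1): mean oxygen number density of the core in cm^-3."""
    return (3e7 / filling_factor * (M_O_msun / 30.0)
            * (v_ej_kms / 1e4) ** -3 * (t_day / 300.0) ** -3)

def tau_6364(n_OI_cm3, t_day):
    """Sobolev optical depth of [O I] 6364 (the 6300 line is a factor 2.9 larger)."""
    return 2.7 * (n_OI_cm3 / 1e10) * (t_day / 300.0)

t_day = 380.0          # assumed time since explosion at t_max + 286.7 days
M_O   = 30.0           # oxygen mass in solar masses (fiducial)
v_ej  = 1.0e4          # ejecta velocity in km/s (fiducial)

for f in (1.0, 1e-3):  # smooth ejecta vs. a highly clumped medium
    n = n_oxygen(M_O, v_ej, t_day, f)
    print(f"f = {f:g}: n(O) ~ {n:.1e} cm^-3, tau(6364) ~ {tau_6364(n, t_day):.1e}")

n_crit = 2e6           # approximate critical density of [O II]/[O III] in cm^-3
print(f"critical density for [O II]/[O III]: ~{n_crit:.0e} cm^-3")
# A filling factor of order 1e-3 is needed to reach tau(6364) >~ 1, and the core
# densities exceed the [O II]/[O III] critical densities, as argued in the text.
```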
Therefore, the [O ii] and [O iii] lines would be severely suppressed if they were coming from the ejecta. Not only do [O ii] and [O iii] originate from the CSM, but also the recombination lines O i\(\lambda\) 7773 and O i\(\lambda\) 9263. The blue wings of their line profiles are similar to [O ii] \(\lambda\) 7320, extending to \(\sim 5000\) km s\({}^{-1}\) (Figure 22). In summary, we propose a two-component scenario where the broad component, seen in particular in the [O i] \(\lambda\lambda\) 6300,6364 and Mg i] \(\lambda\) 4571 lines as well as the broad absorption in Mg ii \(\lambda\) 2800 and Ca ii \(\lambda\) 3934 come from either the CDS or possibly the unshocked ejecta. The low-velocity component seen in the [O iii] lines, as well as the [O ii] and O i recombination lines come from the CSM shell at \(\sim 3000\) km s\({}^{-1}\). The fact that we see [O iii] \(\lambda\lambda\) 4959,5007 emission even at \(t_{\rm max}\)+989.2 days means that the dense CSM must extend out to at least a few \(\approx 10^{17}\) cm. The velocity width of the Mg ii absorption of the CSM shell of 406 km s\({}^{-1}\) (Section 4.3.3; Figure 12) may correspond to the velocity gradient over the CSM shell. That we do not see any change in the width with time suggests that this gradient must be small enough so that the velocity close to the shock is nearly constant. The origin of this gradient is not clear, though. One explanation could be that this is the result of a time-limited eruption, where a Hubble-like outflow is expected after a few dynamical time scales. This has been observed, for instance, in the Eta Carinae Homunculus nebula produced during the great eruption in 1843 (e.g., Smith 2006). The absence of H and He lines throughout the entire evolution reveals (Figure 6) that the CSM shell must be processed gas from the stripped progenitor. Any hydrogen and helium must have been lost before this eruption and reside at much larger radii. Among the \(>200\) H-poor SLSNe known, SN 2018ibb is only the seventh object with spectroscopic evidence of CSM interaction. In previous cases, CSM interaction did not manifest itself via [O iii] in emission (a possible candidate for CSM interaction with O-rich material is PS1-14b) Lunnan et al. (2016). iPTF16eh revealed CSM interaction through a light echo produced in a shell of H-poor and He-poor material (Lunnan et al. 2016). Late-time spectra of iPTF10aagc, 13ehe, 15esb and 16bad (Yan et al. 2015, 2017) and SN 2018bsz (Pursiainen et al. 2022) showed broad Balmer emission lines, suggesting that their progenitors lost their hydrogen envelopes much closer to the time of the terminal explosion than SN 2018ibb and iPTF16eh. ### Constraints on the powering mechanism and progenitor In the following, we contrast SLSN and PISN models with our photometric and spectroscopic datasets and discuss the most likely powering mechanism and progenitor of SN 2018ibb. #### 5.2.1 Modelling the bolometric light curve We first analyse the bolometric light curve. Katz et al. (2013) proposed an exact method for testing whether a light curve is powered by the decay of radioactive material and, therefore, allows us to place an upper limit on any \({}^{56}\)Ni produced during the explosion of SN 2018ibb's progenitor. This method is independent of details in the radiative transport, including the highly uncertain opacity, the velocity distribution and the ejecta geometry. The method is described in detail in Wygoda et al. (2019) and Sharon & Kushnir (2020). 
In brief, the Katz integral is given by \[QT = LT+ET\ {\rm with}\] \[QT = \int_{0}^{t}dt^{\prime}\,t^{\prime}\,Q_{\rm dep}\left(t^{\prime}\right),\quad LT=\int_{0}^{t}dt^{\prime}\,t^{\prime}\,L\left(t^{\prime}\right)\] and \(ET\) is the integrated time-weighted luminosity that would be emitted if no \({}^{56}\)Ni were produced. Assuming that there is no additional source of energy, \(ET\) can be assumed to be negligible.

Figure 21: Evolution of the [O ii] \(\lambda\lambda\) 7320,7330 + [Ca ii] \(\lambda\lambda\) 7291,7324 line complex. **Left**: Up to \(t_{\rm max}\)+565.0 days, the line complex is dominated by [O ii]. The red wing narrows due to an increasing occultation of the CSM shell by the optically thick expanding SN photosphere. **Right**: Between \(t_{\rm max}\)+565.0 days and \(t_{\rm max}\)+637.2 days, the line complex shifts to the blue, consistent with the [Ca ii] line becoming more dominant. All profiles are centred on [O ii] \(\lambda\) 7320.

Figure 22: Comparison of the [O ii] \(\lambda\lambda\) 7320,7330 lines with the O i \(\lambda\) 7773 and O i \(\lambda\) 9263 recombination lines. The similar line profiles, extending to \(\lesssim 5000\) km s\({}^{-1}\), indicate an origin in a highly processed CSM shell. 
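Returning to the Katz-integral test introduced above, a compact numerical sketch is given below. It uses the \({}^{56}\)Ni/\({}^{56}\)Co deposition rates quoted in the next paragraph; the light curve in the sketch is a placeholder generated from the deposition function itself (so the test closes exactly), not the measured bolometric light curve of SN 2018ibb, and for real data one would additionally scan the explosion time:

```python
# Sketch: Katz-integral bookkeeping for a radioactively powered light curve,
# with incomplete gamma-ray trapping parameterised by the escape time t0.
import numpy as np

TAU_NI, TAU_CO = 8.76, 111.4                       # 56Ni and 56Co mean lifetimes [day]

def q_gamma(t, m_ni):                              # gamma-ray production rate [erg/s]
    return m_ni * (6.45 * np.exp(-t / TAU_NI) + 1.38 * np.exp(-t / TAU_CO)) * 1e43

def q_positron(t, m_ni):                           # positron deposition rate [erg/s]
    return 4.64 * m_ni * (-np.exp(-t / TAU_NI) + np.exp(-t / TAU_CO)) * 1e41

def q_dep(t, m_ni, t0):                            # total deposited power [erg/s]
    return q_gamma(t, m_ni) * (1.0 - np.exp(-(t0 / t) ** 2)) + q_positron(t, m_ni)

def tweighted(y, t):                               # int t' y(t') dt' (trapezoidal rule)
    integrand = t * y
    return np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(t))

# Placeholder "observed" declining bolometric light curve (days since explosion, erg/s).
t_obs = np.linspace(150.0, 800.0, 300)
L_obs = q_dep(t_obs, m_ni=30.0, t0=650.0)

# Scan the gamma-ray escape time t0 and compare the normalised curves L/LT and Q_dep/QT.
LT = tweighted(L_obs, t_obs)
residuals = []
for t0 in np.arange(400.0, 900.0, 10.0):
    q = q_dep(t_obs, 1.0, t0)                      # the shape is independent of M(Ni)
    residuals.append((np.max(np.abs(L_obs / LT - q / tweighted(q, t_obs))), t0))
t0_best = min(residuals)[1]

m_ni = L_obs[0] / q_dep(t_obs[0], 1.0, t0_best)    # fix M(Ni) from the absolute scale
print(f"recovered t0 ~ {t0_best:.0f} day, M(Ni) ~ {m_ni:.1f} Msun")
```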
In addition to the MOSFi nickel model that is based on the parameterisation by Nadyozhin (1994), we also select the central-engine models sisn (describing powering by a spin down of a magnetar; Nicholl et al., 2017) and fallback (describing powering by a black hole accreting fallback material; Moriya et al., 2018), and the Chatzopoulos et al. (2012) model to characterise the powering by CSM interaction. We also utilise the more complex models magni combining powering by a magnetar and radioactive \({}^{56}\)Ni (Blanchard et al., 2019) and csmni which combines powering by CSM interaction and \({}^{56}\)Ni. In all models, the photosphere is assumed to have a blackbody spectral energy distribution at all times. While this approximation is adequate during the photospheric phase, it is inadequate at later times when the spectrum is dominated by emission lines and an interaction-powered pseudo-continuum. The spectral energy distribution of the model sisn is modified in the UV to account for absorption by the SN ejecta. As our dataset covers a very long time span, the trapping of \(\gamma\)-ray photons will eventually decrease, which accelerates the fading. All chosen models include a component to account for the loss of trapping (Nicholl et al., 2017). The priors of the model parameters are chosen to cover a broad range of the physically allowed parameter spaces. Their ranges and shapes are similar to Nicholl et al. (2017), Kangas et al. (2022) and Chen et al. (2023), and they are summarised in Table 8. The model parameters are inferred using Bayes' theorem using the nested sampler dynesty. The fits of each model are shown in Figure 24. As the fit covers a wide time interval, each panel in Figure 24 also contains a window zooming in onto the region of maximum light. The marginalised posteriors of the model parameters are summarised in Table 8. Visually, all models capture the rise, peak and decline up to \(t_{\rm{max}}\)+400 days. There are noticeable differences between the fits and the data because of \(i)\) not all models can be correct, \(ii)\) the inherent assumptions of each model, and \(iii)\) the assumption of a blackbody photosphere at all times. Owing to this, none of the models can capture the bumps and undulations (see inset in Figure 24). The significant deviation in the \(z\) band at \(>\)\(t_{\rm{max}}\)+200 days is due to the assumption of a blackbody photosphere. The late-time spectra reveal a blue pseudo-continuum with super-imposed emission lines. The luminescent [O ii]\(\lambda\lambda\) 7320,7330 emission lines are redshifted to the \(z\) band and cause the apparent discrepancy between the data and the models (Figure 6). Besides these general caveats, differences in the fit qualities between the models are visible. The pure magnetar and the black-hole central-engine models predict slightly broader light curves around the peak time, whereas the nickel and CSM models fit the data better (see insets in Figure 24). At epochs later than \(t_{\rm{max}}\)+500 days, the pure central engine models fail to describe the data. The discrepancies grow with time and reach \(\sim 2\) mag per band at \(t_{\rm{max}}\)+706 days. These differences between the model fits are also reflected in the fit statistics that we quantify with the Bayesian evidence, \(Z\), computed with dynesty. The nickel and CSM models have a score of log \(Z\sim 640\), whereas the pure central engine models reach only log \(Z\sim 500\). 
The three nickel models require \(\approx 30\)\(M_{\odot}\) of freshly synthesised nickel to power the entire light curve from \(t_{\rm max}\)\(-\)93 days to \(t_{\rm max}\)+706 days, consistent with our conclusions on the bolometric light curve (Section 5.2.1).

Figure 23: The bolometric light curve of SN 2018ibb from 1800 to 14,300 Å (rest-frame) and fits to the fading light curve using the Katz et al. (2013) method. The entire fading light curve up to 706 days after peak is fully consistent with being powered by 24-35 \(M_{\odot}\) of \({}^{56}\)Ni (dark red), suggesting that SN 2018ibb could be a pair-instability supernova. At about 300 days after peak (i.e., \(\gtrsim 400\) days after the explosion), the \(\gamma\)-ray trapping decreases with time. The loss of trapping is indicated by the difference between the light-red (100% trapping) and dark-red (\(<100\)% trapping) curves.

Figure 24: Modelling of the light curves from the rest-frame UV to the NIR with MOSFiT. All models provide an adequate description of the data up to \(t_{\rm max}\)+400 days, though with differences in the fit quality. At later times, the models diverge. The pure nickel model is the only model that captures the evolution after \(t_{\rm max}\)+500 days and has physically meaningful parameters. The central engine models (magnetar and fallback) predict a flattening of the light curve due to a power-law-shaped heating rate, in contrast to powering by \({}^{56}\)Ni, which has an exponential energy deposition rate. The magnetar+nickel model also captures the full evolution. However, the inferred model parameters would be either physically implausible or require an exotic star, which we deem not viable. Note, the Keck photometry at \(t_{\rm max}\)+539 days and \(t_{\rm max}\)+562 days is not corrected for host contamination owing to the lack of Keck reference images to perform image subtraction. The expected host contribution is \(\approx\) 10%.

Such a large nickel mass can only be produced in a PISN explosion. PISN models predict no remnant after the entire star is obliterated (Fowler & Hoyle, 1964; Barkat et al., 1967; Rakavy et al., 1967), eliminating the magnetar + \({}^{56}\)Ni model. If we were to ignore stellar evolution theory, the rotational energy of the magnetar, which defines how much energy could be converted into radiation, would contribute \(<1\%\) to the total radiated energy (\(\approx 8\times 10^{49}\) erg), whereas 99% of the radiated energy would come from the radioactive decay of \({}^{56}\)Ni (Table 8). Moreover, the inferred spin period of 15.4 ms is much larger than the median spin period of \(\sim 2.6\) ms from the ZTF-1 SLSN sample (Chen et al., 2023a, see also Nicholl et al., 2017 and Blanchard et al., 2020). Even the slowest-spinning SLSN magnetars never exceeded 6-7 ms (Nicholl et al., 2017; Blanchard et al., 2020; Chen et al., 2023b). (All measurements are based on the fiducial assumption of dipole spin-down radiation.) The MOSFiT CSM \(+\)\({}^{56}\)Ni model can also be rejected. The nickel fraction would have a nonphysical value of \(\sim 90\%\). The inferred CSM mass is \(\sim 0.5\)\(M_{\odot}\).
The kinetic energy that could be converted to radiation can be estimated as \(E_{\rm kin}=M_{\rm ej}\,M_{\rm CSM}\,/\,\left[2\left(M_{\rm ej}+M_{\rm CSM} \right)\right]\times\left(v_{\rm ej}-v_{\rm CSM}\right)^{2}\)(Moriya et al., 2018). In the most optimistic case (\(\rm t_{\rm CSM}=0\)), CSM interaction could contribute \(10^{50}\) erg, i.e., again \(\lesssim 1\%\) of the total radiated energy. Note, the Chatzopoulos et al. (2012) CSM model used in MOSFiT is debated (see the discussion in Sorokina et al., 2016; Moriya et al., 2022). Since CSM interaction is a highly complex process (e.g., Chevalier & Fransson, 2017; Tolstov et al., 2017; Suzuki & Maeda, 2021; Takei et al., 2022), a more sophisticated CSM + \({}^{56}\)Ni model is needed to accurately infer the contribution from CSM interaction with light curve modelling. The PISN explosion channel has two further predictions that can be tested. Firstly, producing \(\sim 30\)\(M_{\odot}\) of nickel requires a progenitor star with a mass of \(\sim 120\)\(M_{\odot}\) at the time of the explosion. Secondly, \(\sim 30\)\(M_{\odot}\) of nickel would result in line \begin{table} \begin{tabular}{l c c c c c c c} \hline Parameter & Prior & Magnetar & \(\rm Magnetar+\)\({}^{56}\)Ni & Fallback & CSM & CSM +\({}^{56}\)Ni & \({}^{56}\)Ni \\ & & \({}^{56}_{\rm Ni}\) & & & & \({}^{56}_{\rm Ni}\) & (red) \\ \hline \multicolumn{10}{c}{**Fitted properties**} \\ \hline \multicolumn{10}{c}{General} \\ \hline ejecta mass \(M_{\rm ej}\,\left(M_{\odot}\right)\) & log \(\mathcal{U}\left(1,300\right)\) & \(86^{+12}_{-1}\) & \(55^{+18}_{-5,15}\) & \(55^{+18}_{-5}\) & \(75^{+1}_{-4}\) & \(63\pm 9\) & \(36^{+4}_{-1}\) & \(54^{+13}_{-1}\) \\ explosion date \(t_{\rm eng}\) (day) & \(\mathcal{U}\left(-200,0\right)\) & \(-19^{+2}_{-2}\) & \(-22\pm 2\) & \(-21^{+2}_{-2}\) & \(-11\pm 2\) & \(-26^{+2}_{-2}\) & \(-19^{+3}_{-1}\) & \(-41\pm 6\) \\ \(\gamma\gamma\)-ray” opacity \(\gamma_{\rm ej}\,\left(\rm cm^{2}\,g^{-1}\right)\) & \(\mathcal{U}\left(10^{-2},10^{4}\right)\) & \(0.012\pm 0.002\) & \(15^{+48}_{-15}\) & \(45^{+12}_{-13}\) & \(0.010\pm 0.001\) & & \(\sim 25^{+11}_{-11}\) & \(26^{+108}_{-24}\) \\ optical opacity \(\epsilon\left(\rm cm^{2}\,g^{-1}\right)\) & \(\mathcal{U}\left(0.01,0.2\right)\) & \(0.17\pm 0.02\) & \(0.05\pm 0.02\) & \(0.05\pm 0.02\) & \(0.19\pm 0.01\) & \(\sim 0.06\pm 0.01\) & \(0.02\pm 0.01\) \\ scaling velocity \(v_{\rm neb}\,\left(\rm km\,s^{-1}\right)\) & \(\mathcal{U}\left(1000,10000\right)\) & \(5166^{+11}_{-103}\) & \(5667^{+11}_{-112}\) & \(5679^{+108}_{-115}\) & \(5580^{+250}_{-300}\) & \(5765^{+108}_{-112}\) & \(5774^{+108}_{-146}\) & \(4192^{+253}_{-253}\) \\ white noise parameter \(\sigma\) & log \(\mathcal{U}\left(10^{-3},100\right)\) & \(0.25\pm 0.01\) & \(0.21\pm 0.01\) & \(0.21\pm 0.01\) & \(0.26\pm 0.01\) & \(0.20\pm 0.01\) & \(0.21\pm 0.01\) & \(0.20\pm 0.01\) \\ \hline \multicolumn{10}{c}{Magnetar model} \\ \hline magnetic field \(B_{\rm z}\,\left(10^{41}\rm G\right)\) & log \(\mathcal{U}\left(0.01,20\right)\) & \(0.72^{+0.05}_{-1.05}\) & \(1.45^{+1.05}_{-1.05}\) & & & & & & \\ neutron-star mass \(M_{\rm MS}\,\left(M_{\odot}\right)\) & \(\mathcal{U}\left(1,2.2\right)\) & \(2.1\pm 0.1\) & \(1.45^{+1.05}_{-4.5}\) & & & & & \\ initial spin period \(P_{0}\,\left(\rm ms\right)\) & \(\mathcal{U}\left(1,20\right)\) & \(1.0^{+0.1}_{-0.0}\) & \(15.4^{+2.05}_{-2.5}\) & & & & & \\ \hline \multicolumn{10}{c}{\({}^{56}\)Ni model} \\ \hline \multicolumn{10}{c}{**Model fraction \(f_{\rm SN}\)**} \\ \hline \multicolumn{10}{c}{Fallback model} \\ 
\hline luminosity \(L_{\rm i}\,\left(10^{56}\rm erg\,s^{-1}\right)\) & log \(\mathcal{U}\left(10^{-4},10^{3}\right)\) & \(\cdots\) & \(\cdots\) & \(4.5\pm 0.1\) & & & & \\ transition time \(t_{\rm w}\) (day) & log \(\mathcal{U}\left(10^{-4},10^{4}\right)\) & \(\cdots\) & \(\cdots\) & \(\cdots\) & \(0.003^{+0.04}_{-0.00}\) & & & \\ \hline \multicolumn{10}{c}{CSM} \\ \hline CSM mass \(M_{\rm CSM}\,\left(M_{\odot}\right)\) & log \(\mathcal{U}\left(0.01,3000\right)\) & \(\cdots\) & \(\cdots\) & \(\cdots\) & \(30\pm 1\) & \(0.4^{+0.7}_{-0.7}\) & \(\cdots\) \\ CSM density \(\left(10^{-11}\rm cm^{-3}\right)\) & log \(\mathcal{U}\left(10^{-11}\rm cm^{-3}\right)\) & \(\cdots\) & \(\cdots\) & \(\cdots\) & \(4.4^{+0.1}_{-0.1}\) & \(3.2^{+1.6}_{-2.3}\) & & \\ power-law index of the CSM & \(\mathcal{U}\left(0,2\right)\) & \(\cdots\) & \(\cdots\) & \(\cdots\) & \(1.3\pm 0.2\) & \(1.8\pm 0.1\) & \(\cdots\) \\ density profile \(s\) & \(\mathcal{U}\left(8,12\right)\) & \(\cdots\) & \(\cdots\) & \(\cdots\) & \(\cdots\) & \(11.2\pm 0.4\) & \(10.3\pm 0.6\) & & \\ density profile \(\epsilon\) & fixed & \(\cdots\) & \(\cdots\) & \(\cdots\) & \(\cdots\) & \(\cdots\) & \(0\) & \(0\) & \(\cdots\) \\ density profile \(\delta\) & \multicolumn{10}{c}{** blanketing by iron-group elements that absorb most of the flux blueward of \(\sim 5000\) A (at late times). The nickel model requires an ejecta mass of \(55^{+34}_{-15}\)\(M_{\odot}\); lower than the 120 \(M_{\odot}\) required from the PISN models but only in tension by merely \(1.9\sigma\). Spectra of SN 2018ibb show significant flux blueward of \(\sim 5000\) A even at \(t_{\rm max}\)+637 days. This is in contradiction with the expectations of PISN models. However, in Sections 4.3.4 and 5.1 we showed that CSM interaction contributes to the observed emission. In Section 5.2.4, we investigate whether CSM interaction could produce the blue excess. We emphasise that the necessity for a large ejecta and nickel mass is determined by the long rise, the high peak luminosity and the slow decline. It does not depend on the availability of the data in the blue bands. To corroborate that, we repeated the fit with MOSFIT using only data in the \(r\) band and in redder filters. Again, the fit returns \(M_{\rm ej}\sim 54\)\(M_{\odot}\) and \(M_{\rm Ni}\sim 35\)\(M_{\odot}\). The nickel fractions of the fits with and without data blueward of the \(r\) band are \(60\pm 20\%\) (Table 9). PISN models with \(M_{\rm Ni}\sim 30\)\(M_{\odot}\) have nickel fractions of \(\sim 26\%\) (e.g., Kasen et al., 2011; Gilmer et al., 2017; Kozyreva et al., 2017, see also Table 9). Our fitted value is larger but in tension by merely \(1.7\sigma\) and, therefore, not statistically significant (see also Section 5.2.3). While the pure magnetar model can be excluded on statistical grounds, the inferred properties are also nonphysical. The magnetar models push the parameter space to an extreme corner (\(M_{\rm NS}\sim 2.2\)\(M_{\odot}\), \(P_{0}=1\) ms and \(M_{\rm ej}\sim 80\)\(M_{\odot}\)) to squeeze out as much energy as possible from the neutron star. Furthermore, the lower limit on the progenitor mass (\(M_{\rm progenitor}>M_{\rm ej}+M_{\rm NS}\)) exceeds 82 \(M_{\odot}\). Explosion models predict that such a massive star leaves behind a black hole but not a neutron star (Heger and Woosley, 2002). Furthermore, the H-poor SLSNe, which are thought to be powered by a magnetar, have ejecta masses of \(\sim 5\)\(M_{\odot}\). 
The most massive ejecta reach a few times 10 \(M_{\odot}\) but never exceed 50 \(M_{\odot}\) (Nicholl et al., 2017; Blanchard et al., 2020; Tinyanont et al., 2022; Chen et al., 2023; West et al., 2023). To verify that our results are robust, we also fit the observations using the software package Redback (Sarin et al., 2023), which implements these different models. We fit the multi-band data in magnitude space with a Gaussian likelihood function and the exact same priors, and utilise the nestle1 sampler implemented in bilby (Ashton et al., 2019; Romero-Shaw et al., 2020). We infer parameters consistent with those from MOSFiT. Our posteriors are reported in Table F.1. We also fit the multi-band data with the pure nickel model where the opacities \(\kappa\) and \(\kappa_{\gamma}\) are fixed to 0.07 and 0.027, respectively. This fit agrees with the previous conclusions and reveals that the degeneracy between \(\kappa\) and \(M_{\rm ej}\) could yield a lower-than-expected ejecta mass if the value of \(\kappa\) is constrained by other means, e.g., theoretical models. We will present further analysis with different models and Redback in a forthcoming publication.

Footnote 1: [http://kylebarbarary.com/nestle/](http://kylebarbarary.com/nestle/)

Previously, Eftekhari et al. (2021) reported that SN 2018ibb can be modelled with the MOSFiT slsn model using the _Gaia_ data and a handful of observations by PanSTARRS. We caution against this practice. Even with our comprehensive data set, only the data after \(t_{\rm max}\)+600 days enabled us to break the degeneracy between the central-engine models and the nickel-powered models. Furthermore, the best-fit magnetar properties presented here and in Eftekhari et al. (2021) are significantly different, demonstrating that datasets with a large wavelength coverage and a wide time span are required to determine the powering mechanism of SLSNe. Our results echo the conclusions from Moriya et al. (2017), who performed a parameter study of magnetar and nickel models, that the two models could produce indistinguishable light curves if the time coverage is too short. These authors also stressed that observations after \(t_{\rm max}\)+700 days are needed to break the degeneracy in the light curve modelling.

#### 5.2.3 Matching the lightcurve with PISN templates

Motivated by the lightcurve fits, we compare the bolometric light curve to the PISN templates from Kasen et al. (2011), Gilmer et al. (2017) and Kozyreva et al. (2017). The grid of models from Kasen et al. (2011) comprises the metal-free helium models from Heger and Woosley (2002). The Gilmer et al. (2017) and Kozyreva et al. (2017) models assume a metallicity of 7% solar. Very massive stars in low-metallicity environments (\(Z\sim 0.07\) Z\({}_{\odot}\)) lose their hydrogen envelopes during the early evolution, assuming up-to-date wind mass-loss rates. The details about the adopted mass-loss rates are described in Ekstrom et al. (2012) and Yussof et al. (2013). Therefore, these stars are hydrogen-free by the time of the pair-instability episode, and the helium-core models from Heger and Woosley (2002) are a good representation of these explosions. This is in agreement with the models from Gilmer et al. (2017). Their suite of models, which were computed self-consistently, are initially hydrogen-rich; however, owing to mass loss, the highest-mass models become hydrogen-free by the time of the explosion. Among the suitable models, we chose the P250 template from Gilmer et al. (2017),
and the He100, He120, He125 and He130 templates from Kasen et al. (2011) (where the number stands for the helium core mass in \(M_{\odot}\)). The models have nickel yields between 5.8 and 44 \(M_{\odot}\). The vital properties of these models are presented in Table 9. The P250 model starts with an initial mass of 250 \(M_{\odot}\). At the time of the explosion, a helium core of 127 \(M_{\odot}\) has formed, which is similar to the helium models He125 and He130. We note that the P250 model not only loses its hydrogen envelope but also most of its helium layer (total mass of 2.6 \(M_{\odot}\) before the loss of the He layer) and ends up as a bare carbon-oxygen core with a tiny helium fraction by the time of the pair-instability explosion. In contrast to that, the He125 and He130 models evolve without mass loss and retain 2.4 \(M_{\odot}\) and 2.8 \(M_{\odot}\) of helium, respectively. To build the bolometric light curves of the PISN models, we use the radiation-hydrodynamics code STELLA (Blinnikov et al., 2006). The slight difference of the P250Ni34 light curve between our calculation and that in Gilmer et al. (2017) and Kozyreva et al. (2017) is caused by the different versions of STELLA used in the two studies. A relevant discussion can be found in Kozyreva et al. (2020). The re-calculated light curves of the helium models are consistent with those calculated with the spectral synthesis code SEDONA (Kasen et al., 2011).

\begin{table} \begin{tabular}{c c c c c c c} \hline \hline Name & \(M\)(ZAMS) & \(M\)(He) & \(M\)(Ni) & Metallicity & \(v_{\rm ejecta}\) & \(E_{\rm kin}\) \\ & (\(M_{\odot}\)) & (\(M_{\odot}\)) & (\(M_{\odot}\)) & (\(Z/Z_{\odot}\)) & (km s\({}^{-1}\)) & (\(10^{51}\) erg) \\ \hline He100 & 205 & 100 & 6 & 0.01 & 8400 & 42 \\ He120 & 242 & 120 & 26 & 0.01 & 10000 & 71 \\ He125 & 251 & 125 & 34 & 0.01 & 10300 & 79 \\ He130 & 260 & 130 & 44 & 0.01 & 10600 & 87 \\ P250 & 250 & 127 & 25 & 0.07 & 7500 & 86 \\ P250Ni34 & 250 & 127 & 34 & 0.07 & 8850 & 82 \\ \hline \end{tabular} \end{table} Table 9: Summary of PISN model parameters

In Figure 25, we compare the bolometric light curve of SN 2018ibb to those computed for a series of PISN models. The PISN templates with nickel yields between 34 and 44 \(M_{\odot}\) (He125, He130, P250Ni34) provide excellent matches to the rise, the peak, and the fading parts of the bolometric light curve of SN 2018ibb. While the He125 and P250Ni34 models describe the rise and peak well, they systematically underestimate the late-time flux. Such a deviation at the late epochs may not necessarily refute these two models since the observed bolometric flux of SN 2018ibb may include a time-varying contribution from CSM interaction. As we show in Section 5.2.4, this contribution is not negligible and could boost the luminosity by a few tenths of a dex. The templates with \(M(\mathrm{Ni})\sim 25\)\(M_{\odot}\) (He120 and P250) also provide reasonable matches to the data, even though they exhibit a faster rise and produce peak luminosities that are \(\approx\) 0.3 dex fainter. If CSM interaction contributes at all times, the apparent tension might be alleviated. The PISN model He100, which produces the smallest amount of Ni in our set, generates a light curve that is incompatible with the observations. The peak bolometric luminosity of such a model is 0.8 dex lower than the observed value, implying that a different energy source must account for \(>84\%\) of the observed peak luminosity.
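As a quick check of the dex arithmetic used here: a template that falls short of the observed peak by \(\Delta\) dex leaves a fraction \(1-10^{-\Delta}\) of the peak luminosity to be supplied by another energy source. A small illustrative helper (our own, not part of any fitting code):

```python
def missing_fraction(deficit_dex: float) -> float:
    """Fraction of the observed peak luminosity that an additional energy source
    must supply if a model underpredicts the peak by `deficit_dex`."""
    return 1.0 - 10.0 ** (-deficit_dex)

for deficit in (0.3, 0.8):
    print(f"{deficit:.1f} dex fainter -> other source must supply {missing_fraction(deficit):.0%}")
# 0.3 dex -> ~50% (the He120/P250 case), 0.8 dex -> ~84% (the He100 case)
```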
The match of He125, He130 and P250Ni34 with the data also addresses an issue in fitting the multi-band light curve with MOSFiT. In Section 5.2.2, we reported that the inferred ejecta masses of \(55_{-19}^{+34}\)\(M_{\odot}\) are possibly too low. The excellent match of the entire light curve with the He125, He130 and P250Ni34 models demonstrates that this tension is not critical.

#### 5.2.4 Late-time spectra of SN 2018ibb compared to PISN models

Jerkstrand et al. (2016) computed spectra of the He100 [\(M(\mathrm{Ni})=5.8\)\(M_{\odot}\)] and He130 [\(M(\mathrm{Ni})=44\)\(M_{\odot}\)] PISN models at 400 and 700 days after the explosion.17 To compare these spectra with the observations, we need to constrain the poorly measured explosion date of SN 2018ibb. The bolometric light curves of the He100 and He130 models peak at \(\approx\) 130 rest-frame days. Assuming that SN 2018ibb's bolometric light curve peaked up to 20 days before the peak in the \(glr\) band, we can scale the computed spectra to the epochs of the observed spectra via \(\exp\left(\Delta t/\tau_{\mathrm{Co}}\right)\), where \(\Delta t\) is the phase difference between the observed and computed spectra and \(\tau_{\mathrm{Co}}\) is the mean lifetime of \({}^{56}\)Co.

Footnote 17: The model spectra of the P250 templates will be presented in a forthcoming paper by Kozyreva et al. (in prep.).

Figure 26 shows the observed spectra of SN 2018ibb at \(t_{\mathrm{max}}\)+286.7 days (top row) and \(t_{\mathrm{max}}\)+637.3 days (bottom row) in black. The upper left and the bottom left panels compare the earlier and the later spectra with the phase-adjusted He100 model (red) at \(t_{\mathrm{max}}\)+400 days and \(t_{\mathrm{max}}\)+700 days, respectively. The right column presents the same comparison to the He130 model spectra. The phase-adjusted spectra have a shaded band to indicate the impact of the uncertain peak time of the bolometric light curve on the model flux. We selected the observed spectra at these specific epochs to minimise the phase correction and cover a wide wavelength range. The He100 model fails to match the spectra of SN 2018ibb at both \(t_{\mathrm{max}}\)+286.7 and \(t_{\mathrm{max}}\)+637.3 days. The predicted emission lines are significantly weaker compared to the data, and the relative strength of the features does not match the shape of the observed spectra. In addition, the model spectra also exhibit lines that are significantly narrower compared to the observed spectra since the He100 model yields a lower ejecta velocity (Table 9). The He130 model provides a better match. At \(t_{\mathrm{max}}\)+286.7 days, the model spectrum describes the observed spectrum redward of 6000 Å well, in terms of the absolute and relative strength of the features as well as the line widths. The computed spectrum also matches the observed NIR spectrum, albeit the strongest predicted feature at 1.2 \(\mu\)m (Fe i and Si i) is redshifted to a region that is strongly affected by atmospheric absorption (indicated by the black-shaded region in the upper half of the figure). The match at \(t_{\mathrm{max}}\)+637.3 days is less convincing than that at \(t_{\mathrm{max}}\)+286.7 days. While the model reproduces Figure 25: Comparison of SN 2018ibb with the PISN models P250 and P250Ni34 from Kozyreva et al. (2017) (left panel) and the He100, He125, and He130 from Heger & Woosley (2002) (right panel).
Templates with nickel masses of 34–44 \(M_{\odot}\) are required to describe the entire bolometric light curve from \(t_{\mathrm{max}}\)\(-\)93 to \(t_{\mathrm{max}}\)+706 days. Models with \(M(\mathrm{Ni})=25\)\(M_{\odot}\) systematically underestimate the observed bolometric light curve, but they could still be viable if CSM interaction contributes significantly throughout the evolution. the [O ii]+[Ca ii] at 7300 A, the observed spectrum shows an elevated continuum level and stronger [O i]\(\lambda\lambda\) 6300, 6364 in emission. The observed spectrum also shows prominent O i\(\lambda\) 7773 in emission that is not generated by the model. However, in Section 5.1 we showed that O i\(\lambda\) 7773 is produced by the CSM interaction. Blueward of 6000 A, the discrepancy between the observed and computed spectra is considerable in both epochs. A similar excess in the blue part of the spectrum was observed in other slow-evolving SLSNe (e.g., Jerkstrand et al., 2017), and it was used as a critical piece of evidence against the PISN interpretation (e.g., Dessart et al., 2013; Nicholl et al., 2013). However, in Sections 4.3.3, 4.3.4 and 5.1, we showed that SN 2018ibb is not exclusively powered by \({}^{56}\)Ni. SN 2018ibb's progenitor had an eruptive mass-loss episode shortly before the explosion. The interaction of the SN ejecta with CSM contributes to the observed light curve via discrete emission lines, and it could even produce a blue pseudo-continuum similar to that seen in interaction-powered SNe (Silverman et al., 2013; Hosseinzadeh et al., 2017; Perley et al., 2022)18. This raises the questions of whether the blue excess in SN 2018ibb is similar to that seen in interaction-powered SNe and how large the contribution of CSM interaction is to the bolometric light curve. Footnote 18: The pseudo-continuum in Type Ibn SNe is the product of the blending of thousands of iron emission lines (e.g., Dessart et al., 2022). In Figure 27, we further inspect the spectrum of SN 2018ibb at \(t_{\rm max}\)+637 days against the phase-adjusted spectrum of the He130 model. We attempt to decompose the spectrum of SN 2018ibb into two elements, namely a PISN and an ejecta-CSM interaction component. The CSM component is represented by a spectrum of the Type Icn SN 2021csp obtained at \(\sim 52.7\) days after the explosion from Perley et al. (2022).19 Its flux scale is scaled so that the sum of the PISN and CSM components (green) matches the shape of SN 2018ibb's pseudo-continuum. This approach is similar to that in Ben-Ami et al. (2014), where these authors used a spectrum of a Type IIn SN to deduce that the ejecta of the Type Ic SN 2010mb interacted with a large amount of H-free circumstellar material. Indeed, this toy model captures the general shape of SN 2018ibb, suggesting that a considerable fraction of the flux blueward of 6000 A is produced by the CSM interaction. Most of the emission lines in the Figure 26: Late-time spectra of SN 2018ibb at 287 and 637 days after its maximum. Overlaid are the computed PISN spectra from Jerkstrand et al. (2016) scaled to these epochs. The shaded region indicates the uncertainty of the explosion time. The He130 model provides an adequate description of the emission redward of 6000 Å at \(t_{\rm max}\)+286.7 days, but a worse match for the second epoch. The observed spectra show a considerable excess at shorter wavelengths that is not expected from the model spectra. 
We argue that the blue excess is due to the interaction of the SN ejecta with circumstellar material, which is not included in existing PISN models. The He100 model matches the observation of neither epoch. The vertical bars at the top of each panel indicate the location of telluric features. blue and O i \(\lambda\) 7773 feature were not observed in the spectrum of SN 2021csp. However, we have shown that some of the observed lines in SN 2018ibb, e.g., O i \(\lambda\) 7773,9262, [O ii] \(\lambda\lambda\) 7320,7330 and [O iii] \(\lambda\lambda\) 4959,5007, are generated by the CSM interaction. Others, e.g., [O i] \(\lambda\lambda\) 6300,6364 and Mg i \(\lambda\) 4571, are likely formed in the unshocked SN ejecta or the contact discontinuity (cool-dense shell) between the SN ejecta and the CSM (Section 5.1). Assuming SN 2018ibb's progenitor is similar to the He130 star model, we can roughly estimate the fractions of the observed bolometric flux that have been produced by the nickel decay and the CSM interaction. The bolometric luminosity calculated at \(t_{\rm max}\)+286.7 days covers the wavelength range from 3020 to 14,250 Å (rest-frame). The phase-adjusted model spectrum from Jerkstrand et al. (2016) accounts for 70% of the bolometric flux, i.e., the nickel-powered light curve would be 0.1 dex fainter than the observed bolometric light curve. At \(t_{\rm max}\)+637.3 days, the observed bolometric luminosity covers the range from 3930 Å to 8500 Å (rest-frame). The phase-adjusted PISN spectrum accounts for only 21% of the observed bolometric flux, i.e., the Ni-powered light curve would be 0.7 dex fainter. To illustrate that, we show in Figure 28 the observed bolometric light curve (solid blue lines) and the fraction of the observed bolometric light curve that can be attributed to the He130 model (dashed red lines). However, there is a critical detail that we need to take into account before drawing a conclusion. The bolometric light curves of the P250 and He100-He130 models extend to 50,000 Å. Therefore, our observed bolometric light curve could miss a substantial fraction of the true bolometric flux. The Jerkstrand et al. (2016) model spectra cover the wavelength range from the far UV to 25,000 Å, and figure 13 in Jerkstrand et al. (2016) shows the fraction of light emitted between 25,000 Å and 50,000 Å, allowing us to estimate the missing IR fractions. At \(t_{\rm max}\)+286.7 days, the missing IR fraction is 0.14 dex. This fraction increases to \(\sim 0.39\) dex at \(t_{\rm max}\)+637.3 days, due to the shorter wavelength coverage of the observed bolometric light curve and an increased mid-IR contribution from the PISN model. The dotted green line in Figure 28 shows the estimated Ni-powered light curve of SN 2018ibb after correcting for the missing IR flux. The IR correction fortuitously compensates for most of the observed bolometric flux lost to CSM interaction, corroborating that even with significant CSM interaction a total mass of 25-44 \(M_{\odot}\) of \({}^{56}\)Ni is still needed to power the light curve and spectra. A progressively increasing contribution from CSM interaction to the bolometric flux is not a contrived scenario. If the shock is radiative, as is expected for a high metallicity and a dense CSM, then the luminosity from the shock is \(\sim\dot{M}\,\Delta v_{\rm rel}^{2}/2\), where \(\Delta v_{\rm rel}\) is the relative velocity of the ejecta and the CSM. If the density gradient, \(n\), of the ejecta is steep, the shock velocity decreases only slowly, \(v_{\rm s}\propto t^{-(3-s)/(n-s)}\) for a CSM with an \(r^{-s}\) density profile, and the shock luminosity will only be a slowly decreasing function of time (Chevalier & Fransson, 2017). Because the radioactive input decreases exponentially, it is expected that the shock contribution will increase relative to the radioactively powered input.
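To see why the relative weight of the interaction grows, the following purely illustrative sketch (arbitrary normalisations and power-law index, not fitted values) contrasts a slowly declining shock luminosity with the exponentially fading radioactive input:

```python
import numpy as np

TAU_CO = 111.4                                       # mean lifetime of 56Co (days)
t = np.array([200.0, 300.0, 450.0, 600.0, 700.0])    # days since explosion

# radioactive input: exponentially fading (arbitrary normalisation)
L_ni = 1.0e44 * np.exp(-t / TAU_CO)

# radiative-shock input: a slowly declining power law, standing in for
# L_shock ~ 0.5 * (swept-up mass rate) * dv_rel^2 with a slowly decreasing shock velocity
L_shock = 3.0e42 * (t / 200.0) ** -0.3

for ti, l_ni, l_sh in zip(t, L_ni, L_shock):
    print(f"t = {ti:4.0f} d   L_shock / L_Ni = {l_sh / l_ni:5.2f}")
# the ratio grows from ~0.2 at 200 d to >10 at 700 d: even a modest interaction
# luminosity becomes a significant fraction of the total at late times
```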
#### 5.2.5 [Co ii] \(\lambda\) 1.025 \(\mu\)m

The NIR spectra of SN 2018ibb after \(t_{\rm max}\)+300 days reveal an emission line at 1.025 \(\mu\)m that we interpret as [Co ii] (a triplet of individual lines at 1.019, 1.025 and 1.028 \(\mu\)m, which result from the 9-1, 10-2, and 11-3 transitions as sorted from higher to lower energies, respectively; Figure 29). The line luminosities are (2.9\(\pm\)0.8)\(\times 10^{40}\) erg s\({}^{-1}\) and (5.4\(\pm\)1.4)\(\times 10^{40}\) erg s\({}^{-1}\) at \(t_{\rm max}\)+352.6 and \(t_{\rm max}\)+377.5 days, respectively. Assuming optically thin LTE, we can convert the line luminosity to a (temperature-dependent) Co ii mass. The line luminosity of the [Co ii] \(\lambda\) 1.025 \(\mu\)m multiplet can be written as the sum of the individual transitions \[L\left(\mathrm{Co\,ii}\right)=N_{9}\,A_{9-1}\,E_{9-1}+N_{10}\,A_{10-2}\,E_{10-2}+N_{11}\,A_{11-3}\,E_{11-3},\] where \(N_{u}\) is the total number of ions in the upper state \(u\), \(A_{u-l}\) the transition rate for spontaneous emission from the upper state \(u\) to the lower state \(l\), and \(E_{u-l}\) the energy of the transition.

Figure 27: The nebular spectrum of SN 2018ibb (black) at \(t_{\rm max}\)+637.3 days and its decomposition into a CSM interaction (blue) and PISN component (red). This decomposition reveals that the shape of the spectrum in the blue is similar to the pseudo-continuum seen in interaction-powered SNe (Type Ia-CSM, Ibn, and IIn). The emission lines in the blue arise either from CSM interaction or from material in the CSM shell excited by the SN light. The dotted vertical lines indicate the location of strong galaxy emission lines.

Figure 28: The observed late-time bolometric light curve (solid blue lines) and the fraction of light that could be attributed to \({}^{56}\)Ni after accounting for CSM interaction (dashed, red). The dotted green curves show the \({}^{56}\)Ni light curve after adding the missing IR flux (up to 5 \(\mu\)m). The IR correction pushes the light curves back to the regime of PISN models that produce 25–44 \(M_{\odot}\) of \({}^{56}\)Ni. Even in the case of a substantial contribution from CSM interaction, a total amount of 25–44 \(M_{\odot}\) of \({}^{56}\)Ni appears to be essential to power SN 2018ibb.
The initial nickel mass is a factor of \(\exp\left(t/\tau_{\mathrm{Co}}\right)\gtrsim\exp\left(450/111\right)\simeq 60\) larger, where \(t\) is the time since explosion and \(\tau_{\mathrm{Co}}\) is the mean lifetime of \({}^{56}\)Co. Averaging over the line luminosities of the two epochs, the inferred \({}^{56}\)Ni mass is \(\gtrsim 30~{}M_{\odot}\) if \(T\approx 5000\) K. This estimate is consistent with the inferred \({}^{56}\)Ni mass from the lightcurve modelling (Sections 5.2.1, 5.2.2, 5.2.3). However, lower values of the nickel mass would be expected if the temperature is higher (\(6~{}M_{\odot}\) at \(T=10,000\) K). For temperatures below \(\sim\)3500 K, the nickel mass becomes unphysically large, \(>100~{}M_{\odot}\). We note that [Co ii] \(\lambda\,1.025~{}\mu\)m can be blended with S ii \(\lambda\,1.032~{}\mu\)m20, indicated by the hatched region in Figure 29. Footnote 20: This feature consists of six lines between 1.0287 and 1.0370 \(\mu\)m. This Ni-mass estimate assumes that the transitions are optically thin. The Sobolev optical depth of the 9-1 transition line in LTE is (Jerkstrand et al. 2017, ignoring stimulated emission): \[\tau_{9,1} =A_{9,1}\,\lambda_{9,1}^{3}\,\frac{1}{8\pi}\frac{g_{9}}{g_{1}}n_{ 1}I\] \[\approx 0.08\times\left(\frac{M\left(\mathrm{Co\,ii}\right)[450d]}{1~ {}M_{\odot}}\right)\frac{x_{1}}{f},\] where \(\lambda_{9,1}\) is the wavelength of the emitted photon, \(g_{n}\) is the multiplicity of the \(n\)th state, \(n_{1}\) is the number density of atoms in the ground state, \(x_{1}\) is the fraction of Co II ions in the ground state, and \(f\) is the filling factor for the \({}^{56}\)Ni zone. In LTE at 5000 K \(x_{1}\) is \(\approx 0.5\), whereas at lower temperatures and/or in NLTE \(x_{1}\) is typically higher (towards unity). A typical CCSN has a characteristic filling factor of \(f\sim 0.1\) for any given zone, which means that \(0.5~{}M_{\odot}\) of \({}^{56}\)Co are optically thin at \(\sim 450\) days. For SLSNe, filling factors for the oxygen zones have been derived to be \(f\approx 10^{-3}-10^{-2}\)(Jerkstrand et al. 2017). If these filling factors also hold for the \({}^{56}\)Ni zone of SN 2018ibb, then the Co II lines would be optically thick at \(\sim 450\) days, and determining a mass from much to impossible at that time (Jerkstrand 2017). One may note that numerical simulations of PISNe show little clumping or mixing of the inner material (Chen et al. 2020), and the low filling factors derived for other SLSNe may be due to mixing from the central engine (e.g., Suzuki & Maeda 2021) or compression by circumstellar interaction (van Marle et al. 2010). In the stripped-envelope-supernova models from Jerkstrand et al. (2015), [Co ii] \(\lambda\,1.025~{}\mu\)m is the strongest predicted line from cobalt. The second strongest Co feature is a blend of two lines at 9338 and 9344 A. This [Co ii] feature could be blended with the red wing of the O i \(\lambda\,9263\) recombination line. In optically thin LTE, the expected line ratio between [Co ii] \(\lambda\,9340\) and [Co ii] \(\lambda\,1.025~{}\mu\)m is between \(0.5-1\) for a wide range of plausible temperatures. To examine whether [Co ii] \(\lambda\,9340\) could be present in the spectrum at \(t_{\mathrm{max}}\)+387 days, we show in Figure 29 in a Gaussian centred at 9340 A that has either the same integrated luminosity as [Co ii] \(\lambda\,1.025~{}\mu\)m or a luminosity that is 50% smaller. Clearly, [Co ii] \(\lambda\,9340\) is not present in our data at those luminosities. 
In the He130 model of Jerkstrand et al. (2016) at 400 days after the explosion, neither of the [Co ii] lines is present in any significant strength, as they are absorbed by line blocking extending into the NIR. Under conditions with less line blocking (as in the Jerkstrand et al. 2015 CCSN models), the [Co ii] \(\lambda\,1.025~{}\mu\)m line can still be visible, also because iron has few strong emission lines around this particular wavelength. The same cannot be said about the 9340 Å region, where iron is stronger. In a PISN ejecta, the densities are about 100 times higher at a given epoch, and the NIR region is still largely opaque at 400 days after the explosion. To explain the observed [Co ii] \(\lambda\,1.025~{}\mu\)m line but the absence of the [Co ii] \(\lambda\,9340\) feature, we need to call upon absorption of the 9340 Å line but not of the 1.025-\(\mu\)m line, i.e., the He130 model reproduces the spectral shape near 9340 Å but not at 1.025 \(\mu\)m if SN 2018ibb is a PISN. While the association of [Co ii] \(\lambda\,1.025~{}\mu\)m could be the smoking-gun signature that SN 2018ibb is a PISN, we caution that the interpretation hinges on the detection of a single line. An IR spectrum with NIRSpec (Jakobsen et al. 2022) aboard the _James Webb Space Telescope_ could resolve such ambiguity. It is the only instrument that can provide an uncensored view from 1 to 5 \(\mu\)m. Such a spectrum could reveal, for instance, Co, Fe and Ni lines at \(>2.7~{}\mu\)m as seen in the Type Ia SN 2021aefx (Kwok et al. 2023).

Figure 29: Zoom-in of the region from 9000 Å to 10,500 Å at \(t_{\mathrm{max}}\)+377.5 days. Cobalt has its strongest feature at 1.025 \(\mu\)m. Its tentative detection translates to a \({}^{56}\)Ni mass of \(\gtrsim 30~{}M_{\odot}\), consistent with the light curve modelling. The second strongest cobalt feature is at 9340 Å. Its location is indicated by a fiducial Gaussian centred at 9340 Å. Its integrated luminosity is expected to be between 50 and 100% of [Co ii] \(\lambda\,1.025~{}\mu\)m. The absence of [Co ii] \(\lambda\,9340\) is not an argument against the detection of [Co ii] \(\lambda\,1.025~{}\mu\)m (see Section 5.2.5 for details). Lines from other elements that could blend with the [Co ii] lines are marked. Regions of strong atmospheric absorption are indicated by vertical bars at the top of the figure.

### Comparison to other slow-evolving SLSNe

Among the \(\gtrsim 200\) H-poor SLSNe known, only seven objects belong to the phenomenological subclass of slow-evolving SLSNe: SN 1999as (Hatano et al., 2001), SN 2007bi (Gal-Yam et al., 2009), PS1-11ap (McCrum et al., 2014), PTF12dam (Nicholl et al., 2013), LSQ14an (Inserra et al., 2016), PS1-14bj (Lunnan et al., 2016), and SN 2015bn (Nicholl et al., 2016).2 In the following sections, we compare the photometric and spectroscopic properties of SN 2018ibb to those of the historical slow-evolving SLSNe to comprehensively examine its exceptional properties. We omit SN 1999as from this analysis because its light curve and spectra were never published.

Footnote 2: The ZTF-1 SLSN sample contains a further possible slow-evolving SLSN. SN 2018lx has a rise (\(\tau_{1/e,\rm{rise}}=60.5^{+8.2}_{-1.2}\) days) and a decline (\(\tau_{1/e,\rm{decline}}=108.8^{+10.04}_{-13.2}\) days) time scale comparable to SN 2018ibb (Table 5). The peak absolute magnitude is 0.2 mag brighter than that of SN 2018ibb (Chen et al., 2023).
Owing to its high redshift of \(z=0.44\), the light curve spans a short time interval before SN 2018lx faded below the detection threshold, and the quality of the spectra is significantly lower compared to that of SN 2018ibb. Therefore, we exclude this SLSN from the comparison. We utilise the multi-band and bolometric light curves and host-subtracted spectra of LSQ14an presented in Inserra et al. (2017) and Jerkstrand et al. (2017), PS1-11ap from McCrum et al. (2014), PS1-14bj from Lunnan et al. (2016), PTF12dam (Nicholl et al., 2013), and SN 2007bi from Gal-Yam et al. (2009), Young et al. (2010) and Jerkstrand et al. (2017), and SN 2015bn from Nicholl et al. (2016, 2018) and Jerkstrand et al. (2017). Furthermore, we use the Fe ii velocity measurements from Liu et al. (2017) and Lunnan et al. (2016). All light curves and spectra were corrected for MW extinction. The spectrum of SN 2015bn in Nicholl et al. (2018) is not corrected for any host contribution. In Appendix F, we describe our approach to subtract the host contamination for SN 2015bn. #### 5.3.1 Light curves Following the methodology of Chen et al. (2023), we measure for each slow-evolving SLSN the k-corrected peak absolute magnitude in the \(g\) band, the k-corrected rest-frame \(g-r\) colour, and the \(1/e\) rise and decline time-scales of the \(g\)-band light curves.22 All measurements are summarised in Table 10. We also report in that table the measurements of SN 2018ibb and, for a broader comparison, the median values of the homogeneous ZTF SLSN sample (Chen et al., 2023). Footnote 2: Owing to the high redshift of PS1-11ap and PS1-14bj, we use their \(i\)-band light curves, which probe a rest-frame wavelength interval similar to that of the \(g\) band of SN 2018ibb. Slow-evolving SLSNe, including SN 2018ibb, have peak absolute magnitudes between \(\sim-20.8\) and \(-22\) mag in the \(g\) band and k-corrected \(g-r\) colours between \(-0.2\) and \(0\) at peak (Table 10). Both their absolute peak magnitudes and the peak \(g-r\)-colours are comparable to the median values of the ZTF SLSN sample (median values being \(M_{g,\rm{peak}}-21.5\) mag and \(g-r=-0.12\) mag; Table 10). The rising parts of the light curves of the historical slow-evolving SLSNe are not well sampled, limiting the comparison with SN 2018ibb and the ZTF SLSN-I sample. Only PS1-14bj has a rise time that is at least as long as that of SN 2018ibb and even 30 days longer than that of SN 2018ibb. The decline time scales of the historical slow-evolving SLSNe are well measured. They vary between 38 and 130 days, placing those events above the average of the ZTF-I sample (Table 10). Yet, only one historical slow-evolving SLSN had a decline time-scale as extreme as SN 2018ibb. With a decline time scale of 130 days, PS1-14bj evolves even slower than SN 2018ibb but its peak luminosity in the rest-frame \(g\)-band was 1.2 mag fainter than that of SN 2018ibb. This makes SN 2018ibb an unprecedented case even among the most extreme SLSNe known. LSQ14an also has a decline time scale of 100 days, but the observed light curve only covers the declining light curve, adding an unknown systematic error to its time scale measurement. Figure 30 shows the \(r\)-band absolute magnitude light curves of all slow-evolving SLSNe. The supernovae 2015bn and 2018ibb are the only SLSNe with observations extending beyond 500 rest-frame days after maximum light. The light curve of SN 2015bn faded much faster than that of SN 2018ibb. 
At about 400 days after peak, the decline slowed down and became very gradual. In contrast, SN 2018ibb's light curve faded linearly with a decline slope of \(\sim 1.1\) mag (100 days)\({}^{-1}\) that steepened to \(\sim 1.5\) mag (100 days)\({}^{-1}\) at 500 days after maximum. These differences translate into differences in the powering mechanisms. Magnetars lose their rotational energy efficiently through dipole radiation, whose energy input scales as \(\propto t^{-2}\) at late times. The energy deposition (and hence the SN luminosity) evolves as a power law. Therefore, the light curve is expected to flatten at later times (in time vs. magnitude space). Radioactive material has an exponentially declining energy deposition rate, which results in a linear decline in time vs. magnitude space. The loss of \(\gamma\)-ray trapping accelerates the fading independent of the powering mechanism, but it only modifies the light curve without altering its general shape, i.e., the loss of gamma-ray trapping cannot convert a power-law decline into an exponential decline (e.g., Chen et al., 2015; Wang et al., 2015; Nicholl et al., 2018). Therefore, the power-law-shaped decline of SN 2015bn could point to magnetar powering, as concluded in Nicholl et al. (2018). In turn, SN 2018ibb's continued linear decline excludes powering by a magnetar.

\begin{table} \begin{tabular}{l c c c c c} \hline & Redshift & \(t_{1/\rm{e,rise}}\) & \(t_{1/\rm{e,decline}}\) & \(M_{g,\rm{peak}}\) & \((g-r)_{\rm{peak}}\) \\ & & (day) & (day) & (mag) & (mag) \\ \hline SN 2018ibb & 0.166 & 68 & 102 & \(-21.8\) & \(-0.12\) \\ \hline LSQ14an1 & 0.163 & \(\dots\) & \(\sim 100\) & \(<-20.8\) & \(-0.21\) \\ PS1-11ap2 & 0.524 & \(<25\) & 38 & \(-21.8\) & \(\dots\) \\ PS1-14bj2 & 0.521 & \(83\) & \(130\) & \(-20.6\) & \(\dots\) \\ PTF12dam & 0.107 & 50 & 56 & \(<-21.7\) & \(-0.20\) \\ SN 2007bi3 & 0.128 & \(<23\) & \(<77\) & \(-21.3\) & \(0\) \\ SN 2015bn & 0.114 & \(<31\) & 56 & \(-22.0\) & \(-0.17\) \\ \hline ZTF SLSNe & & 29 & 43 & \(-21.5\) & \(-0.12\) \\ \hline \end{tabular} \end{table} Table 10: Light curve properties of slow-evolving SLSNe

Nicholl et al. (2017) fitted the multi-band data of the slow-evolving SLSNe with the slsn magnetar model in MOSFiT. This model provides an adequate description of the observations, even of the data of SN 2015bn at 1000 rest-frame days after maximum (Nicholl et al., 2018). The best-fit parameters cover the range from 5.3 to 14 \(M_{\odot}\) for the ejecta mass \(M_{\rm ej}\) (median being 6.3 \(M_{\odot}\)), 0.1 to \(0.8\times 10^{14}\) G for the orthogonal component of the magnetic field strength \(B\) (median being \(0.3\times 10^{14}\) G), and 2.3 to 3.9 ms for the initial spin period \(P_{0}\) (median being 2.8 ms). These values are typical for SLSN light curves fitted with that particular magnetar model (Nicholl et al., 2017; Chen et al., 2023). SN 2018ibb has starkly different values (Section 5.2.2; Table 8). The best fit requires a magnetar with an initial spin period of 1 ms and an ejecta mass of 86 \(M_{\odot}\), to squeeze out as much energy as possible from the magnetar model. As alluded to in Section 5.2.2, such massive stars do not have neutron star remnants. Furthermore, the magnetar model overpredicts the late-time flux significantly due to its power-law-shaped energy deposition.
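The contrast between a power-law and an exponential energy input discussed above is straightforward to quantify; a minimal sketch with arbitrary normalisations (for illustration only):

```python
import numpy as np

TAU_CO = 111.4                                   # mean lifetime of 56Co (days)
t = np.array([300.0, 500.0, 700.0, 900.0])       # days since explosion

mag_pl = -2.5 * np.log10(t ** -2.0)              # magnetar-like t^-2 input (arbitrary zero point)
mag_ni = -2.5 * np.log10(np.exp(-t / TAU_CO))    # 56Co-like exponential input

slope_pl = np.diff(mag_pl) / np.diff(t) * 100.0  # local decline rates in mag per 100 days
slope_ni = np.diff(mag_ni) / np.diff(t) * 100.0
print("power-law input :", np.round(slope_pl, 2), "mag / 100 d  (flattens with time)")
print("56Co input      :", np.round(slope_ni, 2), "mag / 100 d  (constant, ~1 mag / 100 d)")
```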
In Figure 31, we compare the bolometric light curves of the slow-evolving SLSNe to the suite of PISN models used for SN 2018ibb in Section 5.2.3. The bolometric light curves of all historical slow-evolving SLSNe are either inconsistent with the PISN models or the comparison is inconclusive: PTF12dam evolves too fast, the light curves of PS1-11ap and SN 2015bn have a different shape from the PISN templates, PS1-14bj shows a flattening at late times, and the bolometric light curves of SN 2007bi and LSQ14an have no pre-maximum data. The lack of an estimate of the rising bolometric light curve for the latter two objects precludes a conclusion on whether these two SLSNe could be PISNe. Dedicated studies on PS1-11ap, PS1-14bj, PTF12dam and SN 2015bn revealed that the magnetar model provides an adequate description of the light curves (Nicholl et al., 2013; McCrum et al., 2014; Lunnan et al., 2016; Nicholl et al., 2018; Vurm & Metzger, 2021). In conclusion, SN 2018ibb is the _only_ SLSN among the hundreds of SLSNe known whose entire light curve is consistent with PISN models. This result is even more remarkable considering that the bolometric light curve covers an exceptionally wide time interval from \(t_{\rm max}-\)93 to \(t_{\rm max}+\)706 days.

#### 5.3.2 Spectra

In this section, we compare the spectroscopic properties of SN 2018ibb to those of other slow-evolving SLSNe. First, we compare the photospheric velocities measured with the Fe ii \(\lambda\)5169 region. The top panel of Figure 32 displays the photospheric velocities, measured from the Fe ii \(\lambda\)5169 region, of SN 2018ibb and other slow-evolving SLSNe23 (in colour) and of the ZTF-I sample (kernel density estimate). The slow-evolving SLSNe have velocities between 8000 and \(12,000\) km s\({}^{-1}\) at peak, lower than the median value of the ZTF-I sample (14,800 km s\({}^{-1}\)).

Figure 31: The bolometric light curves of SN 2018ibb and historical slow-evolving SLSNe in the context of PISN models with nickel masses between 5 and 44 \(M_{\odot}\). SN 2018ibb is the only SLSN whose entire light curve from \(t_{\rm max}-\)93 to \(t_{\rm max}+\)706 days is consistent with PISN templates. The other SLSNe have either too fast declining light curves, light curve shapes inconsistent with PISN models, or their light curves are poorly sampled, hindering a comparison with PISN templates. The grey-shaded region indicates the 1\(\sigma\) uncertainty.

Figure 30: SN 2018ibb in the context of the phenomenological sub-class of slow-evolving SLSNe. Even among this rare sub-class of SLSNe, SN 2018ibb with its exceptionally broad light curve and high peak luminosity is an extreme object with unprecedented properties.

PTF12dam and SN 2015bn have the fastest expanding ejecta (\(\sim 12,000\) km s\({}^{-1}\) at peak), but their ejecta rapidly decelerate to \(\sim 6000\) km s\({}^{-1}\) in \(\sim 60\) rest-frame days. Their velocities and velocity evolution are similar to those of other SLSNe (Liu et al., 2017). In stark contrast to that, SN 2018ibb has a velocity of merely 8500 km s\({}^{-1}\), comparable to those of PS1-14bj and SN 2007bi. Furthermore, the velocity of SN 2018ibb remains constant for 100 rest-frame days, which has not been seen for any other SLSN before. Though the velocities of PS1-14bj and SN 2007bi are very similar to that of SN 2018ibb, the spectroscopic sequences of these two events are limited, precluding a comparison of their velocity evolution with that of SN 2018ibb. Next, we explore the spectroscopic properties of slow-evolving SLSNe during their photospheric and nebular phases. Panel A in Figure 33 shows the photospheric spectra at the time of maximum light.
PTF12dam and SN 2015bn sustained a hot photosphere with a temperature of \(\gtrsim 12,000\) K (Nicholl et al., 2016; Vreeswijk et al., 2014). One of the strong features in their spectra is a comb of O ii absorption lines, a characteristic feature of SLSNe, which are only seen in photospheres with \(T>15,000\) K (Quimby et al., 2018) and probably also require non-thermal excitation (Mazzali et al., 2019). The spectra of PS1-11ap, PS1-14bj and SN 2018ibb are cooler (black-body temperatures of 10,000 to 12,000 K). Their spectra do not show O ii absorption lines but instead absorption lines from Ca, Fe, Mg, O and Si (see Figure 7 for the locations). Common to PTF12dam, SN 2015bn and SN 2018ibb is the presence of [Ca ii] \(\lambda\lambda\) 7291, 7323 in emission. It is one of the strongest features seen in nebular phase spectra of SNe (Filippenko, 1997; Gal-Yam, 2017) but is only seen during the photospheric phase in slow-evolving SLSNe (Gal-Yam et al., 2009; Inserra et al., 2017; Nicholl et al., 2019). It is also seen in SN 2007bi and LSQ14an, but these SLSNe lack spectra at peak. Around the time of maximum light (Panel B), all objects have similar spectra. Due to the differences in the ejecta velocities, features appear sharper in LSQ14an, PS1-14bj and SN 2018ibb that in PTF12dam and SN 2015bn. Some clear differences are well visible though. LSQ14an, PS1-14bj and SN 2018ibb reveal [O iii] emission. As we concluded in Section 5.1, the 7300 A feature in SN 2018ibb is not dominated by [Ca ii], but [O ii] \(\lambda\lambda\) 7320,7330. In the other objects, the centre of the 7300 A feature is consistent with [Ca ii]. Moreover, the 7300 A feature is well developed in SN 2007bi, LSQ14an and SN 2018ibb but still very weak in PTF12dam and SN 2015bn. The line profiles also differ. In SN 2018ibb the line profile is flat-topped but triangular and skewed to the blue for the other objects. During the early nebular phase (\(t/t_{\rm decel}\sim 2\); Panel C), all objects show a blue pseudo-continuum with superimposed forbidden and allowed emission lines from calcium, magnesium and oxygen (for the line identifications see Figure 7). SN 2018ibb and PS1-14bj are spectroscopically indistinguishable, though their overlap in wavelength is limited and PS1-14bj is significantly fainter than SN 2018ibb (Figure 30). The other objects reveal an increasing level of dissimilarities (LSQ14an \(\rightarrow\) SN 2015bn \(\rightarrow\) SN 2007bi). LSQ14an has a similar blue pseudo-continuum but its emission lines are not well developed. This is best seen in Ca ii \(\lambda\lambda\) 3933, 3968, [O iii] \(\lambda\lambda\) 4363 and [Ca ii] \(\lambda\lambda\) 7291,7324 + [O ii] \(\lambda\lambda\) 7320,7330. The SNe 2007bi and 2015bn have redder pseudo-continuum and significantly weaker [Ca ii] +[O ii]. Moreover, SN 2007bi has only a few features blueward of 5000 A. These differences develop further with time. During the late nebular phase (\(t/t_{\rm decel}\sim 5\); Panel D), the pseudo-continuum of all objects fades. SN 2018ibb and LSQ14an are characterised by a weaker [O i] \(\lambda\lambda\) 6300,6364 than SNe 2007bi and 2015bn. The ratio between [Ca ii]+[O ii] and [O i] is 2-3:1. Intriguingly, the emission line of SN 2018ibb evolved much slower than for LSQ14an. Now, LSQ14an exhibits more conspicuous emission lines than SN 2018ibb, best seen in [O iii] and [Ca ii]+[O ii]. The [O ii] feature of SN 2018ibb has a Lorentzian profile, whereas the profile of LSQ14an is double-peaked. 
In contrast to LSQ14an and SN 2018ibb, PTF12dam, SN 2007bi and SN 2015bn have exceptionally strong [O i]. It is, in fact, their strongest feature. Moreover, the [O i] is markedly narrower than for SN 2018ibb and LSQ14an: 6000-9000 km s\({}^{-1}\) vs. 16,000 km s\({}^{-1}\). The [Ca ii]+[O ii] to [O i] ratio is 1:2-3 and inverted compared to LSQ14an and SN 2018ibb. Panel E shows spectra of SN 2018ibb and SN 2015bn at 1000-1100 rest-frame days after maximum (\(t/t_{\rm decline}=9\)-13). These are the only two SLSNe with such extensive spectroscopic observations. Despite the low signal-to-noise ratio, their spectra exhibit well-defined SN features. SN 2018ibb continues to show intermediate-width [O iii] with a similar width as in the spectral epochs before, whereas SN 2015bn exhibits [O i] like in the previous epochs. Figure 34 presents NIR spectra of LSQ14an, SN 2015bn and SN 2018ibb at 3-4-times their respective decline time scales. All spectra reveal only very few features beyond 1 \(\mu\)m, which is expected for models of PISNe (Jerkstrand et al., 2016), SLSNe (Jerkstrand et al., 2017), and regular stripped-envelope supernovae (Jerkstrand et al., 2015). Some of the brightest expected features are redshifted to regions of strong atmospheric absorption at the average redshift of SLSNe. A feature that has been commonly seen among all known SLSNe is O i \(\lambda\lambda\) 1.13\(\mu\)m. SN 2018ibb reveals an emission feature at 1.025 \(\mu\)m, which we Figure 32: Fe ii ejecta velocities of slow-evolving SLSNe (in colour) and general SLSNe samples (grey) at the time of maximum (top panel) and as a function of time (bottom panel). SN 2018ibb has a markedly low velocity at the time of maximum and a flat velocity evolution, which is in stark contrast to the bulk of the SLSN population. Its velocity at peak is similar to the slow-evolving SLSNe PS1-14bj and SN 2007bi. However, both comparison objects lack spectra at earlier and later times. identified as [Co ii] (Section 5.2.5). [Co ii] is not present in any of the other spectra. The data quality of the spectra of LSQ14an and SN 2015bn is higher compared to that of SN 2018ibb, suggesting that if a substantial amount of \({}^{56}\)Ni was also formed in these supernovae, the [Co ii] line should have been visible. Instead, SN 2015bn reveals Mg i\(\lambda\) 1.50 \(\mu\)m that is not visible in SN 2018ibb but possibly in LSQ14an (Jerkstrand et al., 2017). In conclusion, SN 2018ibb is spectroscopically similar to other SLSNe, including slow-evolving SLSNe. During the photospheric phase, SN 2018ibb stands out by its low ejecta velocity and flat velocity evolution. The early nebular phase does not differ from other SLSNe. Very late-time observations (\(t/t_{\rm{decl}}>5\)) show clear differences between SN 2018ibb and other SLSNe, e.g., the weak and broad [O i] that stays optically thick throughout the entire evolution. Late-time NIR spectroscopy revealed the tentative detection of [Co ii] in SN 2018ibb. This feature is unprecedented for a SLSN and could be the smoking gun that SN 2018ibb is powered by the decay of \({}^{56}\)Ni. In Sections 4.3.3, 5.1, 4.3, and 5.2.4, we argued that the blue pseudo-continuum in SN 2018ibb is produced by the interaction of the SN ejecta with CSM. The prevalence of this feature in the other slow-evolving SLSNe raises the question of whether CSM interaction is also present in these objects. 
If this is the case, it is necessary to treat nebular spectra of SLSNe as the sum of at least two powering mechanisms, e.g., magnetar + CSM or \({}^{56}\)Ni + CSM, necessitating more complex SLSN models than the ones that currently exist. This also means that distinguishing between different powering mechanisms is more difficult and requires comprehensive data sets. ### Is SN 2018ibb a pair-instability supernova? Models of H-poor PISNe make very clear predictions for PISNe in the regime of SLSNe (\(M_{\rm{peak}}\leq-20\) mag) for their light curves (Kasen et al., 2011; Dessart et al., 2013; Gilmer et al., 2017; Kozyreva et al., 2017), ejecta velocities, spectra (Dessart et al., 2013; Jerkstrand et al., 2016) and the environments in which their progenitors are formed (Langer et al., 2007). In Sections 4.6 and 5.2, we tested the most critical predictions of the PISN models on the light curves, spectra and host galaxy. SN 2018ibb passes most tests of PISN models with a nickel yield of 25-44 \(M_{\odot}\). However, SN 2018ibb did not comply with two predictions, although it could pass these tests with the interpretations that we propose. Table 11 summarises all tests. The tentative detection of [Co ii] \(\lambda\) 1.025\(\mu\)m is unprecedented for a PISN candidate. However, existing PISN models do not predict significant emission from [Co ii] because of line block Figure 33: Comparison of the spectra of SN 2018ibb to those of other slow-evolving SLSNe between \(t_{\rm{max}}\)+30 days and \(t_{\rm{max}}\)+1000 days (darker colour: 5 Å binning; light shade: unbinned spectra). **Photospheric phase (Panels A–B)**: Around \(t_{\rm{max}}\) (Panel A), the spectra of PTF12dam and SN 2015bn are characterised by a hot continuum with superimposed O ii absorption lines as seen in many SLSNe at a similar epoch. SN 2007bi, LSQ14an and SN 2018ibb have cooler photospheres, and their spectra exhibit absorption lines from Ca, Fe, Mg, O and Si (see Figure 7 for their locations) but not O ii. At around \(t_{\rm{max}}\)+60 days (Panel B), all spectra appear similar, though differences exist. LSQ14an, PS1-14jb; and SN 2018ibb are the only SLSNe showing [O iii] in emission. Furthermore, SN 2007bi, LSQ14an and SN 2018ibb exhibit strong [Ca ii] + [O ii] in emission. This feature is also present in PTF12dam and SN 2015bn but is less pronounced. **Nebular phase (Panels C–E)**: Differences start to emerge during the early nebular phase and become stronger with time. SN 2018ibb, LSQ14an and PS1-14bj continue to show conspicuous [O iii] in emission, in contrast to PTF12dam, SN 2007bi and SN 2015bn that have very strong [O i] and O ii in emission. SNe 2015bn and 2018ibb are the only SLSNe with spectra at \(\sim\)\(t_{\rm{max}}\)+1000 days (Panel E). SN 2018ibb continues to show intermediate-width [O iii], whereas the spectrum of SN 2015bn exhibits [O i]. The elevated noise in the SN 2018ibb spectrum at \(t_{\rm{max}}\)+989.2 days at \(\lambda>6000\) Å is due to residuals of the skyline subtraction. The dashed vertical lines indicate the expected locations of emission lines commonly seen from H ii regions. ing. In Section 5.2.5, we proposed that line blocking could be less severe than predicted by existing models. The shape and the relative line intensities of the nebular spectra of SN 2018ibb are compatible with those predicted by PISN models. Our observations reveal a significant excess at wavelengths shorter than 5000 A, which should not be present if 25-44 \(M_{\odot}\) of iron group elements were formed. 
As we concluded in Section 5.2.4, we propose that CSM interaction may account for some, if not all, of the excess. PISN models consider mass loss (Kasen et al., 2011; Gilmer et al., 2017; Kozyreva et al., 2017; Dessart et al., 2013) in the evolution of the progenitor star. However, their light curves and spectra are computed assuming that any interaction between the SN ejecta and the circumstellar material is negligible. Kasen et al. (2011) pointed out that the CSM interaction could actually have a non-negligible contribution. Furthermore, the CSM might not only be produced by stellar winds but also by eruptions similar to that seen in Eta Carinae in 1843. That this is indeed a non-negligible effect is corroborated by recent findings in Chen et al. (2023b). These authors studied the light curves of 77 events from the homogenous ZTF SLSN-I sample, and concluded that CSM is common around H-poor SLSNe (in at least 25-44% of the events) and that it contributes to the observed emission, albeit finding spectroscopic evidence in the spectra is difficult. Owing to the lack of predictions of PISN models on CSM interaction, we cannot firmly conclude that SN 2018ibb is a PISN. Our observations demonstrate that interactions between the SN ejecta and the ambient CSM play a non-negligible effect in the observed photometric and spectroscopic properties (Sections 4.3, 4.3, 4.4, 5.1, 5.2.4). PISN models of H-poor progenitors with CSM are urgently needed. In the coming years, the Rubin Observatory, and the _James Webb_, _Euclid_ and _Roman Space Telescopes_ will systematically explore the high-redshift Universe. Since PISNe require metal-poor stars and the early Universe was less chemically enriched than today, PISNe are thought to be more abundant at higher redshifts. Several teams have proposed search strategies to find PISNe with these new observing facilities (e.g., Wang et al., 2017; Regos et al., 2020; Moriya et al., 2022, 2020). However, their search strategies are based on PISN models that, for instance, do not include CSM interaction. Considering that these new observing facilities either just started or will commence their science operations in the next years, it is critical to expand the suite of existing PISN models in order to find high-\(z\) PISNe in real-time. ### Could SN 2018ibb be a pulsational pair-instability supernova? The massive eruptions in a pulsational pair-instability supernova (PPISN) with a large kinetic energy can, under the right conditions, be an ideal case for a luminous interacting SN, as demonstrated in several studies (e.g., Woosley et al., 2007; Yoshida et al., 2016; Woosley, 2017; Leung et al., 2019; Marchant et al., 2019; Renzo et al., 2020). While the PPI mechanism is difficult to avoid for a He core in the mass range of 40-65 \(M_{\odot}\)(Woosley et al., 2007), the number of pulses and the interval between these, as well as the mass ejected and their kinetic energies, are more uncertain and differ between various studies. For a bright event to take place, the relative velocities between the shells of the different ejections, as well as their relative masses are important. The brightest event would result from the collision between a very fast, massive shell and a shell of low or zero velocity. The first shell must also be dense enough for the shocks to be radiative and massive enough for the second shell to be completely decelerated. 
Finally, the collision has to take place close enough to the star, on the order of \(10^{15}-10^{16}\)cm, so that it will radiate the energy on a timescale of approximately a year. This means that the interval between the pulses should not be more than approximately a year. However, a collision at a very small radius, and short time interval, will result in a very optically thick shell where most of the released energy will go into adiabatic expansion. In summary, there are a number of conditions which have to be fulfilled for a bright SN to result. This has been illustrated in detail by the different radiation-hydrodynamical models, e.g., Woosley (2017). Below, we discuss the most extreme models in order to judge whether a pure PPISN could explain the large total radiated energy we find for SN 2018ibb. For a pure He core, Woosley (2017) finds an upper limit to the kinetic energy of \(\sim 2\times 10^{51}\) erg, with the highest energy from the highest He core mass, if no additional power source (e.g., magnetar or black hole accretion) is involved. The most extreme model with a 62 \(M_{\odot}\) He core resulted in a 36 \(M_{\odot}\) ejecta with a total kinetic energy of \(2.1\times 10^{51}\) erg. This is distributed over several pulses, with most of the energy being dissipated in the first pulse. Without any previous strong mass loss this will, however, not be converted into radiation over a timescale of approximately a year. This is also confirmed by the light curve models in Woosley (2017). Brighter light curves could be obtained for models with a remaining hydrogen envelope. The most extreme, T130D in Woosley (2017), had three pulses, ejecting the 70 \(M_{\odot}\) hydrogen-rich envelope with a kinetic energy of \(1.5\times 10^{51}\) erg. About 3300 years later a second pulse ejected a 7.7 \(M_{\odot}\) shell with He, C and O and energy \(1.1\times 10^{51}\) erg, and after another 8 months a 13.5 \(M_{\odot}\) shell and energy \(1.5\times 10^{51}\) erg. The last two shells were ejected close enough in time to collide and create a luminous SN with a total radiated energy of \(4.5\times 10^{50}\) erg. Similar calculations have been done by Marchant et al. (2019) and Leung et al. (2019), using the MESA code (while Figure 34: Late-time NIR spectra of SN 2018ibb and the slow-evolving SLSNe LSQ14an and SN 2015bn. The strongest features are labelled. SN 2018ibb is the only SLSN that shows cobalt in emission, which has its strongest optical-NIR feature at 1.025 \(\mu\)m. Its luminosity translates to a nickel mass of \(\gtrsim 30\)\(M_{\odot}\), consistent with the light curve modelling. All spectra are scaled so that O_14_7773 has the same amplitude in all objects. Regions of strong atmospheric absorption are cropped. Woosley 2017 used the Kepler code). Qualitatively, these models agree, especially in the higher energies and mass ejected, as well as the number of pulses with increasing He core mass. In particular, Leung et al. (2020) find a maximum kinetic energy of \(2.8\times 10^{51}\) erg for the highest PPI He core mass, similar to the corresponding model by Woosley et al. (2007). However, as discussed by Leung et al. (2019), there are also substantial quantitative differences between the models, including ejected masses and time interval between the pulses. Some of the differences can be traced back to the treatment of shocks, and convection in both the hydrostatic and hydrodynamic phases. 
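As a quick order-of-magnitude check on the radius and pulse-interval condition quoted above, a shell ejected at a few thousand km s\({}^{-1}\) indeed reaches \(10^{15}\)-\(10^{16}\) cm within roughly a year. The short snippet below is purely illustrative, with assumed round-number velocities, and simply makes the scaling explicit.

```python
YEAR_S = 3.156e7  # seconds in one year

# Distance a shell travels in one year for a few representative ejection velocities.
for v_kms in (1000, 3000, 10000):
    r_cm = v_kms * 1e5 * YEAR_S
    print(f"v = {v_kms:5d} km/s  ->  r ~ {r_cm:.1e} cm after one year")
# 1000-10,000 km/s gives ~3e15-3e16 cm, bracketing the 1e15-1e16 cm collision radii quoted above,
# which is why pulse intervals much longer than about a year push the collision too far out
# for the kinetic energy to be radiated efficiently.
```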
We note that a large kinetic energy in PPISN models has also been invoked to explain the light curves of other luminous SNe. For the FBOT AT2018cow, Leung et al. (2020) invoked a kinetic energy of \(5\times 10^{51}\) erg from a 42 \(M_{\odot}\) He core interacting with an ejected shell of mass 0.5 \(M_{\odot}\). An obvious solution to supply the extra energy is a hybrid model with a combination of a PPISN and the energy from a magnetar or accretion. This has been discussed by Woosley (2017) and for other energetic SLSNe including PTF12dam (Tolstov et al., 2017), Gaia16apd (Tolstov et al., 2017) and iPTF16eh (Lunnan et al., 2018). However, it remains unclear how a magnetar can be formed from the core collapse of the very massive He core in a PPISN. In summary, a pure PPISN, close to the upper He core mass limit, may potentially explain the observed radiated energy of \(>3\times 10^{51}\) erg (Section 4.2.2). The conversion of kinetic energy to radiative energy, however, requires rather special conditions in terms of pulse intervals, ejecta masses and velocities. The uncertainties in the models are, unfortunately, large, and it is difficult to draw any firm conclusions. Additional energy sources cannot be excluded, such as a magnetar or a black hole. A contribution from a magnetar would result in a flattening of the late-time light curve, which is in stark contrast to our observations. If a black hole was formed during the gravitational collapse of the progenitor star, the accretion rate would need to be well tuned to be consistent with the exponentially declining light curve, making the PPISN scenario less likely.

## 6 Conclusion

In this paper, we have presented observations of the slow-evolving H-poor SLSN 2018ibb covering an exceptionally long time interval from \(-93\) to \(+989\) rest-frame days after maximum. SN 2018ibb shares many similarities with H-poor SLSNe, but its properties are extreme even for SLSNe. It is one of the slowest evolving SLSNe known. The slow evolution is apparent through the long rise of \(>93\) rest-frame days from 10% peak flux to peak, the slow decline of merely 1.1 mag (100 days)\({}^{-1}\), and the low photospheric velocity of 8500 km s\({}^{-1}\) that remains constant between the time of maximum and the following 100 rest-frame days. At peak, SN 2018ibb reached an absolute magnitude of \(-21.7\) mag, comparable to the bulk of the SLSN population. The bolometric light curve had a peak luminosity of \(>2\times 10^{44}\) erg s\({}^{-1}\). During its lifetime, SN 2018ibb radiated \(>3\times 10^{51}\) erg. The peak luminosity and total radiated energy are strict lower limits. We compared SN 2018ibb with PISN and SLSN models. SN 2018ibb complies with most tests of PISN models with peak luminosity \(<-20\) mag, and possibly all tests with the interpretations that we propose, making SN 2018ibb the best PISN candidate to date. Specifically, SN 2018ibb passes the following tests: 1. a rise time of \(>93\) days (expected: 120-150 days) 2. a decline time scale of 1.1 mag (100 day)\({}^{-1}\) (expected: 1.1 mag (100 day)\({}^{-1}\)) 3. the modelling of the multi-band light curves with physical SLSN models and the Katz et al. (2013) method point to the production of 25-44 \(M_{\odot}\) of \({}^{56}\)Ni (expected: 10-44 \(M_{\odot}\)) 4. the bolometric light curve is consistent with PISN templates that produce 25 and 44 \(M_{\odot}\) of \({}^{56}\)Ni 5. a low ejecta velocity of 8500 km s\({}^{-1}\) (expected: 7000-11,000 km s\({}^{-1}\)) 6. 
a low metallicity (expected: \(<1/3\) solar) 7. none of the \(>200\) SLSNe has properties similar to SN 2018ibb (expected: PISNe are rare). \begin{table} \begin{tabular}{c c c c c c} \hline \hline Test & Condition & Observation & Section & Pass & Reference \\ \hline \multicolumn{6}{c}{**Light curve**} \\ \hline Rise time & \(120-150\) days & \(>93\) days & 4.2 & \(\mathbf{?}\) & \(1,2\) \\ Decline rate & \(1\) mag (100 day)\({}^{-1}\) & 1.1 mag (100 day)\({}^{-1}\) & 4.2 & \(\check{\check{\check{\prime}}}\) & \(1,2\) \\ Peak absolute magnitude \(M_{\mathrm{bol}}\) & \(-20-22.5\) mag & \(<-21.8\) mag & 4.2 & \(\check{\check{\prime}}\) & \(1,2\) \\ Nickel mass & \(10-40\)\(M_{\odot}\) & 25-40 \(M_{\odot}\) & 5.2.1, 5.2.2, 5.2.3 & \(\check{\check{\prime}}\) & \(1,2\) \\ PISN template & He100 – He130, & He120 – He130, & 5.2.3 & \(\check{\check{\prime}}\) & \(1,2\), 3 \\ & P200 – P250 & P250, P250N34 & & & \\ \hline \multicolumn{6}{c}{**Spectra**} \\ \hline Velocity & 7000–11,000 km s\({}^{-1}\) & 8500 km s\({}^{-1}\) & 4.3.2 & \(\check{\check{\check{\prime}}}\) & \(3,4\) \\ Nebular spectra & He100, He130 & He130, but blue excess & 5.2.4 & \(\mathbf{\check{\mathsf{x}}}\) & 5 \\ \([\)Co ii\(]\)\(\lambda\) 1.025\(\mu\)m & not predicted & detected & 5.2.5 & \(\mathbf{\check{\mathsf{x}}}\) & 5 \\ \hline \multicolumn{6}{c}{**Contribution from CSM interaction to the light curve and spectra**} \\ \hline CSM interaction & not explored & observed & 5.1, 4.3.4, 5.2.4 & \(\mathbf{?}\) & \\ \hline \multicolumn{6}{c}{**Host galaxy**} \\ \hline Metallicity & \(<Z_{\odot}/3\) & very low\({}^{a}\) & 4.6 & \(\check{\check{\mathsf{\prime}}}\) & 6 \\ \hline \hline \end{tabular} 1 \end{table} Table 11: Summary of the PISN tests applied on SN 2018ibb Such a huge amount of nickel of 25-44 \(M_{\odot}\) can only be produced in a pair-instability-supernova explosion of a star with a He-core mass of 120-130 \(M_{\odot}\) at the time of the explosion (ZAMS mass of approximately 240-260 \(M_{\odot}\)). However, SN 2018ibb does not comply with the following tests: 1. the tentative detection of [Co ii] \(\lambda\) 1.025 \(\mu\)m in emission, implying \(M(^{56}\)Ni\()\gtrsim\) 30 \(M_{\odot}\) (expected: no [Co ii] in emission) 2. the nebular spectra are similar to the He130 [\(M(^{56}\)Ni\()=44\)\(M_{\odot}\)] PISN model but show a substantial excess blueward of 5000 A due to CSM interaction. The tentative detection of [Co ii] is unprecedented for a PISN candidate and any SLSN. It could be the smoking-gun evidence of SN 2018ibb being a PISN, though the line identification hinges on the detection of a single line. PISN models predict no significant [Co ii] \(\lambda\) 1.025 \(\mu\)m in emission because of line blocking extending by iron to the NIR. We propose that the line blocking might be over-estimated in existing models. While the late-time spectra are similar to PISN models, they also exhibit a blue excess that should not be present due to the massive line-blanketing of 25-44 \(M_{\odot}\) iron-group elements. A similar blue excess was also observed in previous PISN candidates. Its presence was used as a critical piece of evidence against the PISN interpretation. We argue that this is not the case for SN 2018ibb. 
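As a rough sanity check on why tens of solar masses of \({}^{56}\)Ni are invoked, the instantaneous radioactive-decay power can be estimated with the standard two-exponential \({}^{56}\)Ni \(\rightarrow\) \({}^{56}\)Co \(\rightarrow\) \({}^{56}\)Fe parametrization (e.g., Nadyozhin 1994), assuming full trapping of the decay products. The snippet below is only an order-of-magnitude illustration, not the light-curve modelling performed in the paper.

```python
import numpy as np

def decay_power(t_days, m_ni_msun=30.0):
    """Radioactive power in erg/s from m_ni_msun solar masses of 56Ni, t_days after explosion.

    Standard parametrization (e.g., Nadyozhin 1994): e-folding times of 8.8 d (56Ni)
    and 111.3 d (56Co); full gamma-ray and positron trapping is assumed.
    """
    return m_ni_msun * (6.45e43 * np.exp(-t_days / 8.8) + 1.45e43 * np.exp(-t_days / 111.3))

# ~30 Msun of 56Ni yields ~1.8e44 erg/s at 100 d and ~7e43 erg/s at 200 d, comparable to the
# >2e44 erg/s peak luminosity quoted for SN 2018ibb; the sub-solar 56Ni masses typical of
# ordinary stripped-envelope SNe would fall short by orders of magnitude.
print(decay_power(100.0), decay_power(200.0))
```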
Three lines of evidence reveal that SN 2018ibb is not exclusively powered by radioactivity and that CSM interaction is also at play: _i_) the detection of a slow-moving CSM shell around the progenitor star; _ii_) the presence of similarly slow O i, [O ii], [O iii] emission lines; and _iii_) a blue pseudo-continuum similar to that of interaction-powered SNe. This suggests that some, if not all, of the blue excess is produced by CSM interaction. We stress that even after accounting for a substantial contribution of CSM interaction to the bolometric flux, 25-44 \(M_{\odot}\) of \({}^{56}\)Ni are still required to power the entire bolometric light curve. PISN models consider mass-loss episodes (winds and, to some level, eruptions) to evolve their progenitors to the point of explosion. However, the SN light curves and spectra are computed in sterile environments, assuming that any interaction between the SN ejecta and the circumstellar material is negligible. Our observations demonstrate that CSM interaction is a non-negligible effect that needs to be systematically explored in PISN models. The lack of such PISN models is the reason why we cannot conclusively argue for SN 2018ibb being a PISN. Our data set disfavours central-engine models (magnetar powering and fallback accretion onto a black hole), the magnetar+\({}^{56}\)Ni model and pure CSM models. The continued linear decline out to \(t_{\rm max}\)+706 days and the absence of any light-curve flattening, expected for magnetar models, are in conflict with existing analytical prescriptions of magnetar models. Furthermore, the inferred values of the physical parameters of the magnetar and magnetar+\({}^{56}\)Ni models are in conflict with existing stellar evolution models. A model with a simple power-law-shaped fallback accretion rate, the default assumption in fallback models, would also result in a flattening of the light curve, in contradiction with our observations. Analytical CSM models did not provide an adequate description either. The extensive, high-quality dataset of SN 2018ibb is predestined for definitive tests of SLSN and PISN models, and for exploring rare explosion mechanisms, e.g., axion-instability supernovae (AISNe; Sakstein et al. 2022). Simulations by Mori et al. (2023) suggest that AISNe evolve faster and are bluer than PISNe for a given He-core mass. AISNe might also be more abundant than PISNe. Therefore, revealing the powering mechanism of SN 2018ibb will have immediate consequences not only for SN science but also for stellar evolution theory. The final confirmation of a PISN would also have ramifications for the interpretation of the observed drop in the black hole mass function and, therefore, for gravitational-wave astronomy. In the coming years, the Rubin Observatory, and the _James Webb_, _Euclid_ and _Roman Space Telescopes_ will be used to search for SLSNe, PISNe, and the explosions of Population III stars in the high-redshift Universe. To make this leap forward, the community requires a significantly improved understanding of the powering mechanisms and the progenitors of SLSNe. This can be accomplished with _i_) comprehensive data sets of low-\(z\) SLSNe similar to the one presented here and _ii_) more complex theoretical models with clear predictions for light curves and spectra. The _James Webb Space Telescope_ could be transformative for studying low-redshift SLSNe. Its IR spectrograph NIRSpec has the sensitivity to provide an uncensored view from 1 to 5 \(\mu\)m. 
Such an IR spectrum of a SN 2018ibb-like event could reveal strong emission lines from cobalt, nickel and iron between 2 and 5 \(\mu\)m during the nebular phase, which would be the smoking-gun evidence for powering by \({}^{56}\)Ni. ###### Acknowledgements. We thank Boaz Katz (Weizmann Institute of Science, Israel) and Keiichi Maeda (Kyoto University, Japan) for fruitful discussions. U.C. Berkeley undergraduate students Nachiek Girah, Andrew Hoffman, Evelyin Liu, Shaunak Modak, Jackson Stiple, Samantha Stepan, Kevin Tang, and Keto Zhang helped obtain data with the Lick/Nickel telescope. Z. Chen acknowledges support from the China Scholarship Council. C. Fransson acknowledges support from the Swedish Research Council and the Swedish National Space Board. A. V. Filippenko's supernova group at U.C. Berkeley received financial support from the Christopher R. Redlich Fund, Gary & Cynthia Bengier, Clark & Sharon Winslow, Sandford Robertson, Frank and Kathleen Wood (T. G. Brink is a Wood Specialist in Astronomy), Alan Eustace (W. Zheng is a Eustace Specialist in Astronomy), and numerous other donors. J. P. U. Pynbo acknowledges support from the Carlsberg Foundation. The Cosmic Dawn Center (DAWN) is funded by the Danish National Research Foundation under grant No. 140. M. Gromadzki is supported by the EU Horizon 2020 research and innovation programme under grant agreement No. 101004716. A.J.erkstrand acknowledges support from the European Research Council (ERC) under the European Union's Horizon 2020 Research and Innovation Programme (ERC Starting Grant No. 108301891). Huncarayaraju was funded by the Academy of Finland projects 324504 and 328898. G. Leudons and M. Purisain are supported by a research grant (19054) from VILLUM FONDEN. R. Lunnan is supported by the European Research Council (ERC) under the European Union's Horizon Europe research and innovation programme (grant agreement No. 10104229 - TransPire). T. E. Muller-Bravo and L. Galbany acknowledge financial support from the Spanish Ministerio de Ciencia in Innovacion (MCN), the Agencia Estatal de Investigacion (AEI) 10.13039/5010011001133, the European Social Fund (ESY) "Integrating in your route", and the European Union's Next Generation EU/FRFR funds under the PIED2020-115253G-Ato-HOST/IFLOW project, the 2019 Ramon y Cajal program RYC2019-027683-L (2021 Juan de la Cierva program FC2021-047124-H). From Centre Superior de Investigacions Cientifics (CSIC) under the PIE project 20215A/C106, and the program Umidal de Excelencia Maria de Maeztu CEX2020-001058-M. M. Nicholl is supported by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No. 948381) and by a Fellowship from the Alan Turing Institute. D. Polishok is grateful for the Wise Observatory staff. A. Rossi acknowledges support from Premile Levi/PRTE 2017 - 30. Rigatti has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No. 759194 - USNANC). N. Sarin is supported by a Nordita Fellowship. Nordita is funded in part by Nordforsk. S. Schulze acknowledges support from the G.R.E.A.T. research environment, funded by _Vietrasolarle_, the Swedish Research Council, project number 2016-06012. L. J. Spiles acknowledges support by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program (ERC Advanced Grant KILONOVA No. 885281). L. 
Tartaglia acknowledges support from MIUR (PRIN 2017 277E/KSX). Y. Yang acknowledges support from a Benoziyo Prize Postdoctoral Fellowship and the Bengier-Winslow-Robertson Fellowship. This work was funded by ANID, Millennium Science Initiative, ICN12, 009. Based in part on observations at the European Southern Observatory, Program IDs 199.D-0143, 0105.D-0380, 0106.D-0524, 1103.D-0328, 2102.D-5026, and 2104.D-5006 (PIs C. Inserra, S. Schulze, and S. J. Smartt); Gemini-South, Program ID 2021B-Q-901 (PI A. Gal-Yam); Hubble Space Telescope, Program ID GO-16657 (PI C. Fremling); Keck, Program IDs C323, 1023, U025 (PIs S. R. Kulkarni, A. V. Filippenko); Large Binocular Telescope, Program ID ID 20719.13 (PI A. Penzl); 21 Cammes Observatory, Program IDs FTPEOP2017AB-001, KEY2017AB-001, SUPERA2019A-001, SUPA2019A-002, SUPA2019B-007, and NOAO2020B-0012 (PIs P. J. Brown, K. De; Liverpool Telescope, Program IDs L181806, H18807, H1934A-14, I19811, and I20185 (PI D. A. Perley); Nordic Optical Telescope, Program IDs 57-502, 58-802, and G1-606, (PIs G. Leloudas, J. Sollerman); P200 (PI L. Yan); and XMM-Newton, Program ID 08221501 (PI R. Margutti). We thank the staffs of the many observatories at which we conducted observations. This work has made use of data from the European Space Agency (ESA) mission _Gaia_4, processed by the _Gaia_ Data Processing and Analysis Consortium25 (DPAC). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the _Gaia_ Multilateral Agreement. Part of the funding for GROND (both hardware as well as personnel) was generously granted from the Leibniz-Prize to Prof. G. Hasinger (DFG grant HA 18502/18). This work is based in part on observations made with the Large Binocular Telescope (LBT). The LBT is an international collaboration among institutions in Italy, the United States, and Germany. LBT Corporation partners are Istituto Nazionale di Astrofisica, Italy; the University of Arizona on behalf of the Arizona university system. LBT Beteiligungsgesellschaft, Germany, representing the Max Planck Society, the Astrophysical Institute Potsdam, and Heidelberg University; The Ohio State University; and The Research Corporation on behalf of The University of Notre Dame, University of Minnesota, and University of Virginia. Footnote 25: [https://www.cosmos.esa.int/web/gaia/dpac/configuration](https://www.cosmos.esa.int/web/gaia/dpac/configuration) Some of the observations with the Las Cumbres Observatory data have been obtained via OPTICON proposals and as part of the Global Supernova Project. The OPTICON project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No. 730890. This work made use of data supplied by the UK Swift Science Data Centre at the University of Leicester. CRTS is supported by the U.S. National Science Foundation (NSF) under grants AST-0909182, AST-1313422, and AST-143600. The Catalins Sky Survey (CSS) is a NASA-funded project supported by the Near Earth Object Observation Program (NEOO) under the Planetary Defense Coordination Office (PDC0). This publication makes use of data products from the Two Micron All Sky Survey, which is a joint project of the University of Massachusetts and the Infrared Processing and Analysis Center/California Institute of Technology, funded by NASA and the U.S. NSF. 
Based in part on observations obtained with the Samuel Oschin Telescope 48-inch and the 60-inch Telescope at the Palomar Observatory as part of the Zwicky Transient Facility project. 2TF is supported by the U.S. NSF under grant AST-1440341 and a collaboration including LIGhect, IPAC, the Weizmann Institute of Science, the Oskar Klein Center at Stockholm University, the University of Maryland, the University of Washington, Deutsches Elektronen-Synchrotron and Humboldt University, Los Alamos National Laboratories, the TANG Consortium of Taiwan, the University of Wisconsin at Milwaukee, and Lawrence Berkeley National Laboratories. Operations are conducted by COO, IPAC, and U.W.The SED Machine is based upon work supported by the U.S. NSF under grant 1106171. Partially based on observations made with the Nordic Optical Telescope, owned in collaboration by the University of Turku and Aarhus University, and operated jointly by Aarhus University, the University of Turku and the University of Oslo, representing Denmark, Finland and Norway, the University of Iceland and Stockholm University at the Observatorio del Roque de los Muchachos, La Palma, Spain, of the Instituto de Astrofisica de Canarias. This work makes use of observations from the Las Cumbres Observatory network. The Las Cumbres Observatory team is supported by NSF grants AST-1911225 and AST-1911152. Some of the data presented herein were obtained at the W. M. Keck Observatory, which is operated as a scientific partnership among the California Institute of Technology, the University of California, and NASA; the observatory was made possible by the generous financial support of the W. M. Keck Foundation. KAIT, and its ongoing operation were made possible by donations from Sun Microsystems Inc., the Hewlett-Packard Company, AutoScope Corporation, the Lick Observatory, the U.S. NSF, the University of California, the Sylvia & Jim Katzman Foundation, and the TABASGO Foundation. A major upgrade of the Kast spectrograph on the Shane 3m telescope at Lick Observatory was made possible through generous gifts from William and Marina Kast as well as the Helsing-Simons Foundation. Research at Lick Observatory is partially supported by a generous gift from Google.
2305.14843
Meta-learning For Vision-and-language Cross-lingual Transfer
Current pre-trained vision-language models (PVLMs) achieve excellent performance on a range of multi-modal datasets. Recent work has aimed at building multilingual models, and a range of novel multilingual multi-modal datasets have been proposed. Current PVLMs typically perform poorly on these datasets when used for multi-modal zero-shot or few-shot cross-lingual transfer, especially for low-resource languages. To alleviate this problem, we propose a novel meta-learning fine-tuning framework. Our framework makes current PVLMs rapidly adaptive to new languages in vision-language scenarios by designing MAML in a cross-lingual multi-modal manner. Experiments show that our method boosts the performance of current state-of-the-art PVLMs in both zero-shot and few-shot cross-lingual transfer on a range of vision-language understanding tasks and datasets (XVNLI, xGQA, MaRVL, xFlickr&Co).
Hanxu Hu, Frank Keller
2023-05-24T07:51:42Z
http://arxiv.org/abs/2305.14843v2
# Meta-Learning For Vision-and-Language Cross-lingual Transfer ###### Abstract Current pre-trained vison-language models (PVLMs) achieve excellent performance on a range of multi-modal datasets. Recent work has aimed at building multilingual models, and a range of novel multilingual multi-modal datasets have been proposed. Current PVLMs typically perform poorly on these datasets when used for multi-modal zero-shot or few-shot cross-lingual transfer, especially for low-resource languages. To alleviate this problem, we propose a novel meta-learning fine-tuning framework. Our framework makes current PVLMs rapidly adaptive to new languages in vision-language scenarios by designing MAML in a cross-lingual multi-modal manner. Experiments show that our method boosts the performance of current state-of-the-art PVLMs in both zero-shot and few-shot cross-lingual transfer on a range of vision-language understanding tasks and datasets (XVNLI, xGQA, MaRVL, xFlicker&Co). ## 1 Introduction Multi-modal models focus on jointly learning representations from multiple modalities, such as vision and language. Many task require the integration information of vision and language, including image captioning (Vinyals et al., 2015), natural language visual reasoning (Zhou et al., 2017; Suhr et al., 2019), and cross-modal retrieval (Zhen et al., 2019). Multi-modal learning models the interaction between different modalities, allowing the resulting representations to be used in various multimedia applications to enhance human-computer interaction. Recently, pre-trained vision-language models (PVLMs) (Chen et al., 2020; Lu et al., 2019; Tan and Bansal, 2019) have achieved significant advances in multi-modal tasks. However, the data which PVLMs learn from is mostly for high-resource languages such as English. The resulting models reply on large amounts of training data for good performance and often the models acquire biases that mean they perform poorly for low-resource languages such as Indonesian and Swahili. To address this, several multilingual PVLMs (Zhou et al., 2021; Ni et al., 2021) have been proposed. A number of studies have used multilingual multi-modal datasets (Bugliarello et al., 2022; Liu et al., 2021), and Figure 1 shows two examples of these datasets. They have evaluated PVLMs and have demonstrated that these models do not perform well in low-resource cross-lingual transfer settings. Meta-learning can mitigate this issue. It is a learning approach that enables machine learning models to adapt quickly to new tasks by learning the learning algorithm itself. Model-Agnostic Meta-Learning (MAML; Finn et al. 2017) is one of the most widely used meta-learning frameworks. It is based on gradient-descent optimization, does not require multiple models or complex settings, and can be used for a range of models. In previous work (Verma et al., 2020; Finn et al., 2017; Nooralahzadeh et al., 2020), MAML-based methods have been shown to be useful in low-resource and cross-lingual transfer scenarios, including both few-shot and zero-shot cross-lingual tasks. However, prior work has only attempted to use MAML for **text-only tasks** for cross-lingual transfer(Nooralahzadeh et al., 2020). Inspired by Figure 1: Examples in IGLUE(Bugliarello et al., 2022) benchmark. The left example comes from MaRVL (Liu et al., 2021) dataset, and the right example comes from XVNLI dataset proposed in IGLUE. 
previous work on applying MAML to natural language tasks, this paper focuses on using MAML to address the limitations of current PVLMs on **vision-language tasks** in low-resource cross-lingual transfer. We propose a meta-learning framework specialized for vision-language cross-lingual transfer tasks. Within this framework, we propose a novel algorithm, which we call XVL-MAML, that combines a traditional supervised loss for learning downstream tasks with a contrastive loss for learning alignments between modalities inside a cross-lingual MAML optimization procedure. We show that this can lead to significant improvements in PVLM performance for low-resource target languages. We also find that using contrastive learning on its own within the MAML framework can improve PVLM performance in unsupervised settings. Specifically, we use a contrastive learning loss as the objective function in the MAML algorithm, based on the assumption that it allows the model to generalize the matching of text and images to new languages. Because multi-modal datasets consist of image-text pairs describing the same or similar objects, labels for downstream tasks are not needed. Exploiting this insight, we take an image-text pair from the original dataset as a positive sample and a randomly paired text and image as a negative sample.

In sum, our contributions are as follows. 1) We propose a novel MAML framework, XVL-MAML, specialized for vision-and-language cross-lingual transfer, which combines contrastive learning and standard supervised learning in the MAML algorithm. 2) We show that using contrastive learning alone in the MAML framework, i.e., in an unsupervised setting, is also beneficial. 3) We demonstrate that our framework boosts the performance of current PVLMs across 14 languages and 4 tasks in both **zero-shot learning** and **few-shot learning**. 4) We conduct ablation studies to verify the effect of contrastive learning in both supervised and unsupervised settings, and provide further analysis across languages and tasks.

## 2 Related Work

### Multilingual Vision-and-Language Methods and Tasks

Recent work has investigated vision-and-language cross-lingual transfer tasks. Elliott et al. (2016) proposed Multi30K, an image description dataset which contains descriptions in multiple languages. Previous methods (Gella et al., 2017; Rotman et al., 2018) focus on bridging languages through images, but they mainly target image-text retrieval and only consider high-resource languages such as English and German. Pfeiffer et al. (2022) built a multilingual visual question answering dataset called xGQA. Liu et al. (2021) proposed MaRVL, a multilingual grounded visual reasoning dataset, which follows the same setting as the natural language visual reasoning dataset NLVR2 (Suhr et al., 2019) but considers both cross-lingual transfer and domain shift between languages. Qiu et al. (2022) use machine translation to help multilingual multimodal learning. Several pre-trained models have recently been proposed for vision-and-language cross-lingual transfer. Ni et al. (2021) proposed M3P, a transformer-based pre-trained model that maps the same concepts in different modalities and languages into a common semantic space. Similar to M3P, Liu et al. (2021) extended UNITER (Chen et al., 2020), proposing mUNITER based on M-BERT (Devlin et al., 2019), and xUNITER based on XLM-R (Conneau et al., 2020). Zhou et al. 
(2021) proposed UC2, a model using a data augmentation method based on machine translation for cross-lingual cross-modal pre-training. Although pre-training methods have proven powerful across multiple tasks, they require large amounts of training data and show clear performance gap between English and other low-resource languages on the IGLUE benchmark (Bugliarello et al., 2022). ### Meta-Learning Meta-learning has been increasingly popular in the machine learning community. Whereas conventional machine learning methods learn by data points, meta-learning learns by tasks. Previous meta-learning work (Vinyals et al., 2016; Finn et al., 2017) focused on adapting to new tasks quickly. But meta-learning can be applied to other scenarios as well, including semi-supervised learning (Ren et al., 2018), multi-task learning (Yu et al., 2020), and domain generalization (Li et al., 2018). Prior work has also explored the effectiveness of meta-learning in NLP: Wang et al. (2021) applied meta-learning in semantic parsing for domain generalization based on MAML (Finn et al., 2017; Li et al., 2018). Obamuyide and Vlachos (2019) leveraged meta-learning under limited su pervision in a relation classification task. Recently, there have been some applications using MAML in cross-lingual transfer: Gu et al. (2018) and Nooralahzadeh et al. (2020) regard languages as tasks in their meta-learning framework. In contrast to these existing approaches, which only explore text-only scenarios, we are the first to utilize meta-learning for cross-lingual transfer in multi-modal tasks. ## 3 Meta-learning for Vision-and-Language Cross-lingual Transfer We first formally define the problem of Vision-and-Language Cross-lingual Transfer in the context of zero-shot and few-shot scenarios in Section 3.1. Then, we introduce our overall fine-tuning framework in Section 3.2. And we introduce the contrastive learning used in vision-and-language tasks in Section 3.3. Finally, we introduce our proposed XVL-MAML algorithm in Section 3.4. ### Problem Definition Following the multilingual vision-language IGLUE benchmark (Bugliarello et al., 2022), we formulate the problem of cross-lingual transfer learning in Vision-and-Language scenarios. For understanding tasks, the input is a pair of an image \(V\) and text \(U\) and the output \(Y\) is the result inferred by the multi-modal model. We can thus formulate this problem as computing \(P_{\theta}(Y|V,U)\), where \(\theta\) are the parameters of the PVLMs. During training, the image-text pairs comes from datasets \(D_{s}\) in a set of known languages, and our aim is to perform well on the datasets \(D_{t}\) with the same task in unseen low-resource languages. For zero-shot learning, the model fine-tuned or pre-trained on \(D_{s}\) is directly used in inference on \(D_{t}\) in unseen low-resource languages. For few-shot learning, after trained on \(D_{s}\), the model is continually fine-tuned on several shots of \(D_{t}\) in target languages. ### Overall Fine-tuning Framework For Cross-lingual Transfer Our pipeline of the proposed meta-learning fine-tuning framework can be divided into three parts: 1. Fine-tune the pre-trained vision-language models on data of down-stream task **in English** 2. Fine-tune the models on data in the **auxiliary language** (one language other than English) using our proposed XVL-MAML algorithm. 3. 
Evaluate the fine-tuned models on data in the **target languages** (languages other than English and the auxiliary language) In traditional cross-lingual transfer learning procedure described in Bugliarello et al. (2022), only part 1 and part 3 should be conducted. In part 3, if the setting is zero-shot, the model should be evaluated on data in target language directly, but if the setting is few-shot, the model should continue fine-tuning on few-shots of data in target languages then conduct evaluation. The difference between our framework and traditional procedure is that one additional fine-tuning step (part 2) will be conducted. We will describe it specifically in Section 3.4, but before that, we will firstly introduce the contrastive learning in Vision-and-Language tasks. ### Contrastive Learning in Vision-and-Language tasks Vision-and-Language Contrastive Learning loss proposed by Zhang et al. (2020) has been proven effective in medical image scenarios and is used as the pre-training objective function of CLIP (Radford et al., 2021). It can be regarded as an auxiliary task for representation learning, aiming to enable models gain better aligned multi-modal representation for downstream tasks. In the contrastive learning scheme, a batch of embeddings of images encoded by the model can be written as \(I=\{I_{1},...,I_{N}\}\), and a batch of embeddings of texts encoded by the model can be written as \(T=\{T_{1},...,T_{N}\}\), where \(N\) is the size of batch, and \((I_{i},T_{i})\) an image-text pair, and the paired image-text data describe the same or similar concepts, so we can assume they are **positive** examples, and non-paired data are negative examples. Then, the embeddings of images and texts are fed into two different linear transformation layers separately, which are noted as \(W_{1}\) and \(W_{2}\): \[U=I\cdot W_{1}^{\top} \tag{1}\] \[V=T\cdot W_{2}^{\top} \tag{2}\] Where \(U\) and \(V\) represent the batch of image-text pairs. Then the cosine similarity of each pairs can be computed as \(\langle U_{i},V_{j}\rangle=\frac{U_{i}^{\top}V_{j}}{\|U_{i}\|\|V_{j}\|}\). The objective is to maximize the similarity of matched image-text pairs and minimize others. So the image-text contrastive loss can be formulated as : \[\mathcal{L}_{i}^{1}=-\log\frac{\exp(\langle U_{i},V_{i}\rangle)}{\sum_{K=1}^{N }\exp(\langle U_{i},V_{k}\rangle)} \tag{3}\] Following Zhang et al. (2020), the contrastive loss should be symmetric for each modality, and the text-image contrastive loss as: \[\mathcal{L}_{i}^{2}=-\log\frac{\exp(\langle V_{i},U_{i}\rangle)}{\sum_{K=1}^{N} \exp(\langle V_{i},U_{k}\rangle)} \tag{4}\] Finally, the final contrastive loss of this batch of paired data is: \[\mathcal{L}_{CL}=\sum_{i=1}^{N}(\mathcal{L}_{i}^{1}+\mathcal{L}_{i}^{2}) \tag{5}\] Where \(\mathcal{L}_{CL}\) is the overall contrastive loss. When we minimize \(\mathcal{L}_{CL}\), we actually maximize the similarity of image-text pairs which are positive examples. ### Xvl-Maml Inspired by the effectiveness of MAML for quickly adapting to new tasks, we propose a novel MAML algorithm specialized for cross-lingual transfer in vision and language tasks, called XVL-MAML. Specifically, we first integrate traditional contrastive learning into the MAML algorithm, making it specialized for the visual-language task of cross-lingual transfer learning. Our intuition is that we can use MAML with a contrastive loss as its learning objective for quickly adapting vision-language alignment to new languages. 
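To make the preceding objective concrete, the following is a minimal PyTorch sketch of the symmetric contrastive loss of Eqs. (1)-(5), together with one XVL-MAML meta-update of the kind described in the next paragraphs, written with the `higher` library that the implementation section below reports using. It is an illustration only: the `losses_fn` helper, the default learning rates, and the single combined inner step (the paper adapts the task and contrastive losses separately before summing the meta-gradients) are assumptions rather than the authors' code.

```python
import torch
import torch.nn.functional as F
import higher  # library for differentiable inner-loop optimization


def contrastive_loss(img_emb, txt_emb, W1, W2):
    """Symmetric image-text contrastive loss, cf. Eqs. (1)-(5), averaged over the batch."""
    U = F.normalize(img_emb @ W1.t(), dim=-1)   # Eq. (1); unit norm, so U @ V.T gives cosine similarities
    V = F.normalize(txt_emb @ W2.t(), dim=-1)   # Eq. (2)
    sim = U @ V.t()                             # <U_i, V_j> for every image-text pair in the batch
    labels = torch.arange(sim.size(0), device=sim.device)
    # Eq. (3): image-to-text direction; Eq. (4): text-to-image direction; Eq. (5): their sum.
    return F.cross_entropy(sim, labels) + F.cross_entropy(sim.t(), labels)


def xvl_maml_step(model, meta_opt, support, query, losses_fn, inner_lr=5e-4, lam=1.0):
    """One meta-update on support/query batches drawn from the auxiliary language.

    losses_fn(model, batch) is an assumed helper returning (task_loss, contrastive_loss);
    setting the task loss to zero recovers the unsupervised variant described below.
    """
    inner_opt = torch.optim.SGD(model.parameters(), lr=inner_lr)
    with higher.innerloop_ctx(model, inner_opt, copy_initial_weights=False) as (fmodel, diffopt):
        task_s, cl_s = losses_fn(fmodel, support)
        diffopt.step(task_s + lam * cl_s)        # inner step on the support batch
        task_q, cl_q = losses_fn(fmodel, query)  # query losses at the adapted parameters
        (task_q + lam * cl_q).backward()         # meta-gradients flow back to the original model
    meta_opt.step()
    meta_opt.zero_grad()
```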
In this framework, the alignment between image and text in a specific language can be regarded as a task. Inspired by Nooralahzadeh et al. (2020), we use data from one auxiliary language for fine-tuning, but with the contrastive loss as the objective function in the MAML algorithm. Specifically, we sample a batch of support data \(\mathcal{B}_{s}\) and a batch of query data \(\mathcal{B}_{q}\) from the data in the auxiliary language \(A\) for each virtual task \(\mathcal{T}\). Assuming the parameters of the model are \(\theta\) and the contrastive loss on the support data is \(\mathcal{L}_{CL}(\theta)_{\mathcal{B}_{s}}\), the parameters of the model can be updated by one step of gradient descent: \[\theta^{{}^{\prime}}=\theta-\alpha\nabla_{\theta}\mathcal{L}_{CL}(\theta)_{ \mathcal{B}_{s}} \tag{6}\] Following the MAML algorithm, our final objective for this task is to minimize \(\mathcal{L}_{CL}(\theta^{{}^{\prime}})_{\mathcal{B}_{q}}\) on the query data \(\mathcal{B}_{q}\) using gradient descent: \[\theta\leftarrow\theta-\beta\nabla_{\theta}\mathcal{L}_{CL}(\theta-\alpha \nabla_{\theta}\mathcal{L}_{CL}(\theta)_{\mathcal{B}_{s}})_{\mathcal{B}_{q}} \tag{7}\] Optimized in this way, pre-trained vision-language models can quickly adapt to new tasks in other languages without using any downstream-task annotations in the auxiliary language, so we refer to this as the unsupervised setting. In the supervised setting, where downstream-task labels in the auxiliary language are available, we combine the downstream-task loss \(\mathcal{L}\) with the vision-language contrastive loss \(\mathcal{L}_{CL}\) by adding them together. During fine-tuning, Equation (7) is thus modified to: \[\theta\leftarrow\theta-\beta(\nabla_{\theta}\mathcal{L}(\theta^{{}^{\prime \prime}})_{\mathcal{B}_{q}}+\lambda\nabla_{\theta}\mathcal{L}_{CL}(\theta^{{} ^{\prime}})_{\mathcal{B}_{q}}) \tag{8}\] where \(\theta^{{}^{\prime\prime}}\) denotes the temporary parameters obtained after one step of optimization with the downstream-task loss \(\mathcal{L}\) on the support set \(\mathcal{B}_{s}\), \(\beta\) is the meta-learning rate, and \(\lambda\) is the scale factor of the contrastive loss. By adding the gradients of the downstream task and of contrastive learning in the meta-update, the model learns downstream tasks and vision-language alignment simultaneously for cross-lingual transfer.

## 4 Experiments

In this section, we introduce the base PVLMs we use for our multi-modal cross-lingual transfer tasks, as well as the datasets and metrics we use to evaluate our proposed method. We then describe how the experiments were conducted and discuss the results.

Figure 2: The proposed architecture of the model. The architecture consists of a contrastive learning module and a downstream tasks module, and both modules share the same parameters of a pre-trained vision-language model.

### Base models

In this paper, we choose xUNITER Liu et al. (2021) and UC2 Zhou et al. (2021) as our baseline models, as these two models use different pre-training methods. We then apply our method to both models to show that our proposed method is model-independent. xUNITER is a multilingual version of the UNITER model Chen et al. (2020). It has a similar architecture to UNITER and uses Faster-RCNN Ren et al. (2015) as a feature extractor for images. For a single image, 36 objects are extracted and represented as vectors \(\mathbf{v}=\{\mathbf{v}_{1},...,\mathbf{v}_{M}\}\) with \(M=36\), where \(\mathbf{v}\) represents the whole image. It is worth noting that the parameters of Faster-RCNN are frozen. 
Both features and positional embeddings pass through a fully connected layer, and then merge together. The image features are pooled and reshaped as vectors with the same dimension as text embeddings. UNITER has four pre-training methods: Masked Language Modelling (MLM), Masked Region Modelling (MRM), Image-Text Matching (ITM), and Word Region Alignment (WRA). xUNITER, except for the pre-training method mentioned above, it uses Masked Language Modelling for multilingual data and uses the same text embedder as XLM-R Conneau et al. (2020). Uc2uses a similar model architecture as UNITER, but different pre-training methods. The pre-training method of UC2 augments pre-training on English data by constructing multilingual corpus via machine translation and then uses this augmented data for pre-training. It also proposes the Visual Translation Language Modeling (VTLM) pre-training method, which uses the image as a pivot to learn the relationship between parallel texts in two languages and their corresponding images. ### Datasets and Metrics We use datasets corresponding to four tasks from the IGLUE benchmark Bugliarello et al. (2022), which includes xGQA Pfeiffer et al. (2022), MaRVL Liu et al. (2021), XVNLI, and xFlickr&Co Plummer et al. (2015); Lin et al. (2014). We visualize examples of MaRVL and XVNLI in 1. Following the setting in IGLUE, the evaluation metric is accuracy for all tasks except cross-modal retrieval, which uses Recall@1. ### Implementation and Hyperparameters We conduct all experiments based on the Visiloguistic Transformer Architectures framework VOLTA1 on 4 2080Ti GPUs. We implement the MAML algorithm based on Higher2 library. We use the AdamW Loshchilov and Hutter (2018) optimizer to fine-tune all models in PyTorch. Footnote 1: [https://github.com/e-bug/volta](https://github.com/e-bug/volta) Footnote 2: [https://github.com/facebookresearch/higher](https://github.com/facebookresearch/higher) Fine-tuning on English DataBefore evaluating models on the data in low-resource languages, we firstly fine-tune the pre-trained models on the English datasets NLVR2 Suhr et al. (2019), SNLI-VE Xie et al. (2019), Flickr30k Plummer et al. (2015), and GQA Hudson and Manning (2019) for MaRVL, XVNLI, xFlickr&Co, and xGQA, respectively, following the procedure of Bugliarello et al. (2022) and Liu et al. (2021). We follow the setting in IGLUE Bugliarello et al. (2022) and also \begin{table} \begin{tabular}{c|c|c c c c c} \hline \hline \multirow{2}{*}{Method} & \multirow{2}{*}{Model} & \multirow{2}{*}{XNVLI} & \multirow{2}{*}{xGQA} & \multirow{2}{*}{MaRVL} & \multicolumn{2}{c}{xFlickr\&Co} \\ \cline{3-5} & & & & & IR & TR \\ \hline \multirow{4}{*}{Baseline} & mUNITER & 53.7 & 10.0 & 53.7 & 8.1 & 8.9 \\ & xUNITER & 59.0 & 20.8 & 56.0 & 13.8 & 12.5 \\ & UC2 & 62.5 & 29.0 & 56.4 & 19.7 & 17.0 \\ & M3P & 58.2 & 28.2 & 56.0 & 12.9 & 11.9 \\ \hline \multirow{2}{*}{Ours} & xUNITER & **63.0 (+4.0)** & 22.5 (+1.7) & **59.4 (+4.4)** & 16.3 (+2.5) & 14.2 (+1.7) \\ & UC2 & **64.4 (+1.9)** & **29.9 (+0.9)** & 57.0 (+0.6) & **21.3 (+1.6)** & **18.7 (+1.7)** \\ \hline \hline \end{tabular} \end{table} Table 1: Zero-shot performance (accuracy) of four baseline models only fine-tuned by English data (Baseline) and two models fine-tuned by our meta-learning method (Ours) on four IGLUE datasets Bugliarello et al. (2022). used the IGLUE hyper-parameters for each task when fine-tuning. 
We save the parameters of models in each epoch, then picking the best performing model for each task as the initialized parameters \(\theta\) for meta-learning fine-tuning stage. Fine-tuning with Meta-learningFor the X-MAML and Contrastive-MAML algorithms, both the size of the support set and the query set are 64. We explore learning rates \(5\times 10^{-5}\), \(1\times 10^{-5}\), \(5\times 10^{-6}\), \(1\times 10^{-6}\) for both UC2 and xUNITER, and find the best learning rate is \(5\times 10^{-6}\) for both the normal fine-tuning stage and the meta-update of MAML. For the inner learning rate of X-MAML and Contrastive-MAML, we explore learning rates \(5\times 10^{-6}\), \(5\times 10^{-5}\), \(5\times 10^{-4}\) and \(5\times 10^{-3}\), and find that \(5\times 10^{-4}\) is the best inner learning rate. For the proposed meta-learning framework, we set the number of iterations to 25, 50, 100, 150, 200, 300, 400, 500, 1000 (for each iterations, we sample a batch of data as support set and a batch as query set). We find that models overfit after 300 iterations in most situations, so we set the number of iterations as 400 for all our experiments, and evaluate the performance of models for each 25 iterations to guarantee that we can pick the model with best performance of each setting for evaluation. ## 5 Results and Discussion ### Zero-shot We report the results of the baseline models, and the results for fine-tuning them using our meta-learning framework, in Table 1. In our setting, baseline model means the PVLMs are only fine-tuned on the English datasets. For simplicity, we only report the averaged results of all combinations of target languages and auxiliary languages for each model and task. Our proposed meta-learning framework is the combination of Contrastive-MAML and X-MAML. We set the value of \(\lambda\) in Equation (8) to \(1\times 10^{-3}\) for xUNITER and \(0.2\) for UC2. The results in the Table 1 indicate the effectiveness of our meta-learning framework, and show that our method can boost the zero-shot performance of UC2 and XUNITER on all four datasets in the IGLUE benchmark. Note that to Table 1 shows average performance across all languages - performance for individual languages can vary, and is shown in detail in Appendix A, Table 5. We also shows the differences of improvements by using different auxiliary languages for different target languages in Figure 4. ### Few-shot We also conduct few-shot experiments following the setting in IGLUE (Bugliarello et al., 2022) for both xUNITER and UC2 on XVNLI and MaRVL. The results are shown in Figure 3, where the horizontal axis represents the number of shots, and the vertical axis represents the accuracy score. The leftmost point in the horizontal axis is zero, which represents the performance in zero-shot setups. The blue points and lines show the performance of models fine-tuned by our method. The yellow lines and points represent the performance of the baseline. It is clear that in all these four figures, our method achieves better performance in all shots. And it is worth noting that although there is a slight increase from the performance of zero-shot to one-shot, our proposed method without seeing any data in the target languages outperforms the baselines in the few-shot setting, except for UC2 on MaRVL. In other words, only a few instances of training data in target languages is not enough to eliminate the advantage of our method. 
It further demonstrates that while our method requires training data in one auxiliary language, there is no need for few-shot data in target languages. ### Ablation Study and Further Analysis In this section, we conduct a series of ablation studies, which investigate the effect of each part of our proposed meta-learning framework. We have performed five runs for each settings and reported the average and standard error to show the significance. The Effect of Contrastive LearningWe investigate the effect of contrastive learning in our meta-learning fine-tuning framework. Specifically, we fine-tune the model using contrastive learning loss in the MAML algorithm, just as the un-supervised setting described in the Section 3.2, where the labels of auxiliary languages is not available. We evaluate the performance of UC2 and xUNITER on XVNLI dataset in the un-supervised setting in Table 3. It indicates using contrastive learning solely in the MAML algorithm can also gain improvements of performance. It provides evidence for the idea of contrastive learning can enable models learn alignment of modalities in cross-lingual transfer, and hence gain better representations. we also compare the performance of the model in the supervised setting where labels of data in auxiliary language is available, so in XVL-MAML algorithm, both contrastive loss and down-stream task loss are used. Then, we remove the contrastive learning loss in the XVMAML algorithm, only keeping the down-stream task loss. We compare the performance of these two setting in Table 4 to show the effectiveness of contrastive learning loss in XVL-MAML algorithm in the supervised setting. In Table 4, the first row is XVL-MAML algorithm without using contrastive learning loss, which means only using down-stream task loss when fine-tuning, and the second row it the normal XVL-MAML which uses both contrastive loss and down-stream task loss. and language families. Moreover, surprisingly, our method can significantly boost the performance of xUNITER even in challenging MaRVL dataset which across 5 diverse language families and cultures, for improving 4.4 point in accuracy. Diverse auxiliary and target languagesWe also investigate how different auxiliary languages can effect performance in different target languages. Specifically, we take the MaRVL dataset as an example and report the results in Table 2. Then we visualize the improvements of xUNITER by using different auxiliary languages for different target languages on MaRVL and XVNLI datasets in Figure 4. The difference of improvements in MaRVL (which range from 0.44 to 5.4)is larger than in XVNLI (which range from 2.8 to 6.4), and one possible reason is that the language families of MaRVL are more diverse than of XVNLI. ### Example Predictions We show some examples of inputs and predictions for baseline models and the models fine-tuned by our method in Appendix A. In Figures 5 and 6, we use xUNITER to predict the Chinese part of the MaRVL dataset. We have selected three examples where baseline predicted wrongly but our method predicted correctly, and two examples where both our method and baseline method predicted correctly. In the first three examples, the label was True but the baseline predicted False. We find that the same concepts have different visual features in the left and the right image for each example, which makes it more difficult for models to identify. For instance, in the first example, the dining rooms in the left and right images look different. 
In the last two examples, however, the concepts described in the text do not have diverse or obscure visual features when they appear in the images. Therefore, based on these cases, we can surmise that the meta-learning framework makes the model more adaptive for diverse information, and have better generalization capabilities of diverse mapping between texts and images. ## 6 Conclusions In this paper, we focus on mitigating the problem of poor performance of current PVLMs in vision-language cross-lingual transfer. We proposed a novel MAML framework to make pre-trained models quickly adaptive for new languages in vision-and-language scenarios. Our meta-learning framework combine contrastive learning and downstream task supervised learning together. We implement and verify the effectiveness of the our algorithm in both supervised and un-supervised settings. The key strength of our approach is that we leverage contrastive learning in the MAML procedure so that the models can quickly learn aligning representations from different modalities and adapt it for unseen languages. Experimental results demonstrate that our proposed meta-learning framework significantly improves the performance of models in vision-and-language cross-lingual transfer both in zero-shot and few-shot setups. We applied our method to two state-of-the-art PVLMs, UC2 and xUNITER, and verified its effectiveness on four datasets in the IGLUE benchmark in 17 languages. We also conducted ablation study to explore the effect of contrastive learning in supervised and un-supervised settings, and made further analysis for different languages and tasks. Figure 4: Improvements of zero-shot performance by fine-tuning xUNITER on different auxiliary languages then evaluating on different target languages using our proposed framework compared with baseline. The left heatmap is on MaRVL, and the right is on XVNLI. Rows correspond to auxiliary and columns correspond to target languages ### Limitations Our proposed method applies contrastive learning to samples of image-text pairs. The alignments induced in this fashion work best if there is a concept or an object that is both depicted in the image and referred to in the sentence. If this is not the case, then the method may end up learning alignments not so much accurately; this includes cases where the image or the sentence contain multiple objects or concepts, not all of which can be aligned. To address this limitation, future work should explore how to construct better positive and negative samples and how to enable learning at a more fine-grained level. ## Ethics Statement The use of IGLUE benchmark in our paper is consistent with their intended use. We have checked the datasets we use don't contain any offensive content and identifiers by sample and visualize examples in datasets. There are 14 languages in the datasets we used, we list them in Table 5, it can also be found in IGLUE benchmark [2]. The detail information of datasets is described in [2].
2310.10893
Active Learning Framework for Cost-Effective TCR-Epitope Binding Affinity Prediction
T cell receptors (TCRs) are critical components of adaptive immune systems, responsible for responding to threats by recognizing epitope sequences presented on host cell surface. Computational prediction of binding affinity between TCRs and epitope sequences using machine/deep learning has attracted intense attention recently. However, its success is hindered by the lack of large collections of annotated TCR-epitope pairs. Annotating their binding affinity requires expensive and time-consuming wet-lab evaluation. To reduce annotation cost, we present ActiveTCR, a framework that incorporates active learning and TCR-epitope binding affinity prediction models. Starting with a small set of labeled training pairs, ActiveTCR iteratively searches for unlabeled TCR-epitope pairs that are "worth" annotating. It aims to maximize performance gains while minimizing the cost of annotation. We compared four query strategies with a random sampling baseline and demonstrated that ActiveTCR reduces annotation costs by approximately 40%. Furthermore, we showed that providing ground truth labels of TCR-epitope pairs to query strategies can help identify and reduce more than 40% redundancy among already annotated pairs without compromising model performance, enabling users to train equally powerful prediction models with less training data. Our work is the first systematic investigation of data optimization for TCR-epitope binding affinity prediction.
Pengfei Zhang, Seojin Bang, Heewook Lee
2023-10-16T23:53:07Z
http://arxiv.org/abs/2310.10893v2
# Active Learning Framework for Cost-Effective TCR-Epitope Binding Affinity Prediction ###### Abstract T cell receptors (TCRs) are critical components of adaptive immune systems, responsible for responding to threats by recognizing epitope sequences presented on host cell surface. Computational prediction of binding affinity between TCRs and epitope sequences using machine/deep learning has attracted intense attention recently. However, its success is hindered by the lack of large collections of annotated TCR-epitope pairs. Annotating their binding affinity requires expensive and time-consuming wet-lab evaluation. To reduce annotation cost, we present ActiveTCR, a framework that incorporates active learning and TCR-epitope binding affinity prediction models. Starting with a small set of labeled training pairs, ActiveTCR iteratively searches for unlabeled TCR-epitope pairs that are "worthy" for annotation. It aims to maximize performance gains while minimizing the cost of annotation. We compared four query strategies with a random sampling baseline and demonstrated that ActiveTCR reduces annotation costs by approximately 40%. Furthermore, we showed that providing ground truth labels of TCR-epitope pairs to query strategies can help identify and reduce more than 40% redundancy among already annotated pairs without compromising model performance, enabling users to train equally powerful prediction models with less training data. Our work is the first systematic investigation of data optimization for TCR-epitope binding affinity prediction. Active Learning, TCR-epitope Binding Affinity. ## I Introduction T cell receptors (TCRs) play a pivotal role in adaptive immune systems by recognizing epitope -- a part of antigen-presented on cell surface via major histocompatibility complex and initiating potential immune responses to safeguard the host [1]. Understanding the binding affinity between TCR and epitope sequences is fundamental for developing immunotherapy strategies, where T cells are engineered/designed and subsequently assessed for their binding results to target epitopes in a wet-lab setting [2]. However, such assessments are slow and expensive. To overcome these challenges, computational methods for predicting TCR-epitope binding affinity have emerged to streamline the assessment process and minimize expenses. Many machine learning and deep learning models have been developed to improve the prediction performance of TCR-epitope binding affinity [3, 4, 5, 6]. These models typically take two sequences, a TCR and an epitope, as input and predict the binding affinity between them. Despite emergence of these machine learning models, there has been limited attention given to optimizing the underlying data. Most prediction models have primarily focused on exploring the impact of various neural network architectures, neglecting the crucial role of data in achieving optimal performance. As databases of annotated TCR-epitope pairs continue to grow [7, 8, 9], two important research questions arise. First, _how can we reduce the cost of annotating new TCR-epitope pairs in the future?_ Reducing the cost of each individual wet-lab experiment is challenging due to the inherent expenses associated with these processes. However, more informed decisions can be made by selectively annotating the "most important" or "most useful" pairs for prediction models. 
By intelligently choosing pairs for annotation, we can optimize the allocation of resources, ultimately leading to more efficient use of annotation budgets for TCR-epitope pairs. Second, _how can we optimize the use of those already annotated pairs to train powerful prediction models with less training data without compromising model performance?_ This question arises because of the presence of identical or similar TCR-epitope pairs in the data. Some pairs, although not identical, can be semantically similar in the latent feature space and thus contribute minimally to model training. Furthermore, identical pairs, i.e., TCR-epitope pairs with matching sequences, could cause model overfitting and slow down the training process. A noticeable overlap among these pairs has been observed in most of the currently available databases. For example, roughly 49.41% of TCR-epitope pairs with binding scores greater than zero in VDJdb [8] are identical to pairs in IEDB [9], leading to unnecessary consumption of computational resources. These similar pairs are considered less important or less beneficial to the model. Therefore, our objective is to identify and reduce the redundancy inherent among these already annotated TCR-epitope pairs. By doing so, users can train equally powerful prediction models with less training data. In this study, we present ActiveTCR, a novel framework designed to address both research questions: reducing annotation cost for future unlabeled pairs and reducing data redundancy among already labeled data in the context of TCR-epitope binding affinity prediction. ActiveTCR employs an active learning approach, where a prediction model is initially trained on a small subset of the training data; the model then continuously selects the most informative samples from the unlabeled TCR-epitope pool for annotation and is retrained until satisfactory performance is achieved or no more samples are available. The crux of our framework lies in querying the "most important" pairs for the TCR-epitope binding affinity prediction model. We utilized entropy as a heuristic measure of the "importance" of each pair to the prediction model. To investigate the effectiveness of ActiveTCR, we explored five distinct query strategies, including a random sampling baseline, three variants of entropy sampling, and a misclassification sampling strategy for reducing redundancy. We evaluated ActiveTCR in two scenarios. Our experimental results demonstrated that ActiveTCR reduced the annotation cost for future unlabeled TCR-epitope pairs by approximately 40% and reduced over 40% of the data redundancy among those already annotated pairs. ActiveTCR provides a promising solution to the challenges of reducing the annotation cost for future data and reducing redundancy among already annotated data for TCR-epitope binding affinity prediction models. ## II Related Work ### _Computational Approaches for Binding Affinity Prediction_ In order to predict binding affinity between TCR and epitope sequences, researchers have devoted substantial effort to machine learning and deep learning techniques. Two primary approaches explored to improve prediction performance are 1) designing the neural network architecture of prediction models and 2) developing amino acid embedding models for TCR and epitope sequences. Early models focused on the neural network structures of prediction models while using a simple embedding matrix, BLOSUM62 [10], to map amino acid residues to continuous numeric vector representations.
For example, NetTCR [4] used a series of convolutional layers to learn features of input TCR and epitope sequences whereas ERGO2 [3] proposed an LSTM-based approach to learn sequential information of amino acid sequences. Later, ATM-TCR [5] utilized multi-head attention modules to learn contextualized features of amino acids. While these models achieved fair prediction performance on known epitopes (AUCs of 72.0-77.3%), they performed poorly in correctly identifying binding TCRs for novel (unseen) epitopes that were not observed during training (AUCs of 47.0-54.2%). To develop models that can better generalize to unseen epitopes, researchers incorporated binding affinity prediction models with advanced amino acid embedding techniques. Several such models have been proposed to learn representations from a large corpus of TCR sequences, which can be used as feature extractors for the input sequences of TCR-epitope binding affinity prediction models. TCR-BERT [11] learned amino acid embeddings using a masked language model, with its architecture inspired by the well-known language model BERT [12]. catELMo [13], a model inspired by ELMo [14], learned amino acid embeddings by predicting the next token based on its previous tokens processed by a stack of bi-directional LSTM layers. Embeddings from catELMo led to significant performance gains for unseen epitope sequences [6, 13]. It was also demonstrated that a prediction model using catELMo embeddings with only 10% of the training data outperformed one using BLOSUM62 embedding with 100% of data, indicating potential in reducing the annotation cost of TCR-epitope pairs. ### _Active Learning_ Active learning is a machine learning algorithm that interactively requests annotations of new unlabeled training samples. Unlike passive learning, where a model is trained on a fixed set of labeled data, active learning incorporates human annotators (oracles) to iteratively label the most informative samples. Various query strategies have been developed to identify such informative samples for annotation and inclusion in the training set. Uncertainty sampling [15] selects the samples that a model is most uncertain about. The intuition is that the samples that confuse the model the most will benefit its learning the most. A common example for selecting high uncertainty samples is entropy-based strategy [16, 17]. It selects samples for which the predicted probability distribution over all labels is uniformly spread out. Diversity sampling measures the prediction diversity of unannotated samples, using this diversity as an indicator of informativeness [18, 19, 20]. By updating the training set, active learning can improve model performance while reducing annotation costs. This approach is particularly useful when labeled data is costly and scarce but unlabeled data is abundant. Active learning has proven effective in a variety of fields, including natural language processing [21], computer vision [22, 23], and medical image analysis [24]. It has also been applied to improve molecular-level optimizations such as prediction of protein-protein interaction [25] and target-drug interaction [26, 27]. To the best of our knowledge, no prior work has explored active learning approaches in the context of TCR-epitope binding affinity prediction. ## III Data This section describes how we prepared TCR-epitope pairs and divided them into training and testing sets. 
As the third complementarity-determining region (CDR3) of the TCR \(\beta\) chain is the most critical component that interacts with epitope sequences [28], we made use of the CDR3 of the TCR \(\beta\) chain and refer to it as the TCR unless otherwise specified. ### _TCR-epitope Pairs_ #### III-A1 Positive Pairs We sourced human TCR-epitope pairs from three publicly available databases: VDJdb [8], McPAS [7], and IEDB [9]. These pairs are clinically known to bind to each other and were used as our positive data. We followed the same pre-processing procedure as ATM-TCR [5]. The data was filtered to only include pairs of human MHC class I epitopes and TCR\(\beta\) sequences. We kept pairs with linear epitope sequences and discarded pairs containing wildcards such as * or X in their sequences. For VDJdb [8], we only included pairs with a confidence binding score greater than zero. We also eliminated any duplicated TCR-epitope pairs, resulting in 150,008 unique TCR-epitope pairs, with 982 unique epitopes and 140,675 unique TCRs. #### III-A2 Negative Pairs Given the scarcity of clinically confirmed negative TCR-epitope pairs [4], generating negative pairs is a standard practice in the field of TCR-epitope binding prediction [4, 5, 6, 13]. In order to synthesize negative pairs, we adopted the strategy of pairing existing epitopes with newly sampled TCRs from repertoires, an approach that has been substantiated in prior studies [4, 6, 13]. The rationale behind this approach was to simulate training pairs in which TCRs that are prevalent in healthy individuals do not bind to disease epitopes, thereby providing a comprehensive contrast to the positive pairs in our machine learning model. We first randomly sampled TCRs from 20 million TCR sequences of healthy control repertoires of ImmunoSEQ [29]. We then replaced the original TCRs of the positive TCR-epitope pairs with TCRs of the healthy repertoires. This resulted in 150,008 unique negative TCR-epitope pairs. ### _Training and Testing Set Splits_ It is of interest to measure binding affinity prediction performance on epitopes and TCRs that have never been observed before. A random split was not suitable for measuring generalization performance, particularly for unseen epitopes. Given that 99.97% of epitopes occur multiple times in our dataset, it is highly likely that an epitope would be present in both the training and testing sets. Although 96.8% of TCRs in our dataset are unique, the same issue may still occur for TCR sequences. To accurately assess the prediction performance on novel (unseen) TCRs and epitopes, we used the two data partition approaches in ATM-TCR [5]. The epitope split shared no common epitopes between training and testing sets, allowing us to evaluate the model's generalizability on novel epitope sequences. Similarly, the TCR split shared no common TCRs between training and testing sets, enabling us to evaluate the model's generalizability on novel TCR sequences. For both splits, we used 80% of the entire data as training data and the remaining 20% as testing data. ## IV Methods ### ActiveTCR ActiveTCR is an active learning framework designed for rapid and cost-effective development of TCR-epitope binding affinity prediction models. The crux of ActiveTCR lies in its ability to interactively query a large pool of candidate pairs to iteratively update the training set \(D\) by adding the most informative TCR-epitope pairs, thereby learning improved prediction models \(M\).
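To make this query-annotate-retrain loop concrete before describing its two use cases, the following is a minimal Python sketch of the procedure; the callables train_model, query_strategy, and annotate and the fixed iteration budget are illustrative placeholders for exposition, not the released ActiveTCR implementation.

```python
# Minimal sketch of the ActiveTCR loop (illustrative names, not the released code).
def active_tcr(initial_labeled, pool, train_model, query_strategy, annotate,
               query_size=24_006, max_iterations=9):
    """Iteratively query informative TCR-epitope pairs and retrain the learner M."""
    labeled = list(initial_labeled)        # training set D
    pool = list(pool)                      # unlabeled pool P_u (or labeled pool P_l)
    model = train_model(labeled)           # initial learner M
    for _ in range(max_iterations):
        if not pool:
            break
        # Score every candidate pair and keep the ones deemed most informative.
        scored = sorted(pool, key=lambda pair: query_strategy(model, pair), reverse=True)
        queried, pool = scored[:query_size], scored[query_size:]
        # Wet-lab annotation in use case 1; a ground-truth lookup in use case 2.
        labeled.extend(annotate(queried))
        model = train_model(labeled)       # retrain on the enlarged training set
    return model, labeled
```

Here query_size defaults to the 24,006 pairs queried per iteration in our experiments (Section V); the annotate callback is the only part that changes between the two use cases described next.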
ActiveTCR has two use cases in the context of TCR-epitope binding affinity prediction: 1) reducing annotation costs of unlabeled TCR-epitope pairs and 2) reducing redundancy among already labeled pairs. The goal of the first use case is to maximize prediction performance gains while minimizing the associated annotation costs. As shown in Fig. 1a, ActiveTCR iteratively queries the most informative TCR-epitope pairs to be annotated by wet-lab efforts. The process begins by training a TCR-epitope binding affinity prediction model \(M\), referred to as a learner, on an initial training set of labeled TCR-epitope pairs \(D\). The learner \(M\) then interactively queries unlabeled TCR-epitope pairs from the pool \(P_{u}\). In order to do so, it predicts binding affinities of the unlabeled pairs and selects the most informative pairs by a query strategy \(Q\). Wet-lab annotators then provide the ground truth binding labels for the queried TCR-epitope pairs \(L\). The newly labeled pairs are added to the training set \(D\), and the model \(M\) is retrained on the updated training set \(D\). Through this feedback loop, ActiveTCR continually updates the model \(M\) until satisfactory prediction performance is achieved. The detailed procedure is provided in Alg. 1. The goal of the second use case is to develop a data-efficient model that can achieve the same or better prediction performance with fewer TCR-epitope training pairs. As shown in Fig. 1b, ActiveTCR iteratively queries more labeled pairs \(L\) and adds them to the training set \(D\) to improve the prediction model \(M\). Redundant pairs may include identical data entries found in different databases or semantically similar entries that do not significantly contribute to the model performance. Since the ground truth label is accessible in the pool \(P_{l}\), we can leverage it to query informative pairs and discard redundant pairs. Wet-lab annotators are no longer required in this setting. The detailed procedure is provided in Alg. 1. ### _Query Strategies_ Global and local entropy-based sampling. Entropy-based sampling [30] queries the TCR-epitope pairs whose binding status the model is least certain about. Such pairs are typically located near the classification boundary, making them more informative about the shape of the boundary than pairs located farther from it. Entropy [31] is an intuitive metric for assessing the uncertainty of machine learning model predictions. A low entropy score of a sample indicates that the model is more confident in its prediction result, while a high entropy score suggests less confidence. The entropy score of model \(M\)'s prediction for a pair of TCR and epitope (\(t_{i},e_{i}\)) is defined as follows: \[Q_{EN}(t_{i},e_{i};M)=H(\hat{y}_{i})=-(\hat{y}_{i}\log_{2}\hat{y}_{i}+(1-\hat{y}_{i})\log_{2}(1-\hat{y}_{i})) \tag{1}\] where \(\hat{y}_{i}\) is model \(M\)'s prediction output for the TCR-epitope pair (\(t_{i},e_{i}\)). At each iteration, the prediction model \(M\) predicts the binding affinity \(\hat{y}\) for pairs in \(P_{u}\) (or \(P_{l}\)) and selects those with the highest entropy scores to construct the new training set \(D\). We used two entropy-based query strategies: _global entropy sampling_ and _local entropy sampling_. Global entropy sampling queries pairs from the entire pool \(P_{u}\) (or \(P_{l}\)) at each iteration, while local entropy sampling queries pairs from a random subset of the pool, \(P_{us}\subset P_{u}\) (or \(P_{ls}\subset P_{l}\)).
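To make the two variants concrete, the sketch below scores candidate pairs with the binary entropy of Eq. (1) and keeps the \(k\) highest-scoring ones, drawing either from the full pool (global) or from a random subset of size \(2k\) (local); the NumPy helpers and the model_predict callable are assumptions for illustration only.

```python
import numpy as np

def entropy_scores(probs, eps=1e-12):
    """Binary entropy of predicted binding probabilities (Eq. 1)."""
    p = np.clip(probs, eps, 1.0 - eps)
    return -(p * np.log2(p) + (1.0 - p) * np.log2(1.0 - p))

def entropy_query(model_predict, pool, k, local=False, rng=None):
    """Return the k most uncertain pairs, from the whole pool (global entropy
    sampling) or from a random subset of size 2k (local entropy sampling)."""
    rng = np.random.default_rng() if rng is None else rng
    candidates = pool
    if local:
        idx = rng.choice(len(pool), size=min(2 * k, len(pool)), replace=False)
        candidates = [pool[i] for i in idx]
    probs = np.asarray([model_predict(tcr, epitope) for tcr, epitope in candidates])
    order = np.argsort(-entropy_scores(probs))   # most uncertain first
    return [candidates[i] for i in order[:k]]
```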
In our experiment, we set the size of \(P_{us}\) (or \(P_{ls}\)) to be twice the size of the query set. For example, if 1,000 query pairs are selected at each iteration, the size of \(P_{us}\) (or \(P_{ls}\)) would be 2,000. Global entropy sampling is more reliable than local entropy sampling as it calculates entropy scores across all unlabeled pairs. Meanwhile, local entropy sampling is faster as it only computes entropy for a subset of the pooled pairs. Fig. 1: ActiveTCR framework for TCR-epitope binding affinity prediction. It can be used **a)** to reduce annotation cost for unlabeled TCR-epitope pairs, and **b)** to reduce data redundancy among labeled TCR-epitope pairs. It interactively queries the most informative TCR-epitope pairs to iteratively construct a training set and to update a binding affinity prediction model. Query-by-dropout-committee sampling. Query-by-committee [15] selects samples based on the level of disagreement among multiple prediction models. Each model, known as a committee member, learns its own decision boundary and predicts the binding affinity of a TCR-epitope pair. If the committee members largely disagree on their predictions, the pair is considered uncertain and added to the training set \(D\). This strategy tends to be slower and more computationally demanding because it requires training multiple prediction models to serve as committee members. To address these limitations, researchers have used dropout at inference time to measure committee disagreement in active learning frameworks [19, 20]. This technique enables dropout layers during the prediction phase, introducing variation into the model's prediction outputs by randomly dropping out neurons. As a result, the same model can produce different binding affinity scores for a pair of TCR and epitope without the need to train multiple committee models. This approach, referred to as a dropout committee, is commonly used to measure the uncertainty of a neural network model with reduced computational burden [32]. We quantified the disagreement score for a TCR-epitope pair as the sum of Kullback-Leibler divergences [33] between each test-time prediction and their average: \[Q_{C}(t_{i},e_{i};M)=\sum_{j}D_{KL}\left(\widehat{Y}_{i,j}\ ||\ \overline{Y}_{i}\right)=\sum_{j}\sum_{k\in\{0,1\}}\hat{y}_{i,j,k}\log_{2}\left(\frac{\hat{y}_{i,j,k}}{\overline{y}_{i,k}}\right) \tag{2}\] where \(k\) ranges over the possible prediction outcomes (\(0\) for non-binding and \(1\) for binding in our case), \(\hat{y}_{i,j,k}\) is the binding affinity prediction score of \((t_{i},e_{i})\) made by the \(j\)-th committee member, and \(\overline{y}_{i,k}\) is the average of all committee members' prediction scores. In our experiment, we set the number of dropout committee members to 10. At each iteration, we applied _dropout committee sampling_ to a randomly selected subset of the pooled pairs \(P_{us}\) (or \(P_{ls}\)) for computational efficiency. To investigate whether this query strategy can enhance the performance of entropy-based queries, we added it to the local entropy strategy, defining the measurement as follows: \[Q_{EN,C}(t_{i},e_{i};M)=\omega Q_{EN}(t_{i},e_{i};M)+(1-\omega)Q_{C}(t_{i},e_{i};M) \tag{3}\] where \(\omega\) is a weight between local entropy sampling and dropout committee sampling. For simplicity, we assumed the two terms are equally important and assigned them equal weights in our experiments.
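The disagreement score of Eq. (2) and its combination with local entropy in Eq. (3) can be sketched as follows, assuming a stochastic_predict callable that returns a dropout-perturbed binding probability on each call; these helper names are illustrative and not part of the released code.

```python
import numpy as np

def committee_disagreement(stochastic_predict, tcr, epitope, n_members=10, eps=1e-12):
    """Sum of KL divergences between each dropout member's prediction and the
    committee average (Eq. 2), over the binary non-binding/binding outcomes."""
    p = np.clip([stochastic_predict(tcr, epitope) for _ in range(n_members)], eps, 1 - eps)
    members = np.stack([1.0 - p, p], axis=1)   # rows: members, cols: [P(k=0), P(k=1)]
    mean = members.mean(axis=0)                # committee average distribution
    return float(np.sum(members * np.log2(members / mean)))

def combined_score(entropy_term, disagreement_term, weight=0.5):
    """Weighted mix of local entropy and committee disagreement (Eq. 3)."""
    return weight * entropy_term + (1.0 - weight) * disagreement_term
```

With weight=0.5 this reduces to the equal weighting used in our experiments.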
Misclassification sampling. We designed a simple yet effective query strategy called _misclassification sampling_. It selects samples from the labeled TCR-epitope pool \(P_{l}\) that are misclassified by the model \(M\) (incorrectly predicted as positive or incorrectly predicted as negative) with a large difference between the true label and the predicted label. The intuition is that samples that are misclassified by a large margin can potentially improve model performance by providing more information about the model's weaknesses. The misclassification sampling score of a TCR-epitope pair \((t_{i},e_{i})\) is defined as follows: \[MC(t_{i},e_{i},y_{i};M)=|y_{i}-\hat{y}_{i}|\;\;\text{for}\;i\in P_{l} \tag{4}\] where \(y_{i}\) is the ground truth binary label and \(\hat{y}_{i}=M(t_{i},e_{i})\) is the binding affinity prediction score between 0 and 1. Note that this approach is only applicable to the use case of reducing redundancy among annotated TCR-epitope pairs, as it requires ground truth labels for the pairs in \(P_{l}\) to determine misclassification. ### _Binding Affinity Prediction Model_ Our learner \(M\) is a TCR-epitope binding affinity prediction model that predicts the likelihood of binding between a TCR and an epitope. It takes two sequence inputs (a TCR and an epitope) and outputs the probability that the TCR will bind to the epitope. In our active learning framework, users have the flexibility to select the binding affinity prediction model that best suits their needs, for instance, ATM-TCR [5], ERGO2 [3], NetTCR [4], or PiTE [6]. We used the 3-linear-layered model from catELMo [13] as our prediction model \(M\) because it performed on par with the state-of-the-art PiTE while being faster and more lightweight [6]. It is composed of two main steps: amino acid embedding and binding affinity prediction. In the first step, the TCR and epitope sequences were represented as numeric vectors of size 1,024 using catELMo, a state-of-the-art amino acid embedding model. In the second step, a simple prediction model with three linearly connected layers was trained to predict the binding affinity between the embedded TCR and epitope sequences. The embedded vectors were used as input to the model. Each was first processed through a linear layer with 2,048 neurons followed by a Sigmoid Linear Unit (SiLU) activation function [34], batch normalization [35], and dropout with rate 0.3 [36]. The two processed sequences were then concatenated and fed to a linear layer with 1,024 neurons, followed by a SiLU activation function, batch normalization, and dropout with rate 0.3. Finally, the last linear layer, with a single neuron followed by a sigmoid activation function, produced a binding affinity score between 0 and 1. Binary cross-entropy loss and the Adam optimizer [37] were used to train the model. The batch size was 32 and the learning rate was 0.001. Training continued for 200 epochs or was terminated early if the validation loss did not decrease for 30 consecutive epochs. ## V Results on Real Data ### _Study Design_ In this section, we experimentally assess how ActiveTCR contributes to annotation cost reduction for unlabeled TCR-epitope pairs and to data redundancy reduction among labeled pairs. We randomly selected 10% (about 24,006 pairs) of the training TCR-epitope pairs as the initial training data \(D\). The remaining 90% (about 216,054 pairs) of the training set was treated as pool data (\(P_{u}\) or \(P_{l}\)). Our learner model \(M\) predicted the binding affinities \(\hat{y}\) and identified the most informative pairs based on the different query strategies.
At each iteration, 24,006 additional pairs \(L\) were queried from pool data. For the use case of reducing annotation costs of unlabeled data, we removed the binding labels of TCR-epitope pairs in \(P_{u}\) to simulate the presence of future unlabeled pairs. The ground truth labels were provided to the model only after it made queries \(L\), and served as the wet-lab annotation. For the use case of reducing data redundancy, we allowed the query strategies to leverage ground truth label information to select pairs from \(P_{l}\). ActiveTCR continuously expanded the size of dataset \(D\) by adding the queried samples \(L\). The prediction model \(M\) was interactively retrained until all pairs in the pool were exhausted or satisfactory results were achieved. The testing set was held out to assess the performance of the model \(M\) at each iteration. We reported AUCs of prediction models using different query strategies on the testing set. Each query strategy had 10 independent runs with a seed value of 42 unless otherwise specified. ### _Reduce Annotation Cost by Interactively Annotating Unlabeled Samples_ ActiveTCR significantly reduced annotation costs for future unlabeled TCR-epitope pairs compared to the random sampling baseline. We simulated a realistic scenario in which not all labels are available at the beginning. We assumed that only 10% samples are labeled, which is our initial set. We then iteratively added back the "labels" of each additional 10% training samples (to mimic the wet lab procedure of annotating additional TCR-epitope pairs) by ActiveTCR. The key question we addressed in this experiment is whether ActiveTCR, as an active learning framework, can select the most informative TCR-epitope pairs to be annotated, such that it achieves a higher performance gain compared to a random sampling strategy. Among the strategies, global entropy sampling reduced at least 40% annotation cost in both TCR and epitope splits (Fig. 2). Global entropy sampling, while being more computationally intensive than local entropy sampling, was found to be more effective in reducing annotation cost and improving the model performance. This suggested that samples queried from global entropy were more informative and therefore contributed more to the model's learning than those queried using local entropy sampling. Local entropy with dropout committee sampling did not appear to offer additional benefits over local entropy sampling in both TCR and epitope splits, indicating that using a dropout layer to introduce uncertainty does not necessarily improve model learning. A possible explanation is that the weight (\(w\) in Equation 3) we assigned between them is suboptimal. Misclassification sampling was excluded from the comparison as it required ground truth labels for pairs. As the model queried new samples with their binding affinity ground truth hidden at each iteration, we did not have control over the ratio of queried positive and negative pairs at each iteration. To understand the preference of each query strategy over positive and negative pairs, we visualized the amount of queried positive and negative pairs in \(L\) for each iteration in both TCR and epitope splits (Fig. A1). We found that global entropy sampling, the best-performing strategy, had a relatively uneven distribution of positive and negative pairs at each iteration but maintained a relatively balanced positive-negative ratio for cumulative pairs \(D\) (Fig. A2). 
We also showed that ActiveTCR consistently improved prediction performance for individual end epitopes by fine-tuning the initial prediction model \(M_{0}\) on queried pairs. Such investigation is particularly beneficial when the goal is to optimize the prediction performance of novel or rarely observed target epitopes associated with specific diseases. We emulated a situation where a single laboratory, equipped with an initial prediction model trained on existing TCR-epitope pairs, aims to explore a novel target epitope. This setup aligns with the laboratory's research interests as it represents a common real-world scenario where labs are often interested in studying novel or disease-specific epitopes. We assumed that the initial prediction model was trained on an existing database of multiple epitopes and TCRs, and that the target epitope was novel to the model. To ensure the target epitope has never been seen by the model, we selected the epitope from the testing set of epitope split. The initial prediction model \(M_{0}\) was trained on 10% of randomly selected pairs from the epitope split training set. We did not utilize the remaining pairs from the training set as we assumed only a limited number of training pairs are available. We then used 90% of the randomly selected TCR and the target epitope pairs from the epitope split testing set as the unlabeled pool, and the remaining 10% as testing pairs. We fine-tuned the initial model \(M_{0}\) on different sizes of query sets (\(1,\cdots,8K\)). A 10x smaller learning rate of 0.0001 was used to prevent model overfitting. We compared the best-performing global entropy sampling query with a random sampling baseline and quantified the performance improvements. Fig. 3 shows results for the two most abundant epitopes in the testing set, MIELSLIDFYLCFLAFLLFLVLIML and GILGFVFTL. We reported the normalized AUC (\(AUC_{norm}\)) and defined it as follows: \[AUC_{norm}=(AUC-AUC_{none})/(AUC_{all}-AUC_{none}) \tag{5}\] where \(AUC_{none}\) is the AUC of the initial model that fine-tuned on zero queried pairs, and \(AUC_{all}\) is the AUC of the fine-tuned model on the entire unlabeled pool pairs of an epitope. Our results indicated that ActiveTCR with global entropy sampling consistently outperforms the random sampling baseline in terms of prediction performance for individual epitopes, regardless of the number of TCR-epitope pairs queried and annotated. This experimental setup illustrated how ActiveTCR's query aligns with a laboratory's interests and capabilities, assisting in the selection of the most Fig. 3: Epitope-specific performance of ActiveTCR in reducing annotation costs. The performance is measured on epitope split for two individual epitopes **a)** MIELSLIDFYLCFLAFLLFLVLIML and **b)** GILGFVFTL. The average (solid line) and standard error (band) of \(AUC_{norm}\) from 5 independent runs for each query strategy are reported, \(AUC_{none}\) for MIELSLIDFYLCFLAFLLFLVLIML and GILGFVFTL are 81.80% and 92.98% when fine-tuning \(M_{0}\) on zero pairs, respectively. \(AUC_{all}\) for MIELSLIDFYLCFLAFLLVLIML and GILGFVFTL are 89.49% and 95.47% when fine-tuning \(M_{0}\) on the entire new unlabeled pool, respectively. ActiveTCR consistently outperforms the random sampling baseline with however many TCR-epitope pairs queried and annotated. Fig. 2: Performance of ActiveTCR in reducing annotation costs, measured on a) TCR split and b) epitope split. 
Average (solid line) and standard error (band) AUC of 10 independent runs for each query strategy are reported. ActiveTCR using global entropy sampling reduced approximately 40% annotation costs compared to the random sampling baseline. Two of the query strategies required sub-sampling the unlabeled pool, which was earlier stopped at iteration 8 (80% training set) in epitope split due to insufficient TCR-epitope pairs for sub-sampling. "promising" TCR-epitope pairs for testing, thereby reducing the experimental workload while maximizing the potential discovery of useful knowledge. ### _Identify and Reduce Redundancy among Labeled Samples_ ActiveTCR identified and removed at least 40% of training data as redundancy (Fig. 4) while matching the performance of passive learners with random sampling. This significant reduction directly translates into computational savings for future model training tasks. In this experiment, we queried positive and negative pairs separately. This gave us control over the distribution of queried positive and negative pairs. We queried 5% (12,003 pairs) from each at each iteration. ActiveTCR with global entropy sampling with only 50% of the training data achieved similar performance to random sampling with 90% of training data, indicating that more than 40% of the training data were not necessarily contributing to the model. We noticed that, with misclassification sampling, ActiveTCR struggled to achieve high AUC prediction scores with a larger number of queried training pairs at first. This may be because the model was being fed with "difficult" samples, making it challenging to generalize well on the testing set. However, by the third iteration, the performance score of the method started to improve and even surpassed that of global entropy sampling in the final iterations. We also observed that global entropy consistently outperformed local entropy in reducing data redundancy, suggesting that samples queried from global entropy are less redundant and could improve the model's performance more. Despite its relatively lower performance compared to other query strategies, local entropy with the dropout committee sampling was still able to reduce approximately 30% of redundant data in the epitope split and 20% in the TCR split. We speculated that the reason for its suboptimal performance is the challenge of determining the weights of local entropy and the dropout committee sampling method. Nonetheless, this approach can still provide a significant reduction in annotation costs. Overall, we demonstrated that ActiveTCR is effective for reducing redundancy among those TCR-epitope pairs with annotated binding results. It can significantly reduce the amount of training data required to match comparable performance to passive learning. It should be noted that the goal of this section is to 1) demonstrate that there are many redundant TCR-epitope pairs that do not necessarily add to performance gains of the prediction model and to 2) identify a compact, highly informative subset of the data (which we refer to as the "primal dataset"). When new TCR-epitope pairs are added to a dataset, the "primal data" computed prior to the addition of new pairs can be used as a starting point and querying can be done only from the new pairs. This avoids the need to retrain prediction models from scratch using the entire data including the newly added pairs. 
We conducted an additional experiment and observed no significant performance differences between utilizing an identified primal dataset and running from scratch. In this experiment, we used 60% of the training data (referred to as \(A\)) as initially available TCR-epitope pairs with labels. Running ActiveTCR on this dataset (\(A\)), a subset was identified as primal data (\(A_{Primal}\)) after removing \(A_{Redundant}\), where \(A=A_{Primal}\cup A_{Redundant}\), and \(A_{Primal}\cap A_{Redundant}=\emptyset\). Then, we randomly selected a set of additional TCR-epitope pairs with labels (referred to as \(B\)). The total number of pairs in \(B\) is one-third of the number of pairs in \(A\). Note that \(A\) and \(B\) are disjoint. With a larger number of labeled pairs, in order to newly define the primal data (\((A\cup B)_{Primal}\)), one can either rerun ActiveTCR on the entirety of \(A\cup B\), or utilize the previously defined primal data (\(A_{Primal}\)) as the starting point and query only from the newly added dataset (B). Utilizing the previously defined primal data only requires iterative model training sweeping through the additional data \(B\). However, training from scratch requires more iterations of model training. This reduction in training iterations grows with each occasion of obtaining new labeled data. In the TCR split, the former approach yielded an average AUC score of 96.09%, while the latter resulted in a score of 95.35%. Similarly, in the epitope split, the former method achieved an AUC of 95.42%, and the latter, 94.41%. These similar performances indicate ActiveTCR allows users to expand upon pre-existing datasets without the need to rerun ActiveTCR from the beginning. ## VI Discussion We proposed ActiveTCR, an active learning framework for cost-effective TCR-epitope binding affinity prediction. We investigated five query strategies and demonstrated the advantages of ActiveTCR in two practical scenarios, reducing annotation costs of unlabeled pairs and decreasing computational cost by reducing redundancy among annotated TCR-epitope pairs. Despite the significant reduction in annotation cost and data redundancy achieved by ActiveTCR, retraining the Fig. 4: ActiveTCR performance comparison in reducing computational cost by reducing redundancy for TCR-epitope pairs using five different query strategies for **a)** TCR split and **b)** epitope split. Average (solid line) and standard error (band) AUC of 10 independent runs for each query strategy are reported. ActiveTCR using global entropy sampling reduced approximately 40% redundancy among labeled data compared to the random sampling baseline. Two of the query strategies required sub-sampling the unlabeled pool, which was earlier stopped at iteration 8 (80% training set) in both TCR and epitope split due to insufficient TCR-epitope pairs for sub-sampling. prediction model \(M\) at each iteration is relatively slow. To address this, we investigated an alternative training strategy: fine-tuning of the previous prediction model on newly queried samples \(L\), instead of training a new model on updated training sets \(D\). We found that the fine-tuning method converges much faster but severe overfitting problems were also observed for all query strategies. As ActiveTCR randomly selected the initial training set, it may cause the "cold start" problem in active learning. 
This means that different initial training data may lead to different initial TCR-epitope binding affinity prediction models \(M\), and in turn, may affect subsequent models and queried pairs in following iterations. To demonstrate the robustness of our method under different starting points, we conducted additional experiments based on three different random seeds and consistently observed significant amounts of annotation cost reductions (Fig. 5). One limitation of our study is that our experimental setting may not be a perfect reflection of the real-world distribution of TCR-epitope pairs. The dataset we prepared for this study has a 1:1 ratio of positive and negative pairs, while datasets obtained in real-world scenarios may differ. In a real-world scenario, researchers may focus on improving model performance for epitopes rising from a set of target diseases. Annotating a collection of TCR-epitope pairs based on a set of target epitopes may require analysis of TCR repertoire across disease and control subjects to extract TCRs that are likely to bind to the target epitopes. Consequently, there are likely to be more positive pairs and fewer negative pairs. To demonstrate how the original dataset distribution affects the query sample distributions, we ran ActiveTCR with global entropy sampling on a TCR-epitope dataset with different positive and negative ratios. We prepared the dataset with the ratio of positive and negative pairs as 2:1, 4:1, and 9:1, respectively. We kept the number of positive TCR-epitope pairs constant and randomly selected half, one-fourth, and one-eleventh of the negative pairs, then concatenated them together. Fig. 6 shows the queried sample distributions for different positive-negative ratios. While it is difficult to pinpoint the reasons, we found that the global entropy sampling query strategy attempted to query a more balanced number of positive and negative pairs than random sampling from iterations 1 to 4. This finding highlighted the potential advantage of this query strategy and we believe it deserves further investigation in future studies. We also observed that the model preferred to query positive pairs at later iterations, possibly because more positive pairs were available and negative pairs had been used up as it iterates. Additionally, the annotated TCR-epitope pairs were not specifically designed for machine learning purposes but were collected from various biological or clinical studies and may contain biases towards certain diseases.
2304.09285
Pelphix: Surgical Phase Recognition from X-ray Images in Percutaneous Pelvic Fixation
Surgical phase recognition (SPR) is a crucial element in the digital transformation of the modern operating theater. While SPR based on video sources is well-established, incorporation of interventional X-ray sequences has not yet been explored. This paper presents Pelphix, a first approach to SPR for X-ray-guided percutaneous pelvic fracture fixation, which models the procedure at four levels of granularity -- corridor, activity, view, and frame value -- simulating the pelvic fracture fixation workflow as a Markov process to provide fully annotated training data. Using added supervision from detection of bony corridors, tools, and anatomy, we learn image representations that are fed into a transformer model to regress surgical phases at the four granularity levels. Our approach demonstrates the feasibility of X-ray-based SPR, achieving an average accuracy of 93.8% on simulated sequences and 67.57% in cadaver across all granularity levels, with up to 88% accuracy for the target corridor in real data. This work constitutes the first step toward SPR for the X-ray domain, establishing an approach to categorizing phases in X-ray-guided surgery, simulating realistic image sequences to enable machine learning model development, and demonstrating that this approach is feasible for the analysis of real procedures. As X-ray-based SPR continues to mature, it will benefit procedures in orthopedic surgery, angiography, and interventional radiology by equipping intelligent surgical systems with situational awareness in the operating room.
Benjamin D. Killeen, Han Zhang, Jan Mangulabnan, Mehran Armand, Russel H. Taylor, Greg Osgood, Mathias Unberath
2023-04-18T20:48:14Z
http://arxiv.org/abs/2304.09285v1
# Pelphix: Surgical Phase Recognition from X-ray Images in Percutaneous Pelvic Fixation ###### Abstract Surgical phase recognition (SPR) is a crucial element in the digital transformation of the modern operating theater. While SPR based on video sources is well-established, incorporation of interventional X-ray sequences has not yet been explored. This paper presents Pelphix, a first approach to SPR for X-ray-guided percutaneous pelvic fracture fixation, which models the procedure at four levels of granularity - corridor, activity, view, and frame value - simulating the pelvic fracture fixation workflow as a Markov process to provide fully annotated training data. Using added supervision from detection of bony corridors, tools, and anatomy, we learn image representations that are fed into a transformer model to regress surgical phases at the four granularity levels. Our approach demonstrates the feasibility of X-ray-based SPR, achieving an average accuracy of 93.8% on simulated sequences and 67.57% in cadaver across all granularity levels, with up to 88% accuracy for the target corridor in real data. This work constitutes the first step toward SPR for the X-ray domain, establishing an approach to categorizing phases in X-ray-guided surgery, simulating realistic image sequences to enable machine learning model development, and demonstrating that this approach is feasible for the analysis of real procedures. As X-ray-based SPR continues to mature, it will benefit procedures in orthopedic surgery, angiography, and interventional radiology by equipping intelligent surgical systems with situational awareness in the operating room.1 Footnote 1: Code and data available at [https://github.com/benjamindkilleen/pelphix](https://github.com/benjamindkilleen/pelphix). Keywords:Activity recognition fluoroscopy orthopedic surgery surgical data science ## 1 Introduction In some ways, surgical data is like the expanding universe: 95% of it is dark and unobservable [3]. The vast majority of intra-operative X-ray images, for example, are "dark", in that they are not further analyzed to gain quantitative insights into routine practice, simply because the human-hours required would drastically outweigh the benefits. As a consequence, much of this data not only goes un-analyzed but is discarded directly from the imaging modality after inspection. Fortunately, machine learning algorithms for automated intra-operative image analysis are emerging as an opportunity to leverage these data streams. A popular application is surgical phase recognition (SPR), a way to obtain quantitative analysis of surgical workflows and equip automated systems with situational awareness in the operating room (OR). SPR can inform estimates of surgery duration to maximize OR throughput [8] and augment intelligent surgical systems, _e.g._ for suturing [20] or image acquisition [1, 5, 11], enabling smooth transitions from one specialized subsystem to the next. Finally, SPR provides the backbone for automated skill analysis to produce immediate, granular feedback based on a specific surgeon's performance [6, 21]. The possibilities described above have motivated the development of algorithms for surgical phase recognition based on the various video sources in the OR [15, 19, 22, 23]. However, surgical phase recognition based on interventional X-ray sequences remains largely unexplored. 
Although X-ray guidance informs more than 17 million procedures across the United States (as of 2006) [13], the unique challenges of processing X-ray sequences compared to visible or structured light imaging have so far hindered research in this area. Video cameras collect many images per second from relatively stationary viewpoints. By contrast, C-arm X-ray imaging often features consecutive images from vastly different viewpoints, resulting in highly varied object appearance due to the transmissive nature of X-rays. X-ray images are also acquired irregularly, usually amounting to several hundred frames in a procedure of several hours, limiting the availability of training data for machine learning algorithms. Figure 1: Our model architecture incorporates frame-level spatial annotations using a U-Net encoder-decoder variant. Anatomical landmarks and segmentation maps provide added supervision to the image encoder for a transformer, which predicts the surgical phase. The images shown here are the result of Markov-based simulation of percutaneous fixation, used for training. Following recent work that enables sim-to-real transfer in the X-ray domain [7], we now have the capability to train generalizable deep neural networks (DNNs) using simulated images, where rich annotations are freely available. _This paper represents the first step in breaking open SPR for the X-ray domain, establishing an approach to categorizing phases, simulating realistic image sequences, and analyzing real procedures._ We focus our efforts on percutaneous pelvic fracture fixation, which involves the acquisition of standard views and the alignment of Kirschner wires (K-wires) and orthopedic screws with bony corridors [17]. We model the procedure at four levels, the current target corridor, activity (position-wire, insert-wire, and insert-screw), C-arm view (AP, lateral, etc.), and frame-level clinical value. Because of radiation exposure for both patients and clinicians, it is relevant to determine which X-ray images are acquired in the process of "fluoro-hunting" (hunting) versus those used for clinical assessment. Each of these levels is modeled as a Markov process in a stochastic simulation, which provides fully annotated training data for a transformer architecture. ## 2 Related Work SPR from video sources is a popular topic, and has benefited from the advent of transformer architectures for analyzing image sequences. The use of convolutional layers as an image encoder has proven effective for recognizing surgical phases in endoscopic video [22], laparoscopic video [19], and external time-of-flight cameras [4]. These works especially demonstrate the effectiveness of transformers for dealing with long image sequences [4], while added spatial annotations improve both the precision and information provided by phase recognition [19]. Although some work explores activity recognition in orthopedic procedures [9, 10] they rely on head-mounted cameras with no way to assess tool-to-tissue relationships in percutaneous procedures. The inclusion of X-ray image data in this space recenters phase recognition on patient-centric data and makes possible the recognition of surgical phases which are otherwise invisible. ## 3 Method The Pelphix pipeline consists of stochastic simulation of X-ray image sequences, based on a large database of annotated CT images, and a transformer architecture for phase recognition with additional task-aware supervision. 
A statistical shape model is used to propagate landmark and corridor annotations over 337 CTs (see supplement), as shown in Fig. 2a. The simulation proceeds by randomly aligning virtual K-wires and screws with the annotated corridors (Section 3.1). In Section 3.2, we describe a transformer architecture with a U-Net style encoder-decoder structure that enables sim-to-real transfer for SPR in X-ray. ### 3.1 Image Sequence Simulation for Percutaneous Fixation Unlike sequences collected from real surgery [15] or human-driven simulation [14], our workflow simulator must capture the procedural workflow while also maintaining enough variation to allow algorithms to generalize. We accomplish this by modeling the procedural state as a Markov process, in which the transitions depend on evaluations of the projected state, as well as an adjustment factor \(\lambda_{\mathrm{adj}}\in[0,1]\) that affects the number of images required for a given task. A low adjustment factor decreases the probability of excess acquisitions for the simulated procedure. In our experiments, we sample \(\lambda_{\mathrm{adj}}\sim\mathcal{U}(0.6,0.8)\) at the beginning of each sequence. Fig. 3 provides an overview of this process. Given a CT image with annotated corridors, we first sample a target corridor with start and endpoints \(\mathbf{a},\mathbf{b}\in\mathbb{R}^{3}\). For the ramus corridors, we randomly swap the start and endpoints to simulate the retrograde and antegrade approaches. We then uniformly sample the initial wire tip position within \(5\,\mathrm{mm}\) of \(\mathbf{a}\) and the direction within \(15^{\circ}\) of \(\mathbf{b}-\mathbf{a}\). **Sample desired view**. The desired view is sampled from views appropriate for the current target corridor. For example, appropriate views for evaluating wire placement in the superior ramus corridor are typically the inlet and obturator oblique views, and other views are sampled with a smaller probability. We refer to the "oblique left" and "oblique right" views independent of the affected patient side, so that for the right pubic ramus, the obturator oblique is the "oblique left" view, and the iliac oblique is "oblique right." We define the "ideal" principal ray direction \(\hat{\mathbf{r}}^{*}\) for each standard view in the anterior pelvic plane (APP) coordinate system (see supplement), and the ideal viewing point \(\mathbf{p}^{*}\) as the midpoint of the target corridor. Figure 2: (a) The ramus, teardrop and S2 bony corridors, as well as 16 anatomical landmarks with added supervision for phase recognition. (b) The anterior pelvic plane (APP) coordinate system is used to define principal ray directions for standard views of the pelvis, enabling realistic simulation of image sequences for percutaneous fixation. At the beginning of each sequence, we sample the intrinsic camera matrix of the virtual C-arm with sensor width
Given a current view \((\mathbf{p},\hat{\mathbf{r}})\) and desired view \((\mathbf{p}^{*},\hat{\mathbf{r}}^{*})\), we first evaluate whether the current view is acceptable and, if it is not, make a random adjustment. View evaluation considers the principle ray alignment and whether the viewing point is reasonably centered in the image, computing, \[\hat{\mathbf{r}}\cdot\hat{\mathbf{r}}^{*}<\cos(\theta_{t})\quad\text{AND} \quad\Big{|}\Big{|}\mathbf{P}\mathbf{p}^{*}-\left[\tfrac{H}{2}\;\tfrac{W}{2} \;1\right]^{T}\Big{|}\Big{|}<\frac{2}{5}\min(H,W) \tag{1}\] where the angular tolerance \(\theta_{t}\in[3^{\circ},10^{\circ}]\) depends on the desired view, ranging from teardrop views (low) to lateral (high tolerance). **Sample view**. If Eq. 1 is not satisfied, then we sample a new view \((\mathbf{p},\hat{\mathbf{r}})\) uniformly within a uniform window that shrinks every iteration by the adjustment factor, according to \[\mathbf{p} \sim\mathcal{U}_{\circ}\left(\mathbf{p}^{*},\mathrm{clip}(\lambda _{\rm adj}\,||\mathbf{p}^{*}-\mathbf{p}||\,,\,5\,\mathrm{mm},\,100\,\mathrm{ mm})\right) \tag{2}\] \[\hat{\mathbf{r}} \sim\mathcal{U}_{\angle}\left(\hat{\mathbf{r}}^{*},\mathrm{clip} (\lambda_{\rm adj}\arccos(\hat{\mathbf{r}}^{*}\cdot\hat{\mathbf{r}}),\,1^{ \circ},\,45^{\circ})\right), \tag{3}\] where \(\mathcal{U}_{\circ}(\mathbf{c},r)\) is the uniform distribution in the sphere with center \(\mathbf{c}\) and radius \(r\), and \(\mathcal{U}_{\angle}(\hat{\mathbf{r}},\theta)\) is the uniform distribution on the solid angle centered on \(\hat{\mathbf{r}}\) with colatitude angle \(\theta\). This formula emulates observed fluoro-hunting by converging on the desired view until a point, when further adjustments are within the same random window [12]. We proceed by alternating view evaluation and sampling until evaluation is satisfied, at which point the simulation resumes with the current activity: wire positioning, wire insertion, or screw insertion. **Evaluate wire placement**. During wire positioning, we evaluate the current wire position and make adjustments from the current view, iterating until evaluation succeeds. Given the current wire tip \(\mathbf{x}\), direction \(\hat{\mathbf{v}}\), and projection matrix Figure 3: The image sequence simulation pipeline for Pelphix. We model the procedure as a Markov random process, where transition probabilities depend on realistic evaluation of the current frame. \(\mathbf{P}\), the wire placement is considered "aligned" if it _appears_ to be aligned with the projected target corridor in the image, modeled as a cylinder. Algorithm 1 (see supplement) details this process for down-the-barrel views (when the principle ray aligns with the target corridor) and orthogonal views. In addition, we include a small likelihood of a false positive evaluation, which diminishes as the wire is inserted. **Sample wire placement**. If the wire evaluation determines the current placement is unsuitable, then a new wire placement is sampled. For the down-the-barrel views, this is done similarly to Eq. 2, by bringing the wire closer to the corridor in 3D. For orthogonal views, repositioning consists of a small random adjustment to \(\mathbf{x}\), a rotation about the principle ray (the in-plane component), and a minor perturbation orthogonal to the ray (out-of-plane). This strategy emulates real repositioning by only adjusting the degree of freedom visible in the image, i.e. 
Figure 3: The image sequence simulation pipeline for Pelphix. We model the procedure as a Markov random process, where transition probabilities depend on realistic evaluation of the current frame.

**Evaluate wire placement**. During wire positioning, we evaluate the current wire position and make adjustments from the current view, iterating until evaluation succeeds. Given the current wire tip \(\mathbf{x}\), direction \(\hat{\mathbf{v}}\), and projection matrix \(\mathbf{P}\), the wire placement is considered "aligned" if it _appears_ to be aligned with the projected target corridor in the image, modeled as a cylinder. Algorithm 1 (see supplement) details this process for down-the-barrel views (when the principal ray aligns with the target corridor) and orthogonal views. In addition, we include a small likelihood of a false positive evaluation, which diminishes as the wire is inserted.

**Sample wire placement**. If the wire evaluation determines the current placement is unsuitable, then a new wire placement is sampled. For the down-the-barrel views, this is done similarly to Eq. 2, by bringing the wire closer to the corridor in 3D. For orthogonal views, repositioning consists of a small random adjustment to \(\mathbf{x}\), a rotation about the principal ray (the in-plane component), and a minor perturbation orthogonal to the ray (out-of-plane). This strategy emulates real repositioning by only adjusting the degree of freedom visible in the image, i.e. the projection onto the image plane: \[\mathbf{x}\sim\mathcal{U}_{\circ}(\mathbf{x},\text{clip}(\lambda_{\text{adj}}||\mathbf{x}-\mathbf{a}||,\,5\,\text{mm},\,10\,\text{mm})) \tag{4}\] \[\hat{\mathbf{v}}\leftarrow\text{Rot}\left(\hat{\mathbf{v}}\times\hat{\mathbf{r}},\,\theta_{\perp}\right)\text{Rot}\left(\hat{\mathbf{r}},\,\theta^{*}+\theta_{\parallel}\right),\,\text{where}\,\,\theta_{\perp}\sim\mathcal{U}(-0.1\,\theta^{*},0.1\,\theta^{*}), \tag{5}\] \[\theta_{\parallel}\sim\mathcal{U}(-\text{clip}(\lambda_{\text{adj}}\theta^{*},\,3^{\circ},\,10^{\circ})\,,\,\,\text{clip}(\lambda_{\text{adj}}\theta^{*},\,3^{\circ},\,10^{\circ})), \tag{6}\] and \(\theta^{*}\) is the angle between the wire and the target corridor in the image plane. If the algorithm returns "Good," the sequence either selects a new view to acquire (and stays in the position-wire activity) or proceeds to insert-wire or insert-screw, according to random transitions.

In our experiments, we used 337 CT images: 10 for validation, and 327 for generating the training set. A DRR was acquired at every decision point in the simulation, with a maximum of 1000 images per sequence, and stored along with segmentations and anatomical landmarks. We modeled a K-wire with 2 mm diameter and orthopedic screws with lengths from 30 to 130 mm and a 16 mm thread, with up to eight instances of each in a given sequence. Using a customized version of DeepDRR [18], we parallelized image generation across 4 RTX 3090 GPUs with an observed GPU memory footprint of \(\sim 13\) GB per worker, including segmentation projections. Over approximately five days, this resulted in a training set of 726 sequences totaling 229,488 images and 11 validation sequences with 3,916 images.

### Transformer Architecture for X-ray-based SPR

Fig. 1 shows the transformer architecture used to predict surgical phases based on embedding tokens for each frame. To encourage local temporal features in each embedding token, we cross-pollinate adjacent frames in the channel dimension, so that each \((3,H,W)\) encoder input contains the previous, current, and next frame. The image encoder is a U-Net [16] encoder-decoder variant with 5 Down and Up blocks and 33 spatial output channels, consisting of (a) 7 segmentation masks of the left hip, right hip, left femur, right femur, sacrum, L5 vertebra, and pelvis; (b) 8 segmentation masks of bony corridors, including the ramus (2), teardrop (2) and sacrum corridors (4), as in Fig. 2a; (c) 2 segmentation masks for wires and screws; and (d) 16 heatmaps corresponding to the anatomical landmarks in Fig. 2a. These spatial annotations provide additional supervision, trained with DICE loss \(\mathcal{L}_{\text{DICE}}\) for segmentation channels and normalized cross correlation \(\mathcal{L}_{\text{NCC}}\) for heatmap channels as in [2, 7]. To compute tokens for input to the transformer, we apply a \(1\times 1\) Conv + BatchNorm + ReLU block with 512 output channels to the encoder output, followed by global average pooling. The transformer has 6 layers with 8 attention heads and a feedforward dimension of 2048. During training and inference, we apply forward masking so that only previous frames are considered. The outputs of the transformer are vectors in \(\mathbb{R}^{21}\) with phase predictions for each frame, corresponding to (a) the 8 target corridors; (b) 3 activities (position-wire, insert-wire, or insert-screw); (c) 8 standard views (see Section 3.1); and (d) 2 frame values (hunting or assessment).
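The PyTorch sketch below shows the overall shape of this architecture as we read it: per-frame tokens from a spatial encoder via a 1×1 Conv + BatchNorm + ReLU block and global average pooling, a causally masked transformer encoder (6 layers, 8 heads, feedforward width 2048), and a 21-way output split into corridor, activity, view, and frame logits. The spatial backbone is stubbed out here (the actual model is the U-Net encoder-decoder with 33 supervised output channels described above), and all module names are ours, so treat this as an illustration rather than the released implementation.

```python
import torch
import torch.nn as nn

class PhaseTransformer(nn.Module):
    def __init__(self, d_model=512, enc_channels=64):
        super().__init__()
        # Placeholder for the U-Net encoder; only its feature map is needed for tokens.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, enc_channels, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(enc_channels, enc_channels, 3, stride=2, padding=1), nn.ReLU(),
        )
        # 1x1 Conv + BatchNorm + ReLU to d_model channels, then global average pooling.
        self.to_token = nn.Sequential(
            nn.Conv2d(enc_channels, d_model, kernel_size=1),
            nn.BatchNorm2d(d_model), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=8,
                                           dim_feedforward=2048, batch_first=True)
        self.temporal = nn.TransformerEncoder(layer, num_layers=6)
        self.head = nn.Linear(d_model, 8 + 3 + 8 + 2)   # corridor, activity, view, frame

    def forward(self, frames):
        """frames: (B, T, 3, H, W); channels are previous/current/next X-ray."""
        B, T, C, H, W = frames.shape
        feats = self.backbone(frames.reshape(B * T, C, H, W))
        tokens = self.to_token(feats).flatten(1).reshape(B, T, -1)
        # Forward (causal) masking: each frame attends only to itself and earlier frames.
        mask = torch.triu(torch.full((T, T), float("-inf"), device=frames.device), diagonal=1)
        out = self.temporal(tokens, mask=mask)
        logits = self.head(out)
        return torch.split(logits, [8, 3, 8, 2], dim=-1)

model = PhaseTransformer()
corridor, activity, view, frame = model(torch.randn(2, 5, 3, 64, 64))
```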
We compute the cross entropy loss separately for the corridor \(\mathcal{L}_{\text{cor}}\), activity \(\mathcal{L}_{\text{act}}\), view \(\mathcal{L}_{\text{view}}\), and frame \(\mathcal{L}_{\text{fr}}\) phases, and take the mean. See the supplement for more training details.

## 4 Evaluation

Simulation. We report the results of our approach first on simulated image sequences, generated from the withheld set of CT images, which serves as an upper bound on real X-ray performance. In this context our approach achieves an accuracy of 96.9%, 86.3%, 93.9%, and 98.2% with respect to the corridor, activity, view, and frame level, respectively, for an average of 93.8% across all levels. We observe comparatively lower performance for the activity level and speculate that this occurs because the insert-wire activity visually resembles position-wire for low insertions, in our simulation. Moreover, we achieve an average DICE score of 0.73 and landmark detection error of \(1.01\pm 0.153\) pixels in simulation, indicating that these features provide a meaningful signal.

Figure 4: Results of surgical phase recognition for a cadaveric procedure. We observe varying performance based on the target corridor, either because of the associated views or due to the accumulated orthopedic hardware.

Cadaver study. We evaluate our approach on cadaveric image sequences with five wire and screw insertions. An attending orthopedic surgeon performed percutaneous fixation on a lower torso specimen, taking the antegrade approach for the left and right pubic rami corridors, followed by the left and right teardrop and S1 screws. An investigator acted as the radiological technician, manipulating a Siemens CIOS Fusion C-arm according to the surgeon's direction. A total of 257 images were acquired during these fixations. Two investigators recorded phase labels during the procedure based on the surgeon's input. Although segmentation masks and anatomical landmarks are not available for these images, we observe qualitatively satisfactory segmentation masks (see supplement), indicating successful sim-to-real generalization. Our results for phase recognition, shown in Fig. 4, demonstrate the potential for Pelphix as a viable approach to SPR in X-ray. We achieve an overall accuracy of 88%, 61%, 51%, and 70% with respect to the corridor, activity, view, and frame levels, respectively. We find that across all levels, accuracy varied significantly depending on the target corridor, likely because of the associated views. For instance, prediction of the right ramus, left teardrop, and right teardrop corridors was achieved with 100%, 98%, and 100% accuracy, while the left ramus and S1 corridors yielded 57% and 80% accuracy, respectively. Similar variation can be seen in the activity, acquisition, and frame level accuracy: screw insertion was recognized with nearly 100% accuracy, while wire insertion was often confused for screw insertion. This is because our simulation varied the screw insertion depth randomly rather than based on the anatomy. Teardrop and inlet views are recognized with reasonable accuracy (90%, 60%, and 81%), while the network struggles with lateral views. These shortcomings may reflect sampling biases in the stochastic simulation that resulted in certain views being underrepresented, but the fact that the least represented views are the left and right teardrops (3.5% and 2.3% of images) would seem to discount this.
## 5 Discussion and Conclusion

As our results show, Pelphix is a potentially viable approach to robust SPR based on X-ray images. We showed that stochastic simulation of percutaneous fracture fixation, despite having no access to real image sequences, is a sufficiently realistic data source enabling sim-to-real transfer. While we expect adjustments to the simulation approach will close the gap even further, truly performant SPR algorithms for X-ray may rely on Pelphix-style simulation for pretraining, before fine-tuning on real image sequences to account for human-like behavior. Extending this approach to other procedures in orthopedic surgery, angiography, and interventional radiology will require task-specific simulation capable of modeling possibly more complex tool-tissue interactions and human-in-the-loop workflows. Nevertheless, Pelphix provides a first viable route toward X-ray-based surgical phase recognition, which we hope will motivate routine collection and interpretation of these data, in order to enable advances in surgical data science that ultimately improve the standard of care for patients.
2303.02377
Continued functions and critical exponents: Tools for analytical continuation of divergent expressions in phase transition studies
Resummation methods using continued functions are implemented to converge divergent series appearing in perturbation problems related to continuous phase transitions in field theories. In some cases, better convergence properties are obtained using continued functions than diagonal Pade approximants, which are extensively used in literature. We check the reliability of critical exponent estimates derived previously in universality classes of O(n)-symmetric models (classical phase transitions) and Gross-Neveu-Yukawa models (quantum phase transitions) using new methods.
Venkat Abhignan, R. Sankaranarayanan
2023-03-04T10:43:52Z
http://arxiv.org/abs/2303.02377v1
Continued functions and critical exponents: Tools for analytical continuation of divergent expressions in phase transition studies ###### Abstract Resummation methods using continued functions are implemented to converge divergent series appearing in perturbation problems related to continuous phase transitions in field theories. In some cases, better convergence properties are obtained using continued functions than diagonal Pade approximants, which are extensively used in literature. We check the reliability of critical exponent estimates derived previously in universality classes of \(O(n)\)-symmetric models (classical phase transitions) and Gross-Neveu-Yukawa models (quantum phase transitions) using new methods. ## I Introduction Divergent series are inevitable solutions of perturbation approximations used in field theories [1]. Resummation methods are required to extract meaningful values from these perturbative expansions with zero radii of convergence around their singular points [2; 3]. A summation method expands this region of convergence by following a different mapping of variables. The rigorous analysis by Stieltjes on continued fractions has led to the applicability of its analogue Pade sequences on a wide range of problems in perturbation theory [4]. Pade based methods are the most commonly used techniques to achieve these affine transformations of variables by scaling and shifting [5; 6]. Our previous work showed that even other continued functions had remarkably interesting convergence properties by obtaining results related to universal critical parameters [7]. Some important results were discussed, implementing only the lower order information of the renormalization group (RG) perturbative expansions in the \(O(n)\)-symmetric \(\phi^{4}\) scalar field theory. Especially using the continued exponential [8] and such a blended function, continued exponential fraction, we could address the \(\lambda\)-point discrepancy between the theoretical predictions [9; 10; 11; 12], and famous experimental value of specific heat exponent [13] in the \(O(2)\)\(\phi^{4}\) model [14; 15], though the issue remains unresolved. Also, using the continued exponential fraction, a consensus can be seen between different theoretical approaches in the most prominently solved three-dimensional Ising model where correlation length exponent \(\nu_{Ising}\approx 0.630\) matches up to the third decimal place. The different significant approaches in such models are perturbative RG [16], Monte-Carlo simulations (MC) [17], and conformal bootstrap calculations (CB)[18]. Further, using these continued functions and combining them with Borel-Leroy transformation, we could produce precise estimates for critical parameters in universality classes of modified Landau-Wilson Hamiltonian [19]. Perturbative six-loop \(\epsilon\) expansions from \(n\)-vector model with cubic anisotropy [20], \(O(n)\times O(m)\) spin models [21] and the weakly disordered Ising model [22] were handled. The simplest description for a sequence of the continued functions where the convergent behaviour is observed is that \((i+1)\)th term of the sequence has the form of \(i\) iterations of a corresponding function. Using the self-similar continued representation of a function to obtain convergence was the rudimentary idea developed into many forms by Yukalov [23; 24]. 
Even the recently developed resummation methods to achieve analytic continuation are based on orthogonal Gauss hypergeometric functions [25; 26; 15; 27], which can be represented in the form of continued fractions [28]. However, such methods are based on using the large-order behaviour of the perturbative expansions. Since this asymptotic information might not be available in all cases, it is of prime importance to study resummation methods that only implement lower-order information. For instance, with the recent development in computational techniques, such lower-order information for the \(\phi^{4}\) field theory has been solved to calculate the six-loop [29] and seven-loop [16] RG functions. Such calculations involve around 138 Feynman graphs in the fifth order, 687 graphs in the sixth order and 4047 graphs in the seventh order in the perturbative expansions using the minimal subtraction renormalization scheme [29]. With the possibility of solving such complex calculations, it can lead to more orders of information in such field theories, which can better define the behaviour of classical and quantum phase transitions on a wide range of physical systems. Initially in Sec. II.A we briefly introduce the description of resummation methods. We elaborated and implemented the resummation procedures using continued functions to evaluate divergent expressions concerning the correction-to-scaling exponents derived from the \(\phi^{4}\) field theory. Previously we used the continued fraction to solve this series, which can be related to the diagonal Pade approximant in orthogonal polynomials [30; 31; 32]. Further in Sec. II.B, we have explored the role of continued exponential in other aspects of continuous phase transitions related to the lattice Ising model. Studying continuous phase transitions on a one-dimensional lattice model with short-range interaction was first proposed by Ising [33]. We implement the continued exponential into the schemes of widely studied perturbative low-temperature expansion [34] and primitive position-space renormalization [35] of the Ising model. Finally, in Sec.III, we explore the possibility of implementing continued functions in the study of critical exponents related to quantum phase transitions using RG functions of Gross-Neveu-Yukawa models [36]. (\(O(n)\) Universality Class) ### \(O(n)\)-symmetric \(\phi^{4}\) field theory and resummation of critical exponents Studying continuous phase transitions through \(\phi^{4}\) field theory begins from Landau's description [37]. The most interesting numerical results that can be derived from implementing Kadanoff's scaling theory [38], Wilson's perturbative RG, and epsilon expansion [39] to this theoretical description of Landau are the critical exponents. They describe the singular behaviour of the phase transition in a material at the critical point \(T=T_{\rm c}\) (critical temperature) and are considered universal, i.e., they are independent of the nature of the material and the type of continuous phase transition. These universal critical parameters are dependent only on the symmetries of the system and dimensionality \(d\), defined by a universality class. These physically relevant yet numerically divergent critical exponents are solved from field theories in the form of \[Q(\epsilon)\approx\sum_{i=0}^{N}q_{i}\epsilon^{i} \tag{1}\] for \(\epsilon\to 0\) where \(\epsilon\) is the perturbative parameter associated with the physical system. 
Transformation of sequences is a key numerical technique for resolving convergence issues in the divergent series of critical exponents. The idea behind resummation techniques is that one can achieve convergence by combining the infinite divergent series with an appropriate sequence transformation rather than simply adding a particular series term by term, which is meaningless [2]. A slowly converging or diverging sequence \(\{s_{N}\}_{N=0}^{\infty}\), with the partial sums \(s_{N}=\sum_{i=0}^{N}q_{i}\epsilon^{i}\) of an infinite series, is transformed into a new, presumably better numerically behaved sequence, \(\{s^{\prime}_{N}\}_{N=0}^{\infty}\) using these resummation methods. Assume that \(\{s_{N}\}_{N=0}^{\infty}\) either converges to a limit \(s\) or, if it diverges, can be resummed using the right technique to produce \(s\). Resummation methods implement linear sequence transformations according to the general formula \(s^{\prime}_{N}=\sum_{i=0}^{N}\mu_{Ni}\,s_{i}\). These transformations compute the elements of the transformed sequence \(\{s^{\prime}_{N}\}\) as weighted averages of the elements of the input sequence \(\{s_{N}\}\) with weights \(\mu_{Ni}\). The primary argument is that, for the weights \(\mu_{Ni}\), it is possible to establish some necessary and sufficient conditions to ensure that, when applied to a convergent sequence \(\{s_{N}\}_{N=0}^{\infty}\), the converted sequence \(\{s^{\prime}_{N}\}_{N=0}^{\infty}\) may converge to the same limit, \(s=s_{\infty}\). Depending on which resummation method is chosen for a specific problem, some empirical ideas are always required to get the best results. We implement transformation of sequences using the continued exponential fraction [7; 19] \[Q(\epsilon)\sim c_{0}\exp\left(\frac{1}{1+c_{1}\epsilon\exp\left(\frac{1}{1+c_{2}\epsilon\exp\left(\frac{1}{1+c_{3}\epsilon\exp\left(\frac{1}{1+c_{4}\epsilon\exp\left(\frac{1}{1+c_{5}\epsilon\exp\left(\frac{1}{1+c_{6}}\right)}\right)}\right)}\right)}\right)}\right), \tag{2}\] continued exponential \[Q(\epsilon)\sim d_{0}\exp(d_{1}\epsilon\exp(d_{2}\epsilon\exp(d_{3}\epsilon\exp(d_{4}\epsilon\exp(d_{5}\epsilon\exp(d_{6}\epsilon\exp(\cdots))))))), \tag{3}\] continued exponential with Borel-Leroy transformation [19] \[Q(\epsilon)\sim\int_{0}^{\infty}\exp(-t)t^{l}e_{0}\exp(e_{1}\epsilon t\exp(e_{2}\epsilon t\exp(e_{3}\epsilon t\exp(\cdots))))dt \tag{4}\] where \(l\) is the Borel-Leroy parameter, continued natural logarithmic function \[Q(\epsilon)\sim\log\left(g_{1}\epsilon\,\log\left(g_{2}\epsilon\log\left(g_{3}\epsilon\log\left(g_{4}\epsilon\log\left(g_{5}\epsilon\log\left(g_{6}\epsilon\log\left(\cdots\right)+1\right)+1\right)+1\right)+1\right)+1\right)+1\right), \tag{5}\] and continued fraction \[Q(\epsilon)\sim\cfrac{h_{0}}{1+\cfrac{h_{1}\epsilon}{1+\cfrac{h_{2}\epsilon}{1+\cfrac{h_{3}\epsilon}{1+\cfrac{h_{4}\epsilon}{1+\cdots}}}}}. \tag{6}\]
\epsilon}}{\epsilon}{\epsilon}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}\,\,\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\} it was later employed for studying convergence in phase transitions [7; 19; 40]. Combining continued exponential and continued fraction, continued exponential fraction was utilised [7]. Continued exponential with Borel-Leroy transformation was utilised based on the Pade-Borel-Leroy transformation \[Q(\epsilon)=\int_{0}^{\infty}\exp(-t)t^{l}f(\epsilon t)dt,\,\,\,f(y)=\sum_{i=0} ^{\infty}\frac{q_{i}}{\Gamma(i+l+1)}y^{i}, \tag{7}\] replacing Pade with continued exponential [19]. Factorial growth of coefficients \(q_{i}\) can be determined as \(i^{l}i\) similar to \(\Gamma(i+l+1)\) in the above equation using Stirling's approximation for large order behaviour (\(i\rightarrow\infty\)) [3]. The convergence is noted by numerically observing the transformed sequence of calculated quantities \[C_{1}\equiv c_{0}\exp\left(\frac{1}{1+c_{1}\epsilon}\right),\,C_{2}\equiv c_{ 0}\exp\left(\frac{1}{1+c_{1}\epsilon\exp\left(\frac{1}{1+c_{2}\epsilon} \right)}\right),\\ C_{3}\equiv c_{0}\exp\left(\frac{1}{1+c_{1}\epsilon\exp\left( \frac{1}{1+c_{2}\epsilon}\right)}\right),\cdots \tag{8}\] for continued exponential fraction, \[D_{1}\equiv d_{0}\exp(d_{1}\epsilon),\,D_{2}\equiv d_{0}\exp(d_{1}\epsilon \exp(d_{2}\epsilon)),\,D_{3}\equiv d_{0}\exp(d_{1}\epsilon\exp(d_{2}\epsilon \exp(d_{3}\epsilon))),\,\cdots \tag{9}\] for continued exponential, \[E_{1}\equiv\int_{0}^{\infty}\exp(-t)t^{l}e_{0}\exp(e_{1}\epsilon t )dt,\,E_{2}\equiv\int_{0}^{\infty}\exp(-t)t^{l}e_{0}\exp(e_{1}\epsilon t\exp(e _{2}\epsilon t))dt,\\ E_{3}\equiv\int_{0}^{\infty}\exp(-t)t^{l}e_{0}\exp(e_{1} \epsilon t\exp(e_{2}\epsilon t\exp(e_{3}\epsilon t)))dt,\cdots \tag{10}\] for continued exponential with Borel-Leroy transformation, \[G_{1}\equiv\log(g_{1}\epsilon+1),\,G_{2}\equiv\log(g_{1}\epsilon \log(g_{2}\epsilon+1)+1),\,G_{3}\equiv\log(g_{1}\epsilon\log(g_{2}\epsilon \log(g_{3}\epsilon+1)+1)+1),\\ G_{4}\equiv\log(g_{1}\epsilon\log(g_{2}\epsilon\log(g_{3} \epsilon\log(g_{4}\epsilon+1)+1)+1)+1),\cdots \tag{11}\] for continued logarithm and \[H_{1}\equiv\frac{h_{0}}{h_{1}\epsilon+1},\,H_{2}\equiv\frac{h_{0}}{\frac{h_{1 }\epsilon}{h_{2}\epsilon+1}+1},\,H_{3}\equiv\frac{h_{0}}{\frac{h_{1}\epsilon} {h_{2}^{3}\epsilon+1}+1},\,H_{4}\equiv\frac{h_{0}}{\frac{h_{1}\epsilon}{h_{2} \epsilon+1}+1},\cdots \tag{12}\] for continued fraction. 
These transformed sequences are calculated for finding a numerical estimate from transformed variables \(\{c_{i}\}\), \(\{d_{i}\}\), \(\{e_{i}\}\), \(\{g_{i}\}\), \(\{h_{i}\}\). These variables can be obtained as general expressions for any quantity \(Q(\epsilon)\) by Taylor expansion of sequence at arbitrary order and from relations with coefficients \(\{q_{i}\}\) of Eq.(1) such as (weighted averages of \(\{q_{i}\}\)) \[q_{0}=c_{0}\mathrm{e}^{1},\,q_{1}=-c_{0}c_{1}\mathrm{e}^{2},\,q_{2}=c_{0} \mathrm{e}^{3}\Bigg{(}c_{1}c_{2}+\frac{3c_{1}{}^{2}}{2}\Bigg{)},\,q_{3}=-c_{0}c _{1}\mathrm{e}^{4}\Bigg{(}\frac{13c_{1}{}^{2}}{6}+3c_{1}c_{2}+\,c_{2}\,c_{3}+ \frac{3c_{2}{}^{2}}{2}\Bigg{)},\,\cdots, \tag{13}\] for continued exponential fraction \[q_{0}=d_{0},\,q_{1}=d_{0}d_{1},\,q_{2}=d_{0}\Bigg{(}d_{1}d_{2}+\frac{{d_{1}}^{ 2}}{2}\Bigg{)},q_{3}=d_{0}d_{1}\Bigg{(}d_{2}\,d_{3}+\frac{{d_{2}}^{2}}{2}+d_{1 }d_{2}+\frac{{d_{1}}^{2}}{6}\Bigg{)},\,\cdots, \tag{14}\] for continued exponential \[q_{1}=g_{1},\,q_{2}=g_{1}g_{2},\,q_{3}=g_{1}g_{2}g_{3},\,q_{4}=g_{1}g_{2}g_{3}g _{4},\,q_{5}=g_{1}g_{2}g_{3}g_{4}g_{5},\,q_{6}=g_{1}g_{2}g_{3}g_{4}g_{5}g_{6}, \cdots, \tag{15}\] for continued logarithm and \[q_{0}=h_{0},\,q_{1}=-h_{0}h_{1},\,q_{2}=h_{0}h_{1}(h_{2}+h_{1}),\,q_{3}=-h_{0}h _{1}\Big{(}h_{2}h_{3}+{h_{2}}^{2}+2h_{1}h_{2}+{h_{1}}^{2}\Big{)},\cdots \tag{16}\] for continued fraction. Solving relations in Eq.(14) for coefficients of Borel-Leroy transformed series \(f(y)\) in Eq. (7) provides transformed variables \(\{e_{i}\}\) for continued exponential with Borel-Leroy transformation. While using the continued logarithmic function, \(\big{(}g_{\epsilon}\log(\cdots)+1\big{)}>0\) condition must be satisfied in every term of the sequence of Eq. (11), or the estimate becomes undefined. And, it is to be noted that this function is applicable only for quantities \(Q(\epsilon)\) with \(q_{0}=0\). To attain the reliability of the estimates generated by these procedures, error calculation is essential. This is predicted by the principle of fastest apparent convergence, which measures differences of estimates at consecutive orders [29; 41]. The partial sums can be paired with the Shanks transformation for transformed sequences with convergence behaviours to produce accelerated convergence and assess their error [4]. Shanks transformation for a convergent sequence \(\{A_{i}\}\) is defined as \[S(A_{i})=\frac{A_{i+1}A_{i-1}-A_{i}^{2}}{A_{i+1}+A_{i-1}-2A_{i}}, \tag{17}\] and iterated Shanks is \(S^{2}(A_{i})\equiv S(S(A_{i}))\). When \(S^{2}(A_{i})\) is considered as prediction for \(Q(\epsilon)\) the error is estimated from relation [15] \[(|S(A_{i+1})-S(A_{i})|+|S(A_{i+1})-S^{2}(A_{i})|)/2. \tag{18}\] Minimizing this error calculated from successive iterations is also helpful in determining the Borel-Leroy parameter \(l\) or tuning parameter in Eq. (7). #### ii.1.1 Critical exponents \(\nu\) and \(\omega\) of correlation length \(\xi\) Theoretically, close to the critical point, the correlation length \(\xi\) of the fluctuations associated with the field is the most important characteristic length scale. These fluctuations are responsible for the critical behaviour of all the thermodynamic quantities. The divergence of \(\xi\) is controlled by the critical exponents \(\nu\) and \(\omega\) as \[\xi(T)\sim|T-T_{c}|^{-\nu}(1+\text{const.}|T-T_{c}|^{\omega\nu}+\cdots). \tag{19}\] For a \(n\)-component field, these critical exponents are derived as a power series of \(\epsilon=(4-d)\). 
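In practice the transformed variables are found by matching Taylor coefficients order by order, exactly as in the relations (13)-(16). A small SymPy sketch of this sequential matching for the continued exponential is given below; it assumes \(q_{0}\neq 0\) and that no intermediate \(d_{j}\) vanishes (otherwise the next coefficient no longer depends on the new variable), and the analogous loops for the other continued functions follow the same pattern.

```python
import sympy as sp

def continued_exponential_expr(eps, d):
    """Symbolic d0*exp(d1*eps*exp(d2*eps*...)) for a list of parameters d."""
    inner = sp.Integer(1)
    for dk in reversed(d[1:]):
        inner = sp.exp(dk * eps * inner)
    return d[0] * inner

def match_continued_exponential(q):
    """Sequentially determine d0..dN from series coefficients q0..qN (Eq. (14) inverted)."""
    eps = sp.Symbol('epsilon')
    d = [sp.Float(q[0])]                                 # d0 = q0
    for k in range(1, len(q)):
        dk = sp.Symbol('d_%d' % k)
        expr = continued_exponential_expr(eps, d + [dk])
        coeff = sp.series(expr, eps, 0, k + 1).removeO().coeff(eps, k)
        sol = sp.solve(sp.Eq(coeff, q[k]), dk)           # q_k is linear in d_k
        d.append(sp.Float(sol[0]))
    return [float(x) for x in d]

# e.g. match_continued_exponential([q0, q1, q2, ...]) with the coefficients of a given expansion
```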
In our previous work, continued exponential and continued exponential fraction [7] were used to determine the exponent \(\nu\), whereas the recent seven-loop perturbative expansion of \(\omega\) for \(n=0,1,2,3\) have the form of [15] \[\omega =\epsilon-0.65625\epsilon^{2}+1.8236\epsilon^{3}-6.2854\epsilon^{ 4}+26.873\epsilon^{5}-130.01\epsilon^{6}+692.10\epsilon^{7}, \tag{20a}\] \[\omega =\epsilon-0.62963\epsilon^{2}+1.6182\epsilon^{3}-5.2351\epsilon^{ 4}+20.750\epsilon^{5}-93.111\epsilon^{6}+458.74\epsilon^{7},\] (20b) \[\omega =\epsilon-0.60000\epsilon^{2}+1.4372\epsilon^{3}-4.4203\epsilon^{ 4}+16.374\epsilon^{5}-68.777\epsilon^{6}+316.48\epsilon^{7},\] (20c) \[\omega =\epsilon-0.57025\epsilon^{2}+1.2829\epsilon^{3}-3.7811\epsilon^{ 4}+13.182\epsilon^{5}-52.204\epsilon^{6}+226.02\epsilon^{7}, \tag{20d}\] for \(\epsilon\to 0\) respectively. Since this series does not have a zeroth order coefficient (\(q_{0}=0\)), continued exponential and continued exponential fraction could not be directly used to determine a reliable numerical value for exponent \(\omega\), and so continued fraction was implemented [7]. Another way of handling this kind of perturbative expansion is perhaps by realizing the series as \(\omega/\epsilon\) (this may not always work correctly for a divergent series [4]). This is used for transformation through continued exponential fraction, continued exponential, continued exponential with Borel-Leroy transformation and continued fraction as defined in Eq.s (2), (3), (4) and (6), respectively. One can directly implement the transformation of \(\omega\) using continued logarithm as defined in Eq. (5). In this manner, the numerical estimates of \(\omega\) for \(d=3\) (\(\epsilon=1\)), self-avoiding walks model (\(n=0\)) are obtained from sequences \(\{C_{i}\}\), \(\{D_{i}\}\), \(\{E_{i}\}\), \(\{G_{i}\}\), \(\{H_{i}\}\) and their final estimate is interpolated from Shanks in Eq. (17) as \[C_{1} =0.82327,C_{2}=0.73638,C_{3}=0.77176,C_{4}=0.77159,C_{5}=0.77158,C_ {6}=0.77158,S^{2}(C_{4})=0.77158, \tag{21a}\] \[D_{1} =0.51879,D_{2}=0.94498,D_{3}=0.62464,D_{4}=0.92862,D_{5}=0.67078,D_ {6}=0.92284,S^{2}(D_{4})=0.665(71),\] (21b) \[E_{1} =0.55938,E_{2}=0.89477,E_{3}=0.73637,E_{4}=0.84816,E_{5}=0.84818, E_{6}=0.84816,S^{2}(E_{4})=0.84817,\] (21c) \[G_{2} =-2.6906,G_{3}=0.77963,G_{4}=0.82150,G_{5}=0.84032,G_{6}=0.85018,G _{7}=0.85237,S^{2}(G_{5})=0.8578(64),\] (21d) \[H_{1} =0.60377,H_{2}=0.82633,H_{3}=0.76467,H_{4}=0.81070,H_{5}=0.81854,H _{6}=0.80994,S^{2}(H_{4})=0.8153(33). \tag{21e}\] These estimates are illustrated in Fig. 1 and compared with the most reliable MC result \(\omega=0.899(12)\)[15; 42]. The error for final estimates is evaluated from Eq. (18). For continued exponential with Borel-Leroy transform, by tuning the parameter \(l\), the plot for a final estimate \(S^{2}(E_{4})\) for \(l\in[0.5,2]\) with error bars showing \((|S(E_{5})-S(E_{4})|+|S(E_{5})-S^{2}(E_{4})|)/2\) is illustrated in Fig. 2(a). As it is observed, the final estimate is sensitive to the tuning parameter and taken at \(l=1.43\), where the prediction is most precise with \(\omega=0.84817\). All estimates undershoot, while the continued logarithm estimate \(\omega=0.8578(64)\) is most comparable to the MC value. 
However, when compared with recent resummation studies, the continued function with Borel-Leroy transform estimate, and continued logarithm estimate are most compatible with previous predictions from hypergeometric-Meijer resummation (HM) [15] (seven-loop) where \(\omega=0.8484(17)\), Borel with conformal mapping calculations (BCM) [29] (six-loop) where \(\omega=0.841(13)\) and self-consistent resummation algorithm (SC) [43] where \(\omega=0.846(15)\). It is also interesting to observe that oscillating sequence from continued exponential envelops the region of convergence from different approaches. The numerical estimates of \(\omega\) for \(d=3\), Ising-like model (\(n=1\)) are obtained from sequences \[C_{1}=0.82856,C_{2}=0.73919,C_{3}=0.77329,C_{4}=0.77278,C_{5}=0.772 69,C_{6}=0.77272,S^{2}(C_{4})=0.77270(2), \tag{22a}\] \[D_{1}=0.53279,D_{2}=0.93612,D_{3}=0.63803,D_{4}=0.91397,D_{5}=0.6 8127,D_{6}=0.90431,S^{2}(D_{4})=0.741(30),\] (22b) \[E_{1}=0.55458,E_{2}=0.90540,E_{3}=0.70533,E_{4}=0.85741,E_{5}=0.79 393,E_{6}=0.82036,S^{2}(E_{4})=0.81259(2),\] (22c) \[G_{2}=-4.9986,G_{3}=0.75617,G_{4}=0.77396,G_{5}=0.79920,G_{6}=0.80 725,G_{7}=0.81013,S^{2}(G_{5})=0.81174(36),\] (22d) \[H_{1}=0.61364,H_{2}=0.82364,H_{3}=0.76342,H_{4}=0.80579,H_{5}=0.8 0533,H_{6}=0.80578,S^{2}(H_{4})=0.80556(11). \tag{22e}\] These estimates are illustrated in Fig. 3 and compared with MC result \(\omega=0.832(6)\)[17]. To illustrate the repeatability of behaviour in continued exponential with Borel-Leroy transform, we calculate \(S^{2}(E_{4})\) here for varying Borel-Leroy parameter \(l\) and plot it in Fig. 2(b). The obtained precise prediction is \(\omega=0.81259(2)\) at \(l=3.53\). This value and continued logarithm estimate \(\omega=0.81174(36)\) are most comparable with the MC result, while other estimates undershoot. The continued logarithm estimate and continued exponential with Borel-Leroy transform estimate are also compatible with recent HM prediction \(\omega=0.8231(50)\), DCM calculation \(\omega=0.820(7)\) and SC algorithm \(\omega=0.827(13)\). The numerical estimates of \(\omega\) for \(d=3\), \(XY\) universality class (\(n=2\)) are obtained from sequences \[C_{1}=0.83459,C_{2}=0.74368,C_{3}=0.77630,C_{4}=0.77522,C_{5}=0.77 475,C_{6}=0.77498,S^{2}(C_{4})=0.77472(34), \tag{23a}\] \[D_{1}=0.54881,D_{2}=0.92884,D_{3}=0.65044,D_{4}=0.89975,D_{5}=0.6 8808,D_{6}=0.88433,S^{2}(D_{4})=0.774(10),\] (23b) \[E_{1}=0.58735,E_{2}=0.87944,E_{3}=0.74371,E_{4}=0.81789,E_{5}=0.8 1785,E_{6}=0.74371,S^{2}(E_{4})=0.81789(2),\] (23c) \[G_{2}=-2.4804,G_{3}=0.73100,G_{4}=0.72010,G_{5}=0.75477,G_{6}=0.7 5999,G_{7}=0.76383,S^{2}(G_{5})=0.81174(36),\] (23d) \[H_{1}=0.62500,H_{2}=0.82329,H_{3}=0.76388,H_{4}=0.80231,H_{5}=0.7 9408,H_{6}=0.80029,S^{2}(H_{4})=0.7983(14). \tag{23e}\] These estimates are illustrated in Fig. 4 and compared with MC result \(\omega=0.789(4)\)[12]. The continued fraction estimate \(\omega=0.7983(14)\) is most comparable with the MC value, while other estimates undershoot or overshoot. The continued function with Borel-Leroy transform estimate \(\omega=0.81789\) (\(l=1.26\)) and continued logarithm estimate \(\omega=0.81174(36)\) is most compatible with CB result \(\omega=0.811(10)\)[44], HM prediction \(\omega=0.789(13)\) and DCM calculation \(\omega=0.804(3)\). 
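The Shanks steps of Eqs. (17)-(18) are equally easy to script; the sketch below applies them to the printed values of Eq. (22a). Because those values are rounded to five digits and the Shanks denominators involve near-cancellations, the output may differ in the last digits from the quoted \(S^{2}(C_{4})\), which was computed from unrounded sequences.

```python
def shanks(seq):
    """One Shanks transformation, Eq. (17); entry j corresponds to S(A_{j+2}) in 1-based labels."""
    return [(seq[i + 1] * seq[i - 1] - seq[i] ** 2) /
            (seq[i + 1] + seq[i - 1] - 2.0 * seq[i]) for i in range(1, len(seq) - 1)]

C = [0.82856, 0.73919, 0.77329, 0.77278, 0.77269, 0.77272]   # C_1..C_6 of Eq. (22a)
S1 = shanks(C)        # S(C_2) .. S(C_5)
S2 = shanks(S1)       # S^2(C_3), S^2(C_4)
estimate = S2[-1]     # iterated-Shanks estimate S^2(C_4)

# Error estimate of Eq. (18) with i = 4:
error = 0.5 * (abs(S1[-1] - S1[-2]) + abs(S1[-1] - S2[-1]))
print(estimate, error)
```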
The numerical estimate of \(\omega\) for \(d=3\), Heisenberg universality class (\(n=3\)) are obtained from sequences \[C_{1}=0.84080,C_{2}=0.74909,C_{3}=0.78036,C_{4}=0.77855,C_{5}=0.77 572,C_{6}=0.77784,S^{2}(C_{4})=0.7807(52), \tag{24a}\] \[D_{1}=0.56538,D_{2}=0.92316,D_{3}=0.66270,D_{4}=0.88691,D_{5}=0.6 9486,D_{6}=0.86466,S^{2}(D_{4})=0.7831(17),\] (24b) \[E_{1}=0.60287,E_{2}=0.87481,E_{3}=0.74859,E_{4}=0.80427,E_{5}=0.8 04425,E_{6}=0.80427,S^{2}(E_{4})=0.80435(4),\] (24c) \[G_{2}=-1.8614,G_{3}=0.70588,G_{4}=0.66200,G_{5}=0.70910,G_{6}=0.7 1041,G_{7}=0.71550,S^{2}(G_{5})=0.70877(96),\] (24d) \[H_{1}=0.63684,H_{2}=0.82452,H_{3}=0.76614,H_{4}=0.80025,H_{5}=0.7 8725,H_{6}=0.79218,S^{2}(H_{4})=0.79083. \tag{24e}\] These estimates are illustrated in Fig. 5 and compared with MC result \(\omega=0.773\)[45]. The continued exponential estimate \(\omega=0.7831(17)\), continued exponential fraction estimate \(\omega=0.7807(52)\) are comparable with the MC value and similarly continued fraction estimate \(\omega=0.79083\), continued exponential with Borel-Leroy transform estimate \(\omega=0.80435(4)\) (\(l=1.16\)) are most compatible with predictions from BCM calculation \(\omega=0.795(7)\) and SC resummation algorithm \(\omega=0.794(4)\). Similarly, we study \(\omega\) for \(d=2\) (\(\epsilon=2\)) systems. These results are interesting since previous resummation studies of RG functions could not predict reliable estimates for two-dimensional systems [46; 47; 48] due to non-analyticity of \(\beta\)-functions around the fixed point. We obtain numerical estimates of \(\omega\) for \(d=2\) self-avoiding walks model (Eq. (20a)) from the Figure 1: Estimates of \(\omega\) at successive orders compared with MC result [15; 42] for self-avoiding walks model. sequences at consecutive orders as \[C_{1} =1.4442,C_{2}=1.3059,C_{3}=1.3685,C_{4}=1.3682,C_{5}=1.3682,C_{6}=1.3 682,S^{2}(C_{4})=1.3682, \tag{25a}\] \[D_{1} =0.53829,D_{2}=1.9806,D_{3}=0.60286,D_{4}=1.9793,D_{5}=0.61259,D_{6 }=1.9792,S^{2}(D_{4})=1.276(11),\] (25b) \[E_{1} =0.68393,E_{2}=1.8792,E_{3}=1.0471,E_{4}=1.7958,E_{5}=1.7542,E_{6}= 1.0583,S^{2}(E_{4})=1.805(24),\] (25c) \[G_{2} =1.8597,G_{3}=1.6694,G_{4}=1.8057,G_{5}=1.8065,G_{6}=1.8152,G_{7}= 1.8154,S^{2}(G_{5})=1.8064(93),\] (25d) \[H_{1} =0.86486,H_{2}=1.5997,H_{3}=1.3194,H_{4}=1.5533,H_{5}=1.6082,H_{6 }=1.5509,S^{2}(H_{4})=1.589(27). \tag{25e}\] These estimates are illustrated in Fig. 6 and compared with exact result from lattice models, \(\omega=2\)[49; 50]. Continued with Borel-Leroy transform estimate \(\omega=1.805(24)\) (\(l=1.75\)), continued logarithm estimate \(\omega=1.8064(93)\) are comparable with the exact result, HM resummation \(\omega=1.96(46)\) and BCM prediction \(\omega=1.90(25)\). We obtain numerical estimates of \(\omega\) for \(d=2\) Ising-like model from the sequences as \[C_{1} =1.4573,C_{2}=1.3113,C_{3}=1.3725,C_{4}=1.3717,C_{5}=1.3716,C_{6}= 1.3716,S^{2}(C_{4})=1.3716(1), \tag{26a}\] \[D_{1} =0.56773,D_{2}=1.9725,D_{3}=0.64065,D_{4}=1.9696,D_{5}=0.65243,D_{6 }=1.9693,S^{2}(D_{4})=1.2981(78),\] (26b) \[E_{1} =0.72905,E_{2}=1.8469,E_{3}=1.1097,E_{4}=1.7273,E_{5}=1.7283,E_{6 }=1.7273,S^{2}(E_{4})=1.7278(3),\] (26c) \[G_{2} =1.8732,G_{3}=1.6453,G_{4}=1.7843,G_{5}=1.7842,G_{6}=1.7936,G_{7}= 1.7938,S^{2}(G_{5})=1.7842(95),\] (26d) \[H_{1} =0.88525,H_{2}=1.5898,H_{3}=1.3127,H_{4}=1.5348,H_{5}=1.5316,H_{6 }=1.5348,S^{2}(H_{4})=1.5333(8). \tag{26e}\] These estimates are illustrated in Fig. 7 and compared with the exact result from the lattice model, \(\omega=1.75\)[48]. 
For this model, the estimate from continued exponential with Borel-Leroy transform \(\omega=1.7278(3)\) (\(l=1.36\)) and continued logarithm estimate \(\omega=1.7842(95)\) are compatible with the exact value. Similarly, the HM prediction \(\omega=1.71(10)\) and BCM calculation \(\omega=1.71(9)\) for the Ising-like model illustrate that resummation of \(\epsilon\)-expansions gives better estimates than the resummation of the coupling-series [3]. It is noted here that while procedures such as BCM and HM implement, large-order asymptotic information of RG functions, our methods implement only the lower-order information as described. Other exponents related to scaling exponent \(\nu\) can be approximately estimated from the Gaussian model (mean-field theory). In contrast, the measurement of subleading exponent \(\omega\) completely requires the corrections from perturbative RG and is important to understand the relevant directions in RG flows. Hence the procedures where only the corrections are used without external parameters to measure the exponent \(\omega\) are more reliable. A more stringent check for these procedures would be to obtain the most accurately measured result from the microgravity experiment for superfluid helium where specific heat exponent \(\alpha_{XY}=-0.0127(3)\)[13]. Using the slowly converging seven-loop \(\epsilon\) expansion [15] of \[\nu_{XY}=2.0000-0.40000\epsilon-0.14000\epsilon^{2}+0.12244\epsilon^{3}-0.30 473\epsilon^{4}+0.87924\epsilon^{5}-3.1030\epsilon^{6}+12.419\epsilon^{7}, \tag{27}\] we obtain the sequences directly for continued exponential fraction, continued exponential, continued exponential with Figure 4: Estimates of \(\omega\) at successive orders compared with MC result [12] for \(XY\) universality class. Figure 3: Estimates of \(\omega\) at successive orders compared with MC result [17] for Ising-like model. Figure 5: Estimates of \(\omega\) at successive orders compared with MC result [45] for Heisenberg universality class. Figure 6: Estimates of \(\omega\) at successive orders for self-avoiding walks model compared with exact result [49; 50] in two-dimensional system. Figure 7: Estimates of \(\omega\) at successive orders for Ising-like model compared with exact result [48] in two dimensional system. Borel-Leroy transform, continued fraction and estimates for \(\nu_{XY}\) in \(d=3\) at consecutive orders as \[C_{1}=0.53547,C_{2}=0.61992,C_{3}=0.71029,C_{4}=0.66851,C_{5}=0.67366,C_{6}=0.67157,C_{7}=0.67161,\] \[S^{2}(C_{5})=0.67070(74), \tag{28a}\] \[D_{1}=0.61070,D_{2}=0.68421,D_{3}=0.64135,D_{4}=0.67758,D_{5}=0.65 436,D_{6}=0.67576,D_{7}=0.65963,\] \[S^{2}(D_{5})=0.6677(11),\] (28b) \[E_{1}=0.61018,E_{2}=0.68298,E_{3}=0.64144,E_{4}=0.67610,E_{5}=0.65 569,E_{6}=0.67395,E_{7}=0.66219,\] \[S^{2}(D_{5})=0.6704(25),\] (28c) \[H_{1}=0.60000,H_{2}=0.72222,H_{3}=0.64474,H_{4}=0.67334,H_{5}=0.6 6396,H_{6}=0.67012,H_{7}=66944,\] \[S^{2}(H_{5})=0.6617(48). \tag{28d}\] We compare these estimates with microgravity experimental value (exp) \(\nu_{XY}=0.6709(1)\)[13] and most reliable MC result \(\nu_{XY}=0.67169(7)\)[12] in Fig. 8. The oscillating sequences converging towards these precise values can be visualized in Fig. 8(a). Using the continued exponential with Borel-Leroy transform, we obtain the estimate \(\nu_{XY}=0.6704(25)\) at \(l=22.51\) (Fig. 9), where it is observed that \(\nu_{XY}\in[0.669,0.677]\) for \(l\in[10,25]\). 
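The Borel-Leroy variant of Eqs. (4) and (10) only adds a weighted integral on top of the same nested exponential, so a direct numerical evaluation is straightforward. The quadrature sketch below is one way to scan the tuning parameter \(l\) as in Fig. 9; for parameter sets where the nested exponential grows too quickly the infinite upper limit would need to be truncated, and the function names here are our own.

```python
import math
from scipy.integrate import quad

def nested_exp(x, e):
    """exp(e1*x*exp(e2*x*exp(...))) for e = [e1, e2, ...]."""
    inner = 1.0
    for ek in reversed(e):
        inner = math.exp(ek * x * inner)
    return inner

def borel_leroy_cont_exp(eps, e0, e, l):
    """E_k of Eq. (10): integral of exp(-t) * t^l * e0 * exp(e1*eps*t*exp(...)) over (0, inf)."""
    integrand = lambda t: math.exp(-t) * t ** l * e0 * nested_exp(eps * t, e)
    value, abserr = quad(integrand, 0.0, math.inf, limit=200)
    return value

# Shape of the l-scan used to locate the most stable estimate (e0, e1, ... from coefficient matching):
# for l in [0.5 + 0.1 * j for j in range(16)]:
#     print(l, borel_leroy_cont_exp(1.0, e0, [e1, e2, e3], l))
```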
Further using Josephson's identity, \(\alpha=2-d\nu\), we obtain estimates for continued exponential fraction, continued exponential with Borel-Leroy transform and continued fraction as \(\alpha_{XY}=-0.0121(22)\), \(\alpha_{XY}=-0.017(20)\), \(\alpha_{XY}=-0.0112(76)\) and \(\alpha_{XY}=0.015(14)\), respectively. The initial three estimates and recent resummation of seven-loop RG \(\alpha_{XY}=-0.0123(11)\) (HM) seem more compatible with the precise experimental value. However, the Figure 8: The estimates of \(\nu_{XY}\) compared with precise experimental value [13] and MC result [12]. significant errors in these predictions from RG are concerning since they cannot completely address the mismatch of predictions from MC and experimental value [14] which can be distinctly seen in Fig. 8(b). Eight-loop RG functions may help in resolving these issues. ### Lattice Ising model (\(n=1\)) Considering the microscopic degrees of freedom, the discrete lattice Ising model provides the same statistical description to describe the nature of continuous phase transitions in \(O(1)\)\(\phi^{4}\) field models. The partition function of the simplest one-dimensional Ising model [51] is, \[Z=\sum_{\{\sigma_{i}\}}\exp{\left[\sum_{<i,j>}B(\sigma_{i},\sigma_{j})\right]} =\sum_{\{\sigma_{i}\}}\exp{\left[K\sum_{<i,j>}\sigma_{i}\sigma_{j}+\frac{h}{2 }\sum_{<i,j>}(\sigma_{i}+\sigma_{j})\right]}. \tag{29}\] where \(\sigma_{i}=\pm 1\) is the spin at each lattice site \(i\). \(B(\sigma_{i},\sigma_{j})\) is the energy per bond between two nearest neighbour lattice sites \(i\) and \(j\). Here \(K=J/k_{B}T\), \(J\) is the nearest neighbour coupling constant, and \(h\) is the external magnetic field. The partition function just over the nearest neighbours is taken as \[Z=\sum_{\{\sigma_{i}\}}^{N^{\prime}}\exp{\left[\sum_{<i,i+1>}^{N^{\prime}}B( \sigma_{i},\sigma_{i+1})\right]}. \tag{30}\] \(\sum_{\{\sigma_{i}\}}^{N^{\prime}}\) indicates summing over all possible \(2^{N^{\prime}}\) configurations of \(N^{\prime}\) spins and \(\sum_{<i,i+1>}\) indicates summation over all nearest neighbour pairs. Though no phase transition exists on this one-dimensional model, extension to two dimensions led to interesting analytical conclusions based on Kramers-Wannier duality [52] in the seminal work by Onsager [53]. This is used to study ordering in paramagnetic-ferromagnetic transitions. #### iv.2.1 Low temperature expansions Different perspectives are used for solving this partition function \(Z\) of two and three-dimensional Ising models [34]. A diagrammatic approach was used to capture the co-existence of different phases in the vicinity of critical points using the method of low-temperature expansions. The partition function \(Z\) was derived by studying excitation and interactions among the excitation around the most stable configuration at \(T\to 0\). These series expansions are divergent, and so initially, Pade approximants were applied by Baker to obtain an analytic continuation [54; 55]. Similarly, we use continued exponential to study the extensive quantity, specific heat \(C_{v}\) derived from such low-temperature expansions in the factors of \(u=\exp{(-4K)}\). The critical exponent \(\alpha\) can be derived by studying the behaviour of \(C_{v}\) at constant volume near the critical temperature as \(C_{v}\sim|T-T_{c}|^{-\alpha}\). 
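As a sanity check of the notation in Eqs. (29)-(30), the partition function of a short chain can be summed over all \(2^{N^{\prime}}\) configurations directly; the brute-force sketch below does exactly that (we assume periodic boundary conditions, which Eq. (30) does not specify).

```python
import itertools
import math

def bond_energy(s_i, s_j, K, h):
    """B(s_i, s_j) = K s_i s_j + (h/2)(s_i + s_j), as in Eq. (29)."""
    return K * s_i * s_j + 0.5 * h * (s_i + s_j)

def partition_function(n_spins, K, h, periodic=True):
    """Z of Eq. (30) by explicit summation over all 2^N' spin configurations."""
    Z = 0.0
    bonds = range(n_spins) if periodic else range(n_spins - 1)
    for spins in itertools.product((-1, 1), repeat=n_spins):
        energy = sum(bond_energy(spins[i], spins[(i + 1) % n_spins], K, h) for i in bonds)
        Z += math.exp(energy)
    return Z

# e.g. partition_function(10, K=0.5, h=0.0) for a 10-spin ring
```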
Here we study the behaviour of \(C_{v}(K)\)[54] close to the critical point \(K_{c}\) (\(\sim 1/T_{c}\)) for the \(d=2\) simple quadratic lattice (sq) where \[C_{v}(K)/K^{2}=64u^{2}+288u^{3}+1152u^{4}+4800u^{5}+21504u^{6}+101920u^{7}+50 2016u^{8}+2538432u^{9}+13078720u^{10}+68344496u^{11}, \tag{31}\] \(d=3\) simple cubic lattice (sc) where \[C_{v}(K)/K^{2}=144u^{3}+1200u^{5}-2016u^{6}+11760u^{7}-33792u^{8}+135216u^{9}-4 48800u^{10}+1643664u^{11}-5671872^{12}, \tag{32}\] Figure 9: The estimate of \(\nu_{XY}\) vs shift parameter \(l\) is plotted, with the error bars showing the relation for error. \(d=3\) body-centred cubic lattice (bcc) where \[C_{v}(K)/K^{2}=256u^{4}+3136u^{7}-4608u^{8}+4480u^{10}-123904u^{11}+111360u^{12}+5 51616u^{13}-2464896u^{14}+4190400u^{15}, \tag{33}\] and \(d=3\) face-centered cubic lattice (fcc) where \[C_{v}(K)/K^{2}=576u^{6}+11616u^{11}-14976u^{12}+28800u^{15}+172032u^{16}-554880u ^{17}+374976u^{18}+138624^{19}+787200u^{20}. \tag{34}\] The Taylor expressions around \(K=0\) for these expressions of \(C_{v}(K)/K^{2}\) are recast into continued exponential (Eq. (3)) up to the ninth order such as sq: \[84593392\exp(-43.044K\exp(-0.0557K\exp(2.0512K\exp(0.9789K\exp(1.0873K \exp(1.1605K\] \[\exp(0.8735K\exp(2.02115K\exp(-9.9314K))))))))\] (35) sc: \[-54793248\exp(-56.9186K\exp(0.0277K\exp(1.3411K\exp(2.2139K\exp(4.3708K \exp(1.5199K\] \[\exp(0.069K\exp(-57.104K\exp(34.032K))))))))\] (36) bcc: \[-34418496\exp(-68.824K\exp(0.03089K\exp(2.0647K\exp(2.7629K\exp(5.6212K \exp(1.9136K\] \[\exp(-0.6834K\exp(14.292K\exp(-1.2633K))))))))\] (37) Figure 10: Illustrating the behaviour of \(C_{v}\) vs \(1/K\) for \(d=3\) and \(d=2\) lattice around \(1/K_{c}\). fcc: \(4366089\exp(-96.531K\exp(0.2539K\exp(1.1231K\exp(-11.0745K\exp(25.0895K\exp(6.7036K\) \[\exp(13.5554K\exp(6.4608K\exp(15.9134K))))))))\right) \tag{38}\] assuming that low-temperature expansions around \(K=0\) are sufficient to capture the nature of singular behaviour. To illustrate the behaviour of expressions of the continued exponential, we plot \(C_{v}(K)/C_{v}(0.06)\) around the critical points \(1/K_{c}\) for \(d=3\) sc, bcc and fcc in Fig. 10(a). The \(C_{v}(K)\) is normalised with an arbitrarily high value \(C_{v}(0.06)\), and it can be observed that this captures a similar singular nature for sc, bcc and fcc in the vicinity of \(1/K_{c}\) from the low-temperature side. The critical values for \(1/K_{c}\) in the literature [54] are given by \(1/0.4407=2.2692\), \(1/0.2217=4.5102\), \(1/0.1575=6.3505\), \(1/0.1021=9.7923\) for sq, sc, bcc and fcc correspondingly. Similarly, \(C_{v}(K)\) for \(d=2\) sq seems to possess singular nature from the high-temperature side in Fig. 10(b). This unique behaviour may be related to the Kramers-Wannier duality on square lattice [52] where the strong coupling at low temperature gets mapped to the weak coupling at high temperature and vice-versa. From these curves, it is deduced that the value for exponent \(\alpha\) at their corresponding \(K_{c}\) is \(\alpha=0.1026,0.1193\) for three-dimensional bcc, fcc and \(\alpha=-0.0138\) for two-dimensional sq respectively. These seem to be comparable with values \(\alpha=0.11\) for \(d=3\) and \(\alpha=0\) for \(d=2\) Ising models [51]. 
#### iv.2.2 Migdall-Kadanoff position space renormalization Kadanoff's renormalization scheme was used on two-dimensional Ising models using successive approximations to control the divergent long-range interactions, and the correlation length critical exponent \(\nu_{Ising}\approx 1\) was extracted [56; 57; 35]. However, the primitive renormalization approach [35] does not produce a reliable estimate of \(\nu_{Ising}\), which was systematically improved later [56; 57]. Similarly, we take the most straightforward position space renormalization scheme of the one-dimensional Ising model, where the decimation of every alternate spin on the lattice essentially reduces the \(N^{\prime}\) degrees of freedom by a rescaling factor of \(b=2\) in Eq.(30) [51]. Then we introduce new interactions in the renormalization scheme to account for long-range behaviour and implement continued exponential to approximate the divergent interactions controlled by a free parameter \(a\). Further, Migdal-Kadanoff bond moving approximation is used to obtain exponent \(\nu_{Ising}\) for phase transitions on fractal systems with non-integer dimensions \(1<d<2\). There is a simple mapping between the original spins to the renormalized spins (\(\{\sigma_{i}\}\mapsto\{\sigma_{i}^{\prime}\}\)) in the partition function \(Z\) (Eq. 30) after summing over the decimated spins \(s_{i}\) as [51] \[\sum_{\{\sigma_{i}^{\prime}\}}^{N^{\prime}/2}\sum_{\{s_{i}\}}^{N^ {\prime}/2}\exp\left[\sum_{i=1}^{N^{\prime}/2}B(\sigma_{i}^{\prime},s_{i})+B(s _{i},\sigma_{i+1}^{\prime})\right]=\\ \sum_{\{\sigma_{i}^{\prime}\}}^{N^{\prime}/2}\prod_{i=1}^{N^{ \prime}/2}\Bigg{[}\sum_{s_{i}=\pm 1}\mathrm{e}^{B(\sigma_{i}^{\prime},s_{i})+B(s _{i},\sigma_{i+1}^{\prime})}\Bigg{]}\equiv\sum_{\{\sigma_{i}^{\prime}\}}^{N^{ \prime}/2}\mathrm{e}^{\left[\sum_{i<i,i+1>}^{N^{\prime}/2}B^{\prime}(\sigma_{i} ^{\prime},\sigma_{i+1}^{\prime})\right]}. \tag{39}\] Where the bond energy of the renormalized spins is \[B^{\prime}(\sigma_{1}^{\prime},\sigma_{2}^{\prime})=\frac{h^{\prime}}{2}( \sigma_{1}^{\prime}+\sigma_{2}^{\prime})+K^{\prime}\sigma_{1}^{\prime}\sigma_{ 2}^{\prime}. \tag{40}\] The renormalized interactions are obtained from the assumption that the renormalized parameters \((h^{\prime},K^{\prime})\) are functions of \((h,K)\) having similar formalism. We assume here that the renormalized parameters \((h^{\prime},K^{\prime})\) are analytical functions of \((h,K)\) since it reflects Kadanoff's scaling idea [38]. \(h\) and \(K\) are independent parameters that govern the continuous phase transition in a ferromagnetic-paramagnetic system around the point of criticality. 
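For orientation, the elementary \(b=2\) decimation implied by Eqs. (39)-(40), before the long-range weight \(K_{2}=K^{a}\) and the continued-exponential modification are introduced, can be written out explicitly. The closed-form extraction of \((K^{\prime},h^{\prime})\) below (dropping the spin-independent constant generated by the step) is the standard textbook recursion, included only as a reference point for the modified scheme that follows.

```python
import math

def bond_energy(s1, s2, K, h):
    """B(s1, s2) per Eq. (29)."""
    return K * s1 * s2 + 0.5 * h * (s1 + s2)

def decimate(K, h):
    """One b=2 decimation step of Eq. (39): sum out the middle spin and
    refit the nearest-neighbour bond form of Eq. (40)."""
    def R(s1, s2):
        return sum(math.exp(bond_energy(s1, s, K, h) + bond_energy(s, s2, K, h))
                   for s in (-1, 1))
    K_new = 0.25 * math.log(R(1, 1) * R(-1, -1) / R(1, -1) ** 2)
    h_new = h + 0.5 * math.log(R(1, 1) / R(-1, -1)) - h   # = 0.5*ln(R(+,+)/R(-,-))
    return K_new, h_new

# At h = 0 this reduces to K' = (1/2) ln cosh(2K), e.g. decimate(1.0, 0.0)
```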
The renormalized Hamiltonian per bond with the renormalized interactions is \[R(\sigma_{1}^{\prime},\sigma_{2}^{\prime})\equiv\exp\left[K^{ \prime}\sigma_{1}^{\prime}\sigma_{2}^{\prime}+\frac{h^{\prime}}{2}(\sigma_{1}^{ \prime}+\sigma_{2}^{\prime})\right]\\ =\sum_{s_{1}=\pm 1}\exp\Bigg{[}k_{1}(K\sigma_{1}^{\prime}s_{1}+K \sigma_{2}^{\prime}s_{1}+K_{2}(\sigma_{1}^{\prime}s_{1})(\sigma_{2}^{\prime}s_{ 1}))+h_{1}\left(\frac{h}{2}(\sigma_{1}^{\prime}+s_{1})+\frac{h}{2}(\sigma_{2}^{ \prime}+s_{1})\right)+\\ k_{2}((K\sigma_{1}^{\prime}s_{1})^{2}+(K\sigma_{2}^{\prime}s_{1})^ {2}+K_{2}^{2}(\sigma_{1}^{\prime}s_{1})^{2}(\sigma_{2}^{\prime}s_{1})^{2})+\\ h_{2}\left(\left(\frac{h}{2}\sigma_{1}^{\prime}\right)^{2}+\left( \frac{h}{2}\sigma_{2}^{\prime}\right)^{2}+2\left(\frac{h}{2}s_{1}\right)^{2} \right)+\\ k_{3}((K\sigma_{1}^{\prime}s_{1})^{3}+(K\sigma_{2}^{\prime}s_{1})^ {3}+K_{2}^{3}(\sigma_{1}^{\prime}s_{1})^{3}(\sigma_{2}^{\prime}s_{1})^{3})+\\ h_{3}\left(\left(\frac{h}{2}\sigma_{1}^{\prime}\right)^{3}+\left( \frac{h}{2}\sigma_{2}^{\prime}\right)^{3}+2\left(\frac{h}{2}s_{1}\right)^{3} \right)+\cdots\Bigg{]}, \tag{41}\] where \(\{k_{i}\}\) and \(\{h_{i}\}\) are the coefficients associated with the power series. The above expression \(R(\sigma_{1}^{\prime},\sigma_{2}^{\prime})\) is the most generalized series expansion. Though it reduces the degrees of freedom, the renormalisation procedure typically introduces more interactions than in the original Hamiltonian. We introduce a new long-range interaction due to the renormalization procedure between pair of spin pairs \((\sigma_{1}^{\prime}s_{1})\) and \((\sigma_{2}^{\prime}s_{1})\) with a probabilistic weight \(K_{2}\) in the probabilistic description. Since \({\sigma^{\prime}}_{i}^{2},s_{1}^{2}=1\) and \({\sigma^{\prime}}_{i}^{3}=\sigma^{\prime}_{i},s_{1}^{3}=s_{1}\), the probabilistic weight is not going to change due to them. So we do not include the power series terms of the individual spins and spin pairs for determining the renormalized interactions. Rather, we include only the terms with \(K_{2}\) assigning the weight \(K_{2}=K^{a}\). We assign \(a\) as the parameter that controls the strength of long-range interactions. 
To confine the long-range interactions, we convert the analytical function to a continued exponential such that \(k_{i}=(i+1)^{i-1}/i!\), and the renormalized interactions \(R(\sigma^{\prime}_{1},\sigma^{\prime}_{2})\) become \[R(\sigma^{\prime}_{1},\sigma^{\prime}_{2})=\exp\left[K^{\prime}\sigma^{\prime}_{1}\sigma^{\prime}_{2}+\frac{h^{\prime}}{2}(\sigma^{\prime}_{1}+\sigma^{\prime}_{2})\right]=\mathrm{e}^{K^{a}\sigma^{\prime}_{1}\sigma^{\prime}_{2}\,\mathrm{e}^{K^{a}\sigma^{\prime}_{1}\sigma^{\prime}_{2}\,\mathrm{e}^{K^{a}\sigma^{\prime}_{1}\sigma^{\prime}_{2}\,\mathrm{e}^{\cdots}}}}. \tag{42}\]
Quantum phase transitions, which occur at zero temperature, cannot always be directly verified; however, their realization is theorised and studied with relevant order parameters related to the systems [61; 62; 63; 64; 65; 66; 67; 68]. However, since quantum phase transitions can happen at \(T=0\), the reduced temperature \(|T-T_{c}|\) in the field-theoretic description is replaced with similar measures, such as the variation of coupling constants from their critical values. While purely bosonic field theories describe \(O(n)\) universality classes, a new class of universality emerges for Dirac and Weyl systems in the presence of fermionic fields described by Gross-Neveu-Yukawa (GNY) models [69; 70]. Recently four-loop RG functions have been solved for different GNY models to address the critical exponents of such universality classes [36]. However, they employed only simple diagonal Pade approximants to evaluate the critical exponents, as spurious poles riddled the other non-diagonal terms. Typically a thorough analysis is required when using Pade-based methods, with critical inspection for poles and their removal, as performed for different \(\epsilon\)-expansions in recent work [21]. We use the four-loop \(\epsilon\) expansions [36] to determine critical exponents related to these models implementing continued functions.

### Chiral Ising universality class

The Chiral Ising model that can describe quantum phase transitions is a modification of the field-theoretic Ising model where fermions (\(\psi\)) are coupled to a scalar field (\(\phi\)) with Yukawa coupling. There are additional critical exponents in these GNY models associated with the RG gamma functions of the real scalar field and fermions, the anomalous dimensions of bosons (\(\eta_{\phi}\)) and fermions (\(\eta_{\psi}\)). The description of such GNY models is generalized by a parameter \(N\), the number of fermion flavours of the four-component Dirac fermion in the model. These models have a range of applicability depending on \(N\). The most physically relevant systems are the semimetal-charge density wave transition of electrons in graphene for \(N=2\) [71] and the semimetal-insulator transition of spinless fermions on the honeycomb lattice for \(N=1\). These systems have also been studied using other methods such as the non-perturbative functional renormalization group (FRG) [72; 73], quantum Monte-Carlo simulations (QMC) [74; 75; 76] and CB [77; 78] to calculate their corresponding critical exponents. For \(N=1/4\), this model is theorised to exhibit emergent supersymmetry properties on the boundary of topological superconductors [79].
We implement continued exponential fraction, continued exponential and continued exponential with Borel-Leroy transformation (Eq.s (2), (3) and (4)) to obtain estimates for exponents [36] \[1/\nu =2-0.9524\epsilon+0.007225\epsilon^{2}-0.09487\epsilon^{3}-0.01265\epsilon^{4}, \tag{48a}\] \[\eta_{\phi} =0.5714\epsilon+0.1236\epsilon^{2}-0.02789\epsilon^{3}+0.1491\epsilon^{4}, \tag{48b}\] \[\eta_{\psi} =0.07143\epsilon-0.006708\epsilon^{2}-0.02434\epsilon^{3}+0.01758\epsilon^{4}, \tag{48c}\] \[\omega =\epsilon-0.3525\epsilon^{2}+0.4857\epsilon^{3}-1.338\epsilon^{4}, \tag{48d}\] for \(N=2\) in \(d=2+1\). These estimates at consecutive orders for \(1/\nu\), \(\eta_{\phi}\), \(\eta_{\psi}\) and \(\omega\) are illustrated, compared with QMC predictions [74], in Fig.s 11(a), 11(b), 12(a) and 12(b), respectively.

Figure 11: Comparing Chiral Ising universality class \(1/\nu\) and \(\eta_{\phi}\) of \(N=2\) with QMC results [74].

\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|} \hline \(d\) & 1.25 & 1.375 & 1.5 & 1.650 & 1.750 & 1.875 \\ \hline \(\nu_{Ising}\) & 2.9879 & 1.9414 & 1.5542 & 1.3162 & 1.2158 & 1.1253 \\ \hline \(\nu_{Ising}\) [58] & 2.593 & 1.983 & 1.627 & 1.353 & 1.223 & 1.098 \\ \hline \end{tabular}
\end{table}
Table 1: Critical exponent of \(O(1)\) class \(\nu_{Ising}\) for \(1<d<2\).

Similarly, we obtain estimates for exponents [36] \[1/\nu =2-0.8347\epsilon-0.0057\epsilon^{2}-0.0603\epsilon^{3}-0.0903\epsilon^{4}, \tag{49a}\] \[\eta_{\phi} =0.4\epsilon+0.1025\epsilon^{2}-0.0632\epsilon^{3}+0.1986\epsilon^{4}, \tag{49b}\] \[\eta_{\psi} =0.1\epsilon-0.0102\epsilon^{2}-0.0330\epsilon^{3}+0.0507\epsilon^{4}, \tag{49c}\] for \(N=1\) in \(d=2+1\). These estimates at consecutive orders for \(1/\nu\), \(\eta_{\phi}\) and \(\eta_{\psi}\) are illustrated, compared with predictions from QMC [75; 76] and CB [78], in Fig.s 13(a), 13(b) and 14(a), respectively. And, similarly, we obtain estimates for exponents [36] \[1/\nu =2-0.5714\epsilon-0.0204\epsilon^{2}+0.0240\epsilon^{3}-0.0596\epsilon^{4}, \tag{50a}\] \[\eta_{\phi} =\eta_{\psi}=0.1429\epsilon+0.0408\epsilon^{2}-0.0480\epsilon^{3}+0.1193\epsilon^{4}, \tag{50b}\] \[\omega =\epsilon-0.4286\epsilon^{2}+1.1763\epsilon^{3}-4.0099\epsilon^{4}, \tag{50c}\] for \(N=1/4\) in \(d=2+1\), which are illustrated, compared with FRG predictions [80], in Fig.s 15(a), 14(b) and 15(b), respectively.

We observe that these predictions, tabulated in Table 2, are mostly comparable with the existing literature from FRG, QMC and CB, and are precisely compatible with the Pade resummation of RG [36]. Estimates seem to undershoot or overshoot slightly, whereas there is large uncertainty in predicting the anomalous fermion dimension \(\eta_{\psi}\) for \(N=2,1\) from different approaches. When handling \(\eta_{\phi}\) and \(\eta_{\psi}\) with the continued exponential with Borel-Leroy transformation, spurious poles were encountered, and those estimates are not available.

### Chiral XY universality class

In the chiral XY model, Dirac fermions undergo continuous \(U(1)\) symmetry breaking described by a complex scalar field. The physically interesting systems in this model which can describe the quantum criticality of superconducting

Figure 12: Comparing Chiral Ising universality class \(\eta_{\psi}\) and \(\omega\) of \(N=2\) with QMC [74] and RG [36] estimates.

Figure 13: Comparing Chiral Ising universality class \(1/\nu\) and \(\eta_{\phi}\) of \(N=1\) with QMC [75; 76] estimates.
\begin{table} \begin{tabular}{|c|c|c|c|c|} \hline \(N\) & \(1/\nu\) & \(\eta_{\phi}\) & \(\eta_{\psi}\) & \(\omega\) \\ \hline & 0.989(45) (\(S(C_{3})\)) & 0.699(11) (\(S(C_{2})\)) & 0.0708 (\(S(C_{2})\)) & \\ & 0.9531(35) (\(S(D_{3})\)) & 0.688(51)(\(S(D_{2})\)) & 0.033(46) (\(S(D_{2})\)) & \\ 2 & 1.1608(88) (\(S(E_{3})\)) & 0.664 (\(E_{2}\)) & 0.054(14) (\(S(E_{2})\)) & \\ 2 & 0.931, 0.945 [36] & 0.7079, 0.6906 [36] & 0.0539, 0.0506 [36] & 0.814(51) (\(S(E_{2})\)) \\ & 0.994(27) [72] (FRG) & 0.742 [78] (CB) & 0.044 [78] (CB) & \\ & 1.20(1) [74] (QMC) & 0.7765 [72] (FRG) & 0.0276 [72] (FRG) & 0.794, 0.777 [36] \\ & 0.621 (74) [74] (QMC) & 0.381(1) [74] (QMC) & \\ \hline & 1.093(27) (\(S(C_{3})\)) & 0.494(14) (\(S(C_{2})\)) & \\ & 1.23(38) (\(S(D_{3})\)) & 0.482(42) (\(S(D_{2})\)) & 0.1019(77) (\(S(C_{2})\)) & \\ & 1.213(18) (\(S(E_{3})\)) & 0.4539 (\(E_{2}\)) & 0.1004 (\(S(D_{2})\)) & \\ 1 & 1.101 [36] & 0.4969, 0.4872 [36] & 0.1011 (\(E_{3}\)) & \\ & 1.075(4) [72] (FRG) & 0.5506 [72] (FRG) & 0.0976, 0.0972 [36] & \\ & 1.14 [75] [QMC) & 0.544 [78] (CB) & \\ & 1.30 [76] (QMC) & 0.54(6) [75] (QMC) & \\ \hline & 1.411(21) (\(S(C_{3})\)) & 0.1754(71) (\(S(C_{2})\)) & 0.1754(71) (\(S(C_{2})\)) & \\ & 1.426(7) (\(S(D_{3})\)) & 0.169(18) (\(S(D_{2})\)) & 0.169(18) (\(S(D_{2})\)) & \\ 1/4 & 1.479(46) (\(S(E_{3})\)) & 0.1573 (\(E_{2}\)) & 0.1573 (\(E_{2}\)) & \\ & 1.415 [36] & 0.171, 0.170 [36] & 0.171, 0.170 [36] & \\ & 1.385, 1.395 [80] (FRG) & 0.167,0.174 [80] (FRG) & \\ & 0.164[77] (CB) & 0.164[77] (CB) & \\ \hline \end{tabular} \end{table} Table 2: Critical exponents of Chiral Ising universality class \(1/\nu\), \(\eta_{\phi}\), \(\eta_{\psi}\) and \(\omega\) for \(N=2,1,1/4\). Our values derived from continued functions (\(\{C,D,E\}\)) are compared with recent literature. Figure 14: Comparing Chiral Ising universality class \(\eta_{\psi}\) of \(N=1,1/4\) and \(\eta_{\phi}\) of \(N=1/4\) with CB [78] and FRG [80] estimates. Figure 15: Comparing Chiral Ising universality class \(1/\nu\) and \(\omega\) of \(N=1/4\) with FRG [80] estimates. states in graphene are for \(N=2\)[81]. This is related to Kekule transition on two-dimensional graphene structures [82; 83; 84]. Another interesting application of this model is in surface states of three-dimensional topological insulators where emergent supersymmetry is theorised for \(N=1/2\)[81; 85]. We obtain the estimates of critical exponents [36] \[1/\nu =2-1.2\epsilon+0.1829\epsilon^{2}-0.3515\epsilon^{3}+0.5164 \epsilon^{4}, \tag{51a}\] \[\eta_{\phi} =0.6667\epsilon+0.1211\epsilon^{2}-0.005048\epsilon^{3}+0.1938 \epsilon^{4},\] (51b) \[\eta_{\phi} =0.1667\epsilon-0.02722\epsilon^{2}-0.05507\epsilon^{3}+0.04202 \epsilon^{4},\] (51c) \[\omega =\epsilon-0.3783\epsilon+0.6271\epsilon^{3}-1.853\epsilon^{4}, \tag{51d}\] for \(N=2\) in \(d=2+1\). These estimates at consecutive orders for \(1/\nu\), \(\eta_{\phi}\), \(\eta_{\psi}\) and \(\omega\) are illustrated, compared with predictions from QMC [86], FRG [87] in Fig.s 16(a), 16(b), 17(a) and 17(b), respectively. We obtain the estimates of critical exponents [36] \[1/\nu =2-\epsilon+0.3333\epsilon^{2}-0.8569\epsilon^{3}+2.7629\epsilon^ {4}, \tag{52a}\] \[\eta_{\phi} =\eta_{\psi}=\epsilon/3,\] (52b) \[\omega =\epsilon-0.333\epsilon+0.8569\epsilon^{3}-2.7629\epsilon^{4}, \tag{52c}\] for \(N=1/2\) in \(d=2+1\). Estimates for \(1/\nu\) and \(\omega\) are illustrated, compared with predictions from CB [88] in Fig.s 18(a) and 18(b), respectively. 
The estimated values in Table 3 are comparable with predictions from other interesting field-theoretic studies of FRG [87], QMC [86], CB [88] and are compatible with the Pade resummation [36].

### Chiral Heisenberg universality class

In the Chiral Heisenberg model the \(SU(2)\) symmetry is broken, where the description with eight-component spinors (\(N=2\)) can correspond to the transition towards an antiferromagnetic spin-density wave state in graphene and related materials [89; 90; 91].

Figure 16: Comparing Chiral XY universality class \(1/\nu\) and \(\eta_{\phi}\) of \(N=2\) with QMC [86] estimates.

Figure 17: Comparing Chiral XY universality class \(\eta_{\psi}\) and \(\omega\) of \(N=2\) with FRG [87] estimates.

In this case, it is interesting to note that our precise estimates of the critical exponents [36] \[1/\nu =2-1.527\epsilon+0.4076\epsilon^{2}-0.8144\epsilon^{3}+2.001\epsilon^{4}, \tag{53a}\] \[\eta_{\phi} =0.8\epsilon+0.1593\epsilon^{2}+0.02381\epsilon^{3}+0.2103\epsilon^{4}, \tag{53b}\] \[\eta_{\psi} =0.3\epsilon-0.05760\epsilon^{2}-0.1184\epsilon^{3}+0.04388\epsilon^{4}, \tag{53c}\] \[\omega =\epsilon-0.4830\epsilon^{2}+0.9863\epsilon^{3}-2.627\epsilon^{4}, \tag{53d}\] for \(N=2\) in \(d=2+1\) agree more closely with previous predictions from FRG [92] and QMC [93; 94] than the simple Pade estimates [36] do (Table 4). These estimates at consecutive orders for \(1/\nu\), \(\eta_{\phi}\), \(\eta_{\psi}\) and \(\omega\) are illustrated, compared with predictions from QMC [93; 94], in Fig.s 19(a), 19(b), 20(a) and 20(b), respectively.

## IV Conclusion

Simple techniques were implemented on RG perturbative expansions of \(O(n)\)-symmetric models and Gross-Neveu-Yukawa models to better define the nature of classical and quantum phase transitions. Precise critical parameters were derived in such systems from methods using continued functions. Only the first few terms in the perturbation series are used, and the methods are applied without using arbitrary free parameters that influence the convergence. The continued exponential was implemented on perturbative low-temperature expansions and the position-space renormalization scheme of the Ising model to calculate the critical exponents corresponding to the system. One can further apply this convergence behaviour of continued functions to a wide range of perturbation methods to improve convergence, especially when only a few terms are available in the divergent series. However, the values we obtain from continued functions are accurate only for small perturbation parameters, especially \(\epsilon=1\), which makes this an ideal method to study classical systems in 3 dimensions and quantum systems in 2+1 dimensions. Further, to improve it for larger perturbation parameters, one can try to use these continued functions to interpolate between the weak and strong coupling limits using the large-order asymptotic behaviour of the perturbation coefficients, if available. The exact and unique convergence properties of an individual continued function can be further studied more rigorously based on its limits of applicability and accuracy.
2305.00836
On the existence of Siegel modular forms with extra twists
In this paper, we study Siegel modular forms with extra twists. We provide conditions on the level and genus of the forms that are necessary for the existence of extra twists for Siegel modular forms. We also give explicit examples of Siegel modular forms with extra twists that are different from complex conjugation.
Debargha Banerjee, Ronit Debnath
2023-05-01T14:05:45Z
http://arxiv.org/abs/2305.00836v1
# Siegel modular forms with extra twists ###### Abstract. In this paper, we study Siegel modular forms with extra twists. We provide conditions on the level and genus of the forms that is necessary for the existence of extra twists for Siegel modular forms. We also give explicit examples of Siegel modular forms with extra twists that are different from the complex conjugation. Key words and phrases:Siegel Modular forms, Yoshida lifts 2000 Mathematics Subject Classification: Primary: 11F46, Secondary: 11F80, 11F30 The first named author was partially supported by the SERB grant MTR/2017/000357 and CRG/2020/000223. We thank Professor Abhishek Saha for fruitful email correspondence in the initial stage of the project. 48]. In this case, the existence is guranteed by the work of Doi-Yamauchi, Birch and Koike. For \(k=2\), Ribet and Momose [13] studied the algebra \(\operatorname{Lie}(\rho_{f,l}(G_{\mathbf{Q}}))\) and showed that this is an explicit central simple algebra. Eknath Ghate and his collaborators (cf. [6], [9], [5]) studied this algebra for higher weights \(k\geq 2\) and they give a formulae for local algebras in terms of slopes of modular forms (cf. [5] for details). In the the recent past, there is a progress regarding the Galois representations associated with more general automorphic forms using the discovery of perfectoid spaces by Scholze [18]. The aim of this paper is to extend results to the automorphic forms for the symplectic group \(\operatorname{Gsp}_{2g}/\mathbf{Q}\) with genus \(g\geq 2\). We extend a result of Ribet [1, pg. 49 Theorem 5.7] to automorphic forms on the group \(\operatorname{Gsp}_{2g}/\mathbf{Q}\) with \(g\geq 2\) and give explicit examples of Siegel modular forms with extra twists. Using perfectoid spaces and building on work of Harris- Taylor-Thorne-Clozel-Shin and many others, we associate to every Siegel modular forms \(F\) of arbitrary genus \(g\), the compatible system of \(\lambda\) -adic Galois representation of Peter Scholze [18, p. 1034, Theorem 5.1.4] \[\rho_{F,\lambda}:G_{\mathbf{Q}}\to\operatorname{GSp}_{2g}(K_{\lambda})\] where \(K=\mathbf{Q}(t_{p})\) is the number field obtained by adjoining the Hecke eigenvalues \(t_{p}\) of \(F\) for all \(p\in\mathbb{N}\). These Galois representations are continuous, semi-simple and they encode the Satake parameters of the automorphic representation associated to \(F\). In our first main theorem, we encapsulate a condition on the genus \(g\), weight \(k\) and level \(N\) of the Siegel modular form \(F\) which is sufficient but not necessary for this Siegel modular form to have an extra twist. The level condition requires a high prime power divisibility. This is not so surprising even for \(g=1\), modular forms with extra twists are more the norm than the exception in the case of a high prime power. Denote by \(\rho_{l}:=\prod_{\lambda|l}\rho_{F,\lambda}\), \(\mathfrak{g}_{l}=\operatorname{Lie}(\rho_{F,l}(G_{\mathbf{Q}}))\), \(\mathfrak{gsp}_{2g}=\operatorname{Lie}(\operatorname{Gsp}_{2g})\) and \[\mathfrak{a}_{l}=\{u\in\mathfrak{gsp}_{2g}(K\otimes\mathbf{Q}_{l})| \mathrm{Tr}(u)\in\mathbf{Q}_{l}\};\] where \(\mathrm{Tr}(u)\) denotes the trace of the matrix \(u\). Note traces as sufficient to determine the representations as being isomorphic [20, p. 11]. We now state our first theorem: **Theorem 1**.: _Let \(F\) be a non-CM (cf. Definition 15) Siegel Modular form of genus \(g\geq 2\), weight \(k\), level \(N\) such that \(|g-k|\) is odd. 
Let \(N\) be chosen such that \((\mathbf{Z}/N\mathbf{Z})^{\times}\) has an element of order \(2g\). The Siegel modular forms \(F\) admits an extra twist if and only if we then have a strict inclusion \(\mathfrak{g}_{l}\subsetneq\mathfrak{a}_{l}\)._ The sufficiency of the conditions in the theorem above stems from Proposition 16. From the Petersson inner product, \((c,\epsilon^{-1})\) is always an extra twist for the Siegel modular forms. It is natural to ask if there are extra twists of Siegel modular forms _different_ from complex conjugation. In our second theorem, we explicitly produce Siegel modular forms with extra twists that are not coming from complex conjugation. Start with two classical elliptic modular form \((f,g)\) with \(f\in S^{1}_{k_{1}}(N,\epsilon)\), \(g\in S^{1}_{k_{2}}(N,\epsilon)\). Using a suitable embedding \(\mathrm{GL}_{2}\times\mathrm{GL}_{2}\hookrightarrow\mathrm{GSp}_{4}\), it is possible to produce a Siegel modular form \(Y(f\otimes g)\) that is a Yoshida lift of \((f,g)\). For two carefully chosen classical modular forms \(f\) and \(g\), their Yoshida lift denoted by \(Y(f\otimes g)\) is a Siegel modular form with an extra twist. **Theorem 2**.: _Let \(f\) and \(g\) be two classical non-CM elliptic modular forms with extra twist associated to the same Dirichlet character \(\chi\) different from complex conjugation. Assume that the Yoshida lift \(Y(f\otimes g)\) of \((f,g)\) exists, then \(Y(f\otimes g)\) is a Siegel modular form that contains extra twists different from complex conjugation._ The above theorem says that the group of extra twists can be really large. The method of this theorem also provides a systematic way of producing large class of examples of Siegel modular forms with extra twists. In fact, we give few examples of explicit Siegel modular forms with extra twists. We construct these examples by taking the Yoshida lift of two classical modular forms of the same level and same central nebentypus character. For families of modular forms, group of extra twists are studied by Conti [7] in a recent paper. Kumar et al use the extra twists for Siegel modular forms with armetic applications in the context of Lang-Trotter conjecture [12]. It is worth mentioning that as of now it is very hard to compute Fourier coefficients and hence Hecke eigenvalue for Siegel modular forms for congruence subgroups even for \(g=2\). There are several ways to produce Siegel modular forms using various lifts like symmetric cube, Saito-Kurakawa or Yoshida lifts. It will be really intriguing to find _extra twists_ of non-lifted Siegel modular forms that are not coming from complex conjugation. ## 2. Siegel Modular Forms ### Motivation Consider the case of classical elliptic modular forms. These are differential forms on the space obtained by the action of \(\mathrm{SL}_{2}(\mathbf{Z})\) on the upper half plane \(\mathbb{H}\). Recall, the upper half plane can be expressed in terms of the group \(\mathrm{SL}_{2}(R)/\mathrm{SO}(2)\), where \(\mathrm{SO}(2)=U(1)\), is the stabilizer of the point \(i=\sqrt{-1}\). This is a maximal compact subgroup. The group \(\mathrm{SL}_{2}(\mathbf{Z})\) is the automorphism group of the lattice \(\mathbf{Z}^{2}\) with the standard alternating inner product \(<,>\) defined as: \[<(a,b),(c,d)>=ad-bc.\] We wish to generalise by taking for \(g=2\) (the same generalisation works for arbitrary \(g\)). 
The lattice \((\mathbf{Z}^{2})^{2}\) of rank \(2\times 2\), with basis \(e_{1},e_{2},f_{1},f_{2}\) provided with the symplectic form \(<,>\) defined by : \[<e_{i},e_{j}>=0,<f_{i},f_{j}>=0,<e_{i},f_{j}>=\delta_{ij}\] with \(\delta_{ij}\) being the Kronecker's delta. The symplectic group \(\mathrm{Sp}_{4}(\mathbf{Z})\) is by definition the automorphism group of the symplectic lattice \(\mathrm{Sp}_{4}(\mathbf{Z}):=Aut(\mathbf{Z}^{4},<,>)\). By using the basis of the \(e^{\prime}s\) and \(f^{\prime}s\) we can write the elements as group of matrices: \(\begin{bmatrix}A&B\\ C&D\end{bmatrix}\) where \(A,B,C\) and \(D\) are integral \(2\times 2\) matrices satisfying the following conditions. ### Siegel modular group Let \(R\) be a commutative ring with \(1\) and fix an integer \(g\geq 2\). Denote by \(I_{g}\) and \(0_{g}\) be the identity and zero matrix of the ring \(M_{g}(R)\) and now consider the matrix \(J_{g}=\begin{bmatrix}0_{g}&I_{g}\\ -I_{g}&0_{g}\end{bmatrix}\). Denote by \(\mathrm{GSp}_{2g}(R)\) the algebraic group of symplectic similitudes with respect to \(J_{g}\). Hence, \[\mathrm{GSp}_{2g}(R)=\{M\in\mathrm{GL}_{2g}(R)|M^{t}JM=\mu(M)J\}.\] The map \(M\to\mu(M)\) defines a character \(\mu:\mathrm{GSp}_{2g}(R)\to R^{\times}.\) We refer to \(\mu\) as the similitude factor. The group \[\mathrm{Sp}_{2g}(R)=\{M\in\mathrm{GL}_{2g}(R)|M^{t}JM=J\}\] is called the symplectic group of degree \(2g\) with coefficients in \(R\). Since the above set is the automorphism group of the alternating skew-symmetric form defined by \(J_{g}\), hence \(\mathrm{Sp}_{2g}(R)\) is a subgroup of \(\mathrm{GL}_{2g}(R)\). For this article, we are mostly interested in \(R=\mathbf{Z}\) or \(\mathbf{R}\). **Definition 3**.: [11, Page 2] For an open set \(D\subset\mathbf{C}^{g}\), a function \(F:D\rightarrow\mathbf{C}\) is called holomorphic if it is continuous and if for each fixed \((z_{1}^{0},...,z_{g}^{0})\in D\), and each \(j=1,...,g\), the function of a single variable which is determined by the assignment \(z_{j}\to F(z_{1}^{0},...,z_{j-1}^{0},z_{j},z_{j+1}^{0},...,z_{g})\) is holomorphic. The upper half plane for arbitrary genus \(g\geq 1\) is the set \(\mathbb{H}_{g}=\{X+iY|X,Y\in M_{g}(\mathbf{C}),X=X^{t},Y=Y^{t},Y>0\}\). The condition \(Y>0\) means it is positive definite as a matrix. We now define the vector valued Siegel modular form in arbitrary genus \(g>1\) here and subsequently state the more explicit definition in genus \(g=2\). Suppose \(\rho:\mathrm{GL}_{g}\rightarrow\mathrm{GL}(V)\) be a finite dimensional complex representation. Then by a vector valued Siegel modular form of genus \(g\), we mean a holomorphic function \(F:\mathbb{H}_{g}\rightarrow\mathbf{C}\) such that \(F((A\tau+B)(C\tau+D)^{-1})=\rho(C\tau+D)F(\tau)\) for all \(\gamma=\begin{bmatrix}A&B\\ C&D\end{bmatrix}\in\mathrm{Sp}_{2g}(\mathbf{Z})\) and \(\tau\in\mathbb{H}_{g}\). The vector valued Siegel modular form of genus \(2\) can be described more explicitly [3]. The scalar valued Siegel modular forms are just a special case of that. For non negative integers \(k\) and \(j\), let \(\rho_{k,j}:\mathrm{GL}_{2}(\mathbf{C})\rightarrow\mathrm{GL}_{j+1}(\mathbf{C})\) be the irreducible representation of signature \((j+k,k)\); \[\rho_{k,j}=det^{k}\otimes Sym^{j}.\] We now define the slash operator on a function \(F:\mathbb{H}_{g}\rightarrow\mathbf{C}\). 
For a matrix \(\gamma=\begin{bmatrix}A&B\\ C&D\end{bmatrix}\in\mathrm{Sp}_{2g}(\mathbf{Z})\), and \(\tau\in\mathbb{H}_{g}\), we have the action \[(F|_{k,j}(\gamma))(\tau)=(\rho_{k,j}(C\tau+D))^{-1}F((A\tau+B)(C\tau+D)^{-1}).\] **Definition 4**.: A holomorphic function \(F:\mathbb{H}_{2}\rightarrow\mathbf{C}\) is called a vector valued Siegel modular form with respect to the full subgroup \(\mathrm{Sp}_{4}(\mathbf{Z})\) and weight \(\rho_{k,j}\) if \(F|_{k,j}[\gamma]=F\) for all \(\gamma\in\mathrm{Sp}_{4}(\mathbf{Z})\). The case that \(j=0\), we are mainly concern with this case in this paper and such Siegel modular forms are called scalar valued. So henceforth when we say Siegel modular forms we mean scalar valued ones. Recall the following definition of Siegel modular forms (respectively cusp forms) with respect to a subgroup [2]. **Definition 5**.: Let \(K\) be a subgroup of the full modular group \(\mathrm{Sp}_{2g}(\mathbf{Z})\). Then \(F\) is said to be a Siegel Modular Form of weight \(k\) and character \(\chi\) for the subgroup \(K\) if the following conditions are satisfied : 1. \(F\) is a holomorphic function on \(\mathbb{H}_{g}\). 2. For every matrix \(M\in K\), the function \(F\) satisfies \(F|_{k}(M)=\chi(M)F\). The set \(M_{g}^{k}(K,\chi)\) denotes Siegel Modular Forms of genus \(g\), weight \(k\) for the character \(\chi\) and subgroup \(K\). **Definition 6**.: We define the operator \(\Phi\) on \(F\in M_{g}^{k}(K,\chi)\) by \((\Phi F)(Z^{\prime})=\lim_{t\to\infty}F(\begin{bmatrix}Z^{\prime}&0\\ 0&it\end{bmatrix})\) with \(Z^{\prime}\in\mathbb{H}_{g-1}\) and \(t\in\mathbb{R}\). **Definition 7**.: A Siegel modular form \(F\in M_{g}^{k}(K,\chi)\) is said to be a cusp form if \(F\) Lies in the kernel of the \(\Phi\) operator. The principal congruence subgroup for the of level \(N\) for the full modular group denoted by \(\Gamma^{(g)}(N)\) is defined to be \[\Gamma^{(g)}(N)=\{M\in\mathrm{Sp}_{2g}(\mathbf{Z})\mid M\equiv I_{2g}\;(modN)\}.\] A congruence subgroup \(K\) of \(\mathrm{Sp}_{2g}(\mathbf{Z})\) is any subgroup of \(\mathrm{Sp}_{2g}(\mathbf{Z})\) such that \(K\supset\Gamma^{(g)}(N)\) for some \(N\). We are interested in the following two types of the congruence subgroups: \[\Gamma^{(g)}_{0}(N)=\{\begin{bmatrix}A&B\\ C&D\end{bmatrix}\in\mathrm{Sp}_{2g}(\mathbf{Z})\mid C\equiv 0\;(mod\;N)\}\] and \[\Gamma^{(g)}_{1}(N)=\{\begin{bmatrix}A&B\\ C&D\end{bmatrix}\in\mathrm{Sp}_{2g}(\mathbf{Z})\mid C\equiv 0\;(mod\;N),A,D \equiv I_{g}\;(mod\;N)\}.\] Obvious from the definitions we have the strict inclusions \(\Gamma^{(g)}(N)\subsetneq\Gamma^{(g)}_{1}(N)\subsetneq\Gamma^{(g)}_{0}(N)\). ## 3. Extra twists for Siegel modular forms In this section, we define extra twists for Siegel modular forms and state some properties of the same. **Proposition 8**.: _[_14_]_ _A Siegel modular form \(F\) has the expansion_ \[F(Z)=\sum_{A\in E^{2g},A\geq 0}t(A)e^{\pi itr(AZ)} \tag{1}\] _where \(t(A)\) denotes the coefficients of the expansion, \(E^{2g}\) denotes the set of \(2g\times 2g\) half integral matrices, and \(\operatorname{tr}\) denotes the trace._ As recalled in the introduction, we associate to every Siegel modular forms \(F\) of arbitrary genus \(g\geq 1\), the compatible system of \(\lambda\) -adic Galois representations [18, p. 1034, Theorem 5.1.4] \[\rho_{F,\lambda}:G_{\mathbf{Q}}\to\operatorname{GSp}_{2g}(K_{\lambda})\] where \(K=\mathbf{Q}(t_{p})\) is the number field obtained by adjoining the Hecke eigenvalues \(t_{p}\) of \(F\) for all \(p\in\mathbb{N}\). 
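The target of the Galois representations above is the symplectic similitude group defined earlier. As a quick computational sanity check of that definition, the following is a small sketch (our own illustration in Python/sympy; the helper names are hypothetical and not from the paper) verifying \(M^{t}JM=\mu(M)J\) for two \(4\times 4\) examples and testing the congruence condition defining \(\Gamma^{(2)}_{0}(N)\).

```python
# Our own illustrative check of the definitions of GSp_{2g} and Gamma_0^{(g)}(N)
# for g = 2 (so 4x4 matrices), using the alternating form given by J_g.
import sympy as sp

def J(g):
    """J_g = [[0, I_g], [-I_g, 0]]."""
    return sp.BlockMatrix([[sp.zeros(g), sp.eye(g)],
                           [-sp.eye(g), sp.zeros(g)]]).as_explicit()

def similitude_factor(M, g):
    """Return mu(M) if M^t J M = mu * J holds, otherwise None."""
    P = M.T * J(g) * M
    mu = P[0, g]                                   # entry where mu*J_g carries mu
    return mu if sp.simplify(P - mu * J(g)).is_zero_matrix else None

def in_Gamma0(M, N, g):
    """M in Sp_{2g}(Z) with lower-left block C congruent to 0 mod N."""
    C = M[g:, :g]
    return similitude_factor(M, g) == 1 and all(c % N == 0 for c in C)

B = sp.Matrix([[1, 0], [0, 0]])                    # any symmetric B gives an element of Sp_4(Z)
M = sp.BlockMatrix([[sp.eye(2), B], [sp.zeros(2), sp.eye(2)]]).as_explicit()
print(similitude_factor(M, 2))                     # 1, so M lies in Sp_4(Z)
print(similitude_factor(3 * sp.eye(4), 2))         # 9: a proper similitude with mu = 3^2
print(in_Gamma0(M, 5, 2))                          # True, since the C-block is 0
```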
These Galois representations are continuous, semi-simple and they encode the Satake parameters of the automorphic representation associated to \(F\). Let \(\Gamma=Aut(K)\) and consider the set \(D:=\{\varepsilon:G_{\mathbf{Q}}\to K^{\times}\}\) be the set of characters from \(G_{\mathbf{Q}}\) to \(K^{\times}\). Let \(V\) be the corresponding module over \(K\otimes\mathbf{Q}_{l}\) for which \(Aut(V)=\operatorname{GSp}_{2g}(K\otimes\mathbf{Q}_{l})\). Following [4], we define the Siegel modular forms with extra twists using Galois representation. Following lemma proves that the image of the Galois representation associated to a Siegel modular form is non-abelian. **Lemma 9**.: _The image \(\rho_{F,\lambda}(G_{\mathbf{Q}})\) is non abelian._ Proof.: Let \(c\) be the complex conjugation in \(G_{\mathbf{Q}}\). Since \(c^{2}=I\) and \(\det\rho_{\lambda}(c)=-1\), the eigenvalues of \(\rho_{\lambda}(c)\) are \(+1,-1\). Hence the matrix \(\rho_{\lambda}(c)\) is similar to its Jordan canonical form with \(+1\)'s and \(-1\)'s in the diagonal. So the elements of \(\operatorname{GSp}_{2g}(K_{\lambda})\) which commute with it form a block diagonal subgroup T. Since the representation \(\rho_{\lambda}\) is irreducible the image cannot be a subset of \(T\). Hence there are elements in the image that do not commute with \(\rho_{\lambda}(c)\) and so the image is non abelian. **Definition 10**.: A Siegel modular form \(F\) is said to have an _extra twist_ if there exists a tuple \((\gamma,\chi_{\gamma})\) with \(\gamma\in\Gamma\), \(\chi_{\gamma}\in D\) such that \(\rho_{F}\cong\gamma(\rho_{F})\otimes\chi_{\gamma}\). We now state some properties about extra twists. **Lemma 11**.: _The extra twists for \(\rho_{F}\) over \(\mathbf{Q}\) form a group._ Proof.: Let \(\gamma_{1},\gamma_{2}\in\Gamma\) be two extra twists of Siegel modular forms. This means that \(\rho_{F}=\gamma_{1}(\rho_{F})\otimes\chi_{\gamma_{1}}\) and \(\rho_{F}=\gamma_{2}(\rho_{F})\otimes\chi_{\gamma_{2}}\). Hence \(\rho_{F}=\gamma_{1}\gamma_{2}(\rho_{F})\otimes\chi_{\gamma_{1}}\chi_{\gamma_ {2}}=\gamma_{1}\gamma_{2}(\rho_{F})\otimes\chi_{\gamma_{1}.\gamma_{2}}\) where the character \(\chi_{\gamma_{1}.\gamma_{2}}\) denotes the product of the characters \(\chi_{\gamma_{1}}\) and \(\chi_{\gamma_{2}}\). Further the identity element, inverse are trivial and we have already cHecked closure. So the set of self twists for \(\rho_{F}\) over \(\mathbf{Q}\) form a group. We prove two lemmas similar to [5]. **Lemma 12**.: _For every extra twist \(\gamma\), the character \(\chi_{\gamma}\) satisfying the equivalence is uniquely determined._ Proof.: Let \(\chi_{\gamma_{1}}\) and \(\chi_{\gamma_{2}}\) be two characters associated to the automorphism \(\gamma\). Hence \(\rho_{F}=\gamma(\rho_{F})\otimes\chi_{\gamma_{1}}=\gamma(\rho_{F})\otimes \chi_{\gamma_{2}}\). This means that \(\gamma(\rho_{F})\otimes(\chi_{\gamma_{1}}-\chi_{\gamma_{2}})=0\). If \(c\) denotes complex conjugation, \(\rho_{F}(c)\neq 0,\) and hence \(\chi_{\gamma_{1}}=\chi_{\gamma_{2}}\). **Lemma 13**.: _The association \(\delta\to\chi^{\delta}\) defines a cocycle on the group of self-twist with values in \(K^{\times}\)._ Proof.: For \(\gamma,\delta\in\Gamma\), the identity \(\chi_{\gamma\delta}\to\chi_{\gamma}\chi_{\delta}^{\gamma}\) shows that \(\gamma\to\chi_{\gamma}\) is a \(1-\) cocycle. Specializing to \(g\in G_{\mathbf{Q}}\), we see that \(\gamma\to\chi_{\gamma}(g)\) is a \(1-\) cocycle. 
By Hilbert's Theorem 90, \(H^{1}(\Gamma,E^{\times})\) is trivial and there is an element \(\alpha(g)\in E^{\times}\) such that \[\frac{\gamma(\alpha(g))}{\alpha(g)}=\chi_{\gamma}(g) \tag{2}\] for all \(\gamma\in\Gamma\). Clearly \(\alpha(g)\) is completely determined (upto multiplication) by elements of \(F^{\times}\). Varying \(g\in G_{\mathbf{Q}}\) we obtain the well defined map \[\overline{\alpha}:G_{\mathbf{Q}}\to E^{\times}/F^{\times}.\] Since each \(\chi_{\gamma}\) is a character, \(\overline{\alpha}\) is a homomorphism. The following proposition closely follows from [4, Proposition (2.15)]. **Proposition 14**.: _Let \(\mathbf{Q}[Tr(Ad(\rho))]\) denote the ring generated over \(\mathbf{Q}\) by the set \(Tr(Ad(\rho)(g))\) for \(g\in G_{\mathbf{Q}}\). Then every element of \(\mathbf{Q}[\operatorname{Tr}(Ad(\rho))]\) is fixed by all self twists for \(\rho\) over \(\mathbf{Q}\)._ Proof.: From [4, Proposition (2.14)] we get that there is an equivalent representation to \(\rho\) which we call \(\rho^{\prime}\) satisfying : 1. \(\rho^{\prime}(g)\in\operatorname{GSp}_{4}(K_{\rho})\) where the field \(K_{\rho}=\mathbf{Q}(tr(Ad(\rho)))\). 2. \(\gamma(\rho^{\prime})=\rho^{\prime}\otimes\epsilon\). For \(g\in G_{\mathbf{Q}}\), we consider the \(1\)-cocycle \(c_{g};\Gamma_{[g]}\to L^{\times}\) defined by \(c_{g}(\gamma)=s^{-1}((\rho^{\prime})^{\gamma}\rho^{\prime}(g)^{-1})\), where \(s:L^{\times}\to\operatorname{GSp}_{4}(L)\) denotes the scalar morphism. By Hilbert's 90, \(H^{1}(\Gamma_{[\rho^{\prime}]},L^{\times})\) is the trivial module. Hence \(c_{g}(\gamma)=\frac{a_{g}}{\gamma(a_{g})}\) with some \(a_{g}\in L^{\times}\). Then for any \(g\in G\) and \(\gamma\in\gamma_{[\rho^{\prime}]}\), we obtain \[\rho^{\prime}(g)^{\gamma}\otimes\frac{\gamma(a_{g})}{a_{g}}=\rho^{\prime}(g).\] The identity \(\gamma(\rho^{\prime}(g)a_{g})=\rho^{\prime}(g)a_{g}\) shows that \(\rho(g)a_{g}\in\mathrm{GSp}_{4}(K_{\rho})\). Hence the field \(\mathbf{Q}(Tr(Ad(\rho(g))))\) is fixed by all self twists for \(\rho\) over \(\mathbf{Q}\). We first define the Siegel modular form without complex multiplication below which is important for the main theorems as they true hold for forms without complex multiplication. The definition is taken from [4, Definition 2.3]. **Definition 15**.: A Siegel modular form \(F\) is said to admit complex multiplication (or be cm) if there exists non trivial \(\epsilon\) such that \(\rho_{F}\cong\rho_{F}\otimes\epsilon^{-1}\). Denote the matrix \(dU_{g}=\left(\begin{array}{cc}d^{-1}I_{g}&0\\ 0&dI_{g}\end{array}\right)\). Corresponding to a congruence subgroup \(K\) and a character \(\psi:(\mathbf{Z}/N\mathbf{Z})^{\times}\to\mathbf{C}^{\times}\), we have the space \(S_{k}(K,\chi,\psi)=\{u\in S_{k}(K,\chi)\mid F<d>=\psi(d)F\}\), where \(<d>=dU_{g}\). For the next proposition, we assume that \(g=2\). It is worth mentioning that [2, Lemma 4.14], to our knowledge holds for scalar valued Siegel modular forms and hence we haven't generalised this result to vector valued Siegel modular forms here. In the next Proposition we see how the same result can be generalised to Siegel modular forms of higher genus putting similar conditions connecting the level and the genus. **Proposition 16**.: _Let \(F\in S^{g}_{k}(K,\chi,\psi)\) be a Siegel cusp form of level \(N\) and weight \(k\). Let the level \(N\) be chosen so that \((\mathbf{Z}/N\mathbf{Z})^{\times}\) has an element of order \(2g\). Assume that \(|g-k|\) is odd. 
There exists \(\sigma\in H=\ker(\psi)\), such that the characteristic polynomial of \(\sigma\) has distinct roots._ Proof.: We assume \(g\geq 2\) as we already know the result for \(g=1\). Recall that we start with an \(N\) for which \((\mathbf{Z}/N\mathbf{Z})^{\times}\) has an element of order \(2g\), which we call \(a\). So \(a^{g}=-1\). Rewriting the equation from [2, Lemma 4.14], we get \(\chi(aU_{g})^{g}a^{gk-g(g+1)}F=\psi(a)F\). Hence, we deduce that \(\chi(-I)(-1)^{k-g+1}F=\psi(a)F\). For \(gk\in 2\mathbf{Z}\) (which is ensured when \(F\neq 0\)), this further gives \((-1)^{gk+k-g+1}=(-1)^{k-g+1}=\psi(a)\). Given \(g\) and \(k\) are of different parity, we are able to find a \(\sigma\in H\) having \(2g\) distinct roots such that \(\rho_{\psi}(\sigma)=1\) with \(\rho_{\psi}\) as in [8, Pg 385]. From this we conclude that \(\sigma\) is an element such that it satisfies the equation \(X^{2g}-1=0\). This equation has \(2g\) distinct roots. **Corollary 17**.: _If \(g=2\), we choose \(N\) such that \((\mathbf{Z}/N\mathbf{Z})^{\times}\) has an element of order \(4\). Assume that weight is odd. There exists \(\sigma\in H=\ker(\psi)\), such that \(\sigma\) the characteristic polynomial of \(\sigma\) has distinct roots._ Furthermore, we conclude that \(\pi_{N}(conj)=-1=\sigma^{2}\). From [21, Lemma 1.1], we thus deduce again the following corollary. **Corollary 18**.: _We have an inclusion \(\rho_{l}(H)\subset GSp_{2g}(L\otimes Q_{l})\) with \(L=K^{<\sigma>}\) and \(L\subset\mathbb{R}\)._ From [21, Lemma 1.1], we deduce that \(\rho_{l}(H)\subset GSp_{4}(L\otimes Q_{l})\), where \(L=K^{<\sigma>}\). Now as \((conj)\in\pi_{N}^{-1}(<\sigma>)\), \(L\subset\mathbb{R}\). For an embedding \(\sigma:K\hookrightarrow\overline{\mathbf{Q}}_{l}\), let \(\rho_{\sigma}\) be the map \[\rho_{\sigma}:G_{\mathbf{Q}}\to\mathrm{GSp}_{2g}(K\otimes\overline{\mathbf{Q} _{l}})\to\mathrm{GSp}_{2g}(\overline{\mathbf{Q}_{l}});\] where the first map is given by \(\rho_{l}\) and the latter map is induced by \(\sigma\) entry wise by \(\sigma(k,t):=\sigma(k)t\). Let \(\mathfrak{gsp}_{2g}(\overline{\mathbf{Q}}_{l,\sigma})=\mathfrak{g}_{\sigma}\) denotes the Lie algebra of the image of \(\rho_{\sigma}(G_{\mathbf{Q}})\), and so similarly for \(\tau\), we have \(\mathfrak{gsp}_{4}(\overline{\mathbf{Q}}_{l,\tau})=\mathfrak{g}_{\tau}\). Also let \(\mathfrak{g}_{l}^{\prime}:=\{(u,u^{\prime})\in\mathfrak{g}_{\sigma}\times \mathfrak{g}_{\tau}\}\). We show that \(\mathfrak{g}_{l}^{\prime}\) is surjective onto each of its components. We state the following Proposition that we use in the proof of the main theorem. The idea of this proof essentially follows from [15, Step 1, Pg 790]. **Proposition 19**.: _Given \(\mathfrak{g}_{l}^{\prime}\) as above, the projective maps \(p_{\sigma}:\mathfrak{g}_{l}^{\prime}\to\mathfrak{g}_{\sigma}\) and \(p_{\tau}:\mathfrak{g}_{l}^{\prime}\to\mathfrak{g}_{\tau}\) are surjective._ Proof.: Since the \(\lambda\)-adic Galois representation associated to Siegel modular form is irreducible, we have \(End_{\mathfrak{g}_{\sigma}}(V_{\sigma})=\overline{\mathbf{Q}_{l}}\). Also \(\mathfrak{g}_{\sigma}\) is reductive with a centre that is diagonalisable as \(G_{\mathbf{Q}}\) acts semi simply on \(V\). It follows that \(\mathfrak{g}_{\sigma}\subset\mathfrak{gsp}_{2g}(V_{\sigma})\). Now \(\mathfrak{gsp}_{2g}(V_{\sigma})=\mathfrak{sp}_{2g}(V_{\sigma})\oplus\overline{ \mathbf{Q}_{l}}^{\times}\). 
Now as \(\mathfrak{sp}_{2g}(V_{\sigma})\) is simple, and \(\mathfrak{g}_{\sigma}\) is semisimple, \(\mathfrak{g}_{\sigma}=\mathfrak{sp}_{2g}(V_{\sigma})\) or \(\mathfrak{g}_{\sigma}=\mathfrak{gsp}_{2g}(V_{\sigma})\). The former is impossible as \(\chi(Frob_{l})\neq 1\) and hence \(\mathfrak{g}_{\sigma}=\mathfrak{gsp}_{2g}(V_{\sigma})\). Goursat's lemma states that if \(G_{1}\) and \(G_{2}\) are groups such that \(H\) is a subgroup of \(G_{1}\times G_{2}\), such that the two projections \(p_{1}:H\to G_{1}\) and \(p_{2}:H\to G_{2}\) are surjective. If \(N_{1}\) is the kernel of \(p_{2}\) and \(N_{2}\), the kernel of \(p_{1}\), then \(H\) is the graph of an isomorphism from \(G_{1}/N_{1}\cong G_{2}/N_{2}\). We now prove a lemma about the kernels of the Goursat's lemma in our context. For each embedding \(\sigma:E\to\overline{\mathbf{Q}_{l}}\), define \(G_{\sigma}=\operatorname{GSp}_{2g}(V_{\sigma})\) and \(G_{l}:=\prod_{\sigma:E\to\overline{\mathbf{Q}_{l}}}\operatorname{GSp}_{2g}(V_ {\tau})\). Let \(J=\{(u,u^{\prime})\in\operatorname{GSp}_{2g}(V_{\sigma})\times\operatorname{ GSp}_{2g}(V_{\tau})\}\) where \(u=\rho_{\sigma}(g)\) and \(u^{\prime}=\rho_{\tau}(g)\) for some \(g\in G_{\mathbf{Q}}\). Hence \(J\subset G\times G^{\prime}\) on which we can apply Goursat's lemma as conditions of the same are satisfied due to Proposition19. **Lemma 20**.: _With \(G_{\sigma},G_{\tau}\) and \(J\) as above, and if \(G_{l}\subsetneq A_{l}\), then \(J\) is the graph of an isomorphism \(G_{\sigma}\to G_{\tau}\) for some choice of \(\sigma\) and \(\tau\)._ Proof.: We know that \[\operatorname{GSp}_{2g}(V_{\sigma})=\operatorname{Sp}_{2g}(V_{\sigma})\bigoplus \overline{\mathbf{Q}_{l}}^{\times},\operatorname{GSp}_{2g}(V_{\tau})= \operatorname{Sp}_{2g}(V_{\tau})\bigoplus\overline{\mathbf{Q}_{l}}^{\times}.\] We show that the kernel of the homomorphism is trivial. The kernel is an normal subgroup of \(\operatorname{Sp}_{2g}(V_{\sigma})\bigoplus\overline{\mathbf{Q}_{l}}^{\times}\). Recall \(\operatorname{Sp}\) is a simple group. Following [13, p. 106], recall that if you resrict to certain open subgroup, the Galois representation remains irreducible. Hence, \(G_{\sigma}\) contains \(\operatorname{Sp}_{2g}(V_{\sigma})\). However, the determinant which is an open map. Hence the only possibilities for the second component of the kernel can be trivial or whole of \(\overline{\mathbf{Q}_{l}}^{\times}\). Since we have the assumption that \(G_{l}\subsetneq A_{l}\), there exists at least one embedding \(\sigma\) for which \(G_{\sigma}\neq A_{\sigma}\). Hence for that \(\sigma\), the normal subgroup \(N_{\sigma}=\{\pm I\}\times 1\). So the graph is from \(G_{\sigma}/N_{\sigma}\to G_{\tau}/N_{\tau}\) where for the sake of isomorphism, both \(N_{\sigma}\) and \(N_{\tau}\) are trivial because for any other choice of \(N_{\tau}\), the isomorphism would not exist. ## 4. The Main Theorem In this section we prove our main Theorem 1 for general \(g\geq 1\). By extension of scalars, we regard \(\rho_{l}\) as the \(K\otimes\overline{\mathbf{Q}_{l}}\) representation, \[\rho_{l}:G_{\mathbf{Q}}\to\operatorname{GSp}_{2g}(K\otimes\mathbf{Q}_{l})\to \operatorname{GSp}_{2g}(K\otimes\overline{\mathbf{Q}_{l}})\] Changing bases if necessary we proved in Corollary 18, that there exists \(H\) such that \(\rho_{l}(H)\subset\operatorname{GSp}_{2g}(L\otimes\mathbf{Q}_{l})\). Here we assume that \(N\in\mathbb{N}\) such that \((\mathbf{Z}/N\mathbf{Z})^{\times}\) has an element of order \(4\). 
Our aim is to calculate Lie algebra of \(\rho_{l}(G_{\mathbf{Q}})\) which we have denoted as \(\mathfrak{g}_{l}\). On \(H\), \(\rho_{\sigma}=\rho_{\tau}\iff\sigma|_{L}=\tau|_{L}\). We list the following properties of the \(l\)-adic Galois representation: 1. \(\rho_{l}\) is semisimple. 2. \(det(\rho_{l})=\chi(l)^{2k-3}\)__ 3. No representation \(\rho_{\lambda}\) with \(\lambda|l\) becomes abelian on an open subgroup of \(G_{\mathbf{Q}}\). Because of the aforementioned conditions, \(\rho_{l}\) satisfies the conditions for [15, Theorem 4.4.10]. We now prove our main Theorem 1. The main theorem says if certain conditions are satisfied then the Siegel modular forms contain extra twist if and only if \(\mathfrak{g}_{l}\subsetneq\mathfrak{a}_{l}\). In other word, proving main theorem is equivalent to showing the following statements are equivalent as in the classical cases of elliptic modular forms: 1. \(\mathfrak{g}_{l}\subsetneq\mathfrak{a}_{l}\)__ 2. \(\exists\ \sigma,\tau\) s.t. \(\sigma|_{K}\neq\tau|_{K}\) but there \(\exists\) an open subset \(H_{0}\) of \(G_{\mathbf{Q}}\) such that \(\rho_{\sigma}:H_{0}\to\mathrm{GSp}_{2g}(\overline{\mathbf{Q}_{l}})\) and \(\rho_{\tau}:H_{0}\to\mathrm{GSp}_{2g}(\overline{\mathbf{Q}_{l}})\) are isomorphic. 3. There exists a finite order character \(\phi:G_{\mathbf{Q}}\to\overline{\mathbf{Q}_{l}}^{\times}\) s.t. \(\rho_{\tau}\cong\rho_{\sigma}\otimes\phi\). Proof.: We establish \((1)\implies(2)\). We use the same argument as in [19, Lemma 7] and modify it for our case. Recall that \(\mathfrak{g}_{l}^{\prime}=\{(u,u^{\prime})\in\mathfrak{g}_{\sigma}\times \mathfrak{g}_{\tau}\}\). At Lie algebra level, \(\mathfrak{g}_{l}^{\prime}\subset\mathfrak{gsp}_{2g}(V_{\sigma})\times\mathfrak{ gsp}_{2g}(V_{\tau})\). Suppose \(\mathfrak{g}_{l}\subsetneq\mathfrak{a}_{l}\), as the projections are surjective by Proposition 19. By applying Goursat's Lemma 3, we deduce that \(\mathfrak{g}_{l}^{\prime}\) is the graph of an isomorphism \(20\ \alpha:\mathfrak{gsp}_{2g}(V_{\sigma})\to\mathfrak{gsp}_{2g}(V_{\tau})\) which takes \(1\) to \(1\). Note that as the projections are surjective, the graph always exists. However only when is the inclusion proper are the kernels trivial. Recall that \(\mathfrak{gsp}_{2g}(V)=\mathfrak{sp}_{2g}(V)\oplus\overline{\mathbf{Q}_{l}}^{ \times}I_{n}\). Hence, any automorphism of \(\mathfrak{gsp}_{2g}(V)\) is determined by what it does to \(\mathfrak{sp}_{2g}(V)\). Hence our automorphism is determined by its restriction to \(\mathfrak{sp}_{2g}(V)\). So by [10], \(\alpha(u)=f\circ u\circ f^{-1}\) where \(u\in\mathfrak{gsp}_{2g}(V_{\sigma})\) and \(f\in V_{\sigma}\to V_{\tau}\). Here \(f\) is an isomorphism of \(\mathfrak{g}_{l}^{\prime}\) modules. Shifting to Lie group level, we deduce that \(\overline{f}\) is an isomorphism of \(U\) modules. So if \(H_{0}=\rho_{l}^{-1}(U)\) where \(\overline{f}\) is an isomorphism of \(U\) modules, it satisfies the required conditions of (2). We now prove \((3)\implies(1)\). We start by assuming that there exists a finite order character \(\phi\) for which \(\rho_{\sigma}\cong\rho_{\tau}\otimes\phi\). Hence \(det(\rho_{\sigma})=det(\rho_{\tau}\otimes\phi)\). Since \(\phi\) is only a finite order character, \(det(\rho_{\sigma}\rho_{\tau}^{-1})=\phi^{2g}\subsetneq\overline{\mathbf{Q}_{l }}^{\times}\). Hence \(\mathfrak{g}_{\sigma}\mathfrak{g}_{\tau}^{-1}\subsetneq\mathfrak{a}_{\sigma }\mathfrak{a}_{\tau}\), which gives us \(\mathfrak{g}_{l}\subsetneq\mathfrak{a}_{l}\). To prove \((3)\implies(2)\) is easy. 
We start with the assumption that there is a finite order character such that \(\rho_{\tau}\cong\rho_{\sigma}\otimes\phi\). Hence the kernel of \(\phi\) will be a finite index subgroup of \(G_{\mathbf{Q}}\) which will be our \(H_{0}\). So, as \(\phi\equiv 1\) on \(H_{0}\), \(\rho_{\tau}|_{H_{0}}\cong\rho_{\sigma}|_{H_{0}}\) and \((2)\) holds true. We now prove \((2)\implies(3)\). For any finite index subgroup \(H_{0}\) of \(G_{\mathbf{Q}}\), the image \(\rho_{\sigma}(H_{0})\) has commutant consisting of scalar matrices in \(\mathrm{GSp}_{2g}(\overline{\mathbf{Q}_{l}})\) by Schur's lemma. The same is also true for the representation \(\rho_{\tau}\). Hence there exists \(\phi:G_{\mathbf{Q}}\to\overline{\mathbf{Q}_{l}}^{\times}\) of finite order as \(H_{0}\) being open is a finite index subgroup of \(G_{\mathbf{Q}}\) such that \(\rho_{\sigma}\cong\rho_{\tau}\otimes\phi\). ## 5. Examples of Siegel modular forms with extra twists ### Yoshida lifts of classical modular Forms with characters We briefly recall the theory of Yoshida lifts [17]. This is basically coming from the embedding \(\mathrm{GL}_{2}\times\mathrm{GL}_{2}\hookrightarrow\mathrm{GSp}_{4}\). Given two classical elliptic, weight \(2\) modular forms \(f\) and \(g\) of level \(N_{1},N_{2}\) respectively (two automorphic representations for \(\mathrm{GL}_{2}\)) associated to the same primitive character \(\chi\), the Yoshida lift \(F\) is the automorphic representation for \(\mathrm{GSp}_{4}\). Recall that for the pair \((f,g)\), automorphic representation \(F:=Y(f\otimes g)\) is the Yoshida lift if the following conditions are satisfied: 1. The adelization of \(F\) generates an irreducible cuspidal automorphic represenation \(\pi_{F}\) of \(\mathrm{GSp}_{4}(\mathbb{A})\). 2. The local \(L-\) parameter for \(\pi_{F,v}\) at each place \(v\) is the direct sum of the \(L\)-parameters for \(\pi_{f,v}\) and \(\pi_{g,v}\). The Yoshida lift is a special case of Langlands functoriality coming from the embedding of dual groups \[\{(g_{1},g_{2})\in\mathrm{GL}_{2}(\mathbf{C})\times\mathrm{GL}_{2}(\mathbf{C })|\det(g_{1})=\det(g_{2}))\}\to\mathrm{GSp}_{4}(\mathbf{C})\] given by \[(\begin{bmatrix}a&b\\ c&d\end{bmatrix},\begin{bmatrix}a^{\prime}&b^{\prime}\\ c^{\prime}&d^{\prime}\end{bmatrix})\to\begin{bmatrix}a&0&b&0\\ 0&a^{\prime}&0&b^{\prime}\\ c&0&d&0\\ 0&c^{\prime}&0&d^{\prime}\end{bmatrix}.\] Note that the determinant condition of the embedding is satisfied since in both cases the determinant is equal to \(\chi(p)p^{k-1}\) as the elliptic modular forms are associated to the same character \(\chi\). We now define Yoshida lifts of two given classical modular forms associated to the same character. Following Roberts [16], we list below the necessary conditions for the existence of the Yoshida lifts. Given two classical newforms \(f\) and \(g\), we say the pair satisfies the conditions of a Yoshida lift if 1. The modular form \(f\) is not a scalar multiple of \(g\) 2. The characters of \(f\) and g arise from the same primitive Dirichlet character 3. One of the weights is \(2\) and the other weight has to be an even integer greater than \(2\). 4. There exists a finite prime \(p\) at which \(\pi_{f,p}\) and \(\pi_{g,p}\) are both discrete series. 
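The displayed embedding can also be checked symbolically. The following sketch (our own illustration in Python/sympy, not code accompanying the paper) confirms that a pair of \(2\times 2\) matrices with equal determinant is sent to a symplectic similitude for \(J_{2}\), with similitude factor \(\mu=\det(g_{1})=\det(g_{2})\).

```python
# Our own illustration: the embedding (g1, g2) -> GSp_4 displayed above.
import sympy as sp

J4 = sp.BlockMatrix([[sp.zeros(2), sp.eye(2)],
                     [-sp.eye(2), sp.zeros(2)]]).as_explicit()

def embed(g1, g2):
    a, b, c, d = g1          # entries of g1 in row-major order
    A, B, C, D = g2
    return sp.Matrix([[a, 0, b, 0],
                      [0, A, 0, B],
                      [c, 0, d, 0],
                      [0, C, 0, D]])

g1 = sp.Matrix([[2, 1], [3, 2]])       # det = 1
g2 = sp.Matrix([[1, 1], [0, 1]])       # det = 1
M = embed(g1, g2)
print((M.T * J4 * M - g1.det() * J4).is_zero_matrix)   # True: M^t J M = det(g1) * J

h1 = sp.Matrix([[2, 0], [0, 1]])       # det = 2
h2 = sp.Matrix([[1, 1], [1, 3]])       # det = 2
M2 = embed(h1, h2)
print((M2.T * J4 * M2 - 2 * J4).is_zero_matrix)        # True: similitude factor mu = 2
```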
**Definition 21**.: Suppose these conditions are satisfied and \(f,g\in S^{1}_{2}(N,\chi)\), there is a unique representation \(\Pi_{F,p}\) of \(\operatorname{Gsp}_{4}(\mathbf{Q}_{p})\) satisfying \[L(\Pi_{F,p})=L(\pi_{f,p})\oplus L(\pi_{g,p}).\] In this case \(F\) is said to be a Yoshida lift of \(f\) and \(g\). If \(f,g\) are two classical modular forms with coefficient fields \(K_{1},K_{2}\) and \(\Gamma_{1}\) and \(\Gamma_{2}\) are the respective group of extra twists, then we denote their Yoshida lift by \(Y(f\otimes g)\). We also denote the group of extra twists for the Yoshida lift as \(\Gamma_{Y}\) **Lemma 22**.: _If two classical modular forms \(f\) and \(g\) satisfy the conditions listed in 5.1, then \(\Gamma_{Y}\supset\Gamma_{1}\cap\Gamma_{2}\)._ Proof.: Suppose \(\gamma\in\Gamma_{1}\cap\Gamma_{2}\). Now by definition this means \(K_{1}\) and \(K_{2}\) have to be equal for a non zero \(\gamma\) to exist in the intersection. Hence their compositum is \(K=K_{1}=K_{2}\). This implies that \(\gamma(\rho_{f})=\rho_{f}\otimes\chi_{\gamma}\) and \(\gamma(\rho_{g})=\rho_{g}\otimes\chi_{\gamma}\). Because of the equation 21, we also directly have that, for the Yoshida lift \(F=Y(f\otimes g)\), we have \(\gamma(\rho_{F})=\rho_{F}\otimes\chi_{\gamma}\). In our subsequent example of Siegel modular form with extra twist, we show that it is not necessarily true that we have \(\Gamma=\Gamma_{1}\cap\Gamma_{2}\). In fact, all the examples we have \(\Gamma_{Y}\supsetneq\Gamma_{1}\cap\Gamma_{2}\). In this section we give an explicit examples of Siegel modular forms with extra twists. We find examples of Siegel modular forms with extra twists by taking the Yoshida lifts of two classical modular forms with appropriate, weight, character and having extra twists themselves. ### Proof of Theorem 2 In this subsection, we prove that the group of extra twists for Siegel modular forms can be very large. Proof.: Let \(f,g\in S_{k}(N,\chi)\) be two elliptic modular forms with extra twists different from complex conjugation. Let \(K=K_{f}\cdot K_{g}\) be the compositum of the Hecke fields \(K_{f},K_{g}\) of \(f\) and \(g\). There exists at least one \(\gamma\in Aut(K)\) such that \[\gamma(\rho_{f})=\rho_{f}\otimes\chi_{\gamma},\gamma(\rho_{g})=\rho_{g} \otimes\chi_{\gamma}. \tag{3}\] Recall that we assume the necessary condition for the existence of Yoshida lift of \((f,g)\) is satisfied and let \(F=Y(f\otimes g)\) be the Yoshida lift for the pair \((f,g)\) as defined in SS 5.1. We claim that \(F\) is a Siegel modular form with extra twist by same \(\gamma\). In other words, we need to show that there exists a character \(\chi_{\gamma}\) such that \[\gamma(\rho_{F})=\rho_{F}\otimes\chi_{\gamma}. \tag{4}\] This is an equality of two \(4\) dimensional Galois representation. By Brauer-Nesbitt theorem, this is only true if all the coefficients in the characteristic polynomial are equal. The characteristic polynomial of the left hand side in equation 4 is \((x^{2}-\gamma(a_{p})x+p^{k-1})(x^{2}-\gamma(a_{p}^{\prime})+p^{k-1})\) and that of right hand side is \((x^{2}-a_{p}\chi_{\gamma}(p)x+p^{k-1})(x^{2}-a_{p}^{\prime}\chi_{\gamma}(p)x+p ^{k-1})\). The above equality follows from equation 3. We expect that the endomorphism algebra of the conjectural motive associated to \(Y(f\otimes g)\) is a direct sum of the endomorphism algebras for \(f\) and \(g\). ### Examples of the phenomenon described above In this section, we find explicit examples of Siegel modular forms with extra twists different complex conjugations. 
All these examples show that the Hecke field of the Yoshida lift is smaller than the compositum of those of the individual elliptic modular forms.

1. Let \(f,g\in S_{2}^{1}(30,\chi)\) be two newforms, where \(\chi:(\mathbf{Z}/30\mathbf{Z})^{\times}\to\mathbf{C}^{\times}\) is a Dirichlet character determined on its generators by \(\chi(7)=-\zeta_{8}^{2}\) and \(\chi(11)=-1\). Note we can do so because this is a four dimensional space. Since the conductor of the character is \(15\), condition \((2)\) of Yoshida lifting is satisfied and we can consider the Yoshida lift \(Y(f\otimes g)\), and this is an example of an explicit Siegel modular form with an extra twist. In this example the classical modular forms could be chosen with the expansions \(f=q+\zeta_{8}q^{2}+(\zeta_{8}^{3}-\zeta_{8}^{2}-1)q^{3}+\zeta_{8}^{2}q^{4}+...\) and \(g=q+\zeta_{8}^{3}q^{2}+(\zeta_{8}^{9}-\zeta_{8}^{6}-1)q^{3}+\zeta_{8}^{6}q^{4}+...\). For a detailed study of this space, let \(K_{f,g}\) be the compositum of the coefficient fields of \(f\) and \(g\). It must be said that no matter what \(f\) and \(g\) we choose, in any case \(K_{Y(f\otimes g)}\subsetneq K_{f,g}\). Let \(\Gamma_{F}\) be the group of extra twists of \(F\). Then it is readily evident that \(\Gamma_{F}\supseteq\Gamma_{1}\cap\Gamma_{2}\). For the \(f\) and \(g\) we have chosen, \(\Gamma_{1}=\Gamma_{2}\). Note also that \(K_{f,g}=\mathbf{Q}(\zeta_{8})\). We see that in this expansion the coefficient of \(q^{2}\) in the sum \(f+g\) is \(\zeta_{8}+\zeta_{8}^{3}\). We claim that all of the coefficients of the sum \(f+g\) belong to the field \(\mathbf{Q}(\zeta_{8}+\zeta_{8}^{3})\). It is easy to check manually that for any power \(1\leq i\leq 8\), \(\zeta_{8}^{i}+\zeta_{8}^{3i}\) can be written in terms of \(\zeta_{8}+\zeta_{8}^{3}\) (a short symbolic verification is sketched after these examples). Hence all the sums of Hecke eigenvalues of \(f\) and \(g\) lie in \(\mathbf{Q}(\zeta_{8}+\zeta_{8}^{3})\). From the definition of the Yoshida lift (Definition 21), we get that the Hecke eigenvalues of \(Y(f\otimes g)\) lie in \(\mathbf{Q}(\zeta_{8}+\zeta_{8}^{3})\). Hence this is an explicit example of when the field of Hecke eigenvalues of the Yoshida lift is a proper subset of the compositum of the fields of the Hecke eigenvalues of the concerned classical modular forms.

2. Consider the \(8\) dimensional complex vector space \(S^{1}_{2}(100,\chi)\) where the character \(\chi\) is defined by \(\chi(51)=-1\) and \(\chi(77)=\frac{\mu^{6}-3\mu^{2}}{4}\); here \(\mu\) is a root of the polynomial \(p(x):=x^{8}-7x^{4}+16=0\). Such a character \(\chi\) has conductor \(20\) and the order of the group of inner twists is \(8\). Let \(\mu_{1}:=\frac{7+\sqrt{15}i}{2},\mu_{2}:=-(\frac{7+\sqrt{15}i}{2})\) be roots of \(p(x)\). For \(i=1,2\), we have the Fourier expansions \[f_{i}=q+\mu_{i}q^{2}+\frac{3\mu_{i}^{7}-13\mu_{i}^{3}}{8}q^{3}+....\] The Hecke eigenvalue ring \(K_{f_{1}f_{2}}\) is of dimension \(8\) over \(\mathbf{Q}\). The Hecke eigenvalue ring is determined by the coefficients of \(1,q\) and \(q^{2}\) itself. Hence by Definition 21, the coefficient ring \(K_{Y(f_{1}\otimes f_{2})}\) is determined by the sums of Hecke eigenvalues of \(q\) and \(q^{2}\) under the two roots. By a small calculation, we see that this field happens to be \(\mathbf{Q}(\sqrt{15}i)\), which is of extension degree \(2\) over \(\mathbf{Q}\). Hence \(K_{f_{1}f_{2}}\) is of extension degree \(4\) over \(K_{Y(f_{1}\otimes f_{2})}\). In this case also \(K_{Y(f_{1}\otimes f_{2})}\subsetneq K_{f_{1},f_{2}}\).
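The manual check referred to in example (1) can be carried out symbolically. The following sketch (our own illustration in Python/sympy, not code from the paper) prints \(\zeta_{8}^{i}+\zeta_{8}^{3i}\) for \(1\leq i\leq 8\) together with the minimal polynomial of \(\zeta_{8}+\zeta_{8}^{3}\); every value is a rational number or a rational multiple of \(\zeta_{8}+\zeta_{8}^{3}=\sqrt{2}\,i\), so all of them lie in \(\mathbf{Q}(\zeta_{8}+\zeta_{8}^{3})\).

```python
# Our own verification of the claim in example (1): each zeta_8^i + zeta_8^{3i}
# lies in Q(zeta_8 + zeta_8^3) = Q(sqrt(-2)).
import sympy as sp

zeta8 = sp.exp(2 * sp.pi * sp.I / 8)
alpha = sp.simplify(sp.expand_complex(zeta8 + zeta8**3))
print("zeta_8 + zeta_8^3 =", alpha)                                         # sqrt(2)*I
print("minimal polynomial:", sp.minimal_polynomial(alpha, sp.Symbol('x')))  # x**2 + 2

for i in range(1, 9):
    value = sp.simplify(sp.expand_complex(zeta8**i + zeta8**(3 * i)))
    print(i, value)   # 0, +/-2 or +/-sqrt(2)*I, i.e. an element of Q(sqrt(-2))
```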
We give only two examples, but from the LMFDB it is evident that the group of extra twists for Yoshida lifts can be very large, although it can be controlled (as expected) from the individual classical elliptic modular forms.
2301.03610
Updating the $^{56}$Ni Problem in Core-collapse Supernova Explosion
Details of the core-collapse supernova (CCSN) explosion mechanism still need to be fully understood. There is an increasing number of successful examples of reproducing explosions in multidimensional hydrodynamic simulations, but subsequent studies pointed out that the growth rates of the explosion energy $\dot{E}_\mathrm{expl}$ of these simulations are insufficient to produce enough $^{56}$Ni to match observations. This issue is known as the `$^{56}$Ni problem' in CCSNe. Recently, however, some studies have suggested that this $^{56}$Ni problem is derived from the simplicity of the explosion model. In response, we investigate the effect of the explosion energy growth rate $\dot{E}_\mathrm{expl}$ on the behavior of nucleosynthesis in CCSNe in a more realistic model. We employ the 1D Lagrangian hydrodynamic code, in which we take neutrino heating and cooling terms into account with the light-bulb approximation. We reiterate that, consistent with previous rebuttal studies, there is the $^{56}$Ni problem: Although $^{56}$Ni is synthesized to almost the same mass coordinate independent of $\dot{E}_\mathrm{expl}$, some of the innermost material in the low-$\dot{E}_\mathrm{expl}$ model failed to escape, leading to a shift in the innermost mass coordinate of the ejecta to the outer positions. Comparing our results with observations, we find that while modern slow explosions can, in principle, reproduce observations of standard Type II SNe, this is not possible with stripped-envelope SNe. Our finding places a strong constraint on the explosion mechanism. There are significant differences in the progenitor structures and the explosion mechanism between Type II and stripped-envelope SNe.
Ryo Sawada, Yudai Suwa
2023-01-09T19:00:01Z
http://arxiv.org/abs/2301.03610v1
# Updating the \({}^{56}\)Ni Problem in Core-collapse Supernova Explosion ###### Abstract Details of the core-collapse supernova (CCSN) explosion mechanism still need to be fully understood. There is an increasing number of successful examples of reproducing explosions in multidimensional hydrodynamic simulations, but subsequent studies pointed out that the growth rates of the explosion energy \(\dot{E}_{\rm expl}\) of these simulations are insufficient to produce enough \({}^{56}\)Ni to match observations. This issue is known as the \({}^{56}\)Ni problem' in CCSNe. Recently, however, some studies have suggested that this \({}^{56}\)Ni problem is derived from the simplicity of the explosion model. In response, we investigate the effect of the explosion energy growth rate \(\dot{E}_{\rm expl}\) on the behavior of nucleosynthesis in CCSNe in a more realistic model. We employ the 1D Lagrangian hydrodynamic code, in which we take neutrino heating and cooling terms into account with the light-bulb approximation. We reiterate that, consistent with previous rebuttal studies, there is the \({}^{56}\)Ni problem: Although \({}^{56}\)Ni is synthesized to almost the same mass coordinate independent of \(\dot{E}_{\rm expl}\), some of the innermost material in the low-\(\dot{E}_{\rm expl}\) model failed to escape, leading to a shift in the innermost mass coordinate of the ejecta to the outer positions. Comparing our results with observations, we find that while modern slow explosions can, in principle, reproduce observations of standard Type II SNe, this is not possible with stripped-envelope SNe. Our finding places a strong constraint on the explosion mechanism. There are significant differences in the progenitor structures and the explosion mechanism between Type II and stripped-envelope SNe. (stars:) supernovae: general--hydrodynamics + Footnote †: journal: ApJ 0000-0002-8807-6883]Ryo Sawada 0000-0002-4880-7883]Yudai Suwa 0000-0002-4880-7883][email protected] ## 1 Introduction Radioisotope \({}^{56}\)Ni is an important product in supernova nucleosynthesis, which drives supernova (SN) brightness. \({}^{56}\)Ni decays into \({}^{56}\)Co, and then into \({}^{56}\)Fe. This nuclear decay chain powers the light curve of SNe, and thus, \({}^{56}\)Ni masses of SNe have been estimated with reasonable accuracy from the light curve (see, e.g., Arnett, 1982; Hamuy, 2003).1 On the other hand, the amount of synthesized \({}^{56}\)Ni is sensitive to the temperature \(T\), the density \(\rho\), and the number of electrons per nucleon (electron fraction) \(Y_{e}\), i.e., explosion property and pre-SNe core structure (e.g., Woosley & Weaver, 1995; Thielemann et al., 1996; Woosley et al., 2002). These two factors, that is, the amount of \({}^{56}\)Ni synthesis can be accurately estimated from observations and strongly reflect the explosion's innermost nature, suggest the following. \({}^{56}\)Ni is the best probe to constrain an aspect of the SN explosion mechanism accurately (e.g., Maeda & Tominaga, 2009; Suwa & Tominaga, 2015). Details of the explosion mechanism of core-collapse supernovae (CCSNe) are not yet fully understood. The most promising scenario is the delayed neutrino-driven explosion (Bethe & Wilson, 1985). While this scenario had once not been reproduced by numerical simulations, the situation has brought substantial progress over a few decades. 
Now, there is an increasing number of successful examples of reproducing explosions in multidimensional hydrodynamic simulations, with a detailed neutrino transport (see, e.g., Lentz et al., 2015; Takiwaki et al., 2016; Muller et al., 2017; O'Connor & Couch, 2018; Glas et al., 2019; Bollig et al., 2021; Burrows & Vartanyan, 2021; Bruen et al., 2022, and references therein). Although the details now depend on the numerical methods and physical approximations employed in each simulation, there seems to be a general understanding that the explosion succeeds by the growth of the hydrodynamic instability over a sufficient time. Indeed, most, if not all, of those state-of-the-art simulations, have shown a slow increase of explosion energy, and the growing rate of the explosion energy is typically \(\dot{E}_{\rm expl}=\mathcal{O}(0.1)\) Bethe s\({}^{-1}\) (1 Bethe\(\equiv 1\times 10^{51}\) erg), especially for 3D simulations. However, recent several studies have shown that to reproduce the typical observed mass of \({}^{56}\)Ni by the explosive nucleosynthesis in the ejecta, the growth rate of the explosion energy of \(\dot{E}_{\rm expl}=\mathcal{O}(1)\) Bethe s\({}^{-1}\) is required in several methods (Sawada & Maeda, 2019; Suwa et al., 2019; Saito et al., 2022). Sawada & Maeda (2019) found the inverse-correlation between \({}^{56}\)Ni yield and explosion energy growth rate \(\dot{E}_{\rm expl}\) by 1D simulations with the simple thermal-bomb modeling and post-processing detailed-nucleosynthesis, and Suwa et al. (2019) also came to the same conclusion by conducting hydrodynamic simulations with an approximate neutrino heating model that self-consistently follows core-collapse and shock-revival. Saito et al. (2022) also confirmed this trend, using the same method as Sawada & Maeda (2019), but modeled for individual objects to reduce observational uncertainties. If these results are correct, the current multi-D simulations, which give explosion energy growth rates of \(\dot{E}_{\rm expl}=\mathcal{O}(0.1)\) Bethe s\({}^{-1}\), would be observationally unfavorable. We refer to this issue as the nickel mass problem (\({}^{56}\)Ni problem,' hereafter) in this paper. However, this \({}^{56}\)Ni problem is still under some debate. In particular, Imasheva et al. (2023) just recently pointed out the most obvious question to the \({}^{56}\)Ni problem. Imasheva et al. (2023) used the same method as Sawada & Maeda (2019), with simple thermal injection modeling and post-processing detailed nucleosynthesis, but scrutinized the treatment of initial conditions. In the recent slow explosion scenario, the pre-SN star experiences sufficient gravitational contraction just before the successful explosion. They found that the correlation between \({}^{56}\)Ni yield and explosion energy growth rate \(\dot{E}_{\rm expl}\) is the result of ignoring this initial collapse. They argued that this correlation disappears when the initial collapse is included and also that further initial collapse inversely results in more \({}^{56}\)Ni being synthesized in slower explosions. Their arguments also apply to Sawada & Maeda (2019) and Saito et al. (2022), but not to Suwa et al. (2019). Suwa et al. (2019) solved self-consistently the core collapse and shock revival with the light-bulb scheme and found this correlation even though they took into account the initial collapse phase. This result is inconsistent with the conclusion of Imasheva et al. (2023). Note that Suwa et al. 
(2019) performed no detailed nucleosynthesis calculations. Instead, they estimated the \({}^{56}\)Ni amount simply by the temperature of hydrodynamic simulations. Therefore, we perform hydrodynamic and detailed nucleosynthesis calculations in this study. This study aims to clarify the detailed picture of how \({}^{56}\)Ni synthesis occurs in the current CCSN explosion scenario. By clarifying this picture, we also expect to explain the origin of the differences between the two studies and, by extension, the cause of the \({}^{56}\)Ni problem itself. In this paper, therefore, we simulate one-dimensional hydrodynamics in the light-bulb scheme as in Suwa et al. (2019), then perform detailed nucleosynthesis in a post-process manner. The goal of this study is to present a detailed picture of \({}^{56}\)Ni nucleosynthesis in CCSNe with self-consistent explosion modeling. Furthermore, we aim to sort out the controversial \({}^{56}\)Ni problem. In Section 2, we describe our simulation methods, the progenitor models, and post-processing analysis. Our results are summarized in Section 3. In Section 4, we revisit the \({}^{56}\)Ni problem through a detailed comparison of our results and observations, and discuss the uncertainties involved. We conclude in Section 5. ## 2 Simulation Methods Following the computational setup performed in Suwa et al. (2019), we employ a 1D Lagrangian Newtonian hydrodynamic code based on blcode.2 Basic equations under a spherically symmetric configuration, as we per form in this paper, are given as follows: \[\frac{\partial r}{\partial M_{r}} =\frac{1}{4\pi r^{2}\rho}\;, \tag{1}\] \[\frac{Dv}{Dt} =-\frac{GM_{r}}{r^{2}}-4\pi r^{2}\frac{\partial P}{\partial M_{r}}\;,\] (2) \[\frac{D\epsilon}{Dt} =-P\frac{D}{Dt}\left(\frac{1}{\rho}\right)+\mathcal{H}-\mathcal{C }\, \tag{3}\] where \(r\) is the radius, \(M_{r}\) is the mass coordinate, \(t\) is time, \(\rho\) is the density, \(v\) is the radial velocity, \(P\) is pressure, \(\epsilon\) is the specific internal energy, and \(D/Dt\equiv\partial/\partial t+v_{r}\partial/\partial r\) is the Lagrangian time derivative. The artificial viscosity of Von Neumann & Richtmyer (1950) is employed to capture a shock. The system of equations (1)-(3) is closed with the Helmholtz equation of state (Timmes & Swesty, 2000), which describes the stellar plasma as a mixture of arbitrarily degenerate and relativistic electrons and positrons, black-body radiation, and ideal Boltzmann gases of a defined set of fully ionized nuclei, taking into account corrections for the Coulomb effects. In this work, neutrino heating and cooling are added by a light-bulb scheme. In the light-bulb scheme, neutrino cooling is given as a function of temperature, and neutrino heating is a function of the radius with parameterized neutrino luminosity. The heating term \(\mathcal{H}\) and the cooling term \(\mathcal{C}\), terms in Equation (3) are assumed to be \[\mathcal{H} =1.544\times 10^{20}\ \mathrm{erg\ g^{-1}\ s^{-1}}\] \[\times\left(\frac{L_{\nu_{e}}}{10^{52}\mathrm{MeV}}\right)\left( \frac{r_{\nu_{e}}}{100\mathrm{km}}\right)^{-2}\left(\frac{T_{\nu_{e}}}{4.0 \mathrm{MeV}}\right)^{2}\, \tag{4}\] \[\mathcal{C} =1.399\times 10^{20}\ \mathrm{erg\ g^{-1}\ s^{-1}\times\left( \frac{T}{2.0\mathrm{MeV}}\right)^{6}. \tag{5}\] Here, we fix the neutrino temperature as \(T_{\nu_{e}}=4\) MeV. We take into account these terms only in the post-shock regime. 
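The light-bulb source terms of Eqs. (4)-(5) can be written as two small functions. A sketch follows; the luminosity normalization is taken to be \(10^{52}\) erg s\({}^{-1}\), which is an assumption on our part (the units in the heating term above are garbled) but matches the \(L_{\nu}\) values used later in the paper.

```python
import numpy as np

# Light-bulb neutrino heating and cooling rates per unit mass (Eqs. 4-5).
# These terms are applied only in the post-shock region in the simulations.
T_NU_MEV = 4.0  # fixed electron-neutrino temperature [MeV]

def heating_rate(l_nu_erg_s, r_cm):
    """Neutrino heating rate H [erg/g/s] at radius r for luminosity L_nu.
    Normalization of L_nu assumed to be 1e52 erg/s."""
    return (1.544e20
            * (l_nu_erg_s / 1e52)
            * (r_cm / 1e7) ** (-2)          # 100 km = 1e7 cm
            * (T_NU_MEV / 4.0) ** 2)

def cooling_rate(kT_matter_mev):
    """Neutrino cooling rate C [erg/g/s] for matter temperature kT in MeV."""
    return 1.399e20 * (kT_matter_mev / 2.0) ** 6

# Net specific heating at, e.g., 150 km and kT = 1.5 MeV behind the shock:
r = 1.5e7
for l_nu in (3e52, 5e52, 7e52):
    net = heating_rate(l_nu, r) - cooling_rate(1.5)
    print(f"L_nu = {l_nu:.0e} erg/s: net rate = {net:.2e} erg/g/s")
```

The strong temperature dependence of the cooling term is what confines net heating to the gain region behind the shock, while the heating scales directly with the parameterized neutrino luminosity.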
We modified the inner boundary conditions so that the innermost mass shell does not shrink within 50 km from the center to mimic the existence of a proto-neutron star (PNS). Also, the light-bulb scheme in this study tends to overestimate the neutrino-driven wind from the PNS surface at the post-explosion phase because it keeps giving a constant neutrino luminosity. Therefore, in this study, we consider the mass coordinate that experienced \(r<200\) km as the neutrino-driven wind and separate it from the ejecta. The numerical computational domain contains \(1.5M_{\odot}\) and uses a 1500 grid with a mass resolution of \(10^{-3}M_{\odot}\). We set the inter boundary at \(M_{s/k_{b}=4}-0.5M_{\odot}\) for each pre-explosion star. Each mass coordinate captures the time evolution of the hydrodynamic quantities, so that nucleosynthesis calculations are performed as a post-processing analysis with this trajectory. We calculate a reaction network of 640 nuclear species with the torch code (Timmes, 1999). The initial conditions adopted in this study are a subset of non-rotating stars with solar metallicity, which evolved from the main sequence to the onset of iron-core collapse, as published by Sukhbold et al. (2018). The physics of this set of progenitors was discussed in detail in this literature. Figure 1 shows the density structures Figure 1: Density structure as a function of the enclosed mass for the considered progenitors with \(M_{\mathrm{ZAMS}}=12.3M_{\odot}\) (cyan line), \(16.0M_{\odot}\) (blue line), \(19.7M_{\odot}\) (red line), and \(21.0M_{\odot}\) (magenta line), and its details with the entropy per nucleon. of the progenitor as a function of the enclosed mass, and its details with the entropy per nucleon. ## 3 Result ### Overview of the explosion dynamics Figure 2 shows the time evolution of radial velocity and temperature as a function of the mass coordinate for the model \(16.0M_{\odot}\). We first use an example of a model with \(M_{\rm ZAMS}=16.0M_{\odot}\) throughout this section. From the velocity figure, it can be seen that the shock begins to propagate outward from the point where the silicon/oxygen (Si-O) layer (\(\approx 1.62M_{\odot}\)) accretes onto the shock wave, due to the rapid decrease in ram pressure (Marek & Janka, 2009; Suwa et al., 2016). From the temperature figure, we can confirm that the post-shock temperature of the ejecta is spatially almost constant so we define the shock temperature as the temperature of the material just behind the shock wave. Figure 3 shows that the mass shell until the arrival of the shock in the explosion model is consistent with its behavior in the non-exploding model. In other words, we can confirm that the behavior of the mass shell up to the arrival of the shock is independent of the explosion detail. Thus, by overlaying the shock evolution on the trajectory of the mass shell in the non-exploding model, we can compare several models at once to see where the shock impacts each of the mass shells. In the following subsections, we present the results focusing on the effect of the explosion energy growth rate \(\dot{E}_{\rm expl}\) on \({}^{56}\)Ni nucleosynthesis. The results are summarized in Table 1. These yields consist only of unbound \({}^{56}\)Ni by gravity as determined by a 10-second simulation. We first use an example of a model with \(M_{\rm ZAMS}=16.0M_{\odot}\) throughout Sections 3.2 and 3.3. 
### Hydrodynamics and \({}^{56}\)Ni Synthesis Region Figure 4 shows the time evolution of the shock for the models \(L_{\nu}=3,~{}5,~{}{\rm and}~{}7\times 10^{52}~{}{\rm erg~{}s^{-1}}\) and the trajectory of the mass shell in the unexploded model. In each model, the time evolutions of the shock are shown by colored lines for the range where the shock satisfies \(T_{9}>5\) (\(T_{9}\equiv T/10^{9}\) K), and by black dashed lines for the range where \(T_{9}<5\). We refer to the mass coordinate that can Figure 3: Radius evolution of Lagrangian mass shells with time for the explosion (\(L_{\nu}=3\times 10^{52}~{}{\rm erg~{}s^{-1}}\)) and non-exploding model (\(L_{\nu}=0~{}{\rm erg~{}s^{-1}}\)) of \(16.0M_{\odot}\). The thick black solid lines are the mass shells, spaced in steps of 0.1 \(M_{\odot}\), and the thin gray solid/dashed lines are spaced in steps of 0.02 \(M_{\odot}\). The difference between the dotted and solid lines corresponds to the explosion and non-exploding models, respectively. The blue line marks the shock radius of the explosion model. Figure 2: Time evolution of the velocity (top) and the temperature (bottom) as a function of the mass coordinate for model \(16.0M_{\odot}\). In both panels, each snapshot time corresponds to approximately every 0.1 seconds from 0.5 seconds to 1.5 seconds from the start of the simulation. The gray line corresponds to \(T=5\times 10^{9}\) K. spread at the shock temperature of \(T_{9}\approx 5\) as \(M_{T_{9}=5}\). We find that in all models \(M_{T_{9}=5}\) is near the mass coordinate with the enclosed mass \(\approx 1.65M_{\odot}\), which is indicated by the dotted line in Figure 4. More detailed values are given in Table 1, and this trend is almost universal, independent of the progenitor models. In all models, we find that \(M_{T_{9}=5}\) is near the mass coordinate with an enclosed mass of approximately \(1.65M_{\odot}\), as indicated by the dotted line in Figure 4. More detailed values are given in Table 1, and this trend is nearly universal and independent of the progenitor models. Qualitatively, this can be understood by using a zero-order approximation to estimate the shock radius at which the shock temperature is \(T_{9}=5\). When applying a simple fireball model in which the region behind the shock wave is uniform and dominated by radiation pressure (e.g., Woosley et al., 2002), we can estimate the following relation between the temperature \(T\), the shock radius \(r_{\rm sh}\) and the explosion energy \(E_{\rm expl}\) as follows: \[E_{\rm expl}=\frac{4\pi}{3}r_{\rm sh}^{3}(t)\ aT^{4}\, \tag{6}\] where \(a\) is the radiation constant. Then with \(E_{\rm expl}(t)\equiv 10^{51}\) ergs, the radius with \(T_{9}=5\) (\(r_{{}_{T_{9}=5}}\)) can be estimated as follows: \[r_{{}_{\rm T_{9}=5}}\approx 3.6\times 10^{8}(E_{\rm expl}/10^{51})^{1/3}\ {\rm cm}. \tag{7}\] This estimated radius is classically well-known (e.g., Woosley et al., 2002; Nomoto et al., 2013). If we consider the time evolution of \(E_{\rm expl}(t)=\dot{E}_{\rm expl}\cdot t\), this classical radius is satisfied with an adequate large \(\dot{E}_{\rm expl}\). However, at the shock velocity \(V_{\rm sh}=10^{9}\ {\rm cm\ s^{-1}}\), it takes less than 1 second to reach this radius. In other words, if the case of \(\dot{E}_{\rm expl}\lesssim 1\) Bethe s\({}^{-1}\), it takes a few seconds to reach 1 Bethe, and obviously, the radius of \(T_{9}=5\) will be small. 
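A quick numerical check of Eqs. (6)-(7): solving the radiation-dominated fireball relation for the radius at which the post-shock temperature is \(T_{9}=5\) reproduces the classical \(3.6\times10^{8}\) cm estimate to within rounding, and the same few lines make the timescale argument in the preceding sentence explicit.

```python
import numpy as np

A_RAD = 7.5657e-15          # radiation constant [erg cm^-3 K^-4]

def r_shock_at_T(E_expl_erg, T_kelvin):
    """Shock radius from the fireball relation E = (4 pi/3) r^3 a T^4 (Eq. 6)."""
    return (3.0 * E_expl_erg / (4.0 * np.pi * A_RAD * T_kelvin**4)) ** (1.0 / 3.0)

# Classical estimate of Eq. (7): radius of T9 = 5 for E_expl = 1 Bethe.
print(f"r(T9=5, 1 Bethe) = {r_shock_at_T(1e51, 5e9):.2e} cm")   # ~3.7e8 cm

# With a growing energy E(t) = Edot * t, a slow explosion reaches 1 Bethe
# only after t = 1 Bethe / Edot, i.e. a few seconds for sub-Bethe/s rates.
for edot in (0.4e51, 1e51, 2e51):               # erg/s
    print(f"Edot = {edot/1e51:.1f} Bethe/s -> t(1 Bethe) = {1e51/edot:.2f} s")
```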
In fact, from Figure 4, we can confirm that even in this simulation, the radius of \(T_{9}=5\) is reduced in the case of low-\(\dot{E}_{\rm expl}\). But at the same time, the time evolution of the shock radius is also slower down for lower-\(\dot{E}_{\rm expl}\) models, and the mass shell falls more inward due to collapse. Eventually, the'mass coordinate' of \(M_{T_{9}=5}\) seems to be approximately the same regardless of the \(\dot{E}_{\rm expl}\). This result suggests a very interesting trend. \({}^{56}\)Ni is synthesized mainly by complete Si burning at \(T\gtrsim 5\times 10^{9}\ {\rm K}\) (see in detail in Appendix A and Woosley et al., 1973). Thus, this hydrodynamical result suggests that the outermost mass coordinates, where \({}^{56}\)Ni is primarily synthesized, are insensitive to the explosion energy growth rate \(\dot{E}_{\rm expl}\). To confirm this trend in more detail, we next discuss the results of nucleosynthesis calculations. Although this is not relevant to the main focus of this paper, we show in Figure 5 for reference the comparison of the explosion energy in the simulation \(E_{\rm sim}\) with the estimated energy in the fireball approximation \(E_{\rm app}\). The explosion energy \(E_{\rm sim}\) in the hydrodynamical simulation is defined as the integral of the sum of specific internal, kinetic, and gravitational energies over all zones, in which it is positive. The estimated energy in the fireball approximation \(E_{\rm app}\) is given by the following equation using only the shock radius \(r_{\rm sh}\) and shock Figure 4: The time evolution of the shock radius in models \(L_{\nu}=3,\ 5,\ {\rm and}\ 7\times 10^{52}\ {\rm erg\ s^{-1}}\) with the mass shell trajectory in the unexploded model, on the time-radius plane. Figure 5: Comparison of the explosion energy in the simulation \(E_{\rm sim}\) (dashed line) with the estimated energy in the fireball approximation \(E_{\rm app}\) (solid line), which comes from Eq (8) in models \(L_{\nu}=2,\ 3,\ {\rm and}\ 5\times 10^{52}\ {\rm erg\ s^{-1}}\). The horizontal axis is the post-bounce time. temperature \(T\): \[E_{\rm app}=\frac{4\pi}{3}r_{\rm sh}^{3}(t)\ aT^{4}f(T_{9})\, \tag{8}\] where \(f(T_{9})=1+(7/4)\cdot T_{9}^{2}/(T_{9}^{2}+5.3)\) is a correction term to account for both radiation pressure and non-degenerate electron-positron pairs (e.g., Freiburghaus et al., 1999). As Figure 5 shows, this simple estimation is able to reproduce the explosion energy of the simulation with good enough accuracy. This supports the validity of the above discussion, and also suggests that thermal energy is dominant in the early phases of the explosion. ### Nucleosynthesis: Distribution of \({}^{56}\)Ni synthesis Figure 6 shows the abundance distribution as a function of the mass coordinate, for \(L_{\nu}=3,\ 5,\ {\rm and}\ 7\times 10^{52}\) erg s\({}^{-1}\) with \(16.0M_{\odot}\). As shown in Figure 6, focusing specifically on \({}^{56}\)Ni, we can confirm that the outermost mass radius, where \({}^{56}\)Ni is primarily synthesized, is in a similar position independent of the explosion energy growth rate \(\dot{E}_{\rm expl}\). In Table 1, we show the outermost mass radius where \({}^{56}\)Ni is largely synthesized (here, we define it as \(X(^{56}{\rm Ni})>0.5\)). However, at the same time, the innermost mass radius, which is gravitationally unbounded, depends strongly on the explosion energy growth rate \(\dot{E}_{\rm expl}\). 
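The two trends just described, a nearly fixed outer edge of the \({}^{56}\)Ni-rich region and an innermost ejected shell that moves outward for low \(\dot{E}_{\rm expl}\), combine into the ejected \({}^{56}\)Ni mass as a simple sum over mass zones. The following toy illustration uses the mass resolution of Section 2 but an invented abundance profile and invented mass cuts; it is not a result of the paper, only a sketch of the bookkeeping.

```python
import numpy as np

# Ejected 56Ni = sum of X(56Ni)*dm over zones outside the innermost ejected
# shell (the "mass cut") and inside the 56Ni-producing region (~ M_{T9=5}).
dm = 1.0e-3                                     # zone mass [Msun], as in Section 2
m_grid = 1.50 + dm * np.arange(300)             # mass coordinates [Msun]
x_ni = np.where(m_grid < 1.65, 0.7, 0.0)        # toy 56Ni-rich profile out to ~1.65 Msun

def ejected_ni(mass_cut):
    """Sum X(56Ni)*dm over zones with M_r >= mass_cut."""
    return np.sum(x_ni[m_grid >= mass_cut]) * dm

for cut in (1.55, 1.58, 1.61):                  # a deeper cut ejects more 56Ni
    print(f"mass cut {cut:.2f} Msun -> M(56Ni) = {ejected_ni(cut):.3f} Msun")
```

Even with a fixed outer edge, shifting the mass cut outward by a few hundredths of a solar mass changes the ejected \({}^{56}\)Ni at the level relevant to the comparison with observations discussed next.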
## 4 Discussion: Update 'Ni problem' Figure 7 shows the synthesized amount of \({}^{56}\)Ni as a function of the explosion energy growth rate \(\dot{E}_{\rm expl}\). It can be clearly seen that there is a decreasing trend of the synthesized amount of \({}^{56}\)Ni toward decreasing \(\dot{E}_{\rm expl}\). The reason for this trend is explained in section 3.3, but this figure tells us that the same trend is generally observed regardless of the mass and structure of the progenitor. For comparison with observations, in Figure 7, we adopted two typical values based on a recent systematic survey for more than 300 events of CCSNe; \(0.07M_{\odot}\)3 as the median estimated from stripe-envelope supernovae (SE-SNe) and \(0.03M_{\odot}\) from Type-II SNe (Rodriguez et al., 2021, 2022). Note that the figure plots the synthesized amount of \({}^{56}\)Ni ; not all \({}^{56}\)Ni can be ejected. In other words, the figure shows the maximum amount of \({}^{56}\)Ni that can be ejected by each CCSN model, and if the calculated mass of \({}^{56}\)Ni is larger than the observed value, then the model can reproduce the observed value. First, compared to the median value of Type II supernovae \(0.03M_{\odot}\), even a modern slow explosion (\(\dot{E}_{\rm expl}\lesssim 1\) Bethe s\({}^{-1}\) ) provides enough amount of \({}^{56}\)Ni to reproduce the observations. On the other hand, compared to the SE-SNe median of \(0.07M_{\odot}\), a very rapid explosion of \(\dot{E}_{\rm expl}\)\(\gtrsim 2\) Bethe s\({}^{-1}\) is required to reproduce this value. This translates to a time scale of \(t\lesssim 0.5\) seconds to the typical explosion energy \(\sim 1.0\) Bethe, and this timescale is very difficult to reproduce with current multi-D self-consistent calculations. Footnote 3: The \(\sim 0.07M_{\odot}\) is often adopted as a typical value obtained for well-studied nearby SNe is on average (e.g., SN 1987A, SN 1994I, SN 2002ap; Arnett et al., 1989; Iwamoto et al., 1994; Mazzali et al., 2002) and we adopt this value in previous studies. However, in this study, we clearly mention here that we do not use \(0.07M_{\odot}\) in the context of the typical for nearby SNe because we consider observational constraints from recently updated large-scale observational data. Here we discuss a few caveats in this problem as follows. 1. **[Observation of Type-II SNe]** The typical \({}^{56}\)Ni mass of canonical-CCSNe has been extensively discussed by large-scale observations in recent years. In particular, Type II SNe, when volume-limited, account for nearly \(\sim 60\%\) of the observed CCSNe (e.g., Li et al., 2011; Jones et al., 2021). Recently, Type II SNe have been found to have lower median nickel masses than SE-SNe (e.g., Anderson, 2019), confirming that this is not due to observational bias (Ouchi et al., 2021). Furthermore, the observed kinetic energy is also found to have a lower median value than the classical typical value (\(\sim 0.6\) Bethe; Martinez et al., 2022). These facts also support the possibility that the'slow' explosion results in the current state-of-the-art simulations are relatively consistent with standard Type II SNe. However, nickel synthesis and explosion energy (\(M_{\rm Ni}\approx 0.03M_{\odot}\) and \(E_{\rm expl}\sim 0.6\) Bethe) still remain important benchmarks for multidimensional self-consistent simulations, and it should be checked whether they are truly achieved. And, another important point is that this is only a statement of the median. According to Ouchi et al. 
(2021), the fitting function for the cumulative histogram for observed \({}^{56}\)Ni masses of each CCSNe is \(f(x)=\tanh{(14.60\times x)}\)4 as a variable of observed \({}^{56}\)Ni masses. This roughly implies that more than 20% of the Type II supernovae synthesize \({}^{56}\)Ni above \(0.075M_{\odot}\). While \(0.03M_{\odot}\) is a somewhat explainable value, this value is challenging to reproduce in multi-D self-consistent simulations. So, we need to explain and reproduce the high \({}^{56}\)Ni objects that will exist to some extent. 2. **[Multidimensional effect]** How the amount of synthesized \({}^{56}\)Ni changes in a multi-D explosion model is one of the issues to be discussed. Since this model is a 1D model and explodes only with thermal energy as shown in Figure 5, the temperature should be higher than the multi-D model, especially considering the geometric structure and the change to kinetic energy in the non-radial direction (Suwa et al., 2019). Therefore, we should note that the same \(\dot{E}_{\rm expl}\) in a multi-D model would have less \({}^{56}\)Ni than in a 1D model. In fact, with the exception of particular model results (Bollig et al., 2021, discussed next), the multi-D self-consistent simulation has even more difficulty with \({}^{56}\)Ni synthesis than the estimate of this study (e.g., Bruen et al., 2022). Therefore, for the same synthesis conditions, the 1D model gives a robust maximum limit on the volume and the amount of \({}^{56}\)Ni synthesis. The additional \({}^{56}\)Ni amount newly occurring due to the multi-D effect will be discussed next. 3. **[Additional mechanism to add \({}^{56}\)Ni ]** Another possibility for an additional \({}^{56}\)Ni, and one of the most often cited candidates for a solution to this problem, is the 'outflow' from the PNS surface for several seconds of the post-explosion phase (e.g., Wongwathanarat et al., 2017; Witt et al., 2021). Recent detailed simulations have predicted proton-rich ejecta in the post-explosion 'outflow' (e.g., Bruenn et al., 2016). In particular, Bollig et al. (2021) have observed the downflow/outflow system that results in a smooth and efficient transition from the incoming flow to the outgoing flow, with the outflow providing \({}^{56}\)Ni to \(\lesssim 0.05M_{\odot}\). However, we already found that the contribution of such replenishment is small for regular CCSNe explosions (Sawada and Suwa, 2021). That is, this outflow system is part of an 'energetic' model of the state-of-the-art simulations that succeeds in producing sufficient amounts of \({}^{56}\)Ni, and it is debatable whether this outflow system contributes to canonical-CCSNe explosions. Figure 6: Abundance distribution as a function of the enclosed mass \(Mr\), for (a) \(L_{\nu}=3\times 10^{52}\) erg s\({}^{-1}\), (b) \(L_{\nu}=5\times 10^{52}\) erg s\({}^{-1}\), and (c) \(L_{\nu}=7\times 10^{52}\) erg s\({}^{-1}\). All the models here are with \(16.0M\odot\) of Sukhbold et al. (2018). In all panels, the vertical dotted grey line indicates the location of the mass shell with an enclosed mass \(1.65M_{\odot}\). Figure 7: The amount of \({}^{56}\)Ni as a function of the growth rate of the explosion energy, \(\dot{E}_{\rm expl}\). The gray line indicates a typical value of \({}^{56}\)Ni, \(0.07M_{\odot}\). 4. **[Comparison to Imasheva et al. (2023) ]** Finally, we compare our results with those of Imasheva et al. (2023) who just recently pointed out the most obvious doubts about the '\({}^{56}\)Ni problems'. 
Figure 8 is a schematic comparison of our results with theirs. Their argument is that the correlation between \(\dot{E}_{\rm expl}\) and \({}^{56}\)Ni disappears when the initial collapse is included, and that further initial collapse inversely results in more \({}^{56}\)Ni being synthesized in slower explosions. Noting that the innermost ejecta radius is fixed in the thermal bomb model, their argument is consistent with the present results where \({}^{56}\)Ni is synthesized to almost the same mass coordinate independent of \(\dot{E}_{\rm expl}\), shown in Figure 4. We also confirm that \({}^{56}\)Ni is synthesized slightly more outwardly in models with slower initial collapse (i.e., the \(M_{\rm ZAMS}=19.5M_{\odot}\) model). The difference from their study is the treatment of the innermost mass coordinate of the ejecta, i.e., their inner boundary condition. Our explosion model determines the innermost mass coordinate of the ejecta self-consistently. We then found that the innermost material that could be ejected in the high-\(\dot{E}_{\rm expl}\) model could not achieve the escape condition in the low-\(\dot{E}_{\rm expl}\) model, leading to moving the innermost mass coordinate of the ejecta outward. In fact, for low \(\dot{E}_{\rm expl}\) models, Imasheva et al. (2023) themselves had mentioned the possibility that some of the innermost material may be unable to achieve escape conditions, remain gravitationally bound, and thus not contribute to the yield, and we confirmed this in this paper. Although our results are consistent with theirs, we confirm that \({}^{56}\)Ni problems reappear because the innermost ejecta radius shifts depending on the intensity of the \(\dot{E}_{\rm expl}\). We conclude that the modern slow explosion (\(\dot{E}_{\rm expl}\lesssim 1\) Bethe s\({}^{-1}\) ) can reproduce the observations of a standard Type II supernova. However, this is only a statement of a principal possibility. How much \({}^{56}\)Ni can be synthesized is an important benchmark for multidimensional self-consistent simulations, and it should be confirmed whether the median value for a standard Type II supernova (\(\approx 0.03M_{\odot}\)) is indeed achieved. On the other hand, the \({}^{56}\)Ni problem clearly exists in the explosion mechanism of SE-SNe, that is, the modern slow explosions cannot reproduce the SE-SNe observations. As a simple and straightforward solution that satisfies the \({}^{56}\)Ni problem without fine-tuning, we conclude that the SE-SNe favors active explosions in the early stages of shock revival (\(\dot{E}_{\rm expl}\gtrsim 2\) Bethe s\({}^{-1}\)). Since such high explosion energies are probably inconsistent with the standard explosion mechanism, the \({}^{56}\)Ni problem may require a different explosion mechanism for the SE-SNe. Anderson (2019) had already suggested from observations, but our results once again imply significant differences in the progenitor structures and/or the explosion mechanism between type II and SE-SNe. ## 5 Summary Figure 8: Schematic picture to the ‘\({}^{56}\)Ni problem in CCSNe’ as suggested by the the previous studies and this study, respectively. At first, Sawada & Maeda (2019) raised the \({}^{56}\)Ni problem because the \({}^{56}\)Ni synthesized region varies with the growth rate of the explosion energy \(\dot{E}_{\rm expl}\) when an un-collapsed progenitor is used. 
When taking into account that the progenitor collapses just before the explosion, the \({}^{56}\)Ni synthesized region becomes insensitive to \(\dot{E}_{\rm expl}\), and thus, Imasheva et al. (2023) proposed a disappearance of the \({}^{56}\)Ni problem. However, this study re-proposes the \({}^{56}\)Ni problem on the grounds that while the \({}^{56}\)Ni synthesized region is insensitive to \(\dot{E}_{\rm expl}\), the ejectable innermost mass radius depends on the \(\dot{E}_{\rm expl}\), as calculated using the light-bulb scheme in which the PNS masses is determined self-consistently. In this paper, we investigated the effect of the explosion energy growth rate \(\dot{E}_{\rm expl}\) on the behavior of \({}^{56}\)Ni nucleosynthesis in CCSNe. For numerical simulations, we employed the 1D Lagrangian hydrodynamic code in which neutrino heating and cooling terms are taken into account by the light-bulb approximation. The initial conditions are taken from Sukhbold et al. (2018), which have \(M_{\rm ZAMS}=12.3,16.0,18.0\), and \(19.5M_{\odot}\). Our first purpose was to present a detailed picture of \({}^{56}\)Ni nucleosynthesis in CCSNe with self-consistent explosion modeling. We found that \({}^{56}\)Ni is synthesized up to the almost same mass coordinate independent of \(\dot{E}_{\rm expl}\). We also found that in the low-\(\dot{E}_{\rm expl}\) model, some of the innermost material that was ejected in the high-\(\dot{E}_{\rm expl}\) model failed to achieve the escape condition, leading to moving the innermost mass coordinate of the ejecta to the outer positions. This means that while the \({}^{56}\)Ni nucleosynthesis volume is insensitive to the nature of the explosion, the ejected amount of \({}^{56}\)Ni is highly dependent on how much of the innermost PNS surface region is ejectable. Furthermore, our other goal was to sort out the recent controversial \({}^{56}\)Ni problem. We found that there is a decreasing trend of the synthesized amount of \({}^{56}\)Ni toward decreasing \(\dot{E}_{\rm expl}\). Compared to observations, we found that the modern slow explosion (\(\dot{E}_{\rm expl}\lesssim 1\) Bethe s\({}^{-1}\) ) can reproduce the observations of a standard Type II supernova in a principal. However, this does not mean that the \({}^{56}\)Ni problem has been solved, and the \({}^{56}\)Ni synthesis (\(M_{\rm Ni}\approx 0.03M_{\odot}\)) still remains an important benchmark for multi-D self-consistent simulations. It should be checked whether they are truly achieved. And extremely important are the comparison results with SE-SNe. We found that the median value of SE-SNe (\(M_{\rm Ni}\approx 0.07M_{\odot}\)) is challenging to reproduce in the modern slow explosion (\(\dot{E}_{\rm expl}\lesssim 1\) Bethe s\({}^{-1}\) ). As a simple and straightforward solution that satisfies the amount of \({}^{56}\)Ni without fine-tuning, the SE-SNe favors active explosions in the early stages of shock revival (\(\dot{E}_{\rm expl}\gtrsim 2\) Bethe s\({}^{-1}\)). Thus, we conclude that there are significant differences in the progenitor structures and/or the explosion mechanism between type II and SE-SNe. blcode, torch(Timmes, 1999) ## Acknowledgments The work has been supported by Japan Society for the Promotion of Science (JSPS) KAKENHI grants 21J00825, 21K13964 (RS), 18H05437, 20H00174, 20H01904, and 22H04571 (YS).
2310.15733
Supernova Ejecta with Crystalline Silicate Dust in the Supernova Remnant MSH 15-52
IRAS 15099-5856 in the young supernova remnant (SNR) MSH 15-52 is the first and only SNR-associated object in which crystalline silicate dust has been detected so far, although its nature and the origin of the crystalline silicate are still unclear. In this paper, we present high-resolution mid-infrared (MIR) imaging observations of the bright central compact source IRS1 of IRAS 15099-5856 to study the spatial distributions of gas and dust, together with an analysis of its Spitzer MIR spectrum to explore the origin of IRS1. The MIR images obtained with T-ReCS on the Gemini South telescope show a complicated, inhomogeneous morphology of IRS1 with bright clumps and diffuse emission in [Ne II] 12.81 $\mu$m and Qa 18.30 $\mu$m, which confirms that IRS1 is an extended source externally heated by the nearby O star Muzzio 10, a candidate for the binary companion of the progenitor star. The Spitzer MIR spectrum reveals several ionic emission lines, including a strong [Ne II] 12.81 $\mu$m line, but no hydrogen line is detected. We model the spectrum using the photoionization code CLOUDY with varying elemental composition. The elemental abundance of IRS1 derived from the model is close to that of SN ejecta, with depleted hydrogen and enhanced metals, particularly neon, argon, and iron. Our results imply that IRS1 originates from the SN ejecta and suggest the possibility of the formation of crystalline silicate in newly-formed SN dust.
Hyun-Jeong Kim, Bon-Chul Koo, Takashi Onaka
2023-10-24T11:11:51Z
http://arxiv.org/abs/2310.15733v2
# Supernova Ejecta with Crystalline Silicate Dust in the Supernova Remnant MSH 15\(-\)52 ###### Abstract IRAS 15099-5856 in the young supernova remnant (SNR) MSH 15\(-\)52 is the first and only SNR-associated object with crystalline silicate dust detected so far, although its nature and the origin of the crystalline silicate are still unclear. In this paper, we present high-resolution mid-infrared (MIR) imaging observations of the bright central compact source IRS1 of IRAS 15099-5856 to study the spatial distributions of gas and dust and the analysis of its Spitzer MIR spectrum to explore the origin of IRS1. The MIR images obtained with the T-ReCS attached on the Gemini South telescope show a complicated, inhomogeneous morphology of IRS1 with bright clumps and diffuse emission in [Ne ii] 12.81 \(\mu\)m and Qa 18.30 \(\mu\)m, which confirms that IRS1 is an extended source externally heated by the nearby O star Muzzio 10, a candidate for the binary companion of the progenitor star. The Spitzer MIR spectrum reveals several ionic emission lines including a strong [Ne ii] 12.81 \(\mu\)m line, but no hydrogen line is detected. We model the spectrum using the photoionization code CLOUDY with varying elemental composition. The elemental abundance of IRS1 derived from the model is close to that of SN ejecta with depleted hydrogen and enhanced metals, particularly neon, argon, and iron. Our results imply that IRS1 originates from the SN ejecta and suggest the possibility of the formation of crystalline silicate in newly-formed SN dust. Infrared spectroscopy(2285) -- Supernova remnants(1667) -- Interstellar medium(847) ## 1 Introduction Silicates are the most common dust species in the interstellar medium (ISM) of galaxies. While silicate dust in the ISM of our Galaxy is indicated to be mostly amorphous (Kemper et al., 2004; Gordon et al., 2023), a sign of crystalline silicate has also been suggested (Douly et al., 2020). Crystalline silicates have so far been detected in evolved stars and young stars (e.g., Molster et al., 1999; Malfait et al., 1999). Crystallization of amorphous silicate grains is suggested to occur in circumstellar disks (Molster et al., 1999), and several hypotheses of the formation of crystalline silicate have been proposed including the radial mixing in the disk and shock waves (e.g., Maaskant et al., 2015, references therein). Crystalline silicate has been also detected in ultraluminous infrared galaxies with the crystalline-to-amorphous silicate mass ratios of \(\sim\)0.1 (Spoon et al., 2006). This suggests that supernovae (SNe) may be a source of crystalline silicates, leading to a model of evolution of crystalline silicates in galaxies (Kemper et al., 2011). Kemper et al. (2011) propose a model of silicate dust evolution in the galaxy that crystalline silicate is formed in SNe and amorphized by cosmic-ray hits, suggesting that the crystalline silicate features could be a useful measure of the youthfulness of the galaxy. However, the formation of silicate dust in SNe is uncertain. Past observations in near-infrared (NIR) to mid-infrared (MIR) have revealed the newly-formed dust in the ejecta of core-collapse SNe although the detected dust mass is smaller by more than two orders of magnitude than the mass theoretically predicted (e.g, Sugerman et al. 2006; Sakon et al. 2009; Szalai et al. 2011). 
Most of the SN dust have not shown the 10 \(\mu\)m silicate feature, suggesting that they are mostly carbonaceous dust, and only a few SNe show evidence for the formation of silicate dust (Kotak et al. 2009; Shahbandeh et al. 2023). On the other hand, an appreciable amount of dust has been detected in SN 1987A and young supernova remnants (SNRs) from the far-infrared (FIR) observations (e.g., Matsuura et al. 2011; Chawner et al. 2020; Millard et al. 2021), but the dust composition could not be derived because strong dust features are only present in MIR. From the MIR spectroscopic observations of SNRs, the presence of carbonaceous dust is suggested (e.g., Tappe et al. 2006; Andersen et al. 2011), and some SNRs show an indication of silicate dust including unusual non-stoichiometric silicate and metal oxides (e.g., Arendt et al. 2014; Temim et al. 2017; Rho et al. 2018). But there has thus far not been any observational evidence for the presence of crystalline silicate in SNe or SNRs except for one case, MSH 15\(-\)5\(2\) (G320.4-1.2), for which its association with the SN explosion is not yet clearly understood. MSH 15\(-\)5\(2\) is a young, core-collapse SNR with a complex morphology composed of the central pulsar wind nebula and a large radio emission about 40 pc in size at a distance of 5.2\(\pm\)1.4 kpc (Gaensler et al. 1999, 2002). From the large extent of the SNR compared to the young age of 1,700 yrs estimated by the central pulsar PSR B1509-58 (Seward et al. 1983), it was suggested that MSH 15\(-\)5\(2\) is the remnant of Type Ib SN (SN Ib) with a relatively small amount of SN ejecta and that the progenitor of the SNR was in a binary system with the O star Muzzio 10 (2MASS J15135520-5907516) which is \(\sim\)20\({}^{\prime\prime}\) apart from the pulsar (Gaensler et al. 1999). In the SNR, close to the pulsar, a bright MIR source IRAS 15099-5856 was discovered from Infrared Astronomical Satellite observations (Arendt 1991). IRAS 15099-5856 is only seen at wavelengths longer than \(\sim\)10 \(\mu\)m and shows a complicated morphology with a bright central compact source, a surrounding halo of \(\sim\)1\({}^{\prime}\) radius with knots and spurs, and several extended (\(\sim\)4\({}^{\prime}\)), knotty arc-like filaments (Figure 1; Koo et al. 2011). Koo et al. (2011) investigated the central compact source (IRS1) of IRAS 15099-5856 with the AKARI MIR imaging observations and the Spitzer IRS spectroscopy. The absence of emission at short (\(\lesssim\)10 \(\mu\)m) wave bands and the extended morphology observed in the AKARI images imply that IRS1 is an extended source likely heated by a nearby O star Muzzio 10 as proposed earlier (Arendt 1991). A unique feature of IRAS 15099-5856 revealed by the Spitzer IRS spectrum is the prominent crystalline silicate dust features (Koo et al. 2011), which has raised an intriguing question about the origin of IRAS 15099-5856 because it is the first and only detection of crystalline silicate associated with SNRs so far. Koo et al. (2011) proposed a scenario that IRS1 is the material from the progenitor of the SNR ejected at its final evolutionary stage based on the Spitzer spectrum which is well explained by dust models including Mg-rich crystalline silicates and the proximity among IRS1, Muzzio 10, and the pulsar PSR B1509-58. In this scenario, IRS1 might have survived the SN blast wave as being shielded by Muzzio 10, the former binary companion star of the progenitor. 
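As a quick sanity check of the scales quoted above, the angular size implied by a 40 pc extent at 5.2 kpc and the mean expansion speed implied by the 1,700 yr age follow directly from those numbers; the short sketch below is only an illustration of that arithmetic.

```python
import numpy as np

PC_CM = 3.086e18
YR_S = 3.156e7

# Scale checks for MSH 15-52: ~40 pc radio extent at 5.2 kpc, age ~1,700 yr.
extent_pc, dist_kpc, age_yr = 40.0, 5.2, 1.7e3

ang_size_arcmin = np.degrees(extent_pc / (dist_kpc * 1e3)) * 60.0
v_mean = (extent_pc / 2.0) * PC_CM / (age_yr * YR_S)    # mean radius / age

print(f"angular size ~ {ang_size_arcmin:.0f} arcmin")
print(f"mean expansion speed ~ {v_mean/1e5:.0f} km/s")
# The implied average speed exceeds 1e4 km/s, which is why the large size at
# such a young age was taken above to suggest a Type Ib SN with little ejecta.
```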
However, the nature of IRAS 15099-5856 and its association with Muzzio 10 and/or the central pulsar are still uncertain. While the proper motion of Muzzio 10 is known as \(4.9215\pm 0.0163\) mas yr\({}^{-1}\) with position angle (PA) = \(244.2^{\circ}\) (measured from north to east) from the Gaia Data Release 3 (Gaia Collaboration et al. 2016, 2023), the proper motion of PSR B1509-58 has only been derived with huge uncertainties. Gaensler et al. (1999) reported one-sigma upper limits on the pulsar's proper motion: 39 mas yr\({}^{-1}\) in right ascension and 52 mas yr\({}^{-1}\) in declination. Leung (2018) measured the proper motion of \(\mu_{\alpha}=2\pm 12\) mas yr\({}^{-1}\) and \(\mu_{\delta}=-50\pm 24\) mas yr\({}^{-1}\). The proper motion of the pulsar seems to imply that the site of the SN explosion about 1,700 yrs ago was \(>1^{\prime}\) apart from Muzzio 10, i.e., Figure 1: Three-color image of IRAS 15099-5856 produced with AKARI S11 (B, 11 \(\mu\)m), L15 (G, 15 \(\mu\)m), and L24 (R, 24 \(\mu\)m) images, which is adopted from Figure 1 of Koo et al. (2011). The cross (\(\times\)), diamond, and plus (+) symbols present the peak position of IRS1 at 15 \(\mu\)m, O star Muzzio 10, and the pulsar PSR B1509-58, respectively. no association between the SN progenitor and Muzzio 10, but more accurate measurements are required to confirm. In this paper, we investigate IRS1 as a follow-up of Koo et al. (2011). In Section 2, we present the high-resolution MIR imaging observations of IRS1 and examine the spatial morphology of IRS1 in detail. In Section 3, we analyze the Spitzer spectrum of IRS1 by model calculations to derive the elemental abundance of gas and investigate dust emission. Particularly, we take account of geometry and energy balance to draw a more physically realistic figure of IRS1. In Section 4, we discuss the origin of crystalline silicate in MSH 15\(-\)5\(2\) and dust formation in SN ejecta. In Section 5, we summarize and conclude our study. ## 2 Mid-Infrared Observations of Iras 15099-5856 IRS1 ### T-ReCS Observations and Data Reduction We observed the central compact source IRS1 of IRAS 15099-5856 using the Thermal-Region Camera Spectrograph (T-ReCS; Telesco et al., 1998; De Buizer & Fisher, 2005) attached on the Gemini South telescope (Program ID: GS-2012A-C-4; PI: Onaka, T.) on 2012 May 11 UT. T-ReCS uses a Raytheon \(320\times 240\) pixel Si:As IBC array, providing a pixel scale of \(0\farcs 089\) pixel\({}^{-1}\) with a field of view of \(28\farcs 8\times 21\farcs 6\). We applied the standard chop-nod technique in order to remove time-variable sky background, telescope thermal emission, and the 1/f noise in detector. The chop throw and angle were \(15\arcsec\) and \(20\arcdeg\), respectively. Images were obtained with the Si-6 (\(\lambda_{0}=12.33~{}\mu\)m, \(\Delta\lambda=1.18~{}\mu\)m), [Ne ii] (\(\lambda_{0}=12.81~{}\mu\)m, \(\Delta\lambda=0.23~{}\mu\)m), [Ne ii]cont (\(\lambda_{0}=13.10~{}\mu\)m, \(\Delta\lambda=0.22~{}\mu\)m), Qa (\(\lambda_{0}=18.30~{}\mu\)m, \(\Delta\lambda=1.51~{}\mu\)m), and Qb (\(\lambda_{0}=24.56~{}\mu\)m, \(\Delta\lambda=1.92~{}\mu\)m) filters, among which the Si-6 and [Ne ii]cont filters were used to determine the continuum baseline of the [Ne ii] image. For flux calibration, we observed standard stars \(\gamma\) Cru and \(\omega\) Lup (HD 139127) from Cohen standards (Cohen et al., 1999) with the same filters. The total exposure time of IRS1 was 300 sec for the Qa filter and 900 sec for the others. 
The standard stars were observed with the exposure time of 30 sec for all filters. Data were reduced by using the custom IDL software MEFTOOLS version 5.01. During the image stacking, bad-frames such as the ones affected by instrumental artifacts have been excluded via visual inspection. We corrected relative offsets between filters using standard stars observed by the same order as IRS1, but it was unable to obtain the accurate astrometric solutions because IRS1 is an extended source without nearby stars bright in MIR that can be used as a reference. Instead, we corrected the absolute astrometry using the O star Muzzio 10 only detected in the Si-6 image. Although this is a rough correction, the peak position in the Qa-band (\(\lambda_{0}=18.30~{}\mu\)m) is coincident with the peak position defined based on the AKARI 15 \(\mu\)m image within \(<\)0\(\farcs\)2. The seeing estimated from the standard stars is about 0\(\farcs\)6. For flux calibration, the standard star \(\omega\) Lup was used for all the filters except Qb that used \(\gamma\) Cru. The in-band fluxes of \(\omega\) Lup in the Si-6, [Ne ii], [Ne ii]cont, and Qa filters are 11.381, 10.569, 10.162, and 5.172 Jy, respectively, at airmass of 1, similar to the airmass of the IRS1 at the time of observations (1.14-1.4); the in-band flux of \(\gamma\) Cru in the Qb filter is 157.567 Jy at airmass of 1\({}^{2}\). Footnote 1: MEFTOOLS was developed and provided by James M. De Buizer via [http://www.jim-debuizer.net/research/](http://www.jim-debuizer.net/research/), but it is no longer available. ### Mid-Infrared Morphology Figure 2 displays the [Ne ii], Qa, [Ne ii]cont, Si-6, and Qb images of IRS1 obtained with T-ReCS and the AKARI S11 (\(\lambda_{0}=11~{}\mu\)m) image with the contours of Qa (green, black) and [Ne ii] (cyan) overlaid. The T-ReCS images were smoothed by a Gaussian kernel with a standard deviation of 1\(\arcsec\). In the AKARI image with low spatial resolution, IRS1 is elliptically extended along east-west direction with PA of \(110\arcdeg\). It is also extended in the T-ReCS images but shows an irregular morphology with sub structures. In the Qa image, IRS1 consists of two parts in the east and west. The eastern part is extended along northwest-southeast (PA = \(144\arcdeg\)) direction and composed of three bright clumps although the brightest peaks are not well defined. The western part, on the other hand, is extended along northeast-southwest (PA = \(50\arcdeg\)) direction and composed of a bright compact knot and diffuse emission. The size of the bright knot obtained by Gaussian fitting is about 2\(\farcs\)7\(\times\)1\(\arcsec\) in FWHM. Owing to this bright knot, the surface brightness of the western part is comparable to the brightness of the eastern part (see below) although the western part is about half the size of the eastern part. The [Ne ii] image is overall similar to the Qa image, but the detailed structure is different. The eastern part in [Ne ii] is extended as large as the eastern part of the Qa image, but two bright knots are distinctively shown with the size about 1\(\farcs\)4 \(\times\) 2\(\farcs\)7 and 1\(\farcs\)3 \(\times\) 1\(\farcs\)7 in FWHM, both of which are larger than the seeing (\(\sim\) 0\(\farcs\)6) measured from the stan dard stars. The western part in [Ne ii] is very faint and much smaller than the western part in the Qa image. A remarkable feature is that the distributions of Qa and [Ne ii] emission are not consistent with each other. 
Although the whole extent is similar, the bright peaks have offsets between the two images as shown by the Qa contours on the [Ne ii] image and the [Ne ii] contours on the Qa image in Figure 2. This discrepancy does not likely come from the inaccurate astrometry. The relative distances between the peaks are also different in the two images. Therefore, the Qa and [Ne ii] images obtained with T-ReCS indicate not only a complex, inhomogeneous morphology of IRS1 itself but also different distributions of the gas and dust in IRS1. The other T-ReCS images besides Qa and [Ne ii] do not show any particular emission. The [Ne ii]cont image shows no emission, implying no continuum emission in the [Ne ii] image. There is weak continuum around 13 \(\mu\)m in the Spitzer IRS spectrum of IRS1 (see Figure 3), but it may be too weak to be detected in the T-ReCS image. The Si-6 image shows the emission almost identical to [Ne ii]. This implies that there is no other line except [Ne ii]. While IRS1 is bright at wavelengths longer than 15 \(\mu\)m with the spectral energy distribution (SED) peaking at around 30 \(\mu\)m (Figure 3 of Koo et al., 2011), the Qb image at 24.56 \(\mu\)m in Figure 2 does not detect significant emission because of lower sensitivity of the Qb filter. In the Qb image, some faint emission features are shown inside the Qa contours, but they are well below three-sigma (3\(\sigma\)) where \(\sigma\) (\(\simeq 1.0\times 10^{-14}\) erg s\({}^{-1}\) cm\({}^{-2}\) for Qb) is an rms noise. Previously, Koo et al. (2011) suggested that IRS1 is externally being heated by Muzzio 10, which is \(13\farcs 7\) away from IRS1 to the south, based on the non-detection of an embedded point source in optical/NIR and the temperature of Muzzio 10 that is appropriate to produce the observed [Ne ii] line luminosity obtained from the Spitzer spectrum (see Section 3 as well). The T-ReCS images also do not show any signature of a point source embedded in IRS1, confirming the previous prediction. A star might be deeply embedded in the bright [Ne ii] knots, but non-detection of any stellar source in [Ne ii]cont or Si-6 rules out this possibility. ### Flux Estimation We measured the flux of IRS1 from the Qa and [Ne ii] images. Applying aperture photometry, we estimated the flux of the whole source, eastern and western parts, and the two bright knots in [Ne ii] as listed in Table 1. The source regions were determined by the 3\(\sigma\) contours, and the knot regions were determined by the size of the knots. The uncertainty in flux measurements is \(\lesssim\)20%. As described earlier, the Qa flux of the western part is smaller than that of the eastern part, but the surface brightness of both are comparable. The [Ne ii] flux is concentrated on the bright knots with the surface brightness twice larger than the other area. For comparison, the Si-6 flux for the same source region as [Ne ii] is \(1.03\times 10^{-11}\) erg s\({}^{-1}\) cm\({}^{-2}\), which is a little larger than the [Ne ii] flux \(9.58\times 10^{-12}\) erg s\({}^{-1}\) cm\({}^{-2}\) owing to weak Figure 2: T-ReCS images of IRS1 compared with the AKARI S11 (11 \(\mu\)m) image. The cyan contours on the Qa image are the [Ne ii] 12.81 \(\mu\)m contours with the flux levels of 0.3, 0.55, 0.65, 0.9, 1.2, and 1.5 mJy from the outermost. The green or black contours on the other images are the Qa 18.30 \(\mu\)m contours with the flux levels of 0.7, 1.1, 1.5, 2.1, and 2.5 mJy. 
On the AKARI S11 image, the cross and diamond symbols present the peak position of IRS1 at 15 \(\mu\)m and O star Muzzio 10, respectively. (\(\sim 1\times 10^{-12}\) erg s\({}^{-1}\) cm\({}^{-2}\)) continuum at 11.5-13 \(\mu\)m seen in the Spitzer IRS spectrum (see Section 3.1 and Figure 3). We compare the flux measured from the T-ReCS images with the flux estimated from the Spitzer IRS spectrum. The Qa flux obtained by using the transmission curve of the Qa filter3 is 10.16 Jy, larger than the Qa flux from the T-ReCS image by a factor of 1.5. The [Ne ii] line flux obtained by Gaussian fitting of the emission line (Section 3.1; Table 2) is \(8.21\times 10^{-12}\) erg s\({}^{-1}\) cm\({}^{-2}\), about 85% of the [Ne ii] flux from the T-ReCS image. The flux differences between the T-ReCS and Spitzer observations can be explained by the inhomogeneous morphology of IRS1 and slit-loss correction of the Spitzer IRS spectrum as well as in part by sky chopping operation in the T-ReCS observations. The Spitzer IRS spectrum was obtained with two low-resolution modules: the short-low (SL) module covering 5.2-14.5 \(\mu\)m and the long-low (LL) module covering 14.0-38.0 \(\mu\)m. The two slits perpendicularly placed on IRS1 did not cover the same area because of different slit widths (Figure 1 of Koo et al., 2011), and the SL slit along north-south direction with a slit width of 3\(\farcs\)7 only partially covered IRS1, requiring slit-loss correction. The slit-loss correction factor was determined by the brightness distribution of the source. IRS1 was assumed as a Gaussian distribution of \(12\arcsec\times 5\arcsec\) in size, which is very different from the morphology observed in the Qa and [Ne ii] images. Since the LL slit width (10\(\farcs\)7) is larger than the size of IRS1 estimated in the Qa image, the Spitzer spectrum possibly includes extended, diffuse emission as well that is not detected in the T-ReCS observations, leading to a larger Qa flux from the spectrum. We note that Koo et al. (2011) derived the slit-loss correction factor assuming the two-dimensional brightness distribution of IRS1 given by the AKARI images to match the flux between the AKARI images and Spitzer spectrum, which results in the Qa and [Ne ii] flux of 14.3 Jy and \(1.2\times 10^{-11}\) erg s\({}^{-1}\) cm\({}^{-2}\), respectively. Footnote 3: [http://www.gemini.edu/sciops/instruments/trees/imaging/filters](http://www.gemini.edu/sciops/instruments/trees/imaging/filters) ## 3 CLOUDY MODELING OF THE SPITZER IRS SPECTRUM OF IRS1 ### Spectral Characteristics The Spitzer IRS spectrum of IRS1 presented in Figure 3 was obtained in 2008 October 3 UT (Program ID: 50495; PI: Koo, B.-C.) and examined by Koo et al. (2011). In the AKARI images from N3 (\(\lambda_{0}\) = 3.2 \(\mu\)m) to L24 (\(\lambda_{0}\) = 24 \(\mu\)m), IRS1 is only seen at \(>\)11 \(\mu\)m. In the Spitzer IRS spectrum likewise, continuum emission is extremely weak (\(<\)0.1 Jy) at \(\lesssim\)13 \(\mu\)m and steeply increases to \(\sim\)20 \(\mu\)m. The most remarkable feature in the spectrum is the strong and relatively narrow peaks at 23, 27, and 34 \(\mu\)m, which are well explained by crystalline silicate dust. 
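The two flux columns of Table 1 above can be cross-checked by converting a flux density in Jy into an in-band flux using the effective bandwidth \(\Delta\nu=c\,\Delta\lambda/\lambda_{0}^{2}\) of each filter. The sketch assumes a flat spectrum across the band, a simplification made only for this check and not a statement about the calibration actually applied.

```python
# Convert a flux density [Jy] to an in-band flux [erg/s/cm^2] for a filter
# of central wavelength lambda0 and width dlambda, assuming a flat spectrum.
C_CM_S = 2.99792458e10   # speed of light [cm/s]

def inband_flux(f_nu_jy, lam0_um, dlam_um):
    lam0 = lam0_um * 1e-4          # cm
    dlam = dlam_um * 1e-4          # cm
    dnu = C_CM_S * dlam / lam0**2  # effective bandwidth [Hz]
    return f_nu_jy * 1e-23 * dnu   # 1 Jy = 1e-23 erg/s/cm^2/Hz

# Qa filter (lambda0 = 18.30 um, dlambda = 1.51 um), whole source: 6.85 Jy
print(f"Qa: {inband_flux(6.85, 18.30, 1.51):.3e} erg/s/cm^2")
# ~9.3e-11 erg/s/cm^2, consistent with the 92.6e-12 listed in Table 1.
```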
A model of the IRS spectrum produced by modified-blackbody fitting with dust species including crystalline olivine (Mg\({}_{1.9}\)Fe\({}_{0.1}\)SiO\({}_{4}\)), metal oxides (FeO, MgO), and amorphous silicate fairly well reproduces the observed spectrum, providing total dust mass of \(9\times 10^{-3}\)\(M_{\odot}\) and dust temperature of \(\sim\)55-150 K at a distance of 4 kpc (see Koo et al., 2011, for details). The spectrum also shows several ionic emission lines including a strong [Ne ii] 12.81 \(\mu\)m line that was partly discussed in Koo et al. (2011). In Figure 3, the emission lines are not clearly seen because of the prominent dust features except the [Ne ii] line at 12.81 \(\mu\)m. We detected [Ar iii] 8.99 \(\mu\)m, [S iv] 10.51 \(\mu\)m, [Ne ii] 12.81 \(\mu\)m, [Ne iii] 15.56 \(\mu\)m, [S iii] 18.71 \(\mu\)m, and [O iv] 25.89/[Fe ii] 25.99 \(\mu\)m lines (Figure 3) and measured the line fluxes \begin{table} \begin{tabular}{c c c c c c} \hline \hline Filter & Region & \multicolumn{3}{c}{Flux} & Area & Brightness \\ \cline{3-6} & & (Jy) & (\(\times 10^{-12}\) erg s\({}^{-1}\) cm\({}^{-2}\)) & (arcsec\({}^{2}\)) & (\(\times 10^{-13}\) erg s\({}^{-1}\) cm\({}^{-2}\) arcsec\({}^{-2}\)) \\ \hline \multirow{3}{*}{Qa} & whole & 6.85 & 92.6 & 91.17 & 10.2 \\ & east & 3.84 & 51.9 & 29.07 & 17.9 \\ & west & 1.64 & 22.2 & 13.51 & 16.4 \\ \hline \multirow{3}{*}{[Ne ii]} & whole & 2.28 & 9.58 & 71.54 & 1.34 \\ & east & 1.80 & 7.58 & 32.57 & 2.33 \\ \cline{1-1} & west & 0.15 & 0.63 & 4.47 & 1.42 \\ \cline{1-1} & east knot & 0.21 & 0.87 & 1.60 & 5.44 \\ \cline{1-1} & west knot & 0.34 & 1.45 & 2.62 & 5.53 \\ \hline \end{tabular} Note. – The uncertainty in flux measurements is \(\lesssim\)20%. \end{table} Table 1: T-ReCS Qa and [Ne ii] Flux of IRS1 by the Gaussian fitting. No hydrogen line has been seen in the spectrum. For the Gaussian fitting, we used the IDL MPFIT package (Markwardt, 2009)4. The derived line fluxes are listed in Table 2. The flux errors in the table are from the Gaussian fitting and do not include the uncertainty of the observed spectrum. Since the resolving power of the Spitzer IRS LL module5 between 14 and 21.3 \(\mu\)m is given as \(R=2.9524\lambda\), i.e., R \(\sim\)76 at 25.9 \(\mu\)m, two adjacent lines [O iv] 25.89 \(\mu\)m and [Fe ii] 25.99 \(\mu\)m are not resolved. The [S iii] line at 33.48 \(\mu\)m also seems to present in the spectrum, but the line flux was not measured because the line is severely blended with the strong dust feature at 34 \(\mu\)m. The FWHM of the emission lines obtained from the Gaussian fitting is from 0.11 to 0.39 \(\mu\)m depending on the wavelength. These line widths are comparable to the spectral resolving power, implying that the velocity of the ionic lines is not resolved. Footnote 4: [https://pages.physics.wisc.edu/craigm/idl/fitting.html](https://pages.physics.wisc.edu/craigm/idl/fitting.html) Footnote 5: Spitzer Space Telescope Observer’s Manual version 8.0, Chapter 7.1.6, issued by the Spitzer Science Center ([http://ssc.spitzer.caltech.edu](http://ssc.spitzer.caltech.edu)) ### Emission Line Ratios The IR fine-structure emission lines are frequently used as a diagnostic tool in the investigations of gaseous nebulae, ionized regions, or obscured clouds. Particularly, the line ratios of some specific lines have a tight correlation, providing physical conditions of the region of interest (Dinerstein, 1995; Dopita & Sutherland, 2003). 
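The line fluxes in Table 2 were measured with Gaussian fits on top of the local continuum using IDL/MPFIT, as described in Section 3.1. The sketch below illustrates the same procedure on synthetic data with scipy; this is a substitution made here for illustration, not the code used for the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

# Gaussian line plus linear continuum fitted to a short spectral window,
# here a mock [Ne II] 12.81 um window with added noise.
def gauss_plus_cont(lam, amp, lam0, sigma, c0, c1):
    return amp * np.exp(-0.5 * ((lam - lam0) / sigma) ** 2) + c0 + c1 * lam

rng = np.random.default_rng(0)
lam = np.linspace(12.4, 13.2, 80)                      # wavelength [um]
truth = gauss_plus_cont(lam, 3.0, 12.81, 0.06, 0.5, 0.0)
spec = truth + rng.normal(0.0, 0.05, lam.size)         # synthetic spectrum

p0 = (2.0, 12.8, 0.1, 0.5, 0.0)
popt, pcov = curve_fit(gauss_plus_cont, lam, spec, p0=p0)
amp, lam0, sigma = popt[:3]
line_flux = amp * sigma * np.sqrt(2.0 * np.pi)         # integrated Gaussian
fwhm = 2.3548 * sigma
print(f"center {lam0:.3f} um, FWHM {fwhm:.3f} um, flux {line_flux:.3f} (arb. units)")
```

The integrated Gaussian gives the line flux and the fitted width gives the FWHM quoted in the text, which for IRS1 is comparable to the instrumental resolution, so the line velocities are unresolved.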
In Figure 4, we present the line ratio diagram [Ne iii]\({}_{15.56\mu\mbox{m}}\)/[Ne ii]\({}_{12.81\mu\mbox{m}}\) versus [S iv]\({}_{10.51\mu\mbox{m}}\)/[S iii]\({}_{18.71\mu\mbox{m}}\) of various astronomical objects with the observed line ratios of IRS1 (IRAS 15099). The line ratios of the objects except novae were obtained from the literature: H ii regions in the Galaxy and Large/Small Magellanic Clouds (LMC/SMC) from Tables 2 and 5 of Giveon et al. (2002); giant H ii regions from Table 2 of Lebouteiller et al. (2008); LMC/SMC planetary nebulae (PNe) from Table 2 of Bernard-Salas et al. (2008); a luminous blue variable candidate (cLBV) G79.29+0.46 from Jimenez-Esteban et al. (2010). The line ratios of novae were obtained from the model calculations using the one-dimensional plasma simulation code Cloudy6 version C13.03 (Ferland et al., 2013). Cloudy solves the ionization, chemical, and thermal state of material exposed to an external radiation field or other heating source, and predicts observable quan \begin{table} \begin{tabular}{l c c} \hline \hline \multicolumn{1}{c}{ Line} & Wavelength & Flux \\ & \(\mu\)m & (\(\times 10^{-12}\) erg s\({}^{-1}\) cm\({}^{-2}\)) \\ \hline \([\)Ar III\(]\) & 8.99 & 0.13(\(\pm\)0.02) \\ \([\)S IV\(]\) & 10.51 & 0.14(\(\pm\)0.02) \\ \([\)Ne II\(]\) & 12.81 & 8.21(\(\pm\)0.02) \\ \([\)Ne III\(]\) & 15.56 & 0.97(\(\pm\)0.04) \\ \([\)S III\(]\) & 18.71 & 0.51(\(\pm\)0.07) \\ \([\)O IV\(]\)/[Fe II]a & 25.89/25.99 & 0.93(\(\pm\)0.07) \\ \hline \end{tabular} \end{table} Table 2: Detected Emission Lines and their Fluxes from Spitzer IRS Spectrum of IRS1 Figure 4: \([\)Ne iii\(]_{15.56\mu\mbox{m}}\)/[Ne ii]\({}_{12.81\mu\mbox{m}}\) versus [S iv]\({}_{10.51\mu\mbox{m}}\)/[S iii]\({}_{18.71\mu\mbox{m}}\) line ratio diagram of various astronomical objects (gray cross and open symbols), Cloudy models of novae (black filled hour glass), and IRAS 15099-5856 IRS1 (red star). The gray lines are the Cloudy model grid produced with hydrogen density \(n\)(H)=100 cm\({}^{-3}\), heating source temperature from 35,000 K to 50,000 K, and ISM abundance with varying neon abundance. The small vertical bars in orange, yellow, green, blue, purple, and black indicate neon abundance from \(-\)4.0301 to \(-\)1.5301 with an interval of 0.5 in log scale relative to hydrogen. Figure 3: Spitzer IRS spectrum of IRS1 (Koo et al., 2011) with the detected lines marked. ities such as emission and absorption spectra that can be compared with observations. The nova models were calculated by assuming a blackbody of \(T_{\rm eff}=47,000\) K and \(L=6.3\times 10^{36}\) erg s\({}^{-1}\)(Schwarz et al., 2007) and by adopting the abundances of the novae V1500 Cygni (Ferland & Shields, 1978) and V838 Her (Schwarz et al., 2007), which show enhanced metal abundances (see Table 3). We also overlay a Cloudy model grid produced with hydrogen density \(n\)(H)=100 cm\({}^{-3}\), heating source temperature from 35,000 K to 50,000 K, and the ISM abundance with varying neon abundance from \(-4.0301\) (the ISM abundance) to \(-1.5301\) in log scale relative to hydrogen, i.e., \(n\)(Ne) from \(9.33\times 10^{-3}\) cm\({}^{-3}\) to 2.95 cm\({}^{-3}\). For the ISM abundance, we adopted the protosolar abundance (Asplund et al., 2009, see also Table 3). 
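For reference, the position of IRS1 in the Figure 4 diagram and the neon number densities bracketing the model grid follow directly from the Table 2 fluxes and the quoted logarithmic abundances; a quick check:

```python
# Line fluxes from Table 2 (units of 1e-12 erg/s/cm^2).
flux = {"ArIII_9.0": 0.13, "SIV_10.5": 0.14, "NeII_12.8": 8.21,
        "NeIII_15.6": 0.97, "SIII_18.7": 0.51}

print(f"[Ne III]/[Ne II] = {flux['NeIII_15.6'] / flux['NeII_12.8']:.3f}")   # ~0.12
print(f"[S IV]/[S III]   = {flux['SIV_10.5'] / flux['SIII_18.7']:.3f}")     # ~0.27

# Neon densities spanned by the grid, from log10(Ne/H) with n(H) = 100 cm^-3.
for log_ne in (-4.0301, -1.5301):
    print(f"log(Ne/H) = {log_ne}: n(Ne) = {100.0 * 10**log_ne:.3g} cm^-3")
# i.e. 9.33e-3 and 2.95 cm^-3, matching the values quoted in the text.
```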
The shapes of the radiation fields (SEDs) of heating sources were adopted from the pre-calculated stellar atmospheric models of the Tlusty OB star grids7(Lanz & Hubeny, 2003, 2007) provided along with the Cloudy code from which we selected the main-sequence star models at solar metallicity for a given temperature. Footnote 7: [http://tlusty.oca.eu/Tlusty2002/tlusty-frames-cloudy.html](http://tlusty.oca.eu/Tlusty2002/tlusty-frames-cloudy.html) It has been known that there is a good correlation between [Ne iii]\({}_{15.56\micron}\)/[Ne ii]\({}_{12.81\micron}\) and [Si iv]\({}_{10.51\micron}\)/[Si iii]\({}_{18.71\micron}\) (e.g., Martin-Hernandez et al., 2002). The relation is almost linear, and it suggests that the two line ratios are almost equally affected by the hardness of the ionizing radiation. In terms of the stellar \(T_{\rm eff}\), the observed line ratios of the H ii regions of low to high ionization structures can be described with \(T_{\rm eff}=\)35,000 K to 50,000 K (Figure 4; see also Figure 2 of Martin-Hernandez et al., 2002). Figure 4 shows that the relation is in general consistent with the theoretical relation expected for H ii regions with the ISM abundance, although the majority of the H ii regions with low-ionization structure appears to be above the theoretical line. In contrast, the nova models are located well below the theoretical relation for the ISM abundance, which is likely due to the high abundances of heavy elements. The Cloudy model grid, for example, demonstrates how the line ratios vary with neon abundance, and it indicates that the enhanced neon abundance lowers the [Ne iii]\({}_{15.56\micron}\)/[Ne ii]\({}_{12.81\micron}\) ratio. The observed line ratios of IRS1 are similar to those of nova, implying that the elemental composition of IRS1 might be similar to that of nova. Hence, we adopt the nova abundance as an initial abundance set for our modeling of the Spitzer IRS spectrum of IRS1 in Section 3.5. We note that there is an issue about the line ratios of IRS1 derived from the Spitzer IRS spectrum. As described in Section 2.3, IRS1 was observed with two IRS modules with different slit widths that covered different parts of IRS1. Since the two lines of each pair in Figure 4 ([S iv] and [S iii]; [Ne iii] and [Ne ii]) are from different modules, their uncertainty can be large depending on the slit-loss correction. We compare the line ratios of IRS1 on the empirical relation \(\log\left(\mbox{[Ne\,III]/[Ne\,II]}\right)=0.81\times\log\left(\mbox{[S\,IV]/[ Ne\,II]}\right)+0.36\)(Groves et al., 2008) which was derived from the archival spectra of a wide range of astrophysical objects from nearby H ii regions to ultraluminous infrared galaxies obtained by Spitzer and Infrared Space Observatory (ISO; Kessler et al., 1996). With the [S iv]/[Ne ii] ratio derived from the same SL module, the [Ne iii]/[Ne ii] ratio expected by this relation is 0.09, which is comparable to [Ne iii]/[Ne ii] \(\sim\)0.12 derived from the observed spectrum. Therefore, we assume that the slit-loss correction is acceptable, although there is still uncertainty from the brightness distribution between the one we assumed (i.e., 2D Gaussian distribution) and the real distribution of IRS1. ### Model Parameters and Assumptions Previously, Koo et al. (2011) modeled the Spitzer IRS spectrum as thermal emission from several independent dust components using the modified blackbody. 
Their models well reproduce the observed spectrum, but the derived dust temperatures show large differences ranging from 55 K to 150 K because they treated dust components independently. In this study, we model the IRS spectrum using Cloudy to include physical process and energy balance. We first set the geometry of model calculation. The observations imply that IRS1 with a size \(9\farcs 6\times 5\farcs 1\) or 0.19 pc \(\times\) 0.10 pc at the distance of 4 kpc (Koo et al., 2011), is externally heated by Muzzio 10 separated by \(13\farcs 7\) or 0.27 pc, in projected distance. The separation between IRS1 and Muzzio 10 (\(r_{0}\)) in principle should be treated as a free parameter because it significantly affects the radiation absorbed by IRS1 and total dust mass, but we fixed it to reduce the number of free parameters based on the initial models (See Section 3.4). We also fixed the thickness of IRS1 as the same as the major axis (0.19 pc) of IRS1 on the projected sky. Figure 5 is a schematic figure of the geometry. Since the heating source (= Muzzio 10) is outside the cloud (= IRS1), we adopted a covering factor (= \(\Omega/4\pi\), where \(\Omega\) is an area of the cloud divided by the distance between the heating source and cloud.) to take account into a fraction of the radiation field emitted by the heating source that actually strikes the cloud. In Figure 5, the 'transmitted' radiation is the net emission emerging from the shielded face of the cloud, and it includes both the attenuated continuum radiation of Muzzio 10 and the diffuse emission emitted from IRS1. The'reflected' radiation is the emission from the illuminated face of the cloud back into the direction towards the heating source, and it includes both the backscattered incident radiation of Muzzio 10 and the diffuse emission emitted from IRS1. The reflected emission corresponds to the emission that we observe. The heating source in model calculations was fixed as Muzzio 10. The spectral type of Muzzio 10 is O4.5III(fp) (M. Bessell 2010, private communication) or O5n(f)p (Maiz Apellaniz et al., 2016). Since the luminosity class is uncertain for the latter, we adopted the stellar parameters of an O4.5III star (Martins et al., 2005): \(T_{\rm eff}=40,500\) K, log \(g=3.71\) cm s\({}^{-2}\), and log \(L/L_{\odot}\) =5.76 or log \(Q_{0}=49.52\) s\({}^{-1}\). For the shape of the radiation field, we used the Tlusty O star model at solar metallicity with \(T_{\rm eff}=40,000\) K and log \(g=3.75\) cm s\({}^{-2}\)(Lanz & Hubeny, 2003). Dust species were adopted from Koo et al. (2011). We calculated the absorption/scattering coefficients of each dust from their optical constants following the Bohren-Huffman Mie scattering (Bohren & Huffman, 1983) and compiled dust opacity files using the grain code in Cloudy. The optical constants of dust were adopted from the literature: crystalline olivine (Mg\({}_{1.9}\)Fe\({}_{0.1}\)SiO\({}_{4}\)) from Fabian et al. (2001); FeO and Mg\({}_{0.6}\)Fe\({}_{0.4}\)O from Henning et al. (1995). For amorphous silicate, we used the opacity provided in Cloudy. The compiled opacity assumes a spherical dust grain with a size of 0.25 \(\mu\)m for FeO and 0.1 \(\mu\)m for the others. Figure 6 presents the absorption coefficients (\(Q_{\rm abs}\)) of each dust species. ### Constraints from SED Models Since a large number of free parameters are involved in Cloudy calculations, we first modeled the SED of IRS1 to constrain some parameters. 
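As an aside, the covering factor implied by this geometry can be estimated from the projected size of IRS1 and its separation from Muzzio 10. The sketch below approximates IRS1 as an ellipse and uses the projected separation of 0.27 pc (a lower limit to the true separation), with the usual solid-angle definition of projected area over separation squared; these simplifications are ours and not necessarily the exact values entering the Cloudy runs.

```python
import numpy as np

a, b = 0.19 / 2, 0.10 / 2      # semi-axes of IRS1 [pc]
r0 = 0.27                      # projected separation from Muzzio 10 [pc]

area = np.pi * a * b           # projected area of the cloud [pc^2]
omega = area / r0 ** 2         # solid angle subtended at the star [sr]
print(f"covering factor ~ {omega / (4 * np.pi):.1e}")
# ~1.6e-2 of the stellar radiation is intercepted; the value scales as 1/r0^2
```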
Besides the assumed geometry (Figure 5), heating source of an O4.5III star, and dust species from Koo et al. (2011), low hydrogen density was required to produce an SED that shows no emission at \(\lesssim\)11 \(\mu\)m. We also found that \(r_{0}=0.45\) pc would reasonably well fits the observed MIR flux levels, so we fixed \(r_{0}\) as 0.45 pc. We also produced the models with an internal heating source to verify a possibility that there is a deeply embedded heating source inside IRS1. Assuming a central star surrounded by dust cloud at a certain distance (\(R_{\rm in}\)), we produced the models with heating sources of \(T_{\rm eff}=30,000\) K and 19,000 K corresponding to a B0V and B3V star, respectively. The other parameters such as hydrogen density, \(R_{\rm in}\), or abundance were adjusted to match the observed MIR flux and a dip around 10 \(\mu\)m. Figure 7 compares the observed and model SEDs reddened by the column density of \(9.2~{}\times~{}10^{21}\) cm\({}^{-2}\) with \(R_{\rm V}=3.1\)(Koo et al., 2011). In the models with an embedded heating source, as shown in the figure, continuum emission in NIR and even in optical wavelengths is definitely expected. While a dip around 10 \(\mu\)m can be produced and become deeper by larger \(R_{\rm in}\), continuum emission at short wavelengths is always predicted. A heating source with significantly low temperature can reduce continuum flux but cannot produce ionic emission lines we observe in the IRS spectrum. For example, the strong [Ne ii] line at 12.81 \(\mu\)m is not produced by a heating source with \(T_{\rm eff}=19,000\) K (green line in Figure 7). From the models with an internal heating source, we confirm that IRS1 is externally heated by Muzzio 10. Figure 5: Geometry of Cloudy model calculations of IRS1. The blue star is the heating source Muzzio 10. ‘Inc.’ refers to incident continuum from Muzzio 10, and ‘Trans.’ and ‘Ref.’ refer to reflected and transmitted radiation, respectively. Figure 6: Absorption coefficients (\(Q_{\rm abs}\)) of dust species used in Cloudy model calculations. The black, red, blue, and green colors represent crystalline olivine (Mg\({}_{1.9}\)Fe\({}_{0.1}\)SiO\({}_{4}\)), FeO, Mg\({}_{0.6}\)Fe\({}_{0.4}\)O, and amorphous silicate, respectively. The size of a dust grain is 0.1 \(\mu\)m in radius except FeO with a radius of 0.25 \(\mu\)m. In Figure 7, the blue solid- and red dashed-lines are the models with the same heating source (\(T_{\rm eff}=30,000\) K) but with different hydrogen density of \(\log n({\rm H})=2.4\) (or, \(n({\rm H})=250\) cm\({}^{-3}\)) and \(\log n({\rm H})=-4.9\) (or, \(n({\rm H})=1.3\times 10^{-5}\) cm\({}^{-3}\)), respectively. In Cloudy, densities of gas and dust are defined relative to hydrogen. For these two models with different hydrogen densities, we adjusted the amount of metals and grains to result in the same dust masses. The SEDs of the two models are almost identical in MIR but very different in NIR. While the model with a moderate hydrogen density (blue) shows a group of emission lines from hydrogen, the model with negligible hydrogen (red) shows no emission line at all. This holds true for a case that is heated externally as well (see Figure 10 in Section 3.5). Since we do not observe any hydrogen emission in optical and/or NIR, we have assumed that hydrogen is depleted in IRS1 and fixed the hydrogen density to be almost zero (but set the value of \(n({\rm H})\sim 10^{-5}\) cm\({}^{-3}\) to run the code). 
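The reddening applied to the model SEDs can be translated into a visual extinction with the commonly used Galactic ratio \(N_{\rm H}/A_{\rm V}\approx 1.9\times 10^{21}\) cm\({}^{-2}\) mag\({}^{-1}\) for \(R_{\rm V}=3.1\); the exact conversion adopted by the authors is not stated, so the number below is only indicative.

```python
N_H = 9.2e21            # cm^-2, column density toward IRS1 (Koo et al. 2011)
A_V = N_H / 1.9e21      # mag, assuming the standard Galactic gas-to-extinction ratio
print(f"A_V ~ {A_V:.1f} mag")   # ~4.8 mag
```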
### Modeling of the Spitzer IRS Spectrum From the SED models, we fixed the geometry of an externally heated cloud, the heating source of an O4.5III star, and negligibly-small hydrogen density. With these parameters, we modeled the Spitzer IRS spectrum to derive the physical/chemical characteristics of IRS1. In Cloudy, the initial abundance can be adopted from the stored abundance sets including H ii regions, general ISM, novae, and PNe. Based on the line ratio diagram (Figure 4, Section 3.2), we adopted the abundance of the nova V1500 Cyg listed in Table 3 as an initial abundance set and adjusted the amount of metals and grains by changing the scale factors. The elements not listed in Table 3 were not included in the calculations. With the hydrogen density \(\log n({\rm H})=-4.9\), we applied a scale factor of 7.5 in log scale to metals (the elements heavier than helium) and grains in the nova abundance to fit the MIR flux level of the IRS spectrum. We first searched for a model that reproduces the observed dust features by changing the amount of four dust species: crystalline olivine, FeO, Mg\({}_{0.6}\)Fe\({}_{0.4}\)O, and silicate. With the fixed dust abundance, we determined the gas abundance that explains the observed line flux derived in Section 3.1 by changing the densities of six elements involved in the formation of the observed emission lines: nitrogen, oxygen, neon, sulphur, argon, and iron. These six ions do not independently act but are tightly correlated to each other. For example, the increased neon abundance does not always lead to stronger neon lines, or the increased iron abundance strengthens the [Ne ii] line as well as iron lines but not the [Ne iii] line. Since hydrogen is depleted in IRS1, heating process mostly depends on photoelectric heating by dust and heavy elements rather than photoionization by hydrogen; thus, changes of the metal and dust abundances affect the heating and cooling processes, complicating the model calculations. With a large number of free parameters and limited observational data, it is unlikely possible to find the only model that perfectly fits IRS1. Instead, we intend to present a reference model that reasonably well explains the observations and to discuss some parameters that affect the modeling results. In Figure 8, we present the reference model of IRS1 in red color that fits fairly well the Spitzer IRS spectrum in both dust features and line intensities. The model spectrum was smoothed to the spectral resolution of the Spitzer IRS modules. The model reproduces the observed dust features at 18, 23, 27, and 34 \(\mu\)m but with larger flux and narrower width for the 23 \(\mu\)m peak. The continuum slope from 15 to 20 \(\mu\)m in the model is also steeper than observed. These differences can be explained by the sizes and shapes of dust grains. We have assumed a spherical dust grain of 0.1 \(\mu\)m (or 0.25 \(\mu\)m for FeO) because of the limited availability of the optical constants or opacities of dust in MIR, but dust properties in fact highly depend on both dust size and shape (e.g., Koike et al., 1989, 2010; Min, 2015). For example, the 10 \(\mu\)m silicate feature becomes broader and is shifted to longer wavelength as the grain shape deviates from a perfect sphere. The prominent features of forsterite (Mg-rich crystalline silicate) are also significantly sup Figure 7: SEDs of IRS1 from the observations and Cloudy models with an embedded heating source. 
The blue solid- and red dashed-lines are the models with the same heating source of \(T_{\rm eff}=30,000\) K (B0V) but with different hydrogen density of \(\log n({\rm H})=2.4\) and \(-4.9\) (or, \(n({\rm H})=250\) and \(1.3\times 10^{-5}\) cm\({}^{-3}\)), respectively. The green line is the model with the heating source of \(T_{\rm eff}=19,000\) K (B3V). The symbols are the fluxes from various observations as labeled in the legend. The open symbols with an arrow represent upper limits. pressed for the dust grains with larger size (Figure 5 of Min 2015) or elliptical shape (Figure 9 of Koike et al. 2010). This implies that the model of the IRS spectrum can be improved by using elliptical and/or larger dust grains. We note that the dust models in Koo et al. (2011) that used a continuous distribution of ellipsoids (CDE) for FeO give a better fit for the steep continuum shape at 15-20 \(\mu\)m, but their models produced by independent dust components result in the temperature of 90-150 K for FeO and Mg\({}_{0.6}\)Fe\({}_{0.4}\)O (or MgO), which is much higher than the temperature of the other dust components around 55 K. Table 4 presents dust mass and temperature obtained from the reference model. Most dust mass is contributed by amorphous silicate. The contribution from crystalline olivine is small but indispensable to fit the observed dust features. The total dust mass is much smaller (\(\sim\)27%) than the dust mass of \(9\times 10^{-3}\)\(M_{\odot}\) derived from the modified-blackbody fit (Koo et al. 2011), likely due to the constrained geometry. Total dust mass highly depends on the geometry such as \(r_{0}\) or the thickness of IRS1 both of which were fixed in our calculations. If we increase \(r_{0}\) to 0.52 pc, dust mass becomes comparable to that of Koo et al. (2011). We note that dust mass from our calculation is consistent with dust mass from the model with a single dust of carbonaceous-silicate (Koo et al. 2011). Comparing to Koo et al. (2011), the relative fraction and temperature of the individual dust species are also different. This may come from the differences in dust opacity. While the optical constants of each dust were adopted from the same literature, the calculations of dust absorption coefficients are different likely due to the assumed shapes and sizes of dust grain, \begin{table} \begin{tabular}{l c c} \hline \hline \multicolumn{1}{c}{ Component} & Mass & Temperature \\ & (\(10^{-3}\)\(M_{\odot}\)) & (K) \\ \hline Crystalline olivine (Mg\({}_{1.9}\)Fe\({}_{0.1}\)SiO\({}_{4}\)) & 0.06 & 79 \\ FeO & 0.04 & 67 \\ Mg\({}_{0.6}\)Fe\({}_{0.4}\)O & 0.20 & 79 \\ Amorphous silicate & 2.09 & 72 \\ \hline Total & 2.40 & \\ \hline \end{tabular} \end{table} Table 4: Dust Parameters of IRS1 from the Reference Model Figure 8: Spitzer IRS spectrum (black) of IRS1 and Cloudy models with different thickness. The red line is the reference model that fits fairly well the Spitzer IRS spectrum in both dust features and line intensities. The purple and blue lines are the models calculated with the same parameters as the reference model but with a half and twice of the thickness. The model spectra were smoothed to the spectral resolution of the Spitzer IRS modules. 
\begin{table} \begin{tabular}{c c c c c} \hline \hline \multicolumn{1}{c}{ Atom} & ISM\({}^{a}\) & Nova (V1500 Cyg)\({}^{b}\) & Nova (V838 Her)\({}^{c}\) & IRS1 Model \\ \hline H & 1.00E+02 & 3.16E+07 & 3.16E+07 & 1.26E-05 \\ He & 9.55E+00 & 3.09E+06 & 4.47E+06 & 1.23E-06 \\ C & 2.95E-02 & 2.95E+04 & 6.03E+04 & 3.72E-01 \\ N & 7.41E-03 (\(-\)0.60) & 3.09E+05 (1.02) & 7.41E+04 (0.09) & 3.89E-04 (\(-\)2.98) \\ O & 5.37E-02 (0.26) & 5.37E+05 (1.26) & 2.82E+04 (\(-\)0.33) & 1.07E+00 (0.46) \\ Ne & 9.33E-03 (\(-\)0.50) & 6.46E+04 (0.34) & 1.91E+05 (0.50) & 3.24E+01 (1.94) \\ Mg & 4.37E-03 (\(-\)0.83) & 1.20E+03 (\(-\)1.39) & 1.58E+03 (\(-\)1.58) & 1.51E-02 (\(-\)1.39) \\ Si & 3.55E-03 (\(-\)0.92) & 1.12E+03 (\(-\)1.42) & 2.04E+03 (\(-\)1.47) & 1.41E-02 (\(-\)1.42) \\ S & 1.45E-03 (1.31) & 5.13E+02 (\(-\)1.76) & 6.92E+03 (\(-\)0.94) & 7.24E-02 (0.71) \\ Cl & 1.86E-05 (\(-\)3.20) & 5.89E+00 (\(-\)3.70) & 5.89E+00 (\(-\)4.01) & 7.41E-05 (\(-\)3.70) \\ Ar & 2.75E-04 (\(-\)2.03) & 1.15E+02 (\(-\)2.41) & 1.15E+02 (\(-\)2.72) & 1.15E-01 (\(-\)0.51) \\ Fe & 3.47E-03 (\(-\)0.93) & 1.48E+03 (\(-\)1.30) & 8.51E+03 (\(-\)0.85) & 1.17E+01 (1.50) \\ \hline \end{tabular} Note. – The abundance is the absolute number density (cm\({}^{-3}\)) applied to the Cloudy models. The numbers in parentheses are the number density of the element relative to carbon in log scale, which shows the relative abundances among metals. \end{table} Table 3: Abundances used in the CLOUDY Modeling resulting in slightly different dust properties. For example, the strength of two peaks at 18 \(\mu\)m and 23 \(\mu\)m of the absorption coefficient of crystalline olivine are comparable in our calculations (Figure 6), while the 23 \(\mu\)m peak is only 60% relative to the 18 \(\mu\)m peak in Koo et al. (2011), requiring larger mass and lower temperature to fit the observed 23 \(\mu\)m feature. For FeO, we assume a spherical grain of 0.25 \(\mu\)m, whereas Koo et al. (2011) assumed a CDE with a size of 0.1 \(\mu\)m which shows the weaker, broader, and asymmetric peak of the absorption coefficient. The Cloudy model with the abundance listed in Table 3 also reproduces several ionic lines observed in the IRS spectrum. Since we have applied a scale factor of 7.5 in log scale to metals and grains in the abundance of the nova V1500 Cyg and adjusted the amounts of selected ions, we also present the relative abundance of metals to the carbon abundance. The reference Cloudy model indicates that neon, argon, and iron are enhanced in IRS1. As we pointed out earlier, the reference model is not the only model that can explain IRS1, but the overall trend in abundance may not be very different in order to produce the observed lines. The predicted lines and their fluxes are presented in Table 5 with the relative strength to the observed line fluxes. The predicted line fluxes mostly agree with the observations. The [S iii] line at 33.48 \(\mu\)m and [Fe iii] line at 22.93 \(\mu\)m are also predicted. While these two lines are predicted to be strong, they are blended with the dust features of crystalline olivine at 34 \(\mu\)m and 23 \(\mu\)m, respectively, so hardly detectable in the IRS spectrum. In the model spectrum, after we smoothed it to the Spitzer IRS resolution, the [S iii]\({}_{33.48\mu\rm m}\) and the [Fe iii]\({}_{22.93\mu\rm m}\) lines have become hidden under the dust features as well (Figure 8). On the other hand, the [Fe ii]\({}_{25.99\mu\rm m}\) line predicted from the model is much weaker than observed. 
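The parenthesized numbers in Table 3 are the carbon-relative abundances in log scale; recomputing them for neon makes the quoted enhancement explicit:

```python
import numpy as np

# Number densities (cm^-3) from Table 3
n_C  = {"ISM": 2.95e-2, "Nova V1500 Cyg": 2.95e4, "IRS1 model": 3.72e-1}
n_Ne = {"ISM": 9.33e-3, "Nova V1500 Cyg": 6.46e4, "IRS1 model": 3.24e1}

for key in n_C:
    print(f"{key}: log n(Ne)/n(C) = {np.log10(n_Ne[key] / n_C[key]):+.2f}")
# ISM: -0.50, nova: +0.34, IRS1 model: +1.94 -> neon strongly enhanced in the model
```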
While we noted that the line detected at \(\sim\)25.9 \(\mu\)m could be either [O iv]\({}_{25.89\mu\rm m}\) or [Fe ii]\({}_{25.99\mu\rm m}\), the Cloudy model only predicts [Fe ii]\({}_{25.99\mu\rm m}\). We also believe that [Fe ii] is more plausible because the UV radiation of an O4.5III star is not hard enough to ionize O iii to O iv of which ionization potential is 54.9 eV. The reason for the weaker [Fe ii] line in the model is because most iron is in Fe iii or Fe iv as shown in Figure 9 (see also Bautista & Pradhan, 1998). The [Fe ii] line becomes stronger when the thickness of IRS1 increases. In Figure 8 and Table 5, we present two models produced by the same parameters as the reference model except for the cloud thickness: the models with the thickness twice and a half of the thickness of the reference model. In Table 5, the [Fe ii]\({}_{25.99\mu\rm m}\) and [Fe iii]\({}_{22.93\mu\rm m}\) lines are the only lines that significantly vary by thickness. This behavior is \begin{table} \begin{tabular}{c c c c c c c c} \hline \hline Model\({}^{a}\) & [Ar III] & [S IV] & [Ne II] & [Ne III] & [S III] & [Fe III]\({}^{b}\) & [Fe II] & [S III]\({}^{b}\) \\ (\(\times\) thick) & 8.99 \(\mu\)m & 10.51 \(\mu\)m & 12.81 \(\mu\)m & 15.56 \(\mu\)m & 18.71 \(\mu\)m & 22.93 \(\mu\)m & 25.99 \(\mu\)m & 33.48 \(\mu\)m \\ \hline 0.5 & 1.07E-13 (0.80) & 1.22E-13 (0.87) & 7.66E-12 (0.93) & 9.03E-13 (0.93) & 3.48E-13 (0.68) & 2.68E-12 & 7.35E-15 (0.01) & 3.62E-12 \\ 1.0 & 1.31E-13 (0.99) & 1.27E-13 (0.90) & 9.16E-12 (1.12) & 9.08E-13 (0.94) & 5.45E-13 (1.07) & 2.31E-11 & 1.84E-13 (0.20) & 6.60E-12 \\ 2.0 & 1.42E-13 (1.07) & 1.29E-13 (0.91) & 1.01E-11 (1.23) & 9.10E-13 (0.94) & 6.97E-13 (1.37) & 5.35E-11 & 7.60E-13 (0.82) & 9.50E-12 \\ \hline \end{tabular} Note. – Line flux is in erg s\({}^{-1}\) cm\({}^{-2}\). The numbers in parentheses are the relative intensities with respect to the observed fluxes. \({}^{a}\)A scale factor applied to the cloud thickness. \({}^{b}\) Not seen in the Spitzer IRS spectrum, but likely blended with dust features. \end{table} Table 5: Line Fluxes Predicted from the CLOUDY Models and Relative Intensities to the Observations Figure 9: Ionization fraction of neon (left) and iron (right) from the Cloudy models of IRS1, which is defined as the number density of each ion of an element among the total number density of the element. The solid lines are the reference model; the dashed and dotted lines are the models with twice and a half of the thickness, respectively. Ions are presented by different colors: navy (Ne ii, Fe ii), blue (Ne iii, Fe iii), and cyan (Fe iv). also seen in Figure 9 that presents the ionization fraction of neon and iron by depth. In the figure, depth is normalized and is zero at the closest side to the heating source, where the temperature is the highest. The ionization fraction is defined as the number density of each ion of an element among the total number density of the element. Figure 9 shows that the ionization fraction of iron is sensitive to the thickness compared to neon. For example, all of the neon is in Ne ii from the very inside (\(\gtrsim 0.15\) of the depth) regardless of the cloud thickness. In contrast, the fraction of iron ions changes through the whole depth depending on the cloud thickness. Therefore, the model of IRS1 could be improved by leaving the cloud thickness as a free parameter or by finding a constraint of the thickness that reproduces the observed [Fe ii]\({}_{25.99\mu\rm m}\) line. 
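That the iron lines are the only ones responding strongly to the assumed thickness can be read off Table 5 directly, for example by comparing the 0.5\(\times\) and 2\(\times\) thickness models:

```python
# Predicted fluxes (erg/s/cm^2) from Table 5 for the 0.5x and 2x thickness models
flux_05 = {"[Ne II] 12.81": 7.66e-12, "[Ne III] 15.56": 9.03e-13, "[Fe II] 25.99": 7.35e-15}
flux_20 = {"[Ne II] 12.81": 1.01e-11, "[Ne III] 15.56": 9.10e-13, "[Fe II] 25.99": 7.60e-13}

for line in flux_05:
    print(f"{line}: x{flux_20[line] / flux_05[line]:.1f}")
# [Ne II] and [Ne III] change by <~30%, while [Fe II] grows by two orders of magnitude,
# which is why the [Fe II] line is the most sensitive probe of the cloud thickness.
```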
In Cloudy model calculations, we have assumed a lack of hydrogen based on the SED models (Section 3.4). We now examine a possibility of a higher hydrogen density. In Figure 10, we present the reference model without hydrogen, i.e., \(n(\rm H)=10^{-4.9}\) cm\({}^{-3}\), and another two models including some amount of hydrogen, i.e., \(n(\rm H)=10\) and 100 cm\({}^{-3}\), with the observed SED of IRS1. The MIR spectra of the three models are identical as long as the total amount of metals and grains are retained the same, but the models with hydrogen show a group of emission lines from hydrogen in optical and NIR which have not been observed. We also searched for the H\(\alpha\) emission around IRS1 from the VST Photometric H\(\alpha\) Survey of the Southern Galactic Plane and Bulge (VPHAS+; Drew et al., 2014)8. The predicted H\(\alpha\) line flux from the three models of IRS1 are \(4.2\times 10^{-21}\), \(4.2\times 10^{-15}\), and \(1\times 10^{-13}\) erg s\({}^{-1}\) cm\({}^{-2}\) for the models with \(n(\rm H)=10^{-4.9}\), 10, and 100 cm\({}^{-3}\), respectively, after applying the extinction of \(A_{\rm V}\sim 5\) mag (from \(N_{\rm H}=9.2\times 10^{21}\) cm\({}^{-2}\); Koo et al., 2011). For comparison, the \(5\sigma\) limiting magnitude of VPHAS+ is about 20 mag (Drew et al., 2014), or \(1.84\times 10^{-15}\) erg s\({}^{-1}\) cm\({}^{-2}\) in H\(\alpha\). If IRS1 contains hydrogen, even a small amount of \(\lesssim 10\) cm\({}^{-3}\), the H\(\alpha\) emission is expected to be detected in the VPHAS+ images, but no emission has been found around IRS1. This supports our assumption that hydrogen is depleted in IRS1. We note that there is still weak continuum emission at short wavelength in the reference model without hydrogen, a little stronger than the upper limits from the observation in optical and NIR. This continuum is the reflected emission of the incident radiation field from the heating source, so it may not be observed if it is diffuse. Footnote 8: [http://www.vphasplus.org](http://www.vphasplus.org) ## 4 Crystalline Silicate in Msh 15\(-\)52 MSH 15\(-\)52 is the first SNR in which crystalline silicate is observed. Our analysis in this paper indicates that the elemental abundance of the IR compact source IRS1 where crystalline silicate has been detected is close to that of SN ejecta with depleted hydrogen and high abundance of metals, particularly neon, argon, and iron. This implies that IRS1 (and probably IRAS 15099-5856 as well) originates from the SN ejecta rather than the mass loss of the SN progenitor as has been proposed by Koo et al. (2011). If this is true, MSH 15\(-\)52, besides the existence of crystalline silicate, is a very unique object where we can directly observe dust newly formed in the ejecta of SNe Ib/c, which would have not been possible without the nearby heating source Muzzio 10. While FIR observations (e.g., AKARI, Herschel, and ISO) have revealed cold dust inside SNRs, the spatial resolutions are not high enough to disentangle the SN dust from the surrounding ISM dust and examine dust properties in detail (e.g., Sibthorpe et al., 2010; Koo et al., 2016; Chawner et al., 2020; Millard et al., 2021; Rho et al., 2023, and references therein). The cold and warm dust has been detected in MSH 15\(-\)52 as well. Millard et al. 
(2021) estimated 0.03-0.06 \(M_{\odot}\) of warm (46-52 K) and 4-15 \(M_{\odot}\) of cold (17-20 K) dust, assuming a distance of 5.2 kpc to MSH 15\(-\)52, from the two-component blackbody model fitting of the MIR to FIR spectrum obtained by the Long Wavelength Spectrometer (LWS) on board the ISO. Since their spectrum is not background-subtracted, they suggest that the warm and cold dust originate from the SN ejecta and background ISM, respectively. The dust mass of 0.03-0.06 \(M_{\odot}\) at 5.2 kpc is scaled to 0.018-0.036 \(M_{\odot}\) at 4 kpc. This is ten times larger than our results, but a direct comparison of the two dust masses is inappropriate because the LWS spectrum of MSH 15\(-\)52 with a large beam size of \(\sim\)80\(\arcsec\) (Gry et al., 2003) includes not only IRS1 but also the surrounding, diffuse emission. Figure 10: SEDs of IRS1 from the observations and Cloudy models with different hydrogen density. The red (hydrogen-deficient; \(n(\rm H)=10^{-4.9}\) cm\({}^{-3}\)) line is the reference model, the same as the red line in Figure 8. The blue (\(n(\rm H)=10\) cm\({}^{-3}\)) and purple (\(n(\rm H)=10^{2}\) cm\({}^{-3}\)) lines are the models with hydrogen. Dust formation in the SN ejecta of Type II SNe (SNe II) has been widely studied by theoretical calculations (e.g., Sarangi & Cherchneff, 2013, 2015; Brooker et al., 2022) as well as by observations of young SNRs, e.g., Cas A (De Looze et al., 2017), SN 1987A (Matsuura et al., 2015), G54.1+0.3 (Rho et al., 2018), and the Crab Nebula (Gomez et al., 2012). In contrast, few studies have thus far been carried out for dust condensation in the ejecta of SNe Ib/c. Observational signatures of dust formation in the SN Ib/c ejecta have been discovered only for a few SNe in the nebular phase: SN Ib 1990I (Elmhamdi et al., 2004), SN Ib(n) 2006jc (Di Carlo et al., 2008; Smith et al., 2008), SN Ic 2020oi (Rho et al., 2021), and SN Ic 2021krf (Ravi et al., 2023). The molecules which become dust seeds in the SN ejecta and their chemical reactions would not be very different between SNe II and Ib/c, but the environments in which dust condensation occurs would differ depending on the SN type and progenitor star. For example, even within the same SN type of IIP, the amount of dust formed in the ejecta and the degree of grain growth predicted by dust condensation models highly depend on the conditions of the SN explosion such as progenitor mass, explosion energy, mass of \({}^{56}\)Ni, or the clumpy structure of the ejecta (Sarangi & Cherchneff, 2013, 2015; Brooker et al., 2022). The observations of dust signatures in the SN Ib/c ejecta listed above indicate that the onset of dust formation is much earlier for SNe Ib/c, found to be 50-70 days after the SN explosion (Di Carlo et al., 2008; Smith et al., 2008; Rho et al., 2021; Ravi et al., 2023) except for SN 1990I with 230 days (Elmhamdi et al., 2004), than for SNe II, for which it is estimated to occur later than a few hundred days after the explosion (Sarangi et al., 2018, and references therein). The reason for the early dust condensation in the ejecta of SNe Ib/c is thought to be the rapid decrease in the gas temperature (Nozawa et al., 2008). For SNe Ib/c, the ejected masses are smaller and the expansion velocities are higher than in SNe II because the SN Ib/c progenitors have lost most of their hydrogen/helium envelopes before explosion. This leads to a lower gas density in the ejecta, and the gas temperature drops more quickly than in typical SNe II. Nozawa et al.
(2008) calculated dust formation in the SN 2006jc applying the SN Ib model of a relatively low-mass (6.9 \(M_{\odot}\)) helium star progenitor with an ejecta mass of 4.9 \(M_{\odot}\). Their calculation predicts the gas temperature reaching a typical dust condensation temperature ranges of 1,000-2,000 K between 50 and 200 days after the SN explosion, which means that dust forms much closer to the explosion center in the ejecta of SNe Ib/c than in the SNe II ejecta (Figure 1 of Nozawa et al., 2008). A similar process of dust formation might have occurred in MSH 15\(-\)5\(2\) of which progenitor is also speculated as a low-mass helium star. The different environmental conditions may bring distinctive characteristics of dust formed in the SN Ib/c ejecta, for example, the formation of crystalline silicate or crystallization of amorphous silicate. If silicates are formed at high (\(>\)1,000 K) temperature, the crystalline lattice structure is the most favorable state (Molster & Kemper, 2005). In a clumpy ejecta, stoichiometric silicate (Mg\({}_{2}\)SiO\({}_{4}\)) is predicted to be formed at high densities (Sarangi & Cherchneff, 2015). Therefore, the condensation of crystalline silicates could take place in the SN ejecta, or silicates first formed in amorphous structure could be crystallized if there is a high-energy process such as heating by pulsar wind nebula, although the condensation temperature and condensing phase of dust are not simply determined but associated with several factors such as gas pressure or gas kinematics (Nagahara et al., 2009; Gail et al., 2013). Then, assuming that crystalline silicates can form in SN dust, why have crystalline silicates not been found in other SNRs except MSH 15\(-\)5_2_? This is probably because it is difficult to observe unshocked SN dust without a heating source such as Muzzio 10. Crystalline silicates, if present, can be undetected in FIR because they exhibit very weak or no spectral signatures at wavelengths longer than 40 \(\mu\)m except for the 69 \(\mu\)m feature (Koike et al., 2003; Sturm et al., 2013). While our results suggest a possibility that the crystalline silicate of IRS1 originates from the SN ejecta, the current observational data with limited spatial and spectral resolution can neither confirm the ejecta origin nor rule out the progenitor origin. The Gemini/T-ReCS images show a slightly different spatial distribution between [Ne II] 12.81 \(\mu\)m and Qa 18.30 \(\mu\)m in a spatial scale less than one arcsecond (Figure 2). This is likely because the [Ne II] line does not trace the Mg silicate but dust with a very smooth spectrum such as Al\({}_{2}\)O\({}_{3}\)(Arendt et al., 2014). The spatial distributions of various ionic lines will be required to examine a correlation between the gas and dust features associated with crystalline silicate. The Spitzer IRS spectrum of IRS1 did not resolve the velocity of the gas component. Previously, Koo et al. (2011) favored the progenitor origin of IRS1 based on the low (\(-160\pm 560\) km s\({}^{-1}\)) central velocity of the [Ne II] line. The velocities of the other lines except [Fe II] 25.99 \(\mu\)m are similarly a few hundreds km s\({}^{-1}\) with an average of \(-444\) km s\({}^{-1}\), but the uncertainties are huge as [Ne ii]. The [Fe ii] line exceptionally shows a large velocity of \(-1710\pm 280\) km s\({}^{-1}\). 
This may imply a different origin of [Fe ii] from the other lines, but it can be due to an inaccurate measurement of the velocity since the [Fe ii] line is weak and embedded between strong dust features (Figure 3). To confirm the origin of the crystalline silicate in MSH 15\(-\)5\(2\) and explore the possibility of the formation of crystalline silicates in SN ejecta, further observations as well as theoretical investigations are required. Particularly, it is crucial to examine the spatial distributions of gas and dust in IRAS 15099-5856 through the MIR observations with high resolution and sensitivity (e.g., JWST/MIRI). If the origin of the crystalline silicate is found to be the SN ejecta, MSH 15\(-\)5\(2\) will give an unprecedented opportunity to investigate dust formation in SN ejecta, which is not yet clearly known. ## 5 Summary and Conclusion We have presented the MIR imaging observations and analysis of the compact IR source IRS1 of IRAS 15099-5856 in the SNR MSH 15\(-\)5\(2\), which is the first and only object with crystalline silicate dust associated with SNRs so far. The MIR images obtained by using Gemini/T ReCS revealed the morphology of IRS1 and spatial distributions of gas and dust at a spatial resolution of \(\lesssim 1\arcsec\). We have also presented the analysis of the Spitzer IRS spectrum of IRS1 that was previously investigated with the models of thermal emission from multiple independent dust components (Koo et al., 2011). In this paper, we have analyzed the ionic lines and modeled the spectrum considering the geometry and energy balance to derive the chemical abundance of gas as well as dust parameters. The derived abundance is close to that of SN ejecta with poor hydrogen and enhanced metals. This suggests the ejecta origin for the crystalline silicate and may imply the possibility of the formation of crystalline silicate in SN ejecta, but the current observational data are still limited in spatial and spectral resolution. If the origin of the crystalline silicate in IRAS 15099-5856 is confirmed as the SN ejecta by future observations, MSH 15\(-\)5\(2\) will be a very unique, invaluable object that proves the formation of crystalline silicate in SN ejecta and where we can directly observe newly-formed dust in the ejecta of SNe Ib/c. In the following, we summarize our main results. 1. The Gemini/T-ReCS images show a complicated, extended morphology of IRS1 with bright clumps and diffuse emission in [Ne ii] 12.81 \(\mu\)m and Qa 18.30 \(\mu\)m. The [Ne ii]cont image with no emission and the Si-6 image with the almost same emission as [Ne ii] indicate that there is no other line or strong continuum emission. The T-ReCS images confirm the previous prediction (Koo et al., 2011) that IRS1 is extended and externally heated by the nearby O star Muzzio 10. We estimated the [Ne ii] and Qa flux of IRS1 from the T-ReCS images and compared them with the flux derived from the Spitzer IRS spectrum. 2. The Spitzer spectrum of IRS1 shows prominent dust features at 23, 27, and 34 \(\mu\)m that can be explained by crystalline silicate dust. We also detected several ionic lines of [Ar iii] 8.99 \(\mu\)m, [S iv] 10.51 \(\mu\)m, [Ne ii] 12.81 \(\mu\)m, [Ne iii] 15.56 \(\mu\)m, [S iii] 18.71 \(\mu\)m, and [O iv] 25.89/[Fe ii] 25.99 \(\mu\)m. The [O iv] and [Fe ii] are not resolved at the spectral resolving power of the Spitzer IRS LL module, but it is likely [Fe ii]. The estimated line flux is from 0.13 to 8.21 \(\times 10^{-12}\) erg s\({}^{-1}\) cm\({}^{-2}\). 
The line widths are comparable to the spectral resolving power, i.e., the velocity is not resolved. 3. We compared the line ratios of [Ne iii]\({}_{15.56}\)\(\mu\)m/[Ne ii]\({}_{12.81}\)\(\mu\)m versus [S iv]\({}_{10.51}\)\(\mu\)m/[S iii]\({}_{18.71}\)\(\mu\)m of various astronomical objects with the observed line ratios of IRS1 on a model grid generated by Cloudy (Ferland et al., 2013). The line ratio diagram shows that the abundance of IRS1 is rather close to the nova abundance with enhanced neon. The absence of hydrogen lines in the Spitzer spectrum further suggests that hydrogen is depleted in IRS1. 4. We modeled the Spitzer spectrum of IRS1 using the photoionization code Cloudy (Ferland et al., 2013). We assumed that the cloud IRS1 is externally heated by an O4.5III star (Muzzio 10) separated by 0.45 pc. For the gas, the nova abundance was initially adopted but with hydrogen depleted. For the dust species, crystalline olivine (Mg\({}_{1.9}\)Fe\({}_{0.1}\)SiO\({}_{4}\)), FeO, Mg\({}_{0.6}\)Fe\({}_{0.4}\)O, and amorphous silicate were included. Spherical dust grains of 0.25 \(\mu\)m (FeO) and 0.1 \(\mu\)m (the others) were assumed. We first fitted the dust features and then adjusted the abundances of nitrogen, oxygen, neon, sulphur, argon, and iron to find a model that reproduces the observed lines. We have derived a reference model that fairly well fits the Spitzer spectrum and discussed the factors that affect the models. 5. The reference model fits the dust features at 27 and 34 \(\mu\)m, while it does not fit well the 23 \(\mu\)m feature and the steeply-increasing continuum between 15 and 20 \(\mu\)m. The derived total dust mass is \(2.4\times 10^{-3}\)\(M_{\odot}\).
2306.06917
Video Decoding Energy Reduction Using Temporal-Domain Filtering
In this paper, we study decoding energy reduction opportunities using temporal-domain filtering and subsampling methods. In particular, we study spatiotemporal filtering using a contrast sensitivity function and temporal downscaling, i.e., frame rate reduction. We apply these concepts as a pre-filtering to the video before compression and evaluate the bitrate, the decoding energy, and the visual quality with a dedicated metric targeting temporally down-scaled sequences. We find that decoding energy savings yield 35% when halving the frame rate and that spatiotemporal filtering can lead to up to 5% of additional savings, depending on the content.
Christian Herglotz, Matthias Kränzler, Robert Ludwig, André Kaup
2023-06-12T07:38:09Z
http://arxiv.org/abs/2306.06917v1
# Video Decoding Energy Reduction Using Temporal-Domain Filtering

###### Abstract.

In this paper, we study decoding energy reduction opportunities using temporal-domain filtering and subsampling methods. In particular, we study spatiotemporal filtering using a contrast sensitivity function and temporal downscaling, i.e., frame rate reduction. We apply these concepts as a pre-filtering to the video before compression and evaluate the bitrate, the decoding energy, and the visual quality with a dedicated metric targeting temporally downscaled sequences. We find that decoding energy savings yield 35% when halving the frame rate and that spatiotemporal filtering can lead to up to 5% of additional savings, depending on the content.

video compression, codec, decoder, energy consumption, temporal filtering

## 1. Introduction

Section 4 presents our evaluation setup and investigates the performance of the proposed prefiltering approach. Finally, Section 5 concludes this paper.

## 2. Literature Review

In the literature, it is well known that quantization and subsampling techniques can lead to substantial power and energy savings (Srivastava et al., 2016).
In terms of quantization, it is often reported that a lower number of bits leads to a lower decoder energy consumption (Srivastava et al., 2016). Also in terms of spatial scaling, it was found that a lower resolution can reduce the power consumption of a smartphone (Srivastava et al., 2016). Furthermore, considering the frame rate, it is well known that lower frame rates lead to a reduced power consumption on versatile devices (Srivastava et al., 2016; Srivastava et al., 2016), but the amount of savings was not assessed. In (Srivastava et al., 2016), it was shown that depending on the video content, optimal frame rate estimates can be obtained to optimize the rate-distortion performance, however, the impact on the energy consumption was not discussed. Frame rate reduction is mostly performed by frame averaging (Srivastava et al., 2016). The reason is that using frame averaging, a capturing process is simulated where the shutter angle is maximized. As a consequence, moving objects are blurred. It is noteworthy that the high-frame-rate source sequence should also be captured at a very high shutter angle because otherwise, strong ghosting artifacts can occur due to object repetitions. Therefore, we select frame averaging in this paper. Concerning contrast sensitivity, various studies focused on the spatial contrast sensitivity (Srivastava et al., 2016; Srivastava et al., 2016; Srivastava et al., 2016), the temporal sensitivity (Srivastava et al., 2016; Srivastava et al., 2016), or both at the same time (Srivastava et al., 2016; Srivastava et al., 2016). It was found that for both spatial and temporal frequencies, the contrast sensitivity can be described by a curve separating visible and invisible components of a visual signal. Considering the contrast sensitivity with respect to both temporal and spatial frequencies, it was found that a convex surface in 3D-space accurately describes the limits of human vision (Srivastava et al., 2016). In the field of video processing, the concept of temporal contrast sensitivity was validated in (Srivastava et al., 2016). Furthermore, contrast sensitivity was applied for quality assessment (Srivastava et al., 2016) and to enhance video encoding (Beng et al., 2016). In (Srivastava et al., 2016), the spatiotemporal envelope of the HVS was investigated using the visibility of temporal aliasing artifacts. Furthermore, the STCSF was used to analyze the effects of capturing and displaying a video (Srivastava et al., 2016). ## 3. Temporal Pre-Filtering ### Video as a Spatiotemporal Signal For formalization of our methods, we define the three-dimensional, discrete video signal \(s[\mathbf{x},t]\), where \(s\) is the greyscale luminance value of the YCbCr signal, \(\mathbf{x}=\{x_{\text{ver}},x_{\text{hor}}\}\) the two-dimensional spatial pixel position, and \(t\) the time index (see Fig. 1). Using this notation, we can transform the signal to the frequency domain as \(S[\mathbf{u},w]\), where \(\mathbf{u}\) corresponds to the horizontal and vertical spatial frequencies in the unit cycles per pixel [cpp] and \(w\) to the temporal frequency in the unit frames per second (fps). For simplification, it is common to only consider a single spatial frequency \(u\) because of rotational invariance (Srivastava et al., 2016), which means that an oscillation in any spatial direction can be transformed to a single dimension by rotation. This leads to the simplified definition of the signal \(s[\mathbf{x},t]\) and \(S[\mathbf{u},w]\). 
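As a concrete illustration of the frame-averaging approach selected above, the following minimal NumPy sketch halves the frame rate of a luma-only clip; the implementation details (dropping a trailing odd frame, rounding back to the input data type) are our own simplifications.

```python
import numpy as np

def halve_frame_rate(video: np.ndarray) -> np.ndarray:
    """Temporal downscaling by frame averaging, emulating a maximal shutter angle.

    video: luma samples with shape (T, H, W); a trailing odd frame is dropped.
    Returns an array of shape (T // 2, H, W) in which each output frame is the
    mean of two consecutive input frames.
    """
    T = video.shape[0] - (video.shape[0] % 2)
    pairs = video[:T].reshape(T // 2, 2, *video.shape[1:])
    return np.rint(pairs.mean(axis=1)).astype(video.dtype)
```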
Figure 1. Definition of the visual video signal \(s[\mathbf{x},t]\).
In the next step, we need to transform the visual signal, which is given in the YCbCr space, into the contrast domain. For this, we follow a concept presented in (Srivastava et al., 2016). We thus define the contrast \(c\) of a visual signal by the luminance variation divided by the mean luminance as \[c=\frac{s_{\text{max}}-s_{\text{min}}}{s_{\text{mean}}}, \tag{1}\] where, to avoid conversion into physical values, we directly use the luminance component from the YCbCr signal. Hence, \(s_{\text{max}}\) and \(s_{\text{min}}\) are the maximum and the minimum luminance of the video signal, respectively. We set the mean luminance to the mean luminance of the complete video. The contrast is a global property of a video such that it cannot be defined for a single pixel position \([\mathbf{x},t]\). Therefore, we define the contrast in the frequency domain as \[C[\mathbf{u},w]=\frac{|S[\mathbf{u},w]|}{s_{\text{mean}}}, \tag{2}\] where the magnitude of a frequency component \(|S[\mathbf{u},w]|\) corresponds to the amplitude of the waveform at the spatiotemporal frequency \([\mathbf{u},w]\). As such, the amplitude represents the luminance variation of a certain spatiotemporal frequency \([\mathbf{u},w]\) in the video signal \(s[\mathbf{x},t]\). ### Spatiotemporal Contrast Sensitivity Function Using the spatiotemporal contrast sensitivity function (STCSF) as defined in (Srivastava et al., 2016; Srivastava et al., 2016), our goal is now to detect frequency components that are invisible to the HVS and remove them prior to compression. For this, we consider the STCSF as defined on the 2D \(\{u,w\}\)-space. An example for such a STCSF, based on (Srivastava et al., 2016; Srivastava et al., 2016), is visualized in Fig. 2. The surface in the 3D space illustrates the contrast sensitivity (vertical axis) depending on the spatial frequency and the temporal frequency (horizontal axes). The contrast sensitivity is defined as the reciprocal of the minimum visible contrast using the contrast as defined in Eq. (2). Note that this sensitivity is a highly simplified representation of the HVS's limits. For example, the impact of eye movement, e.g., when tracking objects in the video, is neglected. Still, as was also done in the literature (Srivastava et al., 2016), we take the contrast sensitivity as a baseline for our work. Unfortunately, in the literature, we were not able to find a closed-form solution for the STCSF. Thus, as a first approximation, we propose to construct the STCSF as follows. First, we consider the contrast sensitivity experiments reported in (Kirshner et al., 2017), which were later discussed in detail in (Kirshner et al., 2017). Here, it was shown that the values of the experimental cut-off frequencies, i.e. the maximum frequencies that are visible, yield approximately \(f_{\text{grad}}=32\,\text{cpd}\) for the spatial and \(f_{\text{temp}}=32\,\text{Hz}\) for the temporal frequency, respectively, where cpd corresponds to the unit cycles per degree. Hence, neglecting the units, the values of the frequencies are very close. Furthermore, for lower spatial and temporal frequencies, as it is also visible in Fig. 2, the surface of the STCSF shows that it is approximately rotationally invariant with respect to the spatiotemporal plane (again neglecting the units).
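For reference, Eqs. (1) and (2) translate almost directly into code. The snippet below is our own minimal example (the array layout is an assumption), relying only on the two definitions above:

```python
import numpy as np

def global_contrast(s):
    """Eq. (1): luminance variation divided by the mean luminance."""
    return (s.max() - s.min()) / s.mean()

def frequency_contrast(s):
    """Eq. (2): per-bin contrast C[u, w] = |S[u, w]| / s_mean.

    The magnitude of each FFT bin is taken as the amplitude of the corresponding
    spatiotemporal waveform and normalized by the mean luminance of the video.
    """
    return np.abs(np.fft.fftn(s)) / s.mean()

s = np.random.rand(64, 128, 128)   # frames x height x width, luminance only
print(global_contrast(s), frequency_contrast(s).shape)
```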
Thus, we define the general spatiotemporal frequency \[f_{\text{st}}=\sqrt{f_{\text{hor}}^{2}+f_{\text{ver}}^{2}+f_{\text{temp}}^{2}} \tag{3}\] as the Euclidean norm of the horizontal as well as the vertical frequency \(f_{\text{hor}}\) and \(f_{\text{ver}}\), respectively, and the temporal frequency \(f_{\text{temp}}\). Note that \(f_{\text{st}}\) does not have a physical unit and is only meaningful when using spatial frequencies in terms of cpd and temporal frequencies in terms of Hz. In the next step, due to the rotational invariance property, we adopt a fit for the contrast sensitivity function defined for the spatial frequency that was proposed in (Kirshner et al., 2017) and reads \[\gamma(f_{\text{st}})=g\left(\text{sech}\left(\left(\frac{f_{\text{st}}}{f_{ 0}}\right)^{p}\right)-\alpha\cdot\text{sech}\left(\frac{f_{\text{st}}}{f_{1}} \right)\right). \tag{4}\] In (Kirshner et al., 2017), it was reported that this function has the lowest root-mean square error with respect to all tested functions. The operator \(\text{sech}()\) is the hyperbolic secant, \(f_{0}\) and \(f_{1}\) are high- and low-frequency scales, \(p\) an exponent for the high-frequency part, \(\alpha\) is an attenuation factor at low frequencies, and \(g\) the gain which scales to the reported contrast sensitivities in (Kirshner et al., 2017). The spatiotemporal frequency \(f_{\text{st}}\) is used as the argument. We adopt the parameter values proposed in (Kirshner et al., 2017), see Table 1. With this representation, we have an analytic description of the contrast sensitivity that we exploit to identify invisible frequency components in the video signal. As this approach is an approximation, we believe that more accurate approaches might improve the results. As a first step in this direction, we test different scaling factors in the evaluation (Section 4) and show how the compression performance changes when using different specifications of the STCSF. To this end, we have to convert video-domain spatial and temporal frequencies \((u,w)\) to the physical domain. Due to the usage of the FFT, the temporal frequency is calculated by \[f_{\text{temp}}=\frac{w\cdot f_{\text{frame}}}{N_{\text{temp}}}, \tag{5}\] where \(w\) is the temporal frequency index, \(f_{\text{frame}}\) the frame rate in fps, and \(N_{\text{temp}}\) the length of the FFT in the temporal domain. Concerning the spatial domain, we have to convert the pixel position given in the \(u\)-domain to the angular domain. To this end, we adopt the so-called Designed Viewing Distance (DVD) as defined in (Dewinger et al., 2017), which assumes that the distance between two pixels corresponds to one arcminute angular distance on the retina of the eye. With this representation, we can avoid the use of metric units, which are otherwise needed in term of the pixel distance and the viewer's distance to the screen. This leads to the conversion factor \(\Gamma=60\,\frac{\text{cpd}}{\text{pixel}}\). Consequently, the spatial frequencies can be calculated by \[f_{\text{hor}}=\frac{u_{\text{hor}}\cdot\Gamma}{N_{\text{hor}}} \tag{6}\] and \[f_{\text{ver}}=\frac{u_{\text{ver}}\cdot\Gamma}{N_{\text{ver}}}, \tag{7}\] where \(\{u_{\text{ver}},u_{\text{hor}}\}=u\) and \(N_{\text{hor}},N_{\text{ver}}\) are the horizontal and vertical length of the FFT, respectively. ### Signal Pruning Our proposed method to remove invisible frequency components is illustrated in Fig. 3. 
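As a concrete reference for the sensitivity model above, the sketch below (our own approximation; the helper names are invented, the parameters are those of Table 1, and the conversion uses the DVD factor of 60) evaluates the fit of Eq. (4) at the physical frequency of an FFT bin obtained from Eqs. (3) and (5)-(7):

```python
import numpy as np

F0, F1, ALPHA, P, G = 4.1726, 1.3625, 0.8493, 0.7786, 373.08   # Table 1 (HPmH fit)
GAMMA_DVD = 60.0   # Designed Viewing Distance: one pixel corresponds to one arcminute

def sech(v):
    return 1.0 / np.cosh(v)

def csf(f_st):
    """Eq. (4): contrast sensitivity at the spatiotemporal frequency f_st."""
    return G * (sech((f_st / F0) ** P) - ALPHA * sech(f_st / F1))

def spatiotemporal_frequency(u_hor, u_ver, w, n_hor, n_ver, n_temp, frame_rate):
    """Eqs. (3) and (5)-(7): physical frequency of the FFT bin (u_hor, u_ver, w)."""
    f_hor = u_hor * GAMMA_DVD / n_hor       # cycles per degree
    f_ver = u_ver * GAMMA_DVD / n_ver       # cycles per degree
    f_temp = w * frame_rate / n_temp        # Hz
    return np.sqrt(f_hor ** 2 + f_ver ** 2 + f_temp ** 2)

# sensitivity at a low-frequency bin of a 1024 x 512 x 512 FFT at 120 fps
f_st = spatiotemporal_frequency(1, 1, 1, 1024, 512, 512, 120.0)
print(csf(f_st))
```

The resulting sensitivity value is then compared against the measured contrast of the corresponding bin, as described in the pruning step.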
First, we perform a three-dimensional fast Fourier transform (FFT) on the luminance part of the full input video \(s[\mathbf{x},t]\). Afterwards, we compare the magnitude of each frequency component \(|S[\mathbf{u},w]|\) with the minimum visible contrast \(\beta\cdot\gamma(f_{\text{st}})=\beta\cdot\gamma([\mathbf{u},w])\) and obtain the binary mask \[M[\mathbf{u},w]=|S[\mathbf{u},w]|>\beta\cdot\gamma([\mathbf{u},w]), \tag{8}\] which can obtain a value of either one or zero. Furthermore, we define the scaling factor \(\beta\) to test different scales of the STCSF. Afterwards, we multiply the mask with the transformed video signal entry-wise as \(M[\mathbf{u},w]\cdot S[\mathbf{u},w]\) to remove the invisible frequency components. The resulting signal is inversely transformed (IFFT). After the inverse transform, we perform the temporal downscaling by frame averaging, where we choose integer downscaling factors of two and four. Finally, the resulting video is compressed by a standard video encoder.
\begin{table} \begin{tabular}{c|c|c|c|c|c} \hline Parameter & \(f_{0}\) & \(f_{1}\) & \(\alpha\) & \(p\) & \(g\) \\ \hline Value & 4.1726 & 1.3625 & 0.8493 & 0.7786 & 373.08 \\ \hline \end{tabular} \end{table} Table 1. Fitted parameter values for the STCSF in the HPmH format taken from (Kirshner et al., 2017).
Figure 2. A fitted surface representing the spatiotemporal contrast sensitivity using Eq. (4) from (Kirshner et al., 2017). The color indicates the magnitude of the contrast sensitivity.
Figure 3. Workflow for removing invisible frequency components. The FFT is performed on all three dimensions.
## 4. Evaluation We use the 22 sequences from the BVI-HFR dataset (Krizhevsky et al., 2014) which are provided at HD resolution. In the temporal domain, we select 512 frames at the original frame rate of 120 fps. We downsample the videos spatially to a resolution of \(910\times 512\) pixels, which keeps the aspect ratio and simplifies the application of the FFT due to reduced memory requirements (FFT size of \(1024\times 512\times 512\)). Due to the restriction that the FFT is performed on signals with a length that is a power of two, without spatial downsampling we would have to use a FFT size of \(4096\times 2048\times 512\) to allow proper transform of the video with pixel height 1080, which is a sixteen-fold increase in memory and complexity. The horizontal spatial dimension is padded with zeros. For encoding, we use the x265 encoder (Beng et al., 2015) at medium preset with the standard constant rate factors (crf) of \(18,23,28,33\), and \(38\). We encode the sequences at all frame rates and with the scaling factors \(\beta\in\{0,0.01,0.05,0.2\}\), where \(\beta=0\) corresponds to no filtering. Concerning quality evaluation, we select a dedicated quality metric targeting temporally downscaled videos that is called Space-Time Generalized Entropic Differences (ST-GREED) (Krizhevsky et al., 2014). It uses statistics on spatial and temporal bandpass coefficients to come up with a quality estimate using a learned regressor. It was explicitly trained on sequences at varying frame rates. Note that a lower GREED score reflects a higher visual quality. We refrain from using classic metrics such as PSNR or VMAF because they do not consider temporal phenomena. Concerning the decoding energy evaluation, we perform energy measurements for OpenHEVC decoding (Beng et al., 2015) on an Intel Core i5-4670 CPU with the help of running average power limit (RAPL) (Bianchi et al., 2017).
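Referring back to the pruning workflow of Section 3.3, the following sketch (a simplified, luminance-only illustration of our own; the helper names and the compact sensitivity fit are assumptions rather than the authors' implementation) chains the visibility mask of Eq. (8), the inverse transform, and frame averaging:

```python
import numpy as np

F0, F1, ALPHA, P, G = 4.1726, 1.3625, 0.8493, 0.7786, 373.08   # Table 1 parameters

def csf(f_st):
    """Contrast sensitivity fit of Eq. (4) at spatiotemporal frequency f_st."""
    sech = lambda v: 1.0 / np.cosh(v)
    return G * (sech((f_st / F0) ** P) - ALPHA * sech(f_st / F1))

def prune_and_downscale(s, frame_rate, beta=0.05, down_factor=2):
    """Apply the visibility mask of Eq. (8), transform back, and average frames."""
    n_t, n_ver, n_hor = s.shape
    S = np.fft.fftn(s)
    # physical frequency of every FFT bin: Eqs. (5)-(7) with the DVD factor of 60
    f_t = np.abs(np.fft.fftfreq(n_t)) * frame_rate        # Hz
    f_v = np.abs(np.fft.fftfreq(n_ver)) * 60.0            # cycles per degree
    f_h = np.abs(np.fft.fftfreq(n_hor)) * 60.0            # cycles per degree
    f_st = np.sqrt(f_t[:, None, None] ** 2 + f_v[None, :, None] ** 2 + f_h[None, None, :] ** 2)
    mask = np.abs(S) > beta * csf(f_st)                    # Eq. (8): keep only visible components
    s_filtered = np.real(np.fft.ifftn(S * mask))
    s_filtered = s_filtered[: n_t - n_t % down_factor]     # frame averaging (temporal downscaling)
    return s_filtered.reshape(-1, down_factor, n_ver, n_hor).mean(axis=1)

out = prune_and_downscale(np.random.rand(64, 96, 96), frame_rate=120.0)
print(out.shape)   # (32, 96, 96)
```

The masked-and-averaged frames would then be handed to the encoder (x265 in the evaluation above), which is not shown here.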
To ensure reliable measurements, we measure the decoding energy of each video bit stream multiple times until statistical validity is reached as explained in (Krizhevsky et al., 2014). We evaluate the performance as follows. First, we inspect rate-distortion curves as well as decoding-energy-distortion curves. Afterwards, we evaluate the compression performance in terms of the Bjantegaard-Delta (Bianchi et al., 2017). ### Performance Curves First, we evaluate filtering and temporal downscaling by rate-distortion (RD) curves illustrated in Fig. 4. The figure shows RD curves for three selected sequences, namely Bobblehead, Guitar Focus, and Water Splashing. #### 4.1.1. Impact of Temporal Downscaling Concerning the impact of the frame rate (red \(\hat{=}120\) fps, green \(\hat{=}60\) fps, blue \(\hat{=}30\) fps), we observe that the RD-performance highly depends on the content of the sequence. First, for the Bobblehead sequence (left), which includes highly structured temporal and spatial frequencies (a rotating roulette), temporal downscaling leads to strong quality degradations (the curves for 60 fps and 30 fps are located above the curve for 120 fps). For this sequence, temporal downscaling only leads to a higher RD-performance at very low qualities (above a GREED score of 40). Second, for the Guitar Focus sequence (center of Fig. 4), temporal downscaling always leads to a better RD performance (the 30 fps curve is the lowest). The Guitar Focus sequence is captured with a static camera and shows a static guitar with only slight hand and string movement. Hence, temporal downscaling leads to minor visual artifacts such that it is beneficial for the RD performance. Interestingly, the quality returned by the GREED score even increases for a lower frame rate, which is unexpected. The reason is that rate control in x265 allocates more bits for each frame at lower frame rates, because fewer frames have to be transmitted per second. Consequently, the quality for the reduced number of frames is increased significantly, which is sufficient to outweigh the quality loss of temporal downscaling. Third, we show results for the Water Splashing sequence (Fig. 4, right), which includes highly random spatial and temporal frequencies. In this case, the original sequence at 120 fps can reach highest qualities, similar to the Bobblehead sequence. In contrast, temporal downscaling leads to a better RD performance at a lower GREED score of roughly 30. Summarizing, we find that the impact of temporal downscaling on the RD performance highly depends on the content of the sequence. The method is highly effective when scenes are static and most ineffective when there are highly structured spatial and temporal frequencies. #### 4.1.2. Impact of Spatiotemporal Filtering The impact of filtering is visualized by the line style (solid for no filtering \(\beta=0\), dashed for \(\beta=0.01\), dotted for \(\beta=0.05\), and dashed-dotted for \(\beta=0.2\)). We observe that the impact on the RD performance is much smaller than for temporal downscaling, which can be expected because the number of samples to be encoded is not changed. Similar to downscaling, we can see that the impact of filtering highly depends on the sequence. For Water Splashing at 120 fps and high visual qualities (red lines), filtering increases the RD performance slightly. We also observe improvements for the Guitar Focus sequence at 30 fps between GREED scores of 15 and 16. 
In some cases, however, filtering leads to a lower RD performance (e.g., Bobblehead and Guitar Focus at 120 fps). #### 4.1.3. Decoding Energy Savings The decoding energy versus the quality is plotted in Fig. 5 for the same sequences. For the Bobblehead and the Water Splashing sequence (left and right), we find that at low qualities (GREED score above 35), temporal downscaling leads to a lower energy consumption. For the Guitar Focus sequence, similar to the RD performance, temporal downscaling is always the better choice. With respect to filtering, we again find that the impact is much lower than the impact of temporal downscaling. Some decoding energy savings are obtained when also RD savings are observed (e.g., the Guitar Focus sequence at 30 fps and a GREED score of 15.5, Water Splashing at 120 fps and GREED scores below 25).
Figure 4. Visual quality in terms of GREED with respect to the bitrate for three sequences with different spatiotemporal characteristics.
Figure 5. Visual quality in terms of GREED with respect to the decoding energy for three sequences with different spatiotemporal characteristics.
### Average Savings To assess the amount of savings, we calculate average bitrate savings and average decoding energy savings over a constant visual quality using the Bjøntegaard-Delta (Bianchi et al., 2017). We use Akima interpolation as suggested in (Krizhevsky et al., 2014) and select the GREED score as the quality metric. To calculate decoding energy savings, we replace the bitrate with the decoding energy in the BD calculus as performed in (Kumar et al., 2017). The reference for BD calculations is the compression of the unfiltered sequence at 120 fps. It is important to mention that the average savings are calculated over the overlapping range of GREED scores. Consequently, the BD values are valid for different quality ranges. For example, in the case of Bobblehead, the overlap at different frame rates ranges from roughly 35 to 40, and for Guitar Focus from roughly 15 to 17. #### 4.2.1. Temporal Downscaling Table 2 lists BD-rate savings and BD-decoding energy savings for temporal downscaling to 60 fps and 30 fps. We neglect sequences where no overlap of GREED scores occurred. The table shows that on average, significant bitrate reductions as well as decoding energy reductions can be obtained. Concerning the rate, we find mean rate savings of more than 6% for 60 fps and more than 20% for 30 fps. However, the range of values shows a very high variability. While the highest rate savings reach up to 93% (for a static sequence like Guitar Focus), in some cases the rate even increases (e.g., Bobblehead). This, again, proves that temporal downscaling should only be performed for certain content. Regarding the decoding energy, mean savings are significantly higher (35% for 60 fps and 54% for 30 fps). Also, the variability of savings is lower, but still significant (between 1% and 62% for 60 fps and between 45% and 84% for 30 fps). In general, we find that high decoding energy savings occur when we also observe high bitrate savings.
\begin{table} \begin{tabular}{c||c|c|c||c|c|c} & \multicolumn{3}{c||}{BD-rate} & \multicolumn{3}{c}{BD-Decoding Energy} \\ & min & mean & max & min & mean & max \\ \hline 60 fps & \(-76.75\) & \(-6.52\) & \(215.55\) & \(-61.84\) & \(-35.01\) & \(-1.13\) \\ 30 fps & \(-92.88\) & \(-21.41\) & \(125.09\) & \(-83.99\) & \(-54.16\) & \(-45.65\) \\ \end{tabular} \end{table} Table 2. Relative bitrate savings and decoding energy savings for different frame rates in percent.
#### 4.2.2. Spatiotemporal Filtering Concerning the filtering, we observe more variability. We calculate BD values of the filtered and compressed sequences with respect to the unfiltered sequences at the same frame rate. This highlights the pure impact of filtering, independent from frame rate changes. Analyzing these values, we find that in 40% and 42% of cases (i.e. sequences and scaling factors \(\beta\)), the bitrate and the decoding energy, respectively, are reduced. Corresponding maximum rate and energy savings yield 7.7% and 5.6%, respectively, where both are observed for the Sparkler sequence. For the Water Splashing sequence (right of Figs. 4 and 5), maximum savings yield 2% bitrate and 4.5% decoding energy savings, respectively. Averaging over all sequences, mean bitrate and energy savings are marginal for all scaling factors (absolute mean savings smaller than 0.5%). Future work could further investigate this behavior, identify relations between the content and actual savings, and develop a content-adaptive filtering solution. ## 5. Conclusion This paper analyzed the impact of a spatiotemporal filtering technique combined with temporal downscaling on visual quality, bitrate, and decoding energy. Our evaluations indicate that frame rate reduction is a powerful method to reduce the bitrate and the decoding energy substantially. When halving the frame rate, we observe mean bitrate savings of 6.5% and mean decoding energy savings of 35%. Concerning filtering, we find that for certain video content, the bitrate and the decoding energy can be further reduced by more than 7% and 5%, respectively. However, these savings highly depend on the content of the sequence and need further investigation. Future work can exploit this knowledge to generate an adaptive frame rate reduction method that, depending on the content and the target quality, decides the optimal frame rate and filtering method. In addition, the proposed global filtering method could be replaced by local filtering. Furthermore, the approach could be combined with spatial downsampling methods to obtain optimal spatiotemporal scaling.
2307.01389
Identification of Causal Relationship between Amyloid-beta Accumulation and Alzheimer's Disease Progression via Counterfactual Inference
Alzheimer's disease (AD) is a neurodegenerative disorder that is beginning with amyloidosis, followed by neuronal loss and deterioration in structure, function, and cognition. The accumulation of amyloid-beta in the brain, measured through 18F-florbetapir (AV45) positron emission tomography (PET) imaging, has been widely used for early diagnosis of AD. However, the relationship between amyloid-beta accumulation and AD pathophysiology remains unclear, and causal inference approaches are needed to uncover how amyloid-beta levels can impact AD development. In this paper, we propose a graph varying coefficient neural network (GVCNet) for estimating the individual treatment effect with continuous treatment levels using a graph convolutional neural network. We highlight the potential of causal inference approaches, including GVCNet, for measuring the regional causal connections between amyloid-beta accumulation and AD pathophysiology, which may serve as a robust tool for early diagnosis and tailored care.
Haixing Dai, Mengxuan Hu, Qing Li, Lu Zhang, Lin Zhao, Dajiang Zhu, Ibai Diez, Jorge Sepulcre, Fan Zhang, Xingyu Gao, Manhua Liu, Quanzheng Li, Sheng Li, Tianming Liu, Xiang Li
2023-07-03T23:02:26Z
http://arxiv.org/abs/2307.01389v1
Identification of Causal Relationship between Amyloid-\(\beta\) Accumulation and Alzheimer's Disease Progression via Counterfactual Inference ###### Abstract Alzheimer's disease (AD) is a neurodegenerative disorder that is beginning with amyloidosis, followed by neuronal loss and deterioration in structure, function, and cognition. The accumulation of amyloid-\(\beta\) in the brain, measured through 18F-florbetapir (AV45) positron emission tomography (PET) imaging, has been widely used for early diagnosis of AD. However, the relationship between amyloid-\(\beta\) accumulation and AD pathophysiology remains unclear, and causal inference approaches are needed to uncover how amyloid-\(\beta\) levels can impact AD development. In this paper, we propose a graph varying coefficient neural network (GVCNet) for estimating the individual treatment effect with continuous treatment levels using a graph convolutional neural network. We highlight the potential of causal inference approaches, including GVCNet, for measuring the regional causal connections between amyloid-\(\beta\) accumulation and AD pathophysiology, which may serve as a robust tool for early diagnosis and tailored care. Causal inference, Amyloid accumulation, Alzheimer's disease, Counterfactual inference. ## I Introduction The differentiation of Alzheimer's disease (AD) from the prodomal stage of AD, which is the mild cognitive impairment (MCI), and normal control (NC) is an important project that interests many researchers making effort on [1, 2]. It is commonly recognized through studies that the progression of AD involves a series of gradually intensifying neuropathological occurrences. The process begins with amyloidosis, followed by neuronal loss and subsequent deterioration in the areas of structure, function, and cognition [3]. As a non-invasive method that could measure the accumulation of amyloid in the brain, 18F-florbetapir (AV45) positron emission tomography (PET) imaging has been widely used for early diagnosis of AD [4]. The use of florbetapir-PET imaging to characterize the deposition of amyloid-\(\beta\) has shown to be of significant diagnostic value in identifying the onset of clinical impairment. In recent years, there has been increasing research in counterfactual causal inference to estimate the treatment effect in various domains such as medicine [5, 6, 7], public health [8, 9, 10], and marketing [11, 12]. Especially, estimating the causal effect of continuous treatments is crucial. For example, in precision medicine, a common question is _"What is the ideal medicine dosage to attain the best result?"_. Therefore, an average dose-response function (ADRF) that elucidates the causal relationship between the continuous treatment and the outcome becomes imperative. Estimating the counterfactual outcome presents a significant challenge in causal effect estimation, as it is inherently unobservable. To provide a clear definition, we use the binary treatment scenario (\(T=1\) or \(T=0\)) for illustration. As depicted in Fig. 1, let us consider a patient with a headache (\(x_{i}\)) who has the option to either take the medicine (\(T=1\)) or not take it (\(T=0\)). The potential outcomes corresponding to these two treatment choices would be being cured (\(Y_{i}(T=1)\)) or not being cured (\(Y_{i}(T=0)\)), respectively. The causal effect is defined as the difference between these two potential outcomes. 
However, given that a patient can only choose one treatment option, we can observe only one outcome (the observed outcome), while the other outcome that was not observed is considered the counterfactual outcome. Similarly, in the context of a continuous setting, estimating the counterfactual outcome remains a significant challenge. Therefore, a variety of existing works on causal effect estimation focus on counterfactual estimation [13, 14, 15] under the assumption of binary treatments or continuous treatments (ADRF estimation) [16, 17, 18, 19, 20]. Especially, in the context of continuous treatments, the generalized propensity score (GPS), proposed by Hirano and Imbens [16], is a traditional approach to estimate ADRF with counterfactual outcomes. Moreover, as machine learning has gained increasing attention due to its extraordinary ability to solve complex problems, many existing works use machine learning techniques to address the problem. Schwab et al. [17] proposed DRNet to split a continuous treatment into several intervals and built separate prediction heads for them on the latent representation of input. Nie et al. [18] adopted varying coefficient structure to explicitly incorporate continuous treatments as a variable for the parameters of the model, preserving the continuity of ADRF. Other methods, such as GAN [19] and transformer [20], have also been proposed. In this work, we propose a novel model, the Graph Varying Coefficient Neural Network (GVCNet), for measuring the regional causal associations between amyloid-\(\beta\) accumulation and AD pathophysiology. Specifically, by comparing our model with the most advanced model, VCNet, we demonstrate that our model achieves better performance in AD classification. Moreover, we adopt K-Means clustering to group the generated average dose-response function (ADRF) curves from each region of interest (ROI) and then map them onto the cortical surface to identify the amyloid-\(\beta\) positive regions. The main contributions of this work are summarized as follows: 1. To the best of our knowledge, this is the early attempt to utilize the brain structural topology as the graph to measure the regional causal associations between amyloid-\(\beta\) accumulation and AD pathophysiology. Consistent experimental results on AD public dataset not only demonstrate the effectiveness and robustness of the proposed framework, but also support this hypothesis: the AD pathophysiology is deeply associated with amyloid-\(\beta\) accumulation, no matter with which kind of topology graph. 2. Compared with the most advanced approach (i.e., VCNet), the proposed GVCNet experimentally obtains a higher diagnosis accuracy, suggesting that the good performance could be achieved with graph topology. As such our framework, such attempt extends the applications of graph-based algorithms on brain imaging analysis and provides a new insight into the causal inference that combines the phenotype, structural and functional data. 3. Our work demonstrates clearly that there are four brain regions (i.e., pre- & post-central gyrus among cortical area, left & right pallidum among subcortical area) can be as the key ROIs for AD diagnosis. With the quantitative experimental results, with such ROIs, the diagnosis accuracy is better than with the whole brain information. ## II Related Work ### _Counterfactual Outcome Estimation_ The definition of counterfactual outcome is typically framed using the potential outcome framework [21]. 
To provide a clear definition, we illustrate with the use of binary treatments, which can be extended to multiple treatments by comparing their potential outcomes. Each individual \(x_{i}\) has two potential outcomes: \(Y_{i}(T=1)\) and \(Y_{i}(T=0)\), corresponding to the two possible treatments (\(T=1\) or \(T=0\)). Since an individual can only receive one of the two treatments in observational data, only one potential outcome can be observed (the observed outcome), while the remaining unobserved outcome is referred to as the counterfactual outcome. Hence, the major challenge in estimating the Individual Treatment Effect (ITE) lies in inferring counterfactual outcomes. Once the counterfactual outcomes are obtained, the ITE can be calculated as the difference between the two potential outcomes: \[ITE_{i}=Y_{i}(T=1)-Y_{i}(T=0). \tag{1}\] Many existing approaches have been proposed to estimate the counterfactual outcomes, such as conditional outcome modeling, which trains two separate models to predict outcomes for the treatment group and the control group and uses the predicted values to fill in the unobserved counterfactual outcomes. In addition, tree-based and forest-based methods are widely used to estimate ITE [22, 23, 24]. Additionally, matching methods [13, 25], stratification methods [26], and deep representation methods [26, 15] have been proposed to address the problem as well.
Fig. 1: An example of the counterfactual problem: a patient with a headache who takes medicine and is cured, while the counterfactual scenario, i.e., the outcome had the patient not taken the medicine, is unobserved.
### _Continuous Treatment Effect Estimation_ Continuous treatments are of great practical importance in many fields, such as precision medicine. Typically, the objective of continuous treatment effect estimation is to estimate the average dose-response function (ADRF), which demonstrates the relationship between the specific continuous treatment and the outcome. Although recent works utilized representation learning methods for ITE estimation [27, 28, 29, 14], most of the existing works are under the assumption of binary treatments, which cannot be easily extended to continuous treatments due to their unique model design. To address this issue, Schwab et al. [17] extended TARNet [27] and proposed Dose Response networks (DRNet), which divided the continuous dosage into several equally-sized dosage strata, and assigned one prediction head to each stratum. To further achieve the continuity of the ADRF, Nie et al. [18] proposed a varying-coefficient neural network (VCNet). Instead of the multi-head design, it used a varying coefficient prediction head whose weights are continuous functions of the treatment \(t\), which improved the previous methods by preserving a continuous ADRF and enhancing the expressiveness of the model. Hence, in this paper, we adopt it as part of the model to estimate the effect of each Region of Interest (ROI) of the brain on Alzheimer's disease. ### _Traditional Correlation-based PET Image Analysis Methods_ Correlation-based methods for PET image analysis have been used in many clinical applications, such as tumor detection and brain disorder diagnosis. An et al. used a canonical correlation analysis-based scheme to estimate a standard-dose PET image from a low-dose one in order to reduce the risk of radiation exposure and preserve image quality [30]. Landau et al.
used the traditional correlation method to compare the retention of the 11-C radiotracer Pittsburgh Compound B and that of two 18-F amyloid radiotracers (florbetapir and flutemetamol) [31]. Zhu et al. used the canonical representation to consider the correlation relationships between features of PET and other brain neuroimaging modalities [32]. Li et al. used sparse inverse covariance estimation to reveal the relationship between PET and structural magnetic resonance imaging (sMRI) [33]. For AD diagnosis, it has been suggested, based on florbetapir-PET, that brain regions such as the posterior cingulate and lateral temporal cortices are affected more in AD than in NC [34]. Some studies on florbetapir-PET imaging have revealed that neurodegeneration does not influence the level of amyloid-\(\beta\) accumulation. Instead, amyloid-\(\beta\) pathophysiology is considered a biologically independent process and may play a "catalyst" role in neurodegeneration [35]. There have also been many theories that highlight the amyloid-\(\beta\) pathologies as the main driving forces behind disease progression and cognitive decline. In order to characterize the relationship between amyloid-\(\beta\) accumulation and AD pathophysiology, the counterfactual causal inference method will be a useful tool to uncover how patterns of causality, or significant changes in regional or temporal amyloid-\(\beta\) levels, can impact the development of AD over time. ### _Graph Neural Network_ Deep learning has revolutionized many machine learning tasks, but challenges arise when data is represented as graphs. The basic idea behind GNNs is to iteratively update the feature vectors of each node by aggregating the feature vectors of its neighboring nodes. The update rule for a GNN can be formalized as follows: \[h_{i}^{l+1}=\sigma(\mathbf{a}_{i}^{l}W^{l}),\quad\mathbf{a}_{i}^{l}=g^{l}(h_{i}^{l},\{h_{u}^{l}:u\in\mathcal{N}(i)\}), \tag{2}\] where \(h_{i}^{(l+1)}\) is the feature vector of node \(i\) at layer \(l+1\), \(\mathcal{N}(i)\) is the set of neighboring nodes of \(i\), \(g^{l}\) is the aggregation function at layer \(l\), and \(W^{(l)}\) is a learnable weight matrix at layer \(l\). The function \(\sigma\) is a non-linear activation function, such as the ReLU function. Graph convolutional networks (GCNs) extend convolutional neural networks [36] to the graph domain, allowing for meaningful feature extraction. GCNs have been applied in various fields, including node classification [37], link prediction [38], and graph generation [39]. Initial work on GCNs was proposed by [40] in 2013, followed by the seminal paper by [41] in 2017. Since then, many extensions and improvements to GCNs have been proposed, including Graph Attention Networks (GATs) [42] and GraphSAGE [43]. Researchers have also studied different graph convolutional layers, such as Message Passing Neural Networks (MPNNs) [44] and Convolutional Graph Neural Networks (ConvGNNs) [45]. Overall, GCNs have shown great potential in graph representation learning and have the potential to revolutionize many applications where data is represented in the form of graphs. ## III Methodology ### _Problem Setting_ VCNet is one of the advanced methods for ADRF estimation; typically, it can generate a continuous ADRF and provide promising counterfactual estimation. Hence, in this study, we adopt this model to estimate the effect between the amyloid-\(\beta\) level and the probability of gaining AD.
Typically, we treat the amyloid-\(\beta\) in a specific brain region as the treatment \(T\) and whether the subject gains AD as the outcome \(Y\). In our study, we used the Harvard-Oxford Atlas (HOA) to divide the entire brain into 69 regions. Since the some regions for tau imaging is not a target binding region, we excluded the following regions: left cerebral white matter, left cerebral cortex, left lateral vertical, right cerebral white matter, right cerebral cortex, right lateral ventricle and brain-stem. For the rest of 62 regions, we treated one region as the treatment and used the other regions as covariates (X) to train a separate model for each setting. We iterated this process 62 times to obtain the causal effect and accuracy estimates for each region. To capture more information, we used graph structures of the whole brain denoted as \(\mathcal{G}=(\mathcal{V},\mathcal{E},\mathcal{X})\), where each graph contains 62 nodes representing 62 ROIs, \(\mathcal{V}\) represents the node set and \(\mathcal{E}\) represents the edge set. Let \(X\in R^{N\times F}\) be the input feature matrix, where each row corresponds to a node and each column corresponds to a feature. To estimate the causal effect of one ROI, we removed the corresponding node and all edges related to it and used the rest of the graph as input (61 nodes). Finally, we used the amyloid-\(\beta\) value as the treatment variable \(T\) for the VCNet analysis. In our work, we follow three fundamental assumptions for identifying ADRF: **Assumption 1**: **Stable Unit Treatment Value Assumption (SUTVA)**: There are no unit interactions, and there is only one version of each treatment, which means that various levels or doses of a specific treatment are considered as separate treatments. **Assumption 2**: **Positivity**: Every unit should have non-zero probability of being assigned to every treatment group. Formally, \(P(T=t|X=x)\neq 0,\forall t\in\mathcal{T},\forall x\in X\). **Assumption 3**: **Ignorability**: Given covariates \(x\), all potential outcomes \(\{Y(T=t)\}_{t\in\mathcal{T}}\) are independent of the treatment assignment, implying that there are no unobserved confounders. Mathematically, \(\{Y(T=t)\}_{t\in\mathcal{T}}\perp\!\!\!\perp T|X\). ### _GVCNet_ In our proposed GVCNet framework, as illustrated in Figure 2, there are three main components: ChebNet [46], Deep&Cross Network [47], and VCNet [18]. These components work together to estimate the Average Treatment Effect (ATE) using graph-structured data and demographic information. The ChebNet component takes advantage of the graph structure of the data and utilizes this graph structure to generate features or representations that capture the underlying relationships between entities. The Deep&Cross Network component incorporates demographic data into the framework. The Deep&Cross Network module utilizes these demographic features to learn complex interactions between them, capturing both low-order and high-order feature interactions. This helps to capture additional information beyond what can be learned solely from the graph-structured data. The resulting latent representation, denoted as \(Z^{\prime}\), which is a combination of features from ChebNet and Deep&Cross Network, is then fed into the VCNet component. VCNet infers the treatment distribution from \(Z^{\prime}\) to ensure that it contains sufficient information for accurate ADRF estimation. Finally, the ADRF is estimated based on \(t\) and \(Z^{\prime}\). 
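A minimal sketch of this three-component design is given below. It is our own illustration rather than the authors' code: the module and variable names are invented, the Deep&Cross network is reduced to a small feed-forward block, and the varying-coefficient head is approximated by an ordinary MLP that simply takes the treatment \(t\) as an extra input. In the actual model these stand-ins correspond to the cross layers of Eq. (4) and the B-spline varying-coefficient head of Eq. (5) described in the following subsections.

```python
import torch
import torch.nn as nn

class DenseChebConv(nn.Module):
    """Chebyshev graph convolution (order K >= 2) on a dense scaled Laplacian."""
    def __init__(self, in_dim, out_dim, K=3):
        super().__init__()
        self.theta = nn.ModuleList([nn.Linear(in_dim, out_dim, bias=False) for _ in range(K)])
        self.K = K

    def forward(self, x, L_tilde):
        # Chebyshev recursion: T0 = X, T1 = L~X, Tk = 2 L~ T_{k-1} - T_{k-2}
        Tx = [x, L_tilde @ x]
        for _ in range(2, self.K):
            Tx.append(2 * (L_tilde @ Tx[-1]) - Tx[-2])
        return torch.relu(sum(self.theta[k](Tx[k]) for k in range(self.K)))

class GVCNetSketch(nn.Module):
    """Toy forward pass: graph embedding + demographic embedding -> Z' -> outcome."""
    def __init__(self, n_nodes=61, node_dim=1, demo_dim=4, hidden=32):
        super().__init__()
        self.gnn = DenseChebConv(node_dim, hidden)
        self.demo = nn.Sequential(nn.Linear(demo_dim, hidden), nn.ReLU())   # stand-in for Deep&Cross
        self.head = nn.Sequential(                                           # stand-in for the varying-coefficient head
            nn.Linear(n_nodes * hidden + hidden + 1, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, x, L_tilde, demo, t):
        z_graph = self.gnn(x, L_tilde).flatten(start_dim=-2)       # representation of the 61 covariate ROIs
        z_prime = torch.cat([z_graph, self.demo(demo), t.unsqueeze(-1)], dim=-1)
        return torch.sigmoid(self.head(z_prime))                   # P(AD | covariates, treatment level t)

model = GVCNetSketch()
x = torch.rand(61, 1)          # amyloid-beta signal of the 61 covariate ROIs
L_tilde = torch.eye(61)        # placeholder for the scaled graph Laplacian
demo = torch.rand(4)           # e.g., age, sex, CDR, MMSE (normalized)
t = torch.tensor(0.7)          # treatment: amyloid-beta level of the held-out ROI
print(model(x, L_tilde, demo, t))
```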
Fig. 2: The framework of GVCNet for AD classification and individual treatment effect estimation. (a) We utilize ChebNet for feature embedding and then integrate the treatment in the following dynamic fully connected layer for the AD classification task. (b) We employ the KMeans clustering algorithm to cluster the individual ADRFs into 3 groups: a\(\beta\)-positive (up), a\(\beta\)-negative (down), and a\(\beta\)-neutral, and map these groups onto the brain.
### _ChebNet_ In this paper, to preserve the topological information of the PET data, we introduce the Chebyshev neural network (ChebNet) [46] to replace the first two fully connected layers in VCNet. ChebNet uses Chebyshev polynomials to approximate the graph Laplacian filter, which is a commonly used filter in GCNs. Chebyshev polynomials are a sequence of orthogonal polynomials that can be used to approximate any smooth function on a given interval, and can be efficiently computed using recursive formulas. The ChebNet layer is formulated as follows: \[f_{\mathrm{out}}(\mathcal{L},\mathbf{X})=\sigma\left(\sum_{k=0}^{K-1}\Theta_{k}T_{k}(\tilde{\mathcal{L}})\mathbf{X}\right) \tag{3}\] where \(\mathbf{X}\in\mathbb{R}^{N\times F}\) is the input matrix of \(N\) nodes, each with \(F\) features, \(\mathcal{L}\) is the graph Laplacian, and \(\tilde{\mathcal{L}}\) is the normalized Laplacian defined as \(\tilde{\mathcal{L}}=2\mathcal{L}/\lambda_{\max}-I_{N}\), where \(\lambda_{\max}\) is the largest eigenvalue of \(\mathcal{L}\). \(T_{k}(\cdot)\) are Chebyshev polynomials of order \(k\) and \(\Theta_{k}\) are the learnable filter coefficients for the \(k\)-th Chebyshev polynomial. Finally, \(\sigma(\cdot)\) is a non-linear activation function such as ReLU or sigmoid that is applied element-wise to the output of the ChebNet. The binary cross-entropy loss function is utilized to quantify the dissimilarity between the predicted probability of the positive class and its true probability in binary classification tasks. ### _Deep & Cross Network_ The Deep & Cross Network (DCN) [47] is utilized to combine demographic data with the topological information from the PET data. Instead of conducting task-specific feature engineering, the DCN is capable of automatically learning the interactions between features that contribute to the task. Although deep neural networks (DNNs) are capable of extracting feature interactions, they generate these interactions in an implicit way, require more parameters, and may fail to learn some feature interactions efficiently. The DCN uses an embedding and stacking layer to embed sparse features in the input into dense embedding vectors \(x_{embed,k}^{T}\) to reduce the dimension. These vectors are then stacked with the normalized dense features \(x_{dense}^{T}\) in the input as a single vector \(x_{0}=[x_{embed,1}^{T},...,x_{embed,k}^{T},x_{dense}^{T}]\). A cross network and a deep network are adopted to further process this vector in parallel. The hallmark of the DCN is the cross network, which applies explicit and efficient feature crossing as shown below: \[x_{l+1}=x_{0}x_{l}^{T}w_{l}+b_{l}+x_{l} \tag{4}\] Here, \(x_{l}\) denotes the output of the \(l\)-th cross layer, and \(w_{l}\) and \(b_{l}\) represent the weight and bias of the \(l\)-th cross layer, respectively. The equation demonstrates that the degree of feature interactions grows with the depth of the layer. For example, the highest polynomial degree of \(x_{0}\) of an \(l\)-layer cross network is \(l+1\).
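To make the cross layer of Eq. (4) concrete, the following sketch (our own example with assumed dimensions) stacks three cross layers. Since \(x_{0}x_{l}^{T}w_{l}\) equals \(x_{0}\) scaled by the scalar \(x_{l}^{T}w_{l}\), the feature dimension stays fixed while the interaction order grows with depth:

```python
import numpy as np

def cross_layer(x0, xl, w, b):
    """One cross layer, Eq. (4): x_{l+1} = x0 (xl . w) + b + xl (dimension preserved)."""
    return x0 * (xl @ w) + b + xl

d = 6
rng = np.random.default_rng(0)
x0 = rng.normal(size=d)          # stacked embedded + dense demographic features
x = x0
for _ in range(3):               # three stacked cross layers
    w, b = rng.normal(size=d), rng.normal(size=d)
    x = cross_layer(x0, x, w, b)
print(x.shape)                   # (6,)
```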
Additionally, the interactions in the deep layer depend on the interactions in shallow layers. In addition to the cross network, a fully-connected feed forward neural network is used to process \(x_{0}\) simultaneously. The outputs of the cross network and the deep network are concatenated and fed into a standard logit layer to conduct the final prediction by the combination layer. ### _VCNet_ Despite the prior endeavours on ITE estimation, most of the work are focused on binary treatment settings and fail to extend to continuous treatment easily. Although some papers propose to estimate the continuous treatment by splitting the range of treatment into several intervals and use one prediction network for each interval, the continuity of ADRF is still an open issue. To address these issues, VCNet is proposed by [18], which is capable of estimating continuous treatment effect and maintaining the continuity of ADRF simultaneously. A fully connected feedforward neural network is trained to extract latent representation \(z\) from input \(x\). To guarantee \(z\) encode useful features, \(z\) is used to estimate the conditional density of the corresponding treatment \(\mathbb{P}(t|z)\) through a conditional probability estimating head. Specifically, \(\mathbb{P}(t|z)\) is estimated based on the \((B+1)\) equally divided grid points of treatment and the conditional density for the remaining t-values is computed using linear interpolation. After obtaining the \(z\) containing valuable information, a varying coefficient neural network \(f_{\theta(t)}(z)\) is adopted to predict the causal effect of \(t\) on the outcome \(y_{i,t}\) based on \(z\) and the corresponding \(t\), where the network parameters are a function of treatment \(f_{\theta(t)}\) instead of fixed parameters. Typically, the B-spline is used to model \(\theta(t)\): \[\theta(t)=[\sum_{l=1}^{L}a_{1,l}\varphi_{l}^{NN}(t),\cdots,\sum_{l=1}^{L}a_{d_{ \theta(t)}:l}\varphi_{l}^{NN}(t)]^{T}\in\mathbb{R}^{d(\theta)}, \tag{5}\] \(\varphi_{l}^{NN}(t)\) denotes the spline basis of the treatment and \(a_{1,l}\) are the coefficients to be learned; \(d(\theta)\) is the dimension of \(\theta(t)\). By utilizing the varying coefficient neural network, the influence of the treatment effect \(t\) on the outcome is integrated via the parameters of the outcome prediction network, thereby preventing any loss of treatment information. Additionally, the incorporation of \(t\) in this manner allows for the attainment of a continuous ADRF. ## IV Experiment ### _Dataset_ In this paper, we conducted an evaluation of their proposed algorithm using two subsets of data from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database (adni.loni.usc.edu), specifically ADNI-1 and ADNI-2, as well as the entire dataset. The subjects were divided into three categories, consisting of AD, NC, and MCI, as shown in Table I. In this paper, we take AD as the AD group (298 subjects) and NC+MCI as the non-AD group (607 subjects). All florbetapir-PET images were co-registered with each individual's sMRI and subsequently warped to the cohort-specific DARTEL template. And all subject has demographic features: age, sex, CDR score and MMSE score. All sMRI and florbetapir-PET images in this study are pre-processed by FMRIB Software Library (FSL) 6.0.3 ([https://fsl.fmrib.ox.ac.uk/](https://fsl.fmrib.ox.ac.uk/)). The brain extraction step is based on the BET algorithm firstly [48]. And the skull is stripped from the source image sapce. 
Secondly, the sMRI images are aligned to Montreal Neurological Institute T1 standard template space (MNI152) with the FLIRT linear registration algorithm [49], which can save computational time during the application stage. All florbetapir-PET images were co-registered with each individual's sMRI and subsequently warped to the cohort-specific DARTEL template. More specifically, after registration, the sMRI and florbetapir-PET images are cropped to the size of 152 x 188 x 152 by removing the voxels of zero values in the periphery of brain. Then, all the images are downsampled to the size of 76 x 94 x 76 that to reduce the computational complexity. And all subject has demographic features: age, sex, CDR score and MMSE score. In order to generate the structural connectivity matrix between different cortical regions, we also used the T1w and diffusion MRI (dMRI) provided in the ADNI database. T1-weighted images were acquired using a 3D sagittal MPRAGE volumetric sequence with TE = 3.0 ms; TI = 900.0 ms; TR = 2300.0 ms; flip angle = 9\({}^{\circ}\); matrix size = 176 x 240 x 256; voxel size = 1.2 x 1.1 x 1.1 mm3. dMRI was acquired with a spin-echo planar imaging (EPI) sequence. 48 noncollinear gradient directions were acquired with a b-value of 1,000 s/mm2. 7 additional volumes were acquired without diffusion weighting (b-value = 0 s/mm2). Other parameters of dMRI were as follows: TE = 56.0 ms; TR = 7200.0 ms; flip angle = 90\({}^{\circ}\); matrix size = 116 x 116 x 80; isotropic voxel size = 2 x 2 x 2 mm3. A subset of 20 subjects was used for generating a group-wise connectivity matrix. For each subject, whole brain tractography was computed using the dMRI data, with the Unscented Kalman Filter (UKF) tractography method [50, 51] provided in the SlicerDMRI [52, 53] software. Structural T1w imaging data was processed using FreeSurfer (version 6.0, [https://surfer.nmt.mgh.harvard.edu/](https://surfer.nmt.mgh.harvard.edu/)), and cortical regions were parcellated with the Desikan-Killiany Atlas [54]. Co-registration between the T1-weighted and dMRI data was performed using FSL [55]. Then, for each pair of cortical regions, streamlines that end in the two regions were extracted and the number of streamlines were computed, followed by the creation of the subject-specific connectivity matrix. For the group-wise connectivity matrix, the mean number of streamlines across the 20 subjects was recorded. In the training process, We randomly split the dataset into a training set (633 subjects) and a testing set (272 subjects). The proposed model was tested on the testing set to calculate the classification accuracy and generate average dose-response function curves (ADRFs) for each ROI. ### _Experiment Setting_ In GVCNet, we designate each one of the 62 ROIs as the treatment and use the other ROIs as patient features. The average amyloid-\(\beta\) level serves as the signal for each ROI. We construct the input graph by defining the ROIs as nodes \(V\) and the DTI structure among the ROIs as edges \(E\). For the sturctural connectivity matrix, we have two alternative constructing options as follows: one is to use the Pearson correlation value among the ROIs' T1-weighted values to construct the structural correlation graph (which is called the Corr graph in this paper to make it simplified); the other is to use the smoothed white fibers among the ROIs based on the 20 subjects (which is called DTI graph). Then treat the graph embedding and demographic data as input of the deep and cross network. 
Finally, we feed in the treatment and estimate the counterfactual outcomes with our GVCNet. For the hyper-parameters, we set the learning rate to 1e-4 and \(\beta\) to 0.5. During model training, all networks were trained for 600 epochs. Our model is trained using Adam [56] with a momentum of 0.9. ### _Prediction Performance_ First, we compare our model, GVCNet, with the baseline model, VCNet. As shown in Table III, the prediction performance of our model is around 88.72%, which is 4.7% higher than VCNet. In Table II, we evaluate the model's performance by the accuracy percentage. The table presents the evaluation results of the GVCNet model on different datasets, using different types of graphs, and considering different demographic factors. The first three rows present the evaluation results on the combined ADNI1+ADNI2 dataset, using Corr graphs and different combinations of demographic factors. The model achieves an average accuracy of 0.8296 when no demographic features are selected, an average accuracy of 0.8675 when age and sex are used, and an average accuracy of 0.8868 when all the demographic features are selected. The last three rows present the evaluation results on the combined ADNI1+ADNI2 dataset, using DTI graphs and again different combinations of demographic factors. The model achieves an accuracy of 0.8698 when no features are selected, an accuracy of 0.8689 when age and sex features are considered, and an accuracy of 0.8872 when all the features are selected. By comparing the last 6 rows, we can see that using DTI as the graph structure is slightly better than using the correlation graph between the ROIs as the graph structure. ### _ADRF Curve Analysis_ Based on the patterns of the estimated ADRF of each region and the premise that different parts of the brain may play different roles during the normal/abnormal aging process, we use the KMeans clustering method to cluster the ADRF curves from each region into three groups: upward (up, a\(\beta\) positively responds to the treatment), downward (down, a\(\beta\) negatively responds to the treatment), and unbiased, based on their trend of relationship with AD probability. Brain regions within each cluster were visualized onto the cortex and subcortex mappings in Fig. 3 and Fig. 4.
Fig. 3: The cortical curve trends clustered by k-means.
Fig. 4: The subcortical curve trends clustered by k-means.
It can be found that there exist strong causal relationships between the AD progression and the PET signal level in the precentral/postcentral gyrus (cortical) and left/right pallidum (subcortical), indicating the potentially important role of these regions in modulating the Amyloid-\(\beta\) protein pathway in AD. It is interesting to observe that both the cortical (precentral gyrus) and subcortical (pallidum) regions responsible for voluntary motor movements [57, 58] are all highly responding to AD, indicating a possible link between the behavioral and pathological aspects of AD. In addition, based on Table IV, which shows that brain regions in the up group have a slightly higher prediction power towards the AD probability, we investigated the patterns of the ADRF curves and the regions within the up group in Fig. 5, which is consistent with Figs. 3 and 4 in that the pre- and post-central gyrus and the left and right pallidum are upward with increasing treatment. Moreover, we can obtain the same conclusion from both the VCNet and GVCNet, as shown in Fig. 6.
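The grouping step described above can be sketched as follows (synthetic curves generated purely for illustration; labeling the clusters as up / down / unbiased by the sign of a fitted slope is our own reading of the procedure, not necessarily the authors' exact criterion):

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
t_grid = np.linspace(0.0, 1.0, 20)

# synthetic ADRF curves for 62 ROIs: estimated P(AD) as a function of the treatment level t
slopes = rng.choice([-0.3, 0.0, 0.3], size=62)
adrf = 0.5 + slopes[:, None] * (t_grid - 0.5) + 0.02 * rng.normal(size=(62, 20))

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(adrf)

# name each cluster by the mean slope of its member curves
for k in range(3):
    mean_slope = np.polyfit(t_grid, adrf[labels == k].mean(axis=0), 1)[0]
    name = "up" if mean_slope > 0.05 else ("down" if mean_slope < -0.05 else "unbiased")
    print(k, name, int((labels == k).sum()))
```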
Compared with the VCNet, our proposed Graph-VCnet can achieve much better prediction accuracy no matter with which kind of brain regions. And more specifically, with upward brain regions, both VCNet and Graph-VCNet could achieve the best prediction accuracy, compared with the other kinds of brain regions. ## V Conclusion and Discussion In this paper, we propose a novel model called GVCNet, which combines a graph neural network architecture with a targeted regularization approach to estimate varying coefficients of a treatment effect model and improve the model's performance. Experiment results show that GVCNet exhibits promising capabilities in making counterfactual causal inferences for Alzheimer's Disease (AD) progression based on the regional level of Amyloid-beta protein. The rationalization for employing a graph neural network architecture in GVCNet stems from the inherent complexity and interconnectedness of brain regions, both structurally, functionally, and pathologically. The graph structure allows for capturing the potentially long-distance spatial relationships and dependencies among these regions, providing a more comprehensive representation of the underlying proteinopathy dynamics. Furthermore, GVCNet incorporates a targeted regularization approach. Regularization techniques play a crucial role in mitigating model complexity and ensuring robustness. By imposing the proposed regularization constraints, GVCNet can effectively handle the inherent noise and variability in PET imaging data, leading to more reliable, generalizable, and accurate predictions. The potential of GVCNet in patient management, treatment, and drug discovery is substantial. If the model demonstrates sufficient robustness and consistency through rigorous validation studies, it can be ultimately utilized to project personalized AD progression trajectories. By leveraging counterfactual analysis, GVCNet can provide insights into the "what if" scenarios by assessing how the current imaging results would evolve if they were to worsen (due to disease progression) or improve (because of the medications or other types of interventions). This information is invaluable in guiding clinicians and patients in making informed decisions about treatment strategies and long-term care plans. Moreover, GVCNet's ability to predict the personalized treatment effect of a patient after administering a medication targeting Amyloid-beta deposition is of significant clinical importance. It can provide insights into the expected outcomes and help determine the optimal dosage for individual patients. This personalized, regional treatment prediction can aid in tailoring interventions and optimizing therapeutic strategies, leading to improved patient outcomes and more efficient use of resources. Looking ahead, the future of imaging-guided diagnosis, prognosis, and treatment planning for AD is likely to focus on unraveling the underlying mechanisms that link imaging targets, such as Amyloid-beta protein, with the patient's internal and external characteristics (e.g., genetic factors, health conditions, comorbidities, and social determinants of health) to the disease progression. The proposed counterfactual causal inference modeling approach with multi-modal data input, as demonstrated by GVCNet, will play a pivotal role in this pursuit. 
With more data modalities and holistic patient characterization, we can uncover critical insights into the disease's pathophysiology, identify novel therapeutic targets, and develop more effective interventions. In conclusion, counterfactual causal inference modeling such as GVCNet holds immense potential for advancing our understanding of personalized AD management. It will enable personalized projections of disease trajectories and treatment effects, empowering clinicians and patients to make informed decisions. The integration of imaging-guided diagnosis, prognosis, and mechanistic insights will shape the future of AD research and pave the way for improved patient care and therapeutic strategies.
Fig. 5: ADRF for the typical upward ROIs.
Fig. 6: Prediction accuracy with VCNet and Graph-VCNet based on different brain regions.
## Declaration of Competing Interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. ## CRediT authorship contribution statement **Haixing Dai:** Conceptualization, Formal analysis, Methodology, Software, Writing - original draft. **Mengxuan Hu:** Formal analysis, Methodology. **Qing Li:** Writing - draft & review & editing. **Lu Zhang:** Writing - review & editing. **Lin Zhao:** Writing - review & editing. **Dajiang Zhu:** Writing - review & editing. **Jorge Sepulcre:** Writing - review & editing. **Xingyu Gao:** PET imaging and non-imaging data analysis, writing - review & editing. **Manhua Liu:** Writing - review & editing. **Quanzheng Li:** Writing - review & editing. **Sheng Li:** Writing - review & editing. **Fan Zhang:** Diffusion imaging data analysis and tractography, writing - review & editing. **Tianming Liu:** Conceptualization, Writing - review & editing. **Xiang Li:** Conceptualization, Writing - review & editing. ## Acknowledgments Data collection and sharing for this project was funded by the Alzheimer's Disease Neuroimaging Initiative (ADNI) (National Institutes of Health Grant U01 AG024904) and DOD ADNI (Department of Defense award number W81XWH-12-2-0012). ADNI is funded by the National Institute on Aging, the National Institute of Biomedical Imaging and Bioengineering, and through generous contributions from the following: Alzheimer's Association; Alzheimer's Drug Discovery Foundation; BioClinica, Inc.; Biogen Idec Inc.; Bristol-Myers Squibb Company; Eisai Inc.; Elan Pharmaceuticals, Inc.; Eli Lilly and Company; F. Hoffmann-La Roche Ltd and its affiliated company Genentech, Inc.; GE Healthcare; Innogenetics, N.V.; IXICO Ltd.; Janssen Alzheimer Immunotherapy Research & Development, LLC.; Johnson & Johnson Pharmaceutical Research & Development LLC.; Medpace, Inc.; Merck & Co., Inc.; Meso Scale Diagnostics, LLC.; NeuroRx Research; Novartis Pharmaceuticals Corporation; Pfizer Inc.; Piramal Imaging; Servier; Synarc Inc.; and Takeda Pharmaceutical Company. The Canadian Institutes of Health Research is providing funds to support ADNI clinical sites in Canada. Private sector contributions are facilitated by the Foundation for the National Institutes of Health (www.fnih.org). The grantee organization is the Northern California Institute for Research and Education, and the study is coordinated by the Alzheimer's Disease Cooperative Study at the University of California, San Diego. ADNI data are disseminated by the Laboratory for Neuro Imaging at the University of Southern California.
2307.07920
A structural study of Big Tech firm-switching of inventors in the post-recession era
Complex systems research and network science have recently been used to provide novel insights into economic phenomena such as patenting behavior and innovation in firms. Several studies have found that increased mobility of inventors, manifested through firm switching or transitioning, is associated with increased overall productivity. This paper proposes a novel structural study of such transitioning inventors, and the role they play in patent co-authorship networks, in a cohort of highly innovative and economically influential companies such as the five Big Tech firms (Apple, Microsoft, Google, Amazon and Meta) in the post-recession period (2010-2022). We formulate and empirically investigate three research questions using Big Tech patent data. Our results show that transitioning inventors tend to have higher degree centrality than the average Big Tech inventor, and that their removal can lead to greater network fragmentation than would be expected by chance. The rate of transition over the 12-year period of study was found to be highest between 2015-2017, suggesting that the Big Tech innovation ecosystem underwent non-trivial shifts during this time. Finally, transition was associated with higher estimated impact of co-authored patents post-transition.
Yidan Sun, Mayank Kejriwal
2023-07-16T01:57:19Z
http://arxiv.org/abs/2307.07920v1
# A structural study of Big Tech firm-switching of inventors in the post-recession era ###### Abstract Complex systems research and network science have recently been used to provide novel insights into economic phenomena such as patenting behavior and innovation in firms. Several studies have found that increased mobility of inventors, manifested through firm switching or _transitioning_, is associated with increased overall productivity. This paper proposes a novel structural study of such transitioning inventors, and the role they play in patent co-authorship networks, in a cohort of highly innovative and economically influential companies such as the five Big Tech firms (Apple, Microsoft, Google, Amazon and Meta) in the post-recession period (2010-2022). We formulate and empirically investigate three research questions using Big Tech patent data. Our results show that transitioning inventors tend to have higher degree centrality than the average Big Tech inventor, and that their removal can lead to greater network fragmentation than would be expected by chance. The rate of transition over the 12-year period of study was found to be highest between 2015-2017, suggesting that the Big Tech innovation ecosystem underwent non-trivial shifts during this time. Finally, transition was associated with higher estimated impact of co-authored patents post-transition. ## 1 Introduction Innovation has long been recognized in the economics and social sciences for driving long-term productivity and economic measures of aggregate income, such as the Gross Domestic Product (GDP) [1], [2], [3], [4]. For private organizations, especially in a knowledge-based economy [5], [6] such as is prevalent in much of the industrialized world, innovative products and services developed and commercialized by the firm can be an important driver1 of the firm's valuation [4], [8]. Innovation can also have positive societal benefits in the form of higher corporate social responsibility (CSR) by more innovative forms. For example, in an influential recent work, Mishra analyzed a sample of more than 13,000 US 'firm-years' over a 15 year period starting from the early 1990s and found that, mediated by high CSR, more innovative firms ended up achieving significantly higher valuation following a period of innovation [9]. Footnote 1: As with any complex system, it bears noting that the relationship between innovation and valuation is itself a complex one, and not always significant (some instances of negative association for specific sectors and types of innovations may be found in, for example, [7]). Both the sector and geography can play a role. Most studies have found support for the claim that for technology and ‘science-based’ firms (to quote the terminology used in [8]), there is a positive relationship. The positive effects of innovation on valuation are especially strong for technology sectors (e.g., fintech [10]), and for early-stage companies such as start-ups [11]. Although not the only way of measuring _innovation productivity_ in an organization, the number of patents filed by the organization is an objective measure that has been widely studied by researchers and policymakers alike [12], [13], [14]. At the time of writing, there is an enormous body of literature on constructing and studying patent networks, estimation of valuation from patent analysis (including network analysis), textual analysis of patents, as well as a range of qualitative studies on patents [15], [16], [17], [18]. 
In this paper, we use patent analysis, including construction and study of a patent co-authorship network, to understand the structural properties of _firm-switching_ or _transitioning_ inventors. We focus our study on a cohort of five 'Big Tech' firms (Apple, Google, Meta, Microsoft, and Amazon) that, in the aftermath of the 2008 financial crisis, have ended up outperforming the broader market by a considerable margin and are among the most valuable companies in the world2. Here we define an inventor as a co-author on a patent where one of these five organizations serves as the 'assignee' organization i.e., the organization that files the patent, and to whom the intellectual property legally belongs. A transitioning inventor is one who is originally employed by one of the five Big Tech firms (e.g., Apple), and is co-author on patents filed by Apple, but subsequently switches to a different firm (e.g., Amazon) and starts co-authoring patents filed by that firm. Footnote 2: [https://www.cmbc.com/2020/01/28/sp-500-dominated-by-apple-microsoft-alphabet-amazon-facebook.html](https://www.cmbc.com/2020/01/28/sp-500-dominated-by-apple-microsoft-alphabet-amazon-facebook.html) There is increasing evidence that such firm-switching or 'transitioning' inventors may play an under-appreciated role in boosting innovation potential within the technology ecosystem [19]. Intuitively, the ability to transition represents inventor 'mobility' [20], and even for general workers (not just inventors), evidence suggests that mobility can lead to productivity gains and better incentive alignment in the overall economy [21]. Highly productive inventors are also more likely to be 'poached' by a rival company that is looking to innovate in a similar area. The rival company may be incentivized to offer greater benefits, including higher pay and more freedom to the inventor in developing and publishing their ideas, many of which the organization can potentially look to patent. Because poaching is usually targeted, in theory, this can motivate the inventor further due to greater alignment between the company's incentives and the inventor's goals. While firm switching behavior has received some theoretical and empirical attention in the economics literature [19, 21, 22], it has never before been studied from a structural perspective (i.e., from the perspective of _economic complexity_ [23]), in the five Big Tech organizations in the post-recessionary era (2010-2022). We formulate and investigate three specific research questions (RQs) to better understand the structural role and other properties of these transitioning inventors in the broader Big Tech ecosystem during this period: **RQ1:** How do the degree distribution and other structural properties of the transitioning inventors in the patent co-authorship network compare with those of the other inventors, and does the removal of these inventors from the network lead to lower connectivity and more fragmentation than would be expected through chance? **RQ2:** Does the rate of inventor transition remain constant (or show a monotonic trend) over time, or is there a specific period during which the majority of transitions occur? **RQ3:** Is transition associated with an increase in estimated impact of co-authored patents (reflected through a subsequently defined measure, such as the normalized number of citations received by their patents) _after_ transition, as compared to the impact of the patents co-authored _before_ transition? 
The first research question explores the structural properties of transitioning inventors using the usual tools of network science. We consider these properties both with respect to the overall co-authorship network (that includes all patent authors within Big Tech in the period under study, including the vast majority of patent authors in Big Tech who have not transitioned) and also the'sub-network' where only the transitioning inventors are represented as nodes. The second research question is instead considering what happens to the rate of transition over the period under study; do we, for example, see relatively stable and constant rates, steadily increasing or decreasing rates, or a trend that is more complex? We also consider whether some organizations experience greater transitions (whether incoming or outgoing) than others or if the transition-mixture is relatively even among the five Big Tech firms. Finally, the motivation behind studying the third research question is to understand whether transition is associated with an estimated measure of 'excess' impact due to transition. While we only consider one measure of excess impact, our definition attempts to control for potential biases, such as the length of time elapsed since the patent was granted. ## 2 Materials and Methods Our research primarily focuses on analyzing patents filed by five major technology corporations: Google, Meta (formerly known as Facebook), Apple, Amazon, and Microsoft. To gather the relevant patent data, we utilized the Query Builder feature in the PatentsView platform [24]. PatentsView is a comprehensive visualization, data dissemination, and analysis tool specifically designed for intellectual property (IP) data. The platform receives support from the Office of the Chief Economist at the U.S. Patent & Trademark Office (USPTO). The Query Builder feature within PatentsView allows researchers to identify and retrieve specific subsets of patents of interest from the vast collection of all U.S. patents. To obtain patent data for this study, we issued the following query to Query Builder: '_Assignee Organization_ contains3_[Organization Name]_'. Here, the _[Organization Name]_ is a placeholder for each of the five Big Tech companies we targeted and the _Assignee Organization_ is a field in the PatentsView dataset. We also specified a time constraint as we wanted to focus on patents filed by these companies between 2010 and 2022 (inclusive), and their patents granted within this period. Footnote 3: The special query keyword ‘contains’ will return any patent with an assignee organization that has the specified organization name as a substring. For example, a patent filed by ‘Google LLC’ as the assignee organization will be returned if the organization name is specified as ‘Google’. While we could use the special keyword ‘equals’ rather than contains, it is less robust than the former and does not capture subsidiary organizations, as discussed subsequently. During the query process, we noticed that there were instances where other irrelevant assignees included similar keywords. For example, when searching for '_Assignee Organization_ contains _Apple_', entries such as 'Appleton Papers Inc' would also be retrieved, which is unrelated to the company that we intended to study. 
To maintain the accuracy of our analysis, we cross-verified the patents and inventors associated with each retrieved assignee, using external sources, by identifying the assignees that are genuinely related to the companies we intended to study. Furthermore, these technology corporations often have numerous branches, alternate names, and variations in spelling. For instance, Google may be referred to as Google LLC, Google Technology Holdings LLC, or even encounter misspellings like Google LCC. To ensure a comprehensive analysis, we consolidated all subsidiary branches and alternate names under one unified name. In this example, we merged all variations into 'Google'. The same approach is also applied to the other four companies being studied, including Meta, Apple, Amazon, and Microsoft. The complete set of fields4 available for each patent in the PatentsView Query Builder can be quite extensive (including the patent text). Our analysis is restricted to a selected subset of these fields. These fields of interest include: Footnote 4: A data dictionary describing all of these fields may be accessed at [https://patentsview.org/query/data-dictionary](https://patentsview.org/query/data-dictionary). * app_date: The date when the patent application was filed. * assignee_organization: The organization or company to which the patent rights have been assigned. In our study, these organizations refer to the five companies under investigation: Google, Meta, Apple, Amazon, and Microsoft. Each patent can only have one assignee organization. * inventor_key_id: A unique identifier for each inventor. * inventor_first_name: The first name of the inventor(s) listed on the patent. * inventor_last_name: The last name of the inventor(s) listed on the patent. * patent_title: The title or name given to the invention covered by the patent. * patent_date: The date when the patent was granted. * patent_number: The unique identification number assigned to the patent. * citedby_patent_number: The number of other patents that have cited this particular patent. **Basic descriptive statistics.** After retrieving the data, we found that, since 2010 (and up until 2022), 2,329 individuals have been granted patents with multiple assignee organizations. More details on the patents filed and granted, distributed across the five organizations, are illustrated in Figure 1. Among the 2,329 inventors with multiple assignee organizations, 2,213 inventors have patents associated with exactly two different assignee organizations. In relative terms, the number of individuals in these five companies (between 2010 and 2022) authoring patents with multiple assignee organizations is a small, but non-trivial, fraction of the total number of individuals authoring patents with these five organizations (74,637 inventors). We ordered the patents submitted by each of these 2,213 inventors according to the application date of the patent. Within this group, 873 inventors alternated between two different assignee organizations multiple times, filing patents under one assignee and then switching to the other, and vice versa. On the other hand, 1,340 of these inventors transitioned only once, initially filing patents for one assignee organization and subsequently moving to another. 
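The consolidation and transition-identification steps described above can be sketched with pandas as follows. This is a minimal illustration rather than the authors' code: the input file name and the alias table are assumptions, while the column names follow the PatentsView fields listed above.

```python
# Minimal sketch (not the authors' code) of consolidating assignee-name variants and
# identifying inventors who filed patents under more than one of the five firms.
import pandas as pd

ALIASES = {  # illustrative alias table; the real mapping was verified manually (see text)
    "Google LLC": "Google", "Google LCC": "Google",
    "Google Technology Holdings LLC": "Google",
    "Facebook, Inc.": "Meta", "Meta Platforms, Inc.": "Meta",
    "Apple Inc.": "Apple", "Amazon Technologies, Inc.": "Amazon",
    "Microsoft Technology Licensing, LLC": "Microsoft",
}

df = pd.read_csv("big_tech_patents.csv")          # one row per (patent, inventor) pair
df["org"] = df["assignee_organization"].map(ALIASES)
df = df.dropna(subset=["org"])                    # unmatched assignees go to manual review
df["app_date"] = pd.to_datetime(df["app_date"])

# Inventors whose patents name exactly two different (unified) assignee organizations.
orgs_per_inventor = df.groupby("inventor_key_id")["org"].nunique()
two_org_inventors = orgs_per_inventor[orgs_per_inventor == 2].index

def switched_once(group: pd.DataFrame) -> bool:
    """True if the date-ordered sequence of assignees changes value exactly once."""
    seq = group.sort_values("app_date")["org"]
    return (seq != seq.shift()).sum() == 2        # the first row plus a single change

movers = [inv for inv, g in df[df["inventor_key_id"].isin(two_org_inventors)]
                              .groupby("inventor_key_id") if switched_once(g)]
print(f"{len(movers)} inventors transitioned exactly once")
```

The explicit alias table mirrors the manual cross-verification described above; a purely substring-based match would re-introduce false positives such as 'Appleton Papers Inc'.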
**Qualitative spot-checks.** To gain preliminary qualitative insights into these 1,340 inventors' professional trajectories, we randomly selected a subset of 30 inventors from the group of 1,340 and conducted a preliminary investigation of their profiles using external sources such as LinkedIn. Our findings revealed that all 30 inventors had undergone transitions from one company to another among the five companies under investigation throughout their careers. Importantly, we observed a correlation between the timing of these job transitions and the periods when they started filing patents with their subsequent company, according to their patent application dates (app_date). Furthermore, the companies they were employed by corresponded with the companies they assigned their patents to. This suggests that a change in the patent assignee often indicates a job transition, rather than holding concurrent positions at two different companies. Therefore, in this study, we assumed that for the 1,340 inventors who changed their patent assignees once, this shift likely represented a move from one company to another among the five companies being investigated. **Co-inventor patent network (CPN) construction.** We constructed the co-inventor patent network (CPN), denoted here as \(C_{1}=(V,E)\), by using authorship data from all of the patents collected (\(|V|=74,637\)), as described in the previous section. In other words, each unique inventor was assigned a unique node in the CPN, and an (undirected, unweighted) edge was created between two inventors if they co-authored a patent together as employees in a Big Tech firm during the period of interest (2010-2022). Recall that the subset of these inventors who transitioned just once between different organizations from 2010 to 2022 consists of 1,340 inventors. To understand the structural properties of these inventors with respect to the overall co-inventor network, we also constructed the subgraph \(C_{2}\subset C_{1}\), of which the nodes consists of these 1,340 inventors, and the edges are the subset of edges in \(C_{1}\) that can only exist between these nodes. The goal behind constructing \(C_{2}\) is to study the unique collaboration dynamics of inventors who have experienced a transition in their organizational affiliation. Finally, for investigating the fragmentation aspect of RQ1, we also consider the network \(C_{r}\), which is obtained by removing all nodes in \(C_{2}\) from \(C_{1}\) (in other words, the 1,340 transitioning inventors), and all edges in \(C_{1}\) that were incident on at least one of the 1,340 inventors. ## 3 Results and Discussion ### Rq1 The original CPN (\(C_{1}\)) consisting of 74,637 inventors (as nodes) is connected by 372,245 edges or unique co-authorships5. The degree distribution of \(C_{1}\) follows a power-law distribution with a gamma value of -2.32. Despite the large number of inventors, only a small fraction (\(\sim\)2.5% or 1,900 nodes) are singletons, indicating a relatively well-connected network. The average inventor in this network has co-authored patents with approximately 10 other inventors, with the most prolific inventor having co-authored with 460 others. The network is divided into 3,598 connected components (including singletons), with the largest connected component consisting of 66,818 inventors. The density of the network is relatively low (\(\sim\)0.000134), implying that it is still quite sparse, despite the inventors being divided only among five unique organizations. 
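A minimal sketch of how \(C_{1}\) can be built and these summary statistics computed with networkx is given below. It assumes the per-(patent, inventor) dataframe from the previous sketch and is illustrative only; variable names are our own.

```python
# Sketch of constructing the co-inventor patent network C1 and reproducing the kind of
# summary statistics reported for RQ1 (illustrative; not the authors' code).
import itertools
import networkx as nx
import pandas as pd

df = pd.read_csv("big_tech_patents.csv")  # one row per (patent, inventor) pair, as above

C1 = nx.Graph()
C1.add_nodes_from(df["inventor_key_id"].unique())
for _, inventors in df.groupby("patent_number")["inventor_key_id"]:
    for u, v in itertools.combinations(sorted(set(inventors)), 2):
        C1.add_edge(u, v)  # undirected, unweighted co-authorship edge

print("nodes:", C1.number_of_nodes(), "edges:", C1.number_of_edges())
print("connected components:", nx.number_connected_components(C1))
print("density:", nx.density(C1))
print("average clustering:", nx.average_clustering(C1))

degrees = pd.Series(dict(C1.degree()))
print("average degree:", degrees.mean(), "max degree:", degrees.max())
print("singletons:", int((degrees == 0).sum()))
```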
However, as expected from the social nature of such collaborations, the average clustering coefficient is moderately high (\(\sim\)0.675), which indicates substantial clustering and transitivity among the inventors. Footnote 5: Because the CPN is undirected, a co-authorship relation is counted only once. In the future, we will also consider the weighted version of this network. We also computed the degree distribution of the nodes corresponding to the 1,340 inventors who transitioned only once from 2010 to 2022 within the network graph \(C_{1}\). Figure 2 (a) and (b) illustrate the overall degree distribution (all nodes) in \(C_{1}\) as well as the degree distribution of the subset of 1,340 nodes in \(C_{1}\). We then ranked these degrees and compared the ranks with respect to the overall network. We found that 80.61% of the transitioning inventors are in the top quartile (25%) of all nodes, and 95.07% of the transitioning inventors are within the top half (50%) of all nodes. These are significantly higher than would be expected for a random sample of 1,340 nodes selected from the network. These numbers suggest that the majority of transitioning inventors have high degree centrality, indicating some degree of influence (at least in a local sense) within the network. However, by calculating the clustering coefficient for each of the transitioning inventors in \(C_{1}\) and subsequently averaging these coefficients, we derive an average local clustering coefficient of approximately 0.379 for these transitioning inventors, which is lower than the network average of 0.675. This suggests that transitioning inventors are structurally embedded in a more star-like structure: the collaborators of transitioning inventors do not themselves collaborate on the same patents as often. In other words, transitioning inventors are co-authoring patents with groups of inventors between whom the only connection is the transitioning inventor themselves. Recall that we had also constructed the subgraph \(C_{2}\) that only contains as nodes the 1,340 inventors who transitioned just once from 2010 to 2022. We found that \(C_{2}\) is denser (\(\sim\)0.000791) than \(C_{1}\) while having a lower average clustering coefficient (0.0986) than \(C_{1}\). Among the 1,340 inventors, there are 710 co-authorship connections, but nearly half of the 1,340 inventors have not co-authored patents with any others in \(C_{2}\) (although they have co-authored patents with inventors in the larger network \(C_{1}\)). These transitioning inventors therefore have fewer co-authorship connections on average, as evidenced by an average degree of approximately 1.06 and a maximum degree of 13. Figure 1: (a) The distribution of granted patents and filed patents; (b) distribution of patents granted per year, by company; (c) distribution of patents applied for by each company per year. Figure 2: (a) The degree distribution of all nodes in the CPN \(C_{1}\); (b) degree distribution of the subset of 1,340 transitioning inventor nodes in \(C_{1}\); (c) degree distribution of all nodes in \(C_{2}\). The shape of this degree distribution is illustrated in Figure 2 (c). While the data is too small to conclusively interpret the shape as that of a scale-free distribution, it is also not inconsistent with such a distribution. The 'remaining' network, \(C_{r}\), which mainly consists of inventors who have not transitioned, has a slightly lower average number of co-authorship connections per inventor compared to \(C_{1}\). 
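The subgraph \(C_{2}\), the reduced network \(C_{r}\), and the random-removal comparison analysed in the following paragraphs can be sketched as follows. This is an illustrative sketch only, assuming the graph `C1` and the list `movers` of 1,340 transitioning inventors from the previous sketches are in memory.

```python
# Sketch of the subgraph/removal comparisons for RQ1 (illustrative; assumes C1 and movers).
import random
import networkx as nx
from scipy import stats  # SciPy >= 1.6 for the `alternative` keyword

random.seed(0)
C2 = C1.subgraph(movers).copy()              # transitioning inventors only
Cr = C1.copy()
Cr.remove_nodes_from(movers)                 # network after removing the movers

obs_components = nx.number_connected_components(Cr)
obs_clustering = nx.average_clustering(Cr)

# Null model: remove an equally sized random set of inventors, repeated 100 times.
# (Recomputing average clustering on the full network 100 times is slow; shown for clarity.)
rand_components, rand_clustering = [], []
nodes = list(C1.nodes())
for _ in range(100):
    G = C1.copy()
    G.remove_nodes_from(random.sample(nodes, len(movers)))
    rand_components.append(nx.number_connected_components(G))
    rand_clustering.append(nx.average_clustering(G))

# One-sided, one-sample t-tests of the random averages against the observed values.
t1, p1 = stats.ttest_1samp(rand_components, obs_components, alternative="less")
t2, p2 = stats.ttest_1samp(rand_clustering, obs_clustering, alternative="less")
print("components:", obs_components, "random mean:", sum(rand_components) / 100, "p:", p1)
print("clustering:", obs_clustering, "random mean:", sum(rand_clustering) / 100, "p:", p2)
```

The Monte Carlo null (repeated random removals of the same number of nodes) is what allows the observed fragmentation to be called greater than expected by chance.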
Additionally, the number of inventors without any co-authorship increases, suggesting that the removal of transitioning inventors led to some inventors in \(C_{1}\) becoming isolated. We explore fragmentation due to the removal of these nodes in more detail subsequently. As expected, the density of \(C_{r}\) is lower than that of \(C_{1}\), indicating less intense collaboration among the remaining inventors. We evaluated the fragmentation caused by the removal of these 1,340 inventors on the CPN by comparing the number of connected components and the average clustering coefficient between the original network (\(C_{1}\)) and \(C_{r}\) (the network after removing the transitioning inventors), as well as a network generated by randomly removing 1,340 inventors from the original network. Compared to \(C_{1}\), the average number of connected components in \(C_{r}\) increases to 3,952. This indicates that transitioning inventors play a bridging role in the innovation ecosystem, which was also suggested earlier by their lower clustering coefficient (but higher degree centrality) compared with the overall network. There are now (i.e., after removing the transitioning inventors) more isolated groups of inventors who are interconnected within their groups but not connected to other groups in the network. Note that this fragmentation is greater than would be expected by chance, lending an affirmative answer to the second part of RQ1. To quantify this, we performed an experiment where we _randomly_ removed an equivalent number of inventors (1,340) from the network \(C_{1}\), as noted above. We repeated this process independently 100 times, and averaged the number of connected components across these 100 iterations. This average (3,897.1 connected components) is found to be significantly lower than 3,952 (N=100, p = \(1.46\times 10^{-9}\), one-sided Student's t-test). This suggests that transitioning inventors play a greater role in maintaining the network's global connectivity than a random group of inventors embedded in the same environment (the five Big Tech firms), as their removal results in slightly increased fragmentation compared to the removal of a random set of inventors. Similarly, the average clustering coefficient exhibits interesting behavior upon the removal of inventors. The average clustering coefficient in the remaining network (\(C_{r}\)) is 0.674. This suggests that, on average, the remaining inventors in \(C_{r}\) still tend to form moderately interconnected groups. Despite the network fragmentation caused by the removal of the 1,340 inventors, collaboration among groups of inventors remains largely unaffected, as indicated by the similar average clustering coefficient in \(C_{r}\) and \(C_{1}\). In contrast, when we repeatedly remove a random set of 1,340 inventors from the network, the average clustering coefficient of the nodes in the remaining network is slightly lower, at 0.660. This value is also found to be significantly lower than 0.674 (N=100, p = \(4.074\times 10^{-20}\), one-sided Student's t-test). This indicates that the removal of the specific group of transitioning inventors results in a network (\(C_{r}\)) that maintains slightly higher transitivity as compared to the network resulting from the random removal of inventors. ### Rq2 The primary focus of RQ2 is to examine if the rate of transition of the 1,340 inventors from 2010 to 2022 is relatively stable over the period of study, or if it peaks (or shows other irregularities) during some periods. 
We also seek to investigate if transitioning occurred uniformly often between the five organizations, or if some organizations are, in fact, over-represented in terms of out-transitions or in-transitions than would be expected through chance alone. As a first step toward these investigations, we determined an estimator for the transition time \(T_{t}\) when an inventor transitioned from the first organization to the second organization. To do so, we identified when a transitioning inventor filed their _last_ patent with the first assignee organization and when they filed their _first_ patent with the second assignee organization. The estimate of \(T_{t}\) was then calculated the midpoint between these two points in time6 Footnote 6: In a slight abuse of terminology, we continue to refer to this estimator as \(T_{t}\), rather than the unknown variable itself (mathematically representing a more ‘exact’ time when the inventor transitioned from the first to the second organization) that is being estimated. Figure 3 illustrates the number of inventors transitioning from their initial assignee organization to their second assignee organization each year. We show company-segregated distributions for both incoming and outgoing transitions (and for each year) to gain deeper insights into the trend. Among the 1,340 inventors who transitioned just once from 2010 to 2022, Microsoft, on average, had the most significant outflow of inventors, while Meta had the lowest. Regarding transitions into the companies, Google, Amazon, and Meta have the highest average influx of inventors. Therefore, outflowing transitions were significantly more skewed than incoming transitions, which were more evenly divided. Furthermore, the annual rate of inventor transitions suggests that there is a peak from 2015 to 2017 for both outgoing and incoming transitions. Together, the figure shows a concentrated transitioning period between the assignee organizations, which further implies that the innovation ecosystem within these organizations underwent a non-trivial shift (with the greatest changes suggested for Microsoft) during a relatively brief 3-year period. While the results in Figure 3 are illustrative, they are tabulated at the granularity of years. To gain further insight into these transitions at a finer granularity (which is also useful because the rate of transition increases during a relatively narrow time period of 2-3 years), we compute two additional variables for each transitioning inventor: first, we record the time when they first filed a patent with their first assignee organization as their 'innovation start' time \(T_{s}\). Similarly, we marked the time when they last filed a patent with their second assignee organization as their 'innovation end' time \(T_{e}\). This period between the start and end times is referred to as the 'innovation period'7. Given these additional variables, we sought to identify the periods during which transitions occurred most frequently at a finer granularity (quarters or 3-month periods, rather than years). To achieve this, we utilized a moving three-month window that starts from January 2010 and is incrementally rolled (in increments of one day) all the way until December 2022. Within each such window, we determined how _long_ each inventor was associated with either the first or second assignee. 
Footnote 7: It bears noting that these terms should only be interpreted in the context of this study and time period e.g., some inventors may have been innovating in other organizations (e.g., startups acquired by some of these organizations, or research labs) before or after the time period of study. The ‘innovation end’ time therefore should not be ‘literally’ taken to imply that the inventor has stopped innovating or filing patents. Formally, this calculation may be described as follows: first, for every inventor \(i\) in each window, we calculated the length of time spent innovating in the first assignee organization. We do so by measuring the time from the window's start (or from \(T_{s}\) if it commenced after the window began) to either the window's end or \(T_{t}\) (the time of transition), whichever was sooner. This quantity represents the duration that \(i\) filed patents with, when they were employed by their first assignee organization. We denote this duration as \(D_{i}^{1}\). Similarly, we can compute such a duration (\(D_{i}^{2}\)) where \(i\) filed patents with their second assignee organization, by measuring the length of time from the later of the window's start, or \(T_{t}\), and ending at the earlier of the window's end or \(T_{e}\). An example of this procedure using actual data is illustrated for a small set of transitioning inventors in Figure 4 (a). Using this procedure, we calculate the total 'duration' of innovation in the first organization within a window, by summing all individual durations (namely., the 'blue' bars in Figure 4 (a) within a'red' window, but for all transitioning inventors rather than just the ones shown in the figure) for the first organization. We denote this total duration as \(D1\). Similarly, we can compute the total duration for the second job (the sum of all the green bars in a red window) for all inventors within a window, as \(D2\). Denoting the number of all transitioning inventors as \(I=1340\), we can now compute the proportion \(\frac{D2}{D1}\) within each window, and plot this proportion against the mid-point of the window (which spans 3 months). The results are illustrated in Figure 4 (b). We can use the plot to determine when transition, as a measure of this relative proportion of innovation activity in the first versus the second assignee organization, exhibits the most significant change. We do so by examining the slope of this curve. We are specifically interested in identifying a two-year span with the steepest slope. A steeper slope indicates a more rapid change in the ratio \(\frac{D2}{D1}\). Our analysis indicated that the derivative of the segment from December 2014 to November 2016 remains high (with a short blip in late summer 2016) before declining secularly afterward. This supports the previous result that the most pronounced transition happened in this period, but it also shows that rates had been increasing leading up to this period, before declining continuously after this period. Further investigation of this period, as well as hypothetical causes for a sustained high transition rate, could be an interesting area for further sociological research exploring innovation patterns in Big Tech. ### Rq3 RQ3 aims to investigate whether the transition between assignee organizations has had a positive impact on the inventors' ability to produce more high-impact inventions. 
Figure 3: The number of inventor-transitions from their first assignee organization to their second assignee organization each year from 2010 to 2022. (a) The number of inventor-transitions outflow by company, with the color in each bar indicating the destination company the inventor transitioned to. (b) The number of inventor-transitions inflow by company, with the color in each bar indicating the source company the inventor transitioned from. We note at the outset that determining the impact of an innovation, especially in the technology industry, is a controversial topic [25], because the impact could occur over long time horizons, be an indirect enabler of other technologies, and is not always measurable using simple metrics. Nevertheless, one way of examining the impact is by considering the citation count of an inventor's patents before and after an inventor transitions. Done in aggregate across all patents filed by the 1,340 transitioning inventors from 2010 through 2022, this count allows us to gauge (in a limited, but still reasonable, manner) the potential association between transition and innovation impact. One bias that must be accounted for before computing such an association is that patents filed earlier will obviously be expected to receive higher citation counts. Hence, we standardize or 'normalize' the citation count \(P_{t,o}\) for a patent filed in year \(t\) by an inventor currently in organization \(o\) by adopting the following methodology: for each unique combination of year (\(t\)) and organization (\(o\)), we calculate the mean (\(\bar{P}_{t,o}\)) and standard deviation (\(s_{t,o}\)) of citation counts for all the patents filed within that timeframe and in that organization. We can then normalize \(P_{t,o}\) and represent it as a new variable \(Z_{t,o}\) using standard Z-score normalization: \[Z_{t,o}=\frac{P_{t,o}-\bar{P}_{t,o}}{s_{t,o}} \tag{1}\] By implementing such a Z-score normalization, we are now able to derive a normalized citation count, or estimated impact, for each patent within our dataset. Similarly, we can calculate the estimated impact for an _inventor_ over a given time period by averaging the estimated impacts of all patents filed in that time period where that inventor was a co-author. With these measures of estimated impact, we investigated RQ3 by first calculating the average estimated impact for each inventor using the patents filed in the entire time period _before_ the inventor transitioned8, as well as the inventor's average estimated impact _after_ the transition. Because these two estimates are obtained for each of the 1,340 transitioning inventors, we obtain two 'paired lists' (representing before and after impacts for the same inventor). Subtracting the 'before transition' estimated impact from the 'after transition' impact, we obtain a single vector containing the estimated _excess_ impact (due to transition) for each of the 1,340 inventors. We plot the distribution of this estimated excess impact as a histogram in Figure 5. A paired Student's t-test showed evidence in favor of the research hypothesis that the after-transition impact is greater than the before-transition impact (i.e., we were able to reject the null hypothesis convincingly; N=1,340; p = 0.00025). Footnote 8: Using the terminology introduced earlier for RQ2, this would be the average estimated impact of all patents where the inventor was a co-author in the period \(T_{t}-T_{s}\); recall that both of these estimators apply individually to each inventor. 
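A compact sketch of the normalization in Eq. (1) and the paired comparison just described might look as follows. The column names `citations`, `org`, and `transition_time`, and the input file, are assumptions; the file is assumed to hold only the patents of the 1,340 transitioning inventors.

```python
# Sketch of the Z-score normalization in Eq. (1) and the paired before/after comparison
# (illustrative; column and file names are assumptions).
import pandas as pd
from scipy import stats  # SciPy >= 1.6 for the `alternative` keyword

patents = pd.read_csv("transitioning_inventor_patents.csv")
patents["app_date"] = pd.to_datetime(patents["app_date"])
patents["transition_time"] = pd.to_datetime(patents["transition_time"])
patents["year"] = patents["app_date"].dt.year

# Eq. (1): normalize citation counts within each (filing year, organization) cell.
grp = patents.groupby(["year", "org"])["citations"]
patents["z_impact"] = (patents["citations"] - grp.transform("mean")) / grp.transform("std")

def mean_impact(sub: pd.DataFrame, after: bool) -> float:
    """Average estimated impact of an inventor's patents before or after transition."""
    mask = sub["app_date"] > sub["transition_time"] if after else sub["app_date"] <= sub["transition_time"]
    return sub.loc[mask, "z_impact"].mean()

impact_before, impact_after = [], []
for _, sub in patents.groupby("inventor_key_id"):
    impact_before.append(mean_impact(sub, after=False))
    impact_after.append(mean_impact(sub, after=True))

# Drop inventors with an undefined estimate (e.g., a (year, org) cell with a single patent),
# then run the paired, one-sided test: is after-transition impact greater than before?
pairs = [(b, a) for b, a in zip(impact_before, impact_after) if pd.notna(b) and pd.notna(a)]
b, a = zip(*pairs)
t, p = stats.ttest_rel(a, b, alternative="greater")
print(f"t = {t:.3f}, p = {p:.5f}")
```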
While this result would need to be replicated for other measures of estimated impact, and other normalization procedures, it does suggest that the innovation potential of Big Tech has likely been improved, rather than diminished, due to such switching activity (whether voluntarily intended9 by the worker or not). Footnote 9: Transitioning or switching does not always imply volition on the worker's part. In some cases, the first assignee organization may have terminated the worker, while in other cases, the worker may have been induced to leave and join the second organization. Controlling for volition in such analyses is an important, but difficult, study that future sociological research may want to consider. Figure 4: (a) An illustration of the 3-month moving window for a set of 20 inventors. The start of the blue bar (\(T_{s}\)) marks the beginning of the inventor's 'innovation start' time. The transition time to the second organization is denoted by \(T_{t}\), where the blue bar ends and the green bar begins. The end of the green bar (\(T_{e}\)) marks the 'innovation end' time; (b) The trend of the ratio \(\frac{D2}{D1}\) ('proportion of the job 2 duration') between 2010 and 2022, and its first derivative. Figure 5: Distribution of estimated excess impact, using the methodology described in the main text, for the 1,340 transitioning inventors. Note that the y-axis is on the log scale. ## 4 Conclusion and Future Work In investigating RQ1, we found that transitioning inventors can play key structural roles in patent co-authorship networks in Big Tech firms in the period of study, much greater than would be predicted by chance. For example, their removal leads to greater fragmentation, as measured by the change in the number of connected components, compared to the chance removal of an equivalent number of nodes. Results for RQ2 showed that transitioning over this period did not exhibit a regular or 'secular' trend: rather, there was a significant outflow of inventors from Microsoft and a prominent inflow into Google, Amazon, and Meta. The rate of transition peaked between 2015 and 2017 and stayed consistently high during this period, suggesting a marked shift in the innovation ecosystem of these organizations. Following this period, the rate of transition declined and ultimately reached a low level by the time of the pandemic. Finally, in investigating RQ3, we found that estimated excess impact, as measured using a normalized citation score of patents that a transitioning inventor co-authored (before and after transition), was positive and statistically significant. This finding suggests that the productivity of transitioning innovators, on average, across Big Tech, was not diminished by transition. One interesting area of future research is to investigate some of the same research questions but through a causal lens, using additional data (e.g., survey and employment records) and causal inference techniques. Investigating questions like RQ3 by considering a wider body of estimated impact measures is also a fruitful area for further investigation. Finally, using similar techniques to investigate post-pandemic effects on some of these measures (which will likely only manifest a few years from now) is also of interest.
2302.13153
Directed Diffusion: Direct Control of Object Placement through Attention Guidance
Text-guided diffusion models such as DALLE-2, Imagen, eDiff-I, and Stable Diffusion are able to generate an effectively endless variety of images given only a short text prompt describing the desired image content. In many cases the images are of very high quality. However, these models often struggle to compose scenes containing several key objects such as characters in specified positional relationships. The missing capability to ``direct'' the placement of characters and objects both within and across images is crucial in storytelling, as recognized in the literature on film and animation theory. In this work, we take a particularly straightforward approach to providing the needed direction. Drawing on the observation that the cross-attention maps for prompt words reflect the spatial layout of objects denoted by those words, we introduce an optimization objective that produces ``activation'' at desired positions in these cross-attention maps. The resulting approach is a step toward generalizing the applicability of text-guided diffusion models beyond single images to collections of related images, as in storybooks. Directed Diffusion provides easy high-level positional control over multiple objects, while making use of an existing pre-trained model and maintaining a coherent blend between the positioned objects and the background. Moreover, it requires only a few lines to implement.
Wan-Duo Kurt Ma, J. P. Lewis, Avisek Lahiri, Thomas Leung, W. Bastiaan Kleijn
2023-02-25T20:48:15Z
http://arxiv.org/abs/2302.13153v3
# Directed Diffusion: Direct Control of Object Placement through Attention Guidance ###### Abstract Text-guided diffusion models such as DALLE-2, IMAGEN, and Stable Diffusion are able to generate an effectively endless variety of images given only a short text prompt describing the desired image content. In many cases the images are very high quality as well. However, these models often struggle to compose scenes containing several key objects such as characters in specified positional relationships. Unfortunately, this capability to "direct" the placement of characters and objects both within and across images is crucial in storytelling, as recognized in the literature on film and animation theory. In this work we take a particularly straightforward approach to providing the needed direction, by injecting "activation" at desired positions in the cross-attention maps corresponding to the objects under control, while attenuating the remainder of the map. The resulting approach is a step toward generalizing the applicability of text-guided diffusion models beyond single images to collections of related images, as in storybooks. To the best of our knowledge, our Directed Diffusion method is the first diffusion technique that provides positional control over multiple objects, while making use of an existing pre-trained model and maintaining a coherent blend between the positioned objects and the background. Moreover, it requires only a few lines to implement 1. Footnote 1: Our project page [https://hohomu-vicml.github.io/DirectedDiffusion.Page](https://hohomu-vicml.github.io/DirectedDiffusion.Page) 1 Footnote 1: Our project page [https://hohomu-vicml.github.io/DirectedDiffusion.Page](https://hohomu-vicml.github.io/DirectedDiffusion.Page) 2023 ## CCS Concepts Computing methodologies Neural networks; Computer graphics. ## Keywords denoising diffusion, text-to-image generative models, artist guidance, storytelling ### ACM Reference Format Wan-Duo Kurt Ma, J.P. Lewis, W. Bastiaan Kleijn, and Thomas Leung. 2023. Directed Diffusion: Direct Control of Object Placement through Attention Guidance. In _Proceedings of ArXiv_. ACM, New York, NY, USA, 9 pages. ## 1. Introduction Text-to-image models such as DALL-E 2 (Dall et al., 2017), Imagen (Maswani et al., 2017) and others have revolutionized image generation, and platforms such as Stable Diffusion (Beng et al., 2017) and similar systems have democratized this capability, as well as presenting new ethical challenges. These systems promise to generate arbitrary images simply by typing a "prompt" or description of the desired image. It is not always highlighted, however, that experimentation and practical experience are often needed if the user has a particular result in mind. Text-to-image diffusion methods can fail to produce the desired results, requiring repeated trial-and-error experiments with "prompt engineering" including negative prompts, different random seeds, and hyperparameters including the classifier-free guidance scale, number of denoising steps, and scheduler (Srivastava et al., 2016). This is particularly true for complex prompts involving descriptions of several objects. For example, a prompt such as "_a bird flying over a house_" fails to generate the house with some seeds, and in other cases renders both the bird and house but without the "on" relationship (Fig. 2). The time required for experimentation is made worse if optimization or fine-tuning computation is required, especially on a per-image basis. 
These difficulties have led to the creation of "prompt marketplaces" (Beng et al., 2017) where expert users share and sell successful settings. This experimentation becomes prohibitive when the goal is to use the images for _storytelling_, since text-to-image methods offer no control over the required positioning of characters and objects. Prompts describing the content of an image do not indicate _where_ objects should be placed, and indicating desired positions in the prompt completely fails (Figs. 1, 2). This is probably because the training data of text-image pairs is gathered from public sources, and people rarely annotate the location of objects in an image ("_A photo of grandmother and her cat. Grandmother is in the center of the picteu, and the cat is to her left._") because it is generally obvious to the viewer. As a consequence, extensive and tedious repeated trials are needed to obtain an image where the desired objects exist and their randomly generated positions are acceptable. A story generally involves a character interacting with the environment or with other characters. Conveying a story with images requires creating not just images with the desired semantic content, but images with objects in suitable relative _positions_ ("The thief looked back at the princess before hurrying toward the door"). The subject of "Film Grammar" (Beng et al., 2017) seeks to codify established principles for positioning characters relative to each other and the camera. Film Language guides include principles such as "_To make things look natural, put lines, edges or faces about a third of the way across, up or down the picture 'frame'. To make them look formal, put them in the middle; and to make things seem uncomfortable, make the shot unbalanced or put it at on a slant_" (Krause et al., 2017). The staging principle in traditional animation (Beng et al., 2017) similarly recognizes the importance of controlling the placement of characters and objects. Research is addressing some limitations of text-guided diffusion methods, including methods that define new text tokens to denote specific and consistent character or object identities, provide mask-guided inpainting of particular regions, manipulate text guidance representations, and mitigate the common failure of guidance using CLIP-like models (Srivastava et al., 2016) to understand the target of attributes such as colors. Existing methods still generally struggle to synthesize _multiple_ objects with desired positional relationships (Section 4). Our work is a further step toward guiding text-based diffusion, by introducing _coarse positional control for multiple objects_ as needed for storytelling. In this application only _coarse_ positional control is needed - for example a director might instruct an actor to "start from over there and walk toward the door", rather than specifying the desired positions to floating point precision as is done in animation software packages. We take inspiration from the observation that position is established early in the denoising process (Krause et al., 2017) and from the fact that the cross-attention maps have a clear spatial interpretation (see Fig. 3). One proposal is to edit the cross-attention maps during the early denoising steps, for example, to simply attenuate the cross-attention map for a particular word at all locations where the corresponding object should not appear. 
While this sometimes works, it can also fail if the random initialization is such that little energy appears in the retained region. Instead, we fully control the cross-attention maps for objects of interest by _injecting_ attention into the desired spatial locations. See Section 3 for details. Our method is implemented using the Python Diffusers (Beng et al., 2017) implementation of latent diffusion (LDM) and makes use of the available pre-trained model. Our method makes the following contributions: * **Storytelling.** Our method is a first step towards storytelling by providing consistent control over the positioning of multiple objects. * **Compositionality.** It provides an alternate and direct approach to "compositionality" by providing explicit positional control. Figure 2. Stable Diffusion (Beng et al., 2017) failure cases (please enlarge figures to see details). (a) _A bird flying over a house_: no house, bird is not over house, flying house. (b) _A bear watching a bird_: no bird, bear is not watching bird. (c) _A cat watching a dog eat cake_: cat is eating, dog is missing. (d) _A nature photo with a bird in the upper left_: “upper left” is ignored. * **Consistency.** The positioned objects seamlessly and consistently fit in the environment, rather than appearing as a splice from another image with inconsistent interaction (shadows, lighting, etc.) This consistency is due to two factors. First, we use a simple bounding box to position and allow the denoising diffusion process to fill in the details, whereas specifying position with a pixel-space mask runs the risk that it may be inconsistent with the shape and position implicit in the early denoising results. Second, subsequent diffusion steps operate on the entire image and seamlessly produce consistent lighting and interactions such as the water splash of a dog jumping in a pool (Fig. 8). * **Simplicity.** Image editing methods for text-to-image models often require providing detailed masks, depth maps, or other precise guidance information. This information is not available when synthesizing images _ab initio_, and it would be laborious to create. Our method allows the user to control the desired locations of objects simply by specifying approximate bounding boxes. From a computational point of view, our method requires no fine tuning or other optimization, and can be added to an existing text-driven diffusion model with cross-attention guidance with only a few lines of code. Storytelling also requires the ability to generate particular objects rather than generic instances (_this_ cat rather than "any cat"), and our algorithm is complementary to methods (Golovolovolov et al., 2012; Krizhevsky et al., 2014) that address this. For clarity, _the examples in the paper do not make use of these other algorithms_. ## 2. Related Work Denoising diffusion models (Golovolovolov et al., 2012; Krizhevsky et al., 2014; Krizhevsky et al., 2014; Krizhevsky et al., 2014) add Gaussian noise to the data in a number of steps and then train a model to incrementally remove this noise. Aspects of the mathematics are presented in most papers and good tutorials are available (Krizhevsky et al., 2014), so we will simply mention several high-level intuitions. The end-result of the noising process is effectively a high-dimensional Gaussian, which is easy to sample. Denoising random samples from this distribution results in novel images (or other data). 
The mathematical derivation in (Golovolovolov et al., 2012; Krizhevsky et al., 2014) is somewhat similar to a hierarchical version of a VAE (Golovolovolov et al., 2012), although with a fixed encoder and a "latent" space with the same dimensionality as the data. The fixed encoder provides an easy closed-form posterior for each step, allowing the overall loss to split into a sum over uncoupled terms for each denoising step, resulting in faster training. (Krizhevsky et al., 2014) introduced an alternate derivation building on score matching (Golovolovolov et al., 2012). From this perspective adding noise is equivalent to convolving the probability density of the noise with the data, thus blurring the data distribution and providing gradients toward the data manifold from distant random starting points. The denoising process in (Golovolovolov et al., 2012; Krizhevsky et al., 2014) is stochastic, with an interpretation as Langevin sampling (Krizhevsky et al., 2014). A deterministic variant of DDM (Denoising Diffusion Implicit Models) was introduced in (Krizhevsky et al., 2014) and has advantages for editing applications. Text-to-image models condition the image generation process on the text representation from joint text-image embedding models such as CLIP (Krizhevsky et al., 2014), thereby providing the ability to synthesize images simply by typing a phrase that describes the desired image. While text-to-image models have employed GANs as well as autoregressive models and transformers (Bengio et al., 2015; Chen et al., 2016; Li et al., 2017), a number of successful approaches use diffusion models as the underlying image generation mechanism (Golovolovolov et al., 2012; Krizhevsky et al., 2014; Krizhevsky et al., 2014). This choice is likely motivated both by the stable training of these models and their ability to represent diverse multi-subject datasets. Among these systems, latent diffusion (LDM) (Krizhevsky et al., 2014) has released both code and trained weights (Chen et al., 2016) under permissive license terms, resulting in widespread adoption and a large ecosystem of related tools. This approach runs denoising diffusion in the latent space of a carefully trained autoencoder, providing accelerated training and inference. Latent diffusion implements classifier-free guidance using a cross-attention scheme. A projection of the latent image representation from LDM's U-net is used as the query, with projections of the CLIP embedding of the prompt supplying the key and value. While the key and value are constant, the query is changing across denoising steps, allowing it to iteratively extract needed information from the text representation (Krizhevsky et al., 2014). The literature on applications of diffusion models is difficult to fully survey, with many new papers appearing each week. A number of works have noted that the capabilities of text-guided diffusion can be extended with relatively simple modifications (sometimes requiring only a few lines of code). For the case of image editing, SDEdit (Golovolov et al., 2012) runs a given image part way through the noising process and then denoises the result. This has an interpretation of "projecting on the image manifold", and allows crude sketches to be denoised into sophisticated pictures. 
However there is a trade-off in the extent of the noising/denoising - using the full process "forgets" all knowledge of the input image and just produces an unrelated random sample, whereas denoising for too few steps stays close to the input guidance image and inherits any of its imperfections. Blended Diffusion (Difman et al., 2016) uses a user-provided mask to blend a noised version of the background image with the CLIP-guided synthesized inpainted region at each step. The authors point out an analogy to classic frequency-selective pyramidal blending, with the low frequency features being merged earlier in the denoising process. Magicmix (Magicmix, 2016) produces convincing novel images such as _"an espresso machine in the shape of a dog"_. Their technique simply starts the denoising process guided by the text prompt corresponding to the desired overall shape ("dog") and switches to the prompt describing the content ("espresso machine") at some step in the denoising process. This exploits the observation that the early denoising steps establish the overall position and shape of an object while the later steps fill in the "semantic details" (Fig. 3). Prompt-to-prompt (Golovolov et al., 2012) demonstrates that powerful text-driven image editing can be obtained by substituting or weighting the cross-attention maps corresponding to particular words in the prompt. The paper also clearly illustrated the fact that the cross-attention maps have a spatial interpretation. Structured diffusion guidance (Chen et al., 2016) addresses the attribute binding problem in which text-to-image models often fail to associate attributes such as colors with the correct objects. Their approach passes the prompt to obtain noun phrases with their associated attributes, and then these additional text embeddings are combined with that for the original prompt in generating the cross-attention. Other research goes beyond manipulations of pretrained diffusion models by introducing optimization or fine tuning to produce additional effects (Golovolov et al., 2012; Krizhevsky et al., 2014; Krizhevsky et al., 2014; Krizhevsky et al., 2014). Closer to our work, (Kraemer et al., 2017) merges the results of denoising guided by multiple prompts using conjunction and negation operations. The result reliably allows producing an image composed of multiple text prompts with different subjects (a common failure case of text-to-image models), but provides no positional control over the elements. We approach this problem more directly, by specifying the position of multiple objects through their corresponding attention maps. Concurrent with our work, (Kraemer et al., 2017) demonstrate high-quality inpainting driven by a combination of text and detailed masks. They edit the cross-attention map for the subject to be inpainted to correspond to a low-resolution version of the mask. This addresses a problem with some mask-based editing methods where the shape of the mask can be inconsistent with the shape implicitly defined by the early noise layout, thus resulting in poor alignment of the synthesized region to the mask. Our work also uses editing of the cross attention maps, but we inject activation in the desired spatial region to guarantee it is non-zero. Our method does not have an issue with mask vs. noise alignment since masks are not used. We also target full synthesis rather than inpainting. 
The work (Kraemer et al., 2017) introduces an alternate approach to synthesizing images for storytelling, using an auto-regressive formulation of latent diffusion in which each new image is conditioned on previous captions and images. This approach produces good results when given caption fragments that make sense only in the context of the preceding captions and images. The positions of subjects are not controlled, however, making it difficult to satisfy the position-based storytelling principles.

## 3. Method

Our objective is to create a controllable synthetic image from a text-guided diffusion model, without any training, by manipulating the attention from cross-attention layers. We use the following notation: bold capital letters (e.g., **M**) denote a matrix or a tensor, vectors are represented with bold lowercase letters (e.g., **m**), and lowercase letters (e.g., _m_) denote scalars. The DD procedure controls the placement of objects corresponding to several groups of selected words in the prompt; we refer to these as _directed objects_ and _directed prompt words_, respectively. The term _trailing attention_ is defined in Section 3.3. DD builds on LDM (Kraemer et al., 2017), and it is assumed that the reader is familiar with that work. Our method is inspired by the intermediate result shown in Figure 3 (Top). As shown in this figure, the overall position and shape of a synthesized object appears near the beginning of the denoising process, while the final denoising steps do not change this overall position but add details that make it identifiable as a particular object (cat). This observation has also been exploited in previous work such as (Kraemer et al., 2017). An additional phenomenon can be found in the cross-attention maps, as shown in Figure 3 (bottom two rows). The cross-attention map has a spatial interpretation: it expresses the "correlation" between locations in the image and the meaning of a particular word in the prompt. For instance, we can see the cat shape in the cross-attention map associated with the word "cat". DD utilizes this key observation to spatially guide the diffusion denoising process to satisfy the user's requirement. In this section, we first give an overview of the Directed Diffusion pipeline in Section 3.1, then we detail our core algorithm in Section 3.2, and finally, we present the investigation of cross-attention that inspired our method.

### Pipeline

Given the prompt \(\mathcal{P}\) and the associated information \(\mathcal{U}\) describing the directed objects, DD synthesizes an image \(\mathbf{x}_{0}\) that respects both the spatial directions in \(\mathcal{U}\) and the semantics of the prompt \(\mathcal{P}\), using a pre-trained Latent Diffusion Model (LDM) (Kraemer et al., 2017) without further training. The associated information comprises a set of parameters \(\mathcal{U}=\{\mathcal{B},\mathcal{I},\mathcal{T}\}\), respectively containing the list of bounding boxes to position the directed objects, a list of associated word indices in the prompt, and a list of indices of the trailing attention maps. We detail these shortly in Sec. 3.2 and Sec. 3.3. Following conventional notation, we define \(\mathbf{x}_{t}\) and \(\mathbf{z}_{t}\) as the synthesized LDM image and the predicted latent noise at time step \(t\), respectively, where \(t\in\{T,\cdots,0\}\); \(\mathbf{x}_{T}\) is the initial Gaussian noise and \(\mathbf{x}_{0}\) is the reconstructed image.
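As a rough illustration (not the data structures of the actual implementation), the direction information \(\mathcal{U}=\{\mathcal{B},\mathcal{I},\mathcal{T}\}\) amounts to the following per-region bookkeeping; the example values below are hypothetical.

```
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class DirectedRegion:
    """One directed object r: a fractional bounding box B_r plus the prompt-token
    indices I_r and trailing-attention-map indices T_r that will be edited."""
    bbox: Tuple[float, float, float, float]   # (left, right, top, bottom), fractions of image size
    prompt_indices: List[int]                 # I_r (1-based; index 0 is the ignored [CLS] slot)
    trailing_indices: List[int]               # T_r

# A single-region example: direct a two-word phrase (hypothetically tokens 5-6 of some prompt)
# toward the left half of the image, editing three trailing attention maps.
directions = [DirectedRegion(bbox=(0.0, 0.5, 0.0, 1.0),
                             prompt_indices=[5, 6],
                             trailing_indices=[8, 9, 10])]
```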
The principle of this work is based on this concept: first position the objects, then refine the results. This is reflected in the overall Directed Diffusion pipeline (Figure 4), which can be divided into two stages: _Attention Editing_ and _Conventional SD Denoising_.

_Attention Editing._ From a high-level perspective, this stage focuses on spatially editing the cross-attention map used for conditioning in latent diffusion. It operates during the diffusion steps \(t\in[T,T-N)\) that establish the object location, where \(N\) is a parameter determining the number of steps in this stage. As described in Sec. 3.2, this stage modifies the cross-attention map during the first \(N\) denoising steps by amplifying the region inside \(\mathcal{B}\) while down-weighting the surrounding areas.

Figure 3. (Top, from left to right): The reverse LDM denoising process from the initial stage to the end of the process. Note that the position of the cat is evident early in the process (red box); however, the details that define it as a cat are not yet clear. (Bottom): The cross-attention maps associated with each word in the prompt, where the first and second rows are from the first denoising step and the final step, respectively. The cat's shape is clearly visible in the cross-attention maps for the word "cat" (and in the maps for "sitting" and "on" in the final step).

_Conventional SD Denoising._ Following the attention editing stage, this stage runs the standard LDM process using classifier-free guidance [13] over the remainder of the reverse diffusion denoising steps \(t\in[T-N,\cdots,0)\). Note that the only difference between the two stages is the cross-attention editing. Algorithm 1 describes our DD pipeline. The key is the function DDCrossAttnEdit(\(\cdot\)) (line 8) that is called during the initial attention editing stage. This function is found in Algorithm 2 in Sec. 3.2. The rest of the pseudo-code is similar to the conventional LDM algorithm.

```
1: Input: A diffusion model \(\text{DM}(\cdot)\), prompt \(\mathcal{P}\), side information \(\mathcal{U}\), the number of editing steps \(N\)
2: Parameters: Classifier-free guidance scale \(w_{g}\)
3: Output: A synthesized image \(\mathbf{x}_{0}\)
4:
5: procedure DirectedDiffusion(\(\text{DM}(\cdot)\), \(\mathcal{P}\), \(\mathcal{U}\), \(N\))
6:   for \(t\) from \(T\) down to 0 do
7:     if \(t\in[T,T-N)\) then
8:       \(\mathbf{z}_{\text{cond}}\leftarrow\text{DDCrossAttnEdit}(\text{DM}(\cdot),\mathcal{U},\mathbf{z}_{t},\mathcal{P})\)
9:     else
10:      \(\mathbf{z}_{\text{cond}}\leftarrow\text{DM}(\mathbf{z}_{t},\mathcal{P})\)
11:     endif
12:     \(\mathbf{z}_{\text{uncond}}\leftarrow\text{DM}(\mathbf{z}_{t},\mathbf{0})\)
13:     \(\mathbf{z}_{t-1}\leftarrow\mathbf{z}_{\text{cond}}+w_{g}*(\mathbf{z}_{\text{uncond}}-\mathbf{z}_{\text{cond}})\)
14:   endfor
15:   return \(\mathbf{x}_{0}\)
16: endprocedure
```
**Algorithm 1** Directed Diffusion Denoising Pipeline

To achieve our goal, in addition to the text prompt (\(\mathcal{P}\)), the user specifies "direction" information \(\mathcal{U}=\{\mathcal{B},\mathcal{I},\mathcal{T}\}\) to guide the reverse LDM process, where \(\mathcal{B}=\{\mathcal{B}_{r}\}_{r=1}^{R}\), \(\mathcal{I}=\{\mathcal{I}_{r}\}_{r=1}^{R}\), \(\mathcal{T}=\{\mathcal{T}_{r}\}_{r=1}^{R}\), and \(R\) denotes the number of directed locations. For example, if the prompt is _"A bear watching a flying bird"_, we aim to provide control over the spatial relationship between the bird and the bear in the generated image, with \(R=2\) in this case. This example will be revisited at the end of Sec. 3.3.
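For readers who prefer code to pseudocode, the loop below is a minimal Python sketch of Algorithm 1. The denoiser and scheduler interfaces are placeholder assumptions (not the LDM API), and the noise combination is written in the standard classifier-free-guidance form.

```
import torch

def directed_diffusion(denoise, scheduler_step, z_T, prompt, directions,
                       num_edit_steps, guidance_scale=7.5, num_steps=50):
    """Sketch of Algorithm 1: edit cross-attention for the first N steps,
    then run standard denoising for the remaining steps.

    denoise(z, prompt, directions) -> predicted noise; a non-None `directions`
        stands in for DDCrossAttnEdit, None for the unedited model.
    scheduler_step(noise, step, z) -> latent at the previous timestep.
    Both callables are assumed interfaces, not the real library API.
    """
    z = z_T
    for step in range(num_steps):
        edit = directions if step < num_edit_steps else None    # attention-editing stage
        eps_cond = denoise(z, prompt, edit)                      # conditional prediction
        eps_uncond = denoise(z, "", None)                        # unconditional prediction
        # standard classifier-free-guidance combination of the two predictions
        eps = eps_uncond + guidance_scale * (eps_cond - eps_uncond)
        z = scheduler_step(eps, step, z)
    return z                                                     # decoded to x_0 by the LDM autoencoder

# Dummy stand-ins so the sketch runs end to end
fake_denoise = lambda z, p, d: torch.zeros_like(z)
fake_step = lambda eps, step, z: z - 0.01 * eps
out = directed_diffusion(fake_denoise, fake_step, torch.randn(1, 4, 64, 64),
                         "A cat sitting on a car", directions=None, num_edit_steps=10)
print(out.shape)   # torch.Size([1, 4, 64, 64])
```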
Specifically, a bounding box \(\mathcal{B}_{r}=\{(x,y)\mid b_{\text{left}}\times w\leq x\leq b_{\text{right}}\times w,\; b_{\text{top}}\times h\leq y\leq b_{\text{bottom}}\times h\}\) is the set of all pixel coordinates inside the bounding box of resolution \(w\times h\). This region will approximately guide the location of the subject indicated by the \(i\)th prompt word. The exact location and shape of the subject results from the interaction of the Gaussian activation window inside this box with the activations \(\mathbf{z}_{i}\) in the U-net, hence the object is not restricted in shape and may extend somewhat outside the box. In our implementation, \(\mathcal{B}_{r}\) is generated from a tuple of four scalars representing the boundary of the bounding box, denoted as \(\mathbf{b}=(b_{\text{left}},b_{\text{right}},b_{\text{top}},b_{\text{bottom}})\), where \(b_{\text{left}},b_{\text{right}},b_{\text{top}},b_{\text{bottom}}\in[0,1]\) describe the bounding box of the directed object position expressed as fractions of the image size. For instance, \(\mathbf{b}=(0.0,0.5,0.0,1.0)\) denotes the left half of the image. We define the set \(\mathcal{I}_{r}\subset\{k\mid k\in\mathbb{N},1\leq k\leq|\mathcal{P}|\}\) as the indices of the cross-attention maps associated with the words in \(\mathcal{P}\) for region \(r\). Similarly, \(\mathcal{T}_{r}\subset\{k\mid k\in\mathbb{N},|\mathcal{P}|<k\leq 77\}\) are the indices of trailing attention maps to be edited for the \(r\)th region, as will be described in Sec. 3.3.

Algorithm 2 is the core of DD. LDM implements denoising with a U-net architecture, where text guidance from the prompt is mapped to layers of the U-net using cross-attention layers. In each of these cross-attention layers, DD modifies the selected indices \(\mathbf{M}_{[\mathcal{K},\dots]}\) (using Python slicing notation) of the cross-attention maps \(\mathbf{M}=\text{Softmax}(\mathbf{Q}_{l}(\mathbf{z}_{t})\times\mathbf{K}_{l}(\mathcal{P})^{T})\), where \(\mathcal{K}=\mathcal{I}\cup\mathcal{T}\), and \(\mathbf{Q}_{l}(\cdot)\), \(\mathbf{K}_{l}(\cdot)\) are the query and key in the cross-attention module \(l\). These then weight the corresponding value \(\mathbf{V}_{l}(\cdot)\). For pedagogical simplicity, Algorithm 2 handles directing only a _single_ region (\(R=1\)). The extension to multiple regions is a straightforward modification requiring an additional loop.

Figure 4. Directed Diffusion pipeline overview: Our pipeline replaces the early steps \(\{T,\cdots,T-N\}\) of the latent diffusion reverse (denoising) process to "inject attention" into the cross-attention maps for directed prompt words in order to direct the position of the corresponding objects. Please see the details in the main text.

Figure 5. Directed Diffusion pipeline in detail: The attention maps are divided into prompt attention maps \(\text{M}_{\mathcal{I}}\) and trailing attention maps \(\text{M}_{\mathcal{T}}\), according to the words in the prompt. Then, we edit selected maps by injecting activation in a Gaussian window while down-weighting the surrounding area. Note that the figure shows only a single region for the phrase "a car". The same mechanism scales to multiple words/phrases and regions. Please see the main text for details.
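The cross-attention map \(\mathbf{M}\) above can be illustrated with a few lines of PyTorch; the dimensions and random projection matrices below are stand-ins, not the trained LDM weights.

```
import torch
import torch.nn.functional as F

hw, d_model, d_text, n_tokens, d = 16 * 16, 320, 768, 77, 40
z_t  = torch.randn(hw, d_model)          # flattened U-net feature map at one layer l
text = torch.randn(n_tokens, d_text)     # text embedding of the prompt P (padded to 77 tokens)

W_q = torch.randn(d_model, d)            # stand-ins for the learned projections Q_l, K_l, V_l
W_k = torch.randn(d_text, d)
W_v = torch.randn(d_text, d)

q, k, v = z_t @ W_q, text @ W_k, text @ W_v
M = F.softmax(q @ k.T / d ** 0.5, dim=-1)   # (hw, 77): one spatial map per token
out = M @ v                                 # token-weighted values fed back into the U-net

# Editing the maps for selected token indices K (Alg. 2) means modifying columns of M:
K_idx = [1, 2]                              # e.g. the two directed prompt words
print(M[:, K_idx].shape)                    # torch.Size([256, 2])
```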
Given \(\mathcal{B}_{r}\) for the region \(r\), we generate a modified cross-attention map using these two functions:
\[\text{WeakenMask}(\mathcal{B}_{r}^{\prime})_{xy}=\begin{cases}c,&(x,y)\in\mathcal{B}_{r}^{\prime}\\ 1,&\text{otherwise,}\end{cases}\]
where \(\mathcal{B}_{r}^{\prime}\) is the complement of \(\mathcal{B}_{r}\), and
\[\text{StrengthenMask}(\mathcal{B}_{r})_{xy}=\begin{cases}f(x,y),&(x,y)\in\mathcal{B}_{r}\\ 0,&\text{otherwise,}\end{cases}\]
where \(f(\cdot)\) denotes the function that "injects attention" to amplify the region \(\mathcal{B}_{r}\). In our implementation, we use a Gaussian window of size \(\sigma_{x}=b_{w}/2,\sigma_{y}=b_{h}/2\) to generate the corresponding weight, where \(b_{w}=\text{ceil}((b_{\text{right}}-b_{\text{left}})\times w)\) and \(b_{h}=\text{ceil}((b_{\text{bottom}}-b_{\text{top}})\times h)\) are the width and height of \(\mathcal{B}_{r}\). The \(\text{WeakenMask}(\cdot)\) and \(\text{StrengthenMask}(\cdot)\) functions "direct" selected subjects from the prompt toward specific locations of the image in the LDM denoising process. \(\text{StrengthenMask}(\cdot)\) inserts higher activation into the provided bounding box with a Gaussian window, while \(\text{WeakenMask}(\cdot)\) attenuates the region outside \(\mathcal{B}\) by multiplying by \(c<1\).

```
1: Input: A diffusion model \(\text{DM}(\cdot)\), prompt \(\mathcal{P}\), directions \(\mathcal{U}=\{\mathcal{B},\mathcal{I},\mathcal{T}\}\), number of editing steps \(N\), current step \(t\)
2: Output: Predicted conditional noise \(\mathbf{z}_{\text{cond}}\) at time \(t\) from \(\text{DM}(\cdot)\)
3: Parameters: Gaussian weighting scalar \(c_{g}\)
4: procedure DDCrossAttnEdit(\(\text{DM}(\cdot)\), \(\mathcal{U}\), \(\mathbf{z}_{t}\), \(\mathcal{P}\))
5:   for \(l\in\text{layers}(\text{DM}(\mathbf{z}_{t},\mathcal{P}))\) do
6:     if \(\text{type}(l)\in\text{CrossAttn}\) then
7:       \(\mathbf{M}=\text{Softmax}(\mathbf{Q}_{l}(\mathbf{z}_{t})\cdot\mathbf{K}_{l}(\mathcal{P})^{T})\)
8:       \(\mathcal{K}\leftarrow\mathcal{I}\cup\mathcal{T}\)
9:       \(\mathbf{W}\leftarrow\text{WeakenMask}(\mathcal{B}_{r}^{\prime})\)
10:      \(\mathbf{S}\leftarrow\text{StrengthenMask}(\mathcal{B}_{r})\)
11:      \(\mathbf{M}_{[\mathcal{K},\dots]}\leftarrow\mathbf{M}_{[\mathcal{K},\dots]}\odot\mathbf{W}+c_{g}\cdot\mathbf{S}\)
12:      \(\mathbf{z}_{t}\leftarrow\mathbf{M}\cdot\mathbf{V}_{l}(\mathcal{P})\)
13:     else
14:      \(\mathbf{z}_{t}\gets l(\mathbf{z}_{t})\)
15:     endif
16:   endfor
17:   return \(\mathbf{z}_{t}\)
18: endprocedure
```
**Algorithm 2** Directed Diffusion Cross-Attention Editing Algorithm

### Trailing Attention Maps

A number of text-to-image models use CLIP (Zhou et al., 2017) embeddings for the text guidance. CLIP accepts an input text representation with up to 77 tokens, including the initial [CLS] token representing the sentence-level classification. The number of tokens \(|\mathcal{P}|\) in the prompt \(\mathcal{P}\) is generally less than this number. We call the remaining \(77-|\mathcal{P}|-1\) attention maps corresponding to the non-prompt tokens the _trailing attention maps_ \(\mathbf{T}\). Note that the output embedding for [CLS] is not used in our approach. The trailing attention maps have recently attracted interest for other purposes. For instance, the work (Zhou et al., 2017) finds that these trailing maps generally govern the environment or background pixels that are outside the region guided by the prompt. In this paper, we empirically find that the trailing attention maps play a crucial role in controlling the consistency of object interactions in DD.
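A possible NumPy rendering of the two masks and the editing step of Algorithm 2 (line 11) is sketched below; the Gaussian window construction and the constants are illustrative choices consistent with the description above, not the exact implementation.

```
import numpy as np

def make_masks(b, w, h, c=0.1):
    """Build WeakenMask and StrengthenMask for a fractional box b = (left, right, top, bottom).

    WeakenMask is 1 inside the box and c < 1 outside; StrengthenMask is a separable
    Gaussian window (sigma = half the box size) inside the box and 0 outside.
    """
    left, right = int(round(b[0] * w)), int(round(b[1] * w))
    top, bottom = int(round(b[2] * h)), int(round(b[3] * h))
    bw, bh = max(right - left, 1), max(bottom - top, 1)

    weaken = np.full((h, w), c, dtype=np.float32)
    weaken[top:bottom, left:right] = 1.0

    ys = np.arange(bh) - (bh - 1) / 2.0
    xs = np.arange(bw) - (bw - 1) / 2.0
    gauss = (np.exp(-ys[:, None] ** 2 / (2 * (bh / 2) ** 2)) *
             np.exp(-xs[None, :] ** 2 / (2 * (bw / 2) ** 2)))
    strengthen = np.zeros((h, w), dtype=np.float32)
    strengthen[top:bottom, left:right] = gauss
    return weaken, strengthen

# Editing one attention map M_k (flattened over h*w in the real model), as in Alg. 2 line 11
h, w, c_g = 64, 64, 0.5
W, S = make_masks((0.0, 0.5, 0.0, 1.0), w, h)     # direct toward the left half of the image
M_k = np.random.rand(h, w).astype(np.float32)     # toy attention map for one token
M_k_edited = M_k * W + c_g * S
```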
Figure 6 shows the key motivation for editing the trailing attention maps. When the number of edited trailing attention maps is small (e.g., first row, \(|\mathcal{T}_{0}|=5\)), the image \(\mathbf{x}_{0}\) is barely modified. In contrast, when the number of edited trailing attention maps is too large (e.g., last row, \(|\mathcal{T}_{0}|=20\)), \(\mathbf{x}_{0}\) changes significantly, loses all semantics, and satisfies only the bounding box information. Using an intermediate number of edited trailing attention maps results in the image simultaneously portraying the desired semantic content at the desired location(s).

To give an example summarizing the DD guidance information, consider the prompt we mentioned earlier: _"A bear watching a flying bird"_. The arguments of DD for this particular case are \(\mathcal{I}_{1}=\{1,2\},\mathcal{I}_{2}=\{4,5,6\}\), indicating the words "a bear" and "a flying bird". If the numbers of trailing attention maps to edit are selected as 3 and 5, then \(\mathcal{T}_{1}=\{7,8,9\},\mathcal{T}_{2}=\{7,8,9,10,11\}\). Note that this example is given to illustrate the most general case; however, in our experiments we chose the same set of trailing attention maps for each region. Also note that 1-based indexing is used, since index zero corresponds to the ignored [CLS] token.

Figure 6. The number of trailing attention maps (5, 10, 15, 20 maps on the vertical axis) versus the number of attention map editing steps (1, 3, 5, 10, and 15 steps on the horizontal axis). The prompt is "_A photo of a squirrel eating a burger_" with the directed object "burger" positioned at the bottom left. The best results are obtained with an intermediate number of editing steps and edited trailing attention maps, as in the case of the image with the red border.

## 4. Experiments

We now present examples of the results of Directed Diffusion. As described elsewhere, unmodified text-to-image methods such as latent diffusion are fallible and may fail to portray objects mentioned in the prompts. For example, images from the prompt "_A man eating a hamburger_" may fail to show a hamburger, or portray a man with extra limbs. We inherit this variability, and the results shown here are representative but not guaranteed. Figure 7 shows the result of moving an object from left to right. All the results are generated with a bounding box sized to the full height and 40% of the image width. The prompts are _"A cat sitting on a car"_, _"A stone castle surrounded by lakes and trees"_, and _"A dog hiding behind the chair"_, where the directed objects are "cat", "castle", and "dog", respectively. Figure 8 shows examples of directing objects to lie in each of four image quadrants. The prompts are _"The sun shines on the house"_, _"A dog diving into the pool"_, and _"a diver swimming through a school of fish"_, where the directed objects are "sun", "dog", and "diver", respectively. Note that the examples of the "sun" are synthesized correctly in different contexts. The 1st and 2nd quadrants show the real sun at the edge of the house roof. The 3rd quadrant shows the reflection of the sun, such as from a window. The experiment in Figure 9 shows results from the prompt _"A red cube above a blue sphere"_ across different random seeds from left to right. This is the well-known "compositionality" failure case for SD2; however, our method is able to handle this case.
Additionally, unlike (Deng et al., 2018), which addresses this problem using a dataset-specific SD model, we are able to use the generic pre-trained LDM model without further modification, with the result that the backgrounds contain a variety of natural textures such as forest, floor, grass, and flowers. Footnote 2: This failure case is highlighted on the huggingface (Beng et al., 2018) stable diffusion website [https://huggingface.co/CompVis/stable-diffusion-v-1-1-original](https://huggingface.co/CompVis/stable-diffusion-v-1-1-original)

## 5. Limitations and Conclusion

### Limitations

Directed Diffusion inherits most of the limitations of latent diffusion, including the need for trial-and-error exploration over the prompt, random seeds, and hyperparameters (Fig. 2). In common with methods such as (Han et al., 2017; Deng et al., 2018), it is necessary to specify the number of steps over which editing is active. It is also necessary to specify a number of edited trailing attention maps that is neither too small nor too large (Fig. 6). With a prompt such as _a horse in front of a castle_, the direction rarely succeeds in putting the horse in the upper part of the image. We believe this is because typical training images show the horse in the lower half of the image, as in the (synthetic) example shown above. While our method allows the position of specified objects to be controlled, it does not offer control over the orientation of the objects. This requirement comes up in a film language principle that "an actor on a journey should be portrayed in a consistent orientation." Our approach is likely to fail when the provided bounding box \(\mathcal{B}\) is too small. It also struggles to handle more than three or four simultaneously directed objects, which may be due to underlying limitations of the current latent diffusion model - for example, while LDM can produce good portraits, it sometimes struggles to generate more complex scenes such as full-body views of humans with the correct number of limbs. We also considered implementing the attention editing as an optimization objective. While this might have advantages, it requires sufficient memory to propagate gradients in the trained LDM model, placing the approach beyond most current consumer GPUs.

### Conclusion

Storytelling with images requires directing the _placement_ of important objects in each image. Our algorithm, Directed Diffusion, is a first step toward this goal. The algorithm is simple to implement, requiring only a few lines of code modification of a widely used library (Beng et al., 2018). The algorithm has significant limitations; in particular (and in common with many other text-to-image methods), there is a need to experiment with prompts and hyperparameters. Significant additional advances will be needed before video storytelling is possible. However, our method may be sufficient for the creation of storybooks, comic books, etc. when used in conjunction with other diffusion image guidance and editing techniques. This paper continues a graphics research tradition in which an artist is guiding and interacting with generative tools. On the other hand, recent advances in generative models (of both images and text) suggest that a future in which anyone can make a movie using only high-level direction ("make me a comedy movie about a cat that saves the world by...") is no longer science fiction. Several factors suggest this future is still distant, however. For one, the overall generation of stories is not a fully solved problem.
This includes aspects such as the narrative arc and synthesis of the film language needed to convey the story to a viewer. Second, neural video generation experiments generally trade spatial for temporal resolution due to memory limits, and current results generally show a few seconds of video at fairly low resolution (recall that mainstream movies are 4K resolution), while sometimes suffering from flickering and distorted objects. In conclusion, there remains a role for artist-directed graphics in the near future.

**Societal Impact.** The aim of this project is to extend the capability of text-to-image models to visual storytelling. In common with most other technologies, there is potential for misuse. In particular, generative models reflect the biases of their training data, and there is a risk that malicious parties can use text-to-image methods to generate misleading images. The authors believe that it is important that these risks be addressed. This might be done by penalizing malicious behavior through the introduction of appropriate laws, or by limiting the capabilities of generative models to serve such behavior.

###### Acknowledgements. We thank Jason Baldridge, Avisek Lahiri, and Arkanath Pathak for helpful feedback.
2308.06263
ACCESS, LRG-BEASTS, & MOPSS: Featureless Optical Transmission Spectra of WASP-25b and WASP-124b
We present new optical transmission spectra for two hot Jupiters: WASP-25b (M = 0.56~M$_J$; R = 1.23 R$_J$; P =~3.76 days) and WASP-124b (M = 0.58~M$_J$; R = 1.34 R$_J$; P = 3.37 days), with wavelength coverages of 4200 - 9100\AA\ and 4570 - 9940\AA, respectively. These spectra are from the ESO Faint Object Spectrograph and Camera (v.2) mounted on the New Technology Telescope (NTT) and Inamori-Magellan Areal Camera & Spectrograph on Magellan Baade. No strong spectral features were found in either spectra, with the data probing 4 and 6 scale heights, respectively. \texttt{Exoretrievals} and \texttt{PLATON} retrievals favor stellar activity for WASP-25b, while the data for WASP-124b did not favor one model over another. For both planets the retrievals found a wide range in the depths where the atmosphere could be optically thick ($\sim0.4\mu$ - 0.2 bars for WASP-25b and 1.6 $\mu$ -- 32 bars for WASP-124b) and recovered a temperature that is consistent with the planets' equilibrium temperatures, but with wide uncertainties (up to $\pm$430$^\circ$K). For WASP-25b, the models also favor stellar spots that are $\sim$500-3000$^\circ$K cooler than the surrounding photosphere. The fairly weak constraints on parameters are owing to the relatively low precision of the data, with an average precision of 840 and 1240 ppm per bin for WASP-25b and WASP-124b, respectively. However, some contribution might still be due to an inherent absence of absorption or scattering in the planets' upper atmospheres, possibly because of aerosols. We attempt to fit the strength of the sodium signals to the aerosol-metallicity trend proposed by McGruder et al. 2023, and find WASP-25b and WASP-124b are consistent with the prediction, though their uncertainties are too large to confidently confirm the trend.
Chima D. McGruder, Mercedes López-Morales, James Kirk, Erin May, Benjamin V. Rackham, Munazza K. Alam, Natalie H. Allen, John D. Monnier, Kelly Meyer, Tyler Gardner, Kevin Ortiz Ceballos, Eva-Maria Ahrer, Peter J. Wheatley, George W. King, Andrés Jordán, David J. Osip, Néstor Espinoza
2023-08-11T17:59:00Z
http://arxiv.org/abs/2308.06263v2
# ACCESS, LRG-BEASTS, & MOPSS: Featureless Optical Transmission Spectra of WASP-25b and WASP-124b ###### Abstract We present new optical transmission spectra for two hot Jupiters: WASP-25b (M = 0.56 M\({}_{J}\); R = 1.23 R\({}_{J}\); P = 3.76 days) and WASP-124b (M = 0.58 M\({}_{J}\); R = 1.34 R\({}_{J}\); P = 3.37 days), with wavelength coverages of 4200 - 9100A and 4570 - 9940A, respectively. These spectra are from the ESO Faint Object Spectrograph and Camera (v.2) mounted on the New Technology Telescope (NTT) and Inamori-Magellan Areal Camera & Spectrograph on Magellan Baade. No strong spectral features were found in either spectra, with the data probing 4 and 6 scale heights, respectively. Exoretrievals and PLATON retrievals favor stellar activity for WASP-25b, while the data for WASP-124b did not favor one model over another. For both planets the retrievals found a wide range in the depths where the atmosphere could be optically thick (\(\sim\)0.4 \(\mu\) - 0.2 bars for WASP-25b and 1.6 \(\mu\) - 32 bars for WASP-124b) and recovered a temperature that is consistent with the planets' equilibrium temperatures, but with wide uncertainties (up to \(\pm\)430K). For WASP-25b, the models also favor stellar spots that are \(\sim\)500-3000K cooler than the surrounding photosphere. The fairly weak constraints on parameters are owing to the relatively low precision of the data, with an average precision of 840 and 1240 ppm per bin for WASP-25b and WASP-124b, respectively. However, some contribution might still be due to an inherent absence of absorption or scattering in the planets' upper atmospheres, possibly because of aerosols. We attempt to fit the strength of the sodium signals to the aerosol-metallicity trend proposed by McGruder et al. (2023), and find WASP-25b and WASP-124b are consistent with the prediction, though their uncertainties are too large to confidently confirm the trend. keywords: planets and satellites: atmospheres -- stars: activity; starspots -- techniques: spectroscopic; WASP-25b; WASP-124b ## 1 Introduction Exoplanet science is at a new frontier, where the need to repurpose telescopes to observe planetary atmospheres is being replaced with telescopes that are designed with the goal of exoplanet atmosphere characterization in mind. Exoplanet scientists have made plenty of advancements with the current generation of telescopes. Specifically with low-resolution transmission spectroscopy, strives have been made with ground-based telescopes (e.g. Sing et al., 2012; Nikolov et al., 2016; Diamond-Lowe et al., 2018; Todorov et al., 2019; Spyratos et al., 2021), the Hubble Space Telescope (HST; e.g. Charbonneau et al., 2002; Kulow et al., 2014; Tsiaras et al., 2016; Sing et al., 2016; Wakeford et al., 2020; Rathcke et al., 2021), and Spitzer (e.g. Knutson et al., 2011; Pont et al., 2013; Alam et al., 2020; Alderson et al., 2022). However, the advancements with the next generation of telescopes will be revolutionary. This has already begun with the launch and utilization of the JWST, where novel science has been conducted with outstanding quality of data and newly discovered molecular features (e.g. Tsai et al., 2022; Ahrer et al., 2023; Feinstein et al., 2023; Rustamkulov et al., 2023; Alderson et al., 2023). 
Furthermore, soon-to-be-launched telescopes like Pandora (Quintana et al., 2021; Hoffman et al., 2022), the Atmospheric Remote-sensing Infrared Exoplanet Large-survey (ARIEL; Tinetti et al., 2018), and the next generation of ground-based telescopes: the Extremely Large Telescope (ELT) 1, Thirty Meter Telescope (TMT) 2, and Giant Magellan Telescope (GMT) 3 will have designs and instruments specific for exoplanet atmospheric studies. These telescopes will undoubtedly be cornerstones in advancing our understanding of exoplanet atmospheres. Footnote 1: ELT:[https://elt.eso.org/](https://elt.eso.org/) Footnote 2: TMT:[https://www.tmt.org/](https://www.tmt.org/) Footnote 3: GMT:[https://giantmagellan.org/](https://giantmagellan.org/) There is still much about exoplanet atmospheres that eludes us. For example, we have no direct link to the physical conditions that cause high-altitude aerosols to form in the upper atmosphere of some observed planets (e.g. Alam et al., 2018; Chachan et al., 2019; Estrela et al., 2021) but not others (e.g. Sing et al., 2016; Kirk et al., 2019; Alam et al., 2021; Ahrer et al., 2022; McGruder et al., 2022). Here we refer to aerosols as clouds--condensation material due to specific atmospheric conditions--or hazes--material formed from photochemical reactions. The formation of aerosols likely occurs in most (if not all) atmospheres, just like in our solar system; however, high-altitude aerosols are normally the main concern when probing atmospheres. This is because the most used method to probe exoplanet atmospheres is transmission spectroscopy, which more easily probes the upper atmospheric limbs of planets, due to geometry and opacities (Lecavelier Des Etangs et al., 2008; Sing, 2018; Kreidberg, 2018). There are a number of studies aimed at understanding the composition and formation of high-altitude hazes (e.g. Moses et al., 2011, 2013; Fleury et al., 2019) and clouds (e.g. Helling, 2019; Gao et al., 2020; Estrela et al., 2022), and though there has been a lot of headway toward this, there is little observational support for leading theories. Additionally, finding observational trends in aerosol formation has proven elusive, with many contradictory or inconclusive studies (Heng, 2016; Stevenson, 2016; Fu et al., 2017; Tsiaras et al., 2018; Fisher and Heng, 2018; Dymont et al., 2022; Estrela et al., 2022). A possible reason why no correlation has been clearly identified is the limited number of observed planets relative to the parameter space: tens of planetary atmospheres have been used for studies, but tens of parameters (host star, orbital, and planetary parameters) could be correlated to aerosol formation rates. Possible solutions are to either increase the number of planetary atmospheres observed or to reduce the parameter space by observing select targets with many parameters that nearly match. The observations presented here aim to address both approaches. We observed the atmospheres of WASP-25b (M = 0.6 M\({}_{J}\), R = 1.2 R\({}_{J}\), P = 3.764 d, host star = G4, V\({}_{mag}\) = 11.9; Enoch et al., 2011; Brown et al., 2012; Southworth et al., 2014), and WASP-124b (Maxted et al., 2016, M = 0.6 M\({}_{J}\), R = 1.3 R\({}_{J}\), P = 3.373 d, host star = F9, V\({}_{mag}\) = 12.7). We obtained three spectroscopic transits of WASP-25b and five of WASP-124b as part of ACCESS.4 We obtained one additional transit of WASP-25b with the ESO Faint Object Spectrograph and Camera (v.2; EFOSC2) instrument on the ESO New Technology Telescope (NTT) as part of LRG-BEASTS5.
There was also a full and partial transit of WASP-124b obtained by the MOPSS team6 that we add to our dataset. Neither of these planets have atmospheric observations published. Furthermore, they have very similar parameters to one another and are part of a sample of seven planets proposed by McGruder et al. (2023) that could be systems key for identifying correlations with high-altitude aerosols and observed parameters. Footnote 5: The Low Resolution Ground-Based Exoplanet Atmosphere Survey using Transmission Spectroscopy (Kirk et al., 2017, 2018, 2019, 2021; Louden et al., 2017; Alderson et al., 2020; Ahrer et al., 2022, 2023a) Footnote 6: Michigan/Magellan Optical Planetary Spectra Survey (May et al., 2018, 2020) The observation and reduction of all data are described in Section 2, followed by their light curve analysis in Section 3. In Section 4 we introduce the combined transmission spectra of both targets, discuss our retrieval analysis of the data (Section 4.1), and interpret the retrieval results (Section 4.2). We then compare our results with expectations from the tentative aerosol-metallicity trend proposed by McGruder et al. (2023) in Section 5. Finally, in Section 6 we recapitulate and provide conclusions. ## 2 Observations and Data Reduction ### Magellan/IMACS Transits We observed three transits of WASP-25b (UTYYM-MDD: UT180620, UT210306, UT220325) and five transits of WASP-124b (UT190826, UT210809, UT210905, UT211002, UT220605) with the Inamori-Magellan Areal Camera & Spectrograph (IMACS; Dressler et al., 2011) mounted on Baade, one of the twin 6.5-m Magellan telescopes. Those transits were observed as part of ACCESS and used a setup similar to previous ACCESS observations (i.e. Weaver et al., 2021; Kirk et al., 2021; McGruder et al., 2022; Allen et al., 2022). Our general setup uses the \(8\times 8\)K CCD mosaic camera at the f/2 focus, a 300 line/mm grating at blaze angle of 17.5\({}^{\circ}\), and a GG455 filter. This gave a wavelength coverage of (4550-9900 A) and dispersed the spectra across two chips, but we managed to fit the spectrum of the target from 4550-9100 A on one CCD, preventing gaps from occurring at wavelengths of particular interest (see Figure 1). We used \(2\times 2\) binning and the FAST readout mode to reduce readout time and improve observational duty cycle. We used 10\({}^{\prime\prime}\) by 90\({}^{\prime\prime}\) slits (0.5\({}^{\prime\prime}\) by 90\({}^{\prime\prime}\) for HeNeAr wavelength calibration lamps), putting the observations in a seeing-limited regime, with an average resolving power of R = 1350. The number of exposures, range of airmasses, instrument setup, and resolution per night can be found in Table 1. We combined our ACCESS observations of WASP-124b with two observations obtained by the MOPSS team on UT180915 and UT190615 (UT190615 was a partial transit). We used Magellan/IMACS with a similar observational set up to ACCESS's, but with the 300 line/mm grating at a blaze angle of 26.7\({}^{\circ}\) and a slit size of 15\({}^{\prime\prime}\) by 20\({}^{\prime\prime}\) (1\({}^{\prime\prime}\) by 1\({}^{\prime\prime}\) for HeNeAr wavelength calibration lamps). We also had different comparison stars, which are highlighted in Table 1. All IMACS observations utilize the multiobject spectrograph (MOS) mode to observe multiple comparison stars simultaneously. The best comparison stars were selected based on the procedure outlined in Rackham et al. 
(2017), where we consider a nearby star suitable if it had a color difference of \(D<1\) with the target. \(D\) is defined as \[D=\sqrt{[(B-V)_{c}-(B-V)_{t}]^{2}+[(J-K)_{c}-(J-K)_{t}]^{2}},\] where the uppercase letters correspond to the Johnson-Cousin apparent magnitudes of the stars, and the subscripts \(t\) and \(c\) indicate the target and potential comparison, respectively. The sky coordinates of each comparison, and their \(D\) relative to the target, can be found in Table 1. ### IMACS Reduction The reduction process for all IMACS data was the same, and uses a custom ACCESS pipeline which has been described in detail by Jordan et al. (2013); Espinoza (2017), Rackham et al. (2017) and, Bixel et al. (2019). In general, this includes wavelength calibration using the HeNeAr arc lamp measurements, bias subtraction with the overscan region, pixel tracing, and sky background subtraction utilizing the median counts outside the science aperture. The radius of the science aperture was determined by taking the average full width half max (FWHM) over time and wavelength. This value was then added to three times the standard deviation (STD) of all FWHM values (all calculated wavelength and time dependent FWHM values), i.e. aperture = <FWHM> + 3\(\times\)STD\({}_{FWHM}\). Allen et al. (2022) found that optimal extraction (Marsh, 1989) has the potential to be a more effective way to identify and correct for bad-pixels and cosmic-rays in ACCESS data. When testing the effectiveness of this reduction step versus the traditional pipeline steps, we see slightly less scatter in the resulting white light curve with optimal extraction, so we adopt the results of this method. The final reduced spectra for the targets from each night is shown in Figure 1. ### NTT/EFOSC2 Transit WASP-25b had an additional transit observation with the European Faint Object Spectrograph & Camera 2 (EFOSC2; Buzzoni et al., 1984) mounted on the 3.6-m New Technology Telescope (NTT) as part of LRGBEASTS and ESO program 0100.C-0822(A) (PI: Kirk). The observation was taken on the night of UT180329 with the same instrument and setup used to detect Na in the atmosphere of WASP-94Ab (Ahrer et al., 2022) and clouds in the atmosphere of HATS-46b (Ahrer et al., 2023). This was a 27 ''\(\times\) 3.53 '' slit and Grism #11, which provided spectra from 3960-7150 A and an average seeing-limited resolution of R = 150, which was dispersed on a 2048 \(\times\) 2048 pixel Loral/Lesser CCD. Our long slit allowed us to simultaneously observe the comparison star 2MASS J13011275-2730485 with a \(V=12.5\) and \(B-V=1.5\), where WASP-25 has a \(V=11.9\) and \(B-V=0.7\). We used \(2\times 2\) binning and the fast readout mode. We also obtained 54 biases, 7 lamp flats, 17 morning twilight sky flats7 and 3 arc lamps at the beginning and end of the observations. The flats were taken with the same 27 ''-wide slit as the science observations but the arc lamps were taken with a 1 '' slit to avoid saturation and ensure narrow lines for calibration. This meant that the arc lamps were only used for an initial wavelength calibration with the final wavelength calibration performed using absorption lines in the stellar spectra. Footnote 7: A communication problem with the instrument prevented us from obtaining additional lamp flats. ### EFOSC2 Reduction We reduced and processed the data using the LRGBEASTS pipeline which is described in more detail in Kirk et al. (2017, 2018, 2021). 
For the flats, we created two sets of reductions, one without a flat-field correction, as is standard for LRGB-BEASTS observations (e.g., Alderson et al., 2020; Kirk et al., 2021; Ahrer et al., 2022), and one with a flat-field correction. This flat-field correction was performed in a novel way whereby we used the master sky flat without removing the sky spectrum from the sky flat. The motivation behind this approach was to avoid uncertainties associated with fitting out the sky spectrum (with a running median for example) while also capitalizing on the higher number of blue photons from the sky flat compared to a lamp flat. Since we did not remove the sky spectrum from the sky flat, this meant that the stellar spectra (\(F_{1}\), \(F_{2}\)) we extracted were contaminated by the sky background imprinted into the sky flat (\(F_{\rm sky}\)). Therefore, the stellar spectra we extracted were actually \(F_{1}/F_{\rm sky,1}\) and \(F_{2}/F_{\rm sky,2}\). The target and reference stars drifted across the course of the observations, leading to changes in \(F_{\rm sky,1}\) and \(F_{\rm sky,2}\). This meant that the \(F_{\rm sky}\) terms did not fully cancel out after dividing the target's light curve (\(\Sigma(F_{1}/F_{\rm sky,1})\)) by the comparison's light curve (\(\Sigma(F_{2}/F_{\rm sky,2})\)). However, we found that using the sky flat led to light curves and transmission spectra that deviated by \(<<1\sigma\) from the same reduction without the flat field, while also decreasing the white noise in the light curves by 6 % and leading to a small improvement the precision in the transmission spectrum (\(\sim\)3 %). This demonstrates that impacts on the spectrophotometry due to \(\Delta(F_{\rm sky,1})\) and \(\Delta(F_{\rm sky,2})\) are insignificant, likely due to the fact that the stellar traces drift by \(<3\) pixels, which is corrected for by cross-correlation, and this constitutes only 8 % of our average bin width used to make the transmission spectrum. Due to this test, we adopted the reduction using the sky flat for the rest of our analysis. To extract the stellar spectra we experimented with different aperture widths and background widths and compared the noise of the resulting white light curve in each case. We found the lowest white light noise resulted from a combination of a 22-pixel-wide aperture, with two 15-pixel-wide background regions on either side of the aperture, each separated from the aperture by 15 pixels. We fit a linear polynomial across these two background regions to remove the sky background from our spectra. Following the extraction of the stellar spectra, we clipped cosmic ray hits via identifying 5\(\sigma\) outliers in the spectral time series and replaced these with a linear interpolation between the nearest two neighboring pixels. We then corrected for shifts in the stellar spectra of \(\pm 2\) pixels across the night. This was done by cross-correlating each spectrum with a spectrum taken in the middle of the observations and then performing a flux-conserving resampling of each spectrum onto the reference wavelength grid. Figure 1 (top right) shows the final spectrum extracted for WASP-25 with this pipeline. ## 3 Light Curve Analysis The general steps in our light curve analysis are the same as what has been implemented in previous AC-CESS papers (e.g. McGruder et al., 2022; Allen et al., 2022). This includes first creating a photometric (white) light curve by combining all counts of the entire spectrum for each exposure. 
Systematics are removed from the white light curve and a transit is fit to obtain transit parameters. These parameters are then used to constrain the priors for fitting the spectrophotometric (binned) light curves. The binned light curves are produced by summing the light within a band of wave lengths. The appropriate widths and centering of bins were determined by considering spectrophotometric precision, the overlap of spectral bands from different observations, high telluric absorption regions, and the desire to properly probe for atmospheric features. The average bin widths were \(\sim\) 150A for WASP-25b and 160 A for WASP-124b, with 90A bins centered on the Na and K doublets and wider bins on low throughput edges of the spectra. The final binning scheme used is demarcated with dotted vertical lines in Figure 1 and written for each bin in the first column of the Figures in Appendix A. ### White light Curve Fitting We detrended the white light curve (WLC) using a combination of Principal Component Analysis (PCA) and Gaussian Processes (GPs), which we refer to as _PCA+GP_8. This routine is identical to what was used by Yan et al. (2020); McGruder et al. (2020); Weaver et al. (2021); McGruder et al. (2022). It involves first performing singular value decomposition on a matrix composed of the comparison stars' light curves in magnitude space, which yield eigenvectors and values that allow us to identify principal components that capture features common to all comparison light curves. How \begin{table} \begin{tabular}{l c c c c c c c} \hline \hline Transit Date & Instrument & Airmass & Exposure & Frames & resolution & Comparisons’ Coordinates & Color Difference, \\ (UTC) & set-up & [range] & Times (s) & & [min/max] & [RA, Dec] & (\(D\)) \\ \hline **WASP-25b:** & & & & & & & & \\ 2018 Mar 29 & EFOSC2 - Grism \#11, 1.56-1.0-2.08 & 160 & 170 & 97/225 & 13:01:12.763, -27:30:48.59 (1) & 0.964 \\ \hline 2018 Jun 20 & IMACS - 300mm @ 17.5\({}^{\circ}\) & 1.01-2.1 & 20 – 40 & 203 & 684/1552 & 13:01:12.763, -27:30:48.59 (1) & 0.964 \\ & & & & & & 13:02:14.300, -27:48:06.60 (2) & 0.457 \\ & & & & & & 13:00:50.950, -27:42:79 (3) & 0.205 \\ & & & & & & 13:01:54.318, -27:42:21.97 (4) & 0.191 \\ \hline 2021 Mar 06 & IMACS - 300mm @ 17.5\({}^{\circ}\) & 1.54-1.0-1.08 & 30 & 321 & 1386/2093 & 13:01:54:32.27, -27:42:21.80 (4) & 0.191 \\ & & & & & & 13:02:40.089, -27:43:47.65 (5) & 0.074 \\ & & & & & & 13:02:01.341, -27:46:21.51 (6) & 0.15 \\ & & & & & & 13:01:40.804, -27:35:03.73 (7) & 0.051 \\ \hline 2022 Mar 25 & IMACS - 300mm @ 17.5\({}^{\circ}\) & 1.17-1.0-1.36 & 30 & 330 & 1635/2228 & Same as on 06.03.0221 & — ever, we still needed to reduce systematics unique to the target star, for which we use GPs. We used george(Ambikasaran et al., 2015) to construct and evaluate the likelihoods of a multidimensional squared-exponential kernel dependent on the auxiliary observables of airmass, FWHM, sky flux, trace, and wavelength solution drift. Details of the GP hyperparameter priors are shown in Table 2. We used nested sampling (PyMultiNest; Buchner et al., 2014) to explore the posteriors of applying each principle component, combined with the GP regression. Lastly, we performed Bayesian model averaging (BMA; Gibson, 2014) to combine those posteriors and produce the final detrended light curves. 
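For concreteness, a stripped-down version of this kind of multidimensional squared-exponential GP can be set up with george as sketched below; the observables, kernel amplitude, and length scales are placeholder values, and the full pipeline (priors, PyMultiNest sampling, and BMA) is not reproduced here.

```
import numpy as np
import george
from george import kernels

# Toy auxiliary-observable matrix: columns = airmass, FWHM, sky flux, trace, wavelength drift
n_exp = 200
X = np.column_stack([np.linspace(1.0, 1.5, n_exp),        # airmass
                     np.random.normal(5.0, 0.3, n_exp),   # FWHM
                     np.random.normal(1.0, 0.1, n_exp),   # sky flux (normalized)
                     np.random.normal(0.0, 0.2, n_exp),   # trace position
                     np.random.normal(0.0, 0.1, n_exp)])  # wavelength-solution drift
y = np.random.normal(0.0, 1e-3, n_exp)                    # residual light curve (toy)
yerr = np.full(n_exp, 5e-4)

# Squared-exponential kernel with one length scale per observable
amp2 = (1e-3) ** 2
kernel = amp2 * kernels.ExpSquaredKernel(metric=[0.1, 0.5, 0.5, 0.5, 0.5], ndim=5)
gp = george.GP(kernel)
gp.compute(X, yerr)
print("GP log-likelihood:", gp.log_likelihood(y))
```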
The analytical transit model was produced using batman(Kreidberg, 2015), where the priors on period (P), semi-major axis (relative to stellar radius, a/R\({}_{s}\)), impact parameter (b), and time of mid-transit (t\({}_{0}\)) were all normal distributions with means and standard deviations set by the values found in McGruder et al. (2023, Table 1). The priors for the radius of the planet relative to the star (R\({}_{p}\)/R\({}_{s}\)) and the parameterized quadratic limb darkening (LD) parameters q\({}_{1}\) and q\({}_{2}\)(Kipping, 2013) were set wider. Table 2 also provides information on those and the other transit priors. After the transit parameters of the WLC were acquired for each night, we weight-averaged the P, a/R\({}_{s}\), and b values from each night to obtain more constrained terms for each. These means were held fixed for an additional pass of the PCA+GP run, to obtain final values Figure 1: Median extracted spectra of WASP-25 (top) and WASP-124 (bottom). Each spectra is plotted in the same order in which it is printed in the legend (top to bottom). The shaded regions of the same color extending past the median lines are the 1\(\sigma\) range of counts extracted for that night. Each spectroscopic bin used is demarcated by dotted vertical lines, where the only gaps in the binning scheme are the strong telluric region from 7594–7638Å (lightly red shaded region), the CCD gap for the first two WASP-124 observations (bottom left) at 6317–6424Å, and the CCD gap for the other five WASP-124 observations (bottom right) at 9100–9225Å(gray shaded region). The specific instrument and setup used for each spectra is printed, where EFOSC2, IMACS-17.5\({}^{\circ}\), and IMACS-26.7\({}^{\circ}\) refer to the LRG-BEASTS (see Section 2.3), ACCESS, and MOPSS (see Section 2.1) setups, respectively. The differing throughputs between the grism used for the ACCESS and MOPSS data explains the different spectral shapes between the two WASP-124 spectra on the bottom left and the five spectra on the bottom right. Note the plotted spectrum of the NTT/EFOSC2 data (top right), was created without the sky flat, in order to visually compare with the other spectra. However, the final spectrum used for data analysis did use the sky flat as discussed in Section 2.4. for t\({}_{0}\), R\({}_{p}\)/R\({}_{s}\), q\({}_{1}\) and q\({}_{2}\)9. Figure 2 shows the final detrended light curves of all transits and Table 3 shows the values obtained when all parameters were fitted for (first four columns) and when the common parameters were fixed (last four columns). From the table, one can see that only the LRG-BEASTS transit depth and the MOPSS partial transit depth differ by more than 2\(\sigma\) between each other transit of a specific target. However, those two outliers are likely due to the LRG-BEASTS observations being bluer than those from ACCESS and the lack of full transit coverage from MOPSS hindering the detrending process. Footnote 9: We did not include a transit fitting pass with the common parameters free for the partial transit of UT190615 and just used the weighted means of the other six WASP-124b transits for fitting t\({}_{0}\), R\({}_{p}\)/R\({}_{s}\), q\({}_{1}\) and q\({}_{2}\). ### Spectrophotometric Light Curves We used two separate detrending routines to reduce the binned light curves (BLC). Both routines were discussed and tested in McGruder et al. (2022). 
For all WASP-25b transits and WASP-124b transits on UT190826, UT210809, and UT211002 we used common-mode correction (CMC) followed by polynomial correction (Poly), _CMC+Poly_. This method assumes that most of the systematics are captured when detrending the WLC (following the procedures of Section 3.1). As such it uses the quotient of the final detrended WLC and the "raw" light curve, produced from the normalized target divided by the sum of comparison star light curves, as a common-mode term. The raw light curve also had a moving average 4-sigma clipping, similarly to what was done in McGruder et al. (2022). Each BLC is then divided by this common mode term. We then apply polynomial regression models dependent on the auxiliary observables (i.e. airmass, fwhm, see Section 3.1), to remove any additional chromatic systematics unique to each bin. For each auxiliary observable we allowed the polynomial to go up to fourth order, aside for airmass, which only went up to second order because of the smooth correlation with it and the light curves. We tested all combinations of each auxiliary parameter and order polynomials 10 using scipy.optimize.minimize(Virtanen et al., 2020). We then ranked each combination by polynomial corrections based on a standardized sum of \(\chi^{2}\), Bayesian Information Criterion (BIC), Akaike Information Criterion (AIC), and root mean squared of the residuals (rRMS), and took the highest ranked 100 models (lower sums) to do a final pass of fitting with PyMultiNest. The priors on polynomial coefficients were normal with mean set by the scipy.optimize.minimize fits and standard deviation of one. We used BMA to combine posteriors and arrive at our final binned transit depths. Footnote 10: 1875 combinations for 5 observables and up to fourth order for everything but airmass, which is up to second order. Footnote 10: 1875 combinations for 5 observables and up to fourth order for everything but airmass, which is up to second order. The detrending method used for transits UT180915, UT190615, and UT210905 (three WASP-124b transits) was PCA+GP, which is the same algorithm used for our WLC analysis (see section 3.1). The reasoning for using this routine instead of the CMC+Poly is because the systematics found in each bin are significantly different from one another (see the corresponding first columns of figures in Appendix A), so the CMC term poorly corrects the bins and likely introduces more systematics, which simple polynomials have trouble modeling. In fact, the transits that required PCA+GP had obvious issues with their observations such as missing ingress, scattered cloud coverage, and poor seeing. However, we also tested the performances of both methods by examining the binned spectra each produced, in particular we compared the variance of their transmission spectra and the average residual red noise of each bin \(\beta\)(McGruder et al., 2022, see appendix D). We found that for the three aforementioned WASP-124b transits, \(\beta\) and the variance were substantially worse. This supports a finding of McGruder et al. (2022), in which the appropriate detrending method should be considered by testing multiple approaches for each dataset. For both fitting methods all transit parameters were fixed to the WLC best fit (columns 2-5 of Table 3), but the LD parameters were uniform from 0 to 1 and R\({}_{p}\)/R\({}_{s}\) had a normal prior with mean set by the WLC best fits and a standard deviation of 0.02. 
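For reference, a minimal batman transit model using the Kipping (2013) \((q_{1},q_{2})\) limb-darkening parameterization could be written as follows; the parameter values are approximate WASP-25b white-light values from this work and are used purely for illustration.

```
import numpy as np
import batman

def transit_model(t, t0, per, rp, a_rs, b, q1, q2):
    """Quadratic-limb-darkening transit light curve; (q1, q2) follow Kipping (2013)."""
    u1 = 2.0 * np.sqrt(q1) * q2              # map (q1, q2) -> quadratic coefficients (u1, u2)
    u2 = np.sqrt(q1) * (1.0 - 2.0 * q2)
    params = batman.TransitParams()
    params.t0, params.per, params.rp = t0, per, rp
    params.a = a_rs
    params.inc = np.degrees(np.arccos(b / a_rs))   # circular-orbit inclination from impact parameter
    params.ecc, params.w = 0.0, 90.0
    params.limb_dark = "quadratic"
    params.u = [u1, u2]
    return batman.TransitModel(params, t).light_curve(params)

t = np.linspace(-0.1, 0.1, 500)                    # days from mid-transit
flux = transit_model(t, t0=0.0, per=3.7648337, rp=0.139, a_rs=11.33, b=0.357, q1=0.5, q2=0.3)
```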
## 4 Transmission Spectra We produced our transmission spectra by comparing the best fit R\({}_{p}\)/R\({}_{s}\) found for each detrended wavelength bin. We then combined the transmission spectra from each night to form a global transmission spectrum for each target. The spectra from each night had an offset applied so the mean depths across overlapping spectral bins were the same, as was done by McGruder et al. (2022). The spectra were then combined by weight averaging each overlapping bin. The new points were obtained using the python numpy.average(Harris et al., 2020) function where the weight of each R\({}_{p}\)/R\({}_{s}\) was given by the inverse squared R\({}_{p}\)/R\({}_{s}\) errors ("_inverse variance weighting_"). The error of the resulting weighted R\({}_{p}\)/R\({}_{s}\) were calculated as the square root of weighted variance (again using inverse squared R\({}_{p}\)/R\({}_{s}\) error as weights). For WASP-25b the overlapping bins were from 4570-7113A and had a mean depth of 0.14040 R\({}_{p}\)/R\({}_{s}\), which corresponded to weighted offsets of 0.00584, 0.00016, 0.00265, and \(-\)0.00552 R\({}_{p}\)/R\({}_{s}\) for each night in chronological order. For WASP-124b the mean depth was 0.12811 R\({}_{p}\)/R\({}_{s}\) where all wavelengths overlapped aside for 6317-6424 and 9100-9225 A. This yielded offsets of \(-\)0.00047, 0.00122, \(-\)0.00494, 0.00049, 0.00093, and 0.00295, respectively. For WASP-124b, the partial transit on UT190615 was not included because the scatter of that transmission spectrum was too high to positively contribute to the combined spectrum. This is likely because the lack of ingress prevented both detrending methods (PCA+GP and CMC+Poly) from accurately constraining the systematics. Figure 4 shows the transmission spectra of each night (except transit UT190615) and the combined transmission spectra. \begin{table} \begin{tabular}{c c|c|c} \hline \hline & & **WASP-25b** & **WASP-124b** \\ \hline **parameter** & **function** & **bounds** & **bounds** \\ \hline \(\alpha\) & log-uniform & 0.01–0.100 [ppm] & 0.01–0.100 [ppm] \\ \(\xi\) & log-uniform & 0.01–0.100 [mmag] & 0.01–0.100 [mmag] \\ \(1/\lambda\) & gamma & \(a\) = 1 & \(a\) = 1 \\ \(P\) & normal & m=3.7648337, \(\sigma_{n}\)= 1.2e-6 & m=3.3726511, \(\sigma_{n}\)= 3.4e-6 \\ \(t_{0}\) & normal & m=2455274.99649, \(\sigma_{n}\)= 0.021 & m=2457028.58329, \(\sigma_{n}\)= 0.021 \\ \(R_{p}\)/\(R_{s}\) & normal & m=0.139, \(\sigma_{n}\)= 0.02 & m=0.125, \(\sigma_{n}\)= 0.02 \\ \(b\) & normal & m=0.357, \(\sigma_{n}\)= 0.042 & m=0.619, \(\sigma_{n}\)= 0.033 \\ \(a/R_{s}\) & normal & m=11.33, \(\sigma_{n}\)= 0.14 & m=9.22, \(\sigma_{n}\)= 0.13 \\ \(q_{1}\) & uniform & 0–1 & 0–1 \\ \hline \end{tabular} **Note:** The priors for the GP hyperparameters are amplitude (\(\alpha\)), jitter (\(\xi\)), and inverse squared length scale (1/\(\lambda\)). For 1/\(\lambda\), when the \(a\) parameter of a gamma function is set to 1, it becomes an exponential function (i.e. _e_\({}^{-x}\)). The mean and standard deviation (\(\sigma_{n}\)) values of the transit parameters (variables defined in Section 3.1) were obtained directly from McGruder et al. (2023). With the exception of t\({}_{0}\), which was deduced for a given night from the period and t\({}_{0}\) obtained from McGruder et al. (2023). The \(\sigma_{n}\) for this parameter was set to 30 minutes for each night. 
\end{table} Table 2: White Light curve fitting priors \begin{table} \begin{tabular}{c|c c|c c c|c c|c c} \hline \hline \multicolumn{1}{c|}{Transit} & P [days] & b & a/R\({}_{s}\) & i [deg.] & R\({}_{p}\)/R\({}_{s}\) & t\({}_{0}\) (-2450000) [d] & q\({}_{1}\) & q\({}_{2}\) \\ \hline UT180329 & 3.7648337\({}^{\pm 1.2e-6}\) & 0.360\({}^{+0.025}_{-0.020}\) & 11.28\({}^{+0.10}_{-0.10}\) & 88.17\({}^{+0.15}_{-0.15}\) & 0.1343\({}^{+0.0017}_{-0.0019}\) & 8207.29525\({}^{+0.0002}_{-0.00026}\) & 0.523\({}^{+0.006}_{-0.00028}\) & 0.36\({}^{+0.12}_{-0.11}\) \\ \hline UT180620 & 3.7648337\({}^{\pm 1.2e-6}\) & 0.329\({}^{+0.025}_{-0.025}\) & 11.199\({}^{+0.081}_{-0.075}\) & 88.32\({}^{+0.14}_{-0.14}\) & 0.1403\({}^{+0.0013}_{-0.0014}\) & 8290.62850\({}^{+0.00016}_{-0.00017}\) & 0.352\({}^{+0.006}_{-0.0058}\) & 0.35\({}^{+0.12}_{-0.11}\) \\ \hline UT210306 & 3.7648337\({}^{\pm 1.2e-6}\) & 0.341\({}^{+0.022}_{-0.025}\) & 11.193\({}^{+0.075}_{-0.077}\) & 88.25\({}^{+0.15}_{-0.12}\) & 0.1374\({}^{+0.0025}_{-0.021}\) & 8290.77911\({}^{+0.00012}_{-0.0021}\) & 0.531\({}^{+0.058}_{-0.049}\) & 0.199\({}^{+0.088}_{-0.008}\) \\ \hline TT202325 & 3.7648337\({}^{+1.1e-6}_{-1.2e-6}\) & 0.326\({}^{+0.021}_{-0.023}\) & 11.252\({}^{+0.088}_{-0.083}\) & 88.34\({}^{+0.13}_{-0.13}\) & 1.1452\({}^{+0.0025}_{-0.0024}\) & 9664.79213\({}^{+0.00022}_{-0.00022}\) & 0.458\({}^{+0.105}_{-0.093}\) & 0.42\({}^{+0.14}_{-0.11}\) \\ \hline mean & 3.76483368\({}^{\pm 2.9e-7}\) & 0.338\({}^{+0.012}_{-0.0165}\) & 11.223\({}^{+0.042}_{-0.042}\) & 88.275\({}^{+0.065}_{-0.065}\) & \(---\) & \(---\) & \(---\) & \(---\) \\ \hline UT180915 & 3.372651\({}^{\pm 3.3e-6}_{-0.14}\) & 0.6302\({}^{+0.0139}_{-0.0165}\) & 9.154\({}^{+0.094}_{-0.093}\) & 86.055\({}^{+0.130}_{-0.118}\) & 0.1289\({}^{+0.0030}_{-0.0028}\) & 8377.64384\({}^{+0.0002}_{-0.00021}\) & 0.312\({}^{+0.042}_{-0.054}\) & 0.534\({}^{+0.217}_{-0.216}\) \\ \hline UT190615 & \(---\) & \(---\) & \(---\) & \(---\) & \(---\) & \(---\) & \(---\) & \(---\) & \(---\) & \(---\) \\ \hline UT190826 & 3.3726511\({}^{\pm 3.2e-6}_{-0.26}\) & 0.6267\({}^{+0.0187}_{-0.0187}\) & 9.130\({}^{+0.106}_{-0.005}\) & 86.067\({}^{+0.144}_{-0.146}\) & 1.265\({}^{+0.0004}_{-0.00028}\) & 8721.65346\({}^{+0.00042}_{-0.00024}\) & 0.450\({}^{+0.014}_{-0.011}\) & 0.277\({}^{+0.242}_{-0.24}\) \\ \hline UT210809 & 3.372650\({}^{\pm 3.3e-6}_{-0.46}\) & 0.648\({}^{+0.0010}_{-0.0012}\) & 9.102\({}^{+0.082}_{-0.0028}\) & 85.91\({}^{+0.002}_{-0.0020}\) & 0.133\({}^{+0.002}_{-0.0028}\) & 9436.6575\({}^{+0.0001}_{-0.00017}\) & 0.409\({}^{+0.053}_{-0.003}\) & 0.411\({}^{+0.017}_{-0.171}\) \\ \hline UT The average precision (68% confidence interval) of the combined spectra per bin11 in R\({}_{p}\)/R\({}_{s}\) for the WASP-25b data was 0.00301 (841 ppm in depth) and 0.00485 (1238 ppm in depth) for WASP-124b. With these precisions, we can only probe as low as 4.03 and 5.71 atmospheric pressure scale heights for WASP-25b and WASP-124b, respectively, for which the scale heights are 453.2 km and 634.9 km. Thus, it is difficult to determine if the relatively flat spectra seen in both targets (see Figure 4) is due to a true dearth of planetary features or a precision limitation. Footnote 11: The average bin size was \(\sim\) 150Å and 160Å for WASP-25b and WASP-124b, respectively. 
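As a sanity check on the numbers quoted above, the scale height and the number of scale heights probed by a given depth precision can be estimated from the planet parameters; the sketch below assumes an H\({}_{2}\)-dominated atmosphere (\(\mu\approx 2.3\)) and approximately reproduces the WASP-25b values.

```
import numpy as np

G, k_B, amu = 6.674e-11, 1.381e-23, 1.661e-27
M_jup, R_jup = 1.898e27, 7.149e7

def scale_heights_probed(depth_precision_ppm, M_p_Mj, R_p_Rj, T_eq, rp_rs, mu=2.3):
    """Pressure scale height [km] and the number of scale heights a transit-depth
    precision can probe, using delta_depth ~ 2 * H * R_p / R_s**2 per scale height."""
    M_p, R_p = M_p_Mj * M_jup, R_p_Rj * R_jup
    g = G * M_p / R_p ** 2
    H = k_B * T_eq / (mu * amu * g)                  # scale height [m]
    R_s = R_p / rp_rs                                # stellar radius implied by Rp/Rs
    signal_per_H = 2.0 * H * R_p / R_s ** 2 * 1e6    # ppm per scale height
    return H / 1e3, depth_precision_ppm / signal_per_H

# WASP-25b, using parameters quoted in the text (mu = 2.3 assumed)
print(scale_heights_probed(841, 0.56, 1.23, 1217, 0.139))   # roughly (480 km, ~4 scale heights)
```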
It should also be noted that even though there are more transits for WASP-124b, its transmission spectrum is less precise than that from the four transits of WASP-25b, because WASP-124 (V\({}_{mag}\) = 12.7) is dimmer than WASP-25 (V\({}_{mag}\) = 11.9) and the overall quality of the WASP-124b observations was not as high, as implied in Section 3.2, where we applied the PCA+GP detrending routine to 2 of the 6 used transits because of aggressive systematics. ### Retrieval Analysis We ran a series of retrieval models with PLATON (Zhang et al., 2019) and Exoretrievals (Espinoza et al., 2019) on our final combined transmission spectra of WASP-25b and WASP-124b. With Exoretrievals, our analysis process was to run models including stellar activity, scattering features, or common atomic/molecular species observed in planets of this type (i.e., H\({}_{2}\)O, Na, K), and the different combinations of each. Concurrently, we ran models including scatterers, stellar activity, neither, or both with PLATON, which assumes equilibrium chemistry and fits for the C/O ratio and planetary metallicity to extrapolate the abundances. This retrieval analysis workflow is the same as was done by McGruder et al. (2020) for WASP-31b, McGruder et al. (2022) for WASP-96b, and McGruder et al. (2023) for WASP-6b and WASP-110b. Figure 2: The final detrended white-light curves for each transit, utilizing the principal component analysis and Gaussian process detrending routine. The residuals, obtained as the difference of the data from the best-fit model, are shown below each light curve. The standard deviation of the residuals is given by \(\sigma_{r}\). To determine which models are preferred over another, we used the Bayesian evidences (Z) given for each model, because both retrieval routines use nested sampling to explore the posterior space (PyMultiNest with Exoretrievals and dynesty (Speagle, 2020) with PLATON). Following the reasoning of Trotta (2008) and Benneke & Seager (2013), we considered a \(\Delta\)lnZ of less than 2.5 as not significantly favoring one model over another, a \(\Delta\)lnZ between 2.5 and 5 as moderately favoring the higher-evidence model, and a \(\Delta\)lnZ greater than 5 as strongly supporting the higher-lnZ model12. A table of the difference in natural-log evidences (\(\Delta\)lnZ) of a given model relative to the least complex model for the given retrieval is shown in Table 4. The term we use for a model with no scatterers in the planetary atmosphere and no activity in the stellar photosphere is _plain_. Given that PLATON assumes equilibrium chemistry and the atomic/molecular abundances are inherently determined through this, a _plain_ model is the least complex model used for PLATON. In the case of Exoretrievals, its least complex model is a _plain_ model (no scatterers or activity) that also has no atomic/molecular species included; in that case the spectrum is completely _flat_. Footnote 12: A loose conversion of this to frequentist terms is \(\Delta\)lnZ of 2.5 \(\sim\) 2.7\(\sigma\) favoring and \(\Delta\)lnZ of 5 \(\sim\) 3.6\(\sigma\) favoring. When determining the priors for the stellar activity parameter in our retrieval analysis, we used \(\log_{10}(R^{\prime}_{HK})\) and stellar rotational periods obtained from high resolution spectra and photometric observations in McGruder et al. (2023, Table 1). 
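As a minimal illustration of how the \(\Delta\)lnZ thresholds above translate into model selection, the short sketch below compares nested-sampling log-evidences against the least complex model; the lnZ numbers are invented for illustration only, and the thresholds follow the Trotta (2008) / Benneke & Seager (2013) convention quoted in the text.

```python
# Hypothetical log-evidences (lnZ) from nested sampling for a few retrieval
# configurations; the numbers are made up purely for illustration.
lnZ = {
    "flat": -238.0,
    "plain": -238.5,
    "activity": -232.0,
    "activity + Na": -232.4,
}
ref_lnZ = lnZ["flat"]  # least complex model, as in Table 4

def support(delta_lnZ):
    """Qualitative support level following the thresholds quoted in the text."""
    if delta_lnZ < 2.5:
        return "not significant"
    if delta_lnZ < 5.0:
        return "moderate"
    return "strong"

for name, value in lnZ.items():
    dlnZ = value - ref_lnZ
    print(f"{name:15s} dlnZ = {dlnZ:+6.2f} ({support(dlnZ)})")
```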
Specifically, WASP-124 has a \(\log_{10}(R^{\prime}_{HK})\) of -4.765\(\pm\)0.056, consistent with that of WASP-96 (\(\log_{10}(R^{\prime}_{HK})\) = -4.781\(\pm\)0.028), which has been established to be quiet (Nikolov et al., 2022; McGruder et al., 2022). However, WASP-124 has a faster rotational period (10.65\({}^{+3.27}_{-3.01}\) days), and faster rotation has been found to correlate with activity levels (e.g., Pizzolato et al., 2003; Wright et al., 2011, 2013). With this in consideration, we allowed the covering fraction of unocculted inhomogeneities to vary uniformly from 0 to 6.8%, which is consistent with the 2\(\sigma\) upper level of activity for stars of this type found by Rackham et al. (2019, see their Tables 2 & 3 and Equation 2). For WASP-25, the \(\log_{10}(R^{\prime}_{HK})\) of -4.507\(\pm\)0.119 and the relatively fast stellar rotational period of 16.93\({}^{+2.02}_{-1.55}\) days suggest it is a somewhat active star; as such, we did not limit its stellar inhomogeneity coverage and set the covering fraction prior to be uniform from 0 to 50%. The priors used for each retrieval run are given in Appendix B. ### Retrieval Interpretation The best-fit parameters from the retrieval analysis can be seen in Table 5. Overall, the PLATON and Exoretrievals results agree with each other well for each target. The lack of prominent features in either spectrum makes it difficult for planetary atmospheric properties to be constrained, which is outlined by the wide 1\(\sigma\) range given for every parameter in Table 5. For WASP-25b, the best-fit models were plain atmospheres (i.e., no scatterers or atomic/molecular features) for the exoplanet and an inhomogeneous photosphere for the stellar host, with \(\sim\)20% coverage of cold spots at a temperature contrast of \(\Delta\)T \(\sim\) -2000 K with respect to the quiescent photosphere. However, the uncertainty of these inhomogeneity parameters is large (see Table 5), with the most extreme case being the retrieved PLATON inhomogeneity temperature of -2001\({}^{+1473}_{-531}\) K. For WASP-124b, using Exoretrievals, the highest-evidence models were those with low levels of stellar activity or a plain planetary atmosphere. Even so, those evidences were indistinguishable from that of a flat-line model. The PLATON retrievals had the same issue, where no model was significantly favored over another. This emphasizes the difficulty of constraining the atmosphere of WASP-124b with the data at hand. The limb temperatures obtained for WASP-25b and WASP-124b using both retrieval methods (see Table 5) are in agreement with their corresponding equilibrium temperatures of 1217\(\pm\)101 K and 1481\(\pm\)123 K, respectively (McGruder et al., 2023, Table 1). However, again the uncertainties in the retrieved values are quite large. The pressures where the atmospheres are optically thick were also poorly constrained, with \(\log_{10}(P_{0})\) [bars] of -3.0\({}^{+3.8}_{-3.4}\) (Exoretrievals) and -3.1\({}^{+3.4}_{-2.0}\) (PLATON) for WASP-25b, and -2.8\({}^{+3.7}_{-3.0}\) (Exoretrievals) and -0.7\({}^{+2.2}_{-3.3}\) (PLATON) for WASP-124b. Thus pressures from \(\sim\)0.4 \(\mu\)bar to 0.2 bars are all within a 1\(\sigma\) interval for WASP-25b, and from 1.6 \(\mu\)bar to 32 bars for WASP-124b. Figure 3 shows the corner plot obtained for the PLATON best fit of WASP-25. It also highlights the difficulty in retrieving precise atmospheric parameters. We compared these results to the analyses of WASP-31b (McGruder et al., 2020), WASP-96b (McGruder et al., 2022), WASP-6b, and WASP-110b (McGruder et al., 2023). 
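The pressure intervals quoted above follow directly from the retrieved \(\log_{10}(P_{0})\) values and their asymmetric 1\(\sigma\) errors; the following sketch performs that conversion, using the WASP-124b numbers from the text as an example.

```python
def pressure_interval(log10_p0, err_plus, err_minus):
    """1-sigma pressure interval (in bars) implied by a retrieved
    log10(P0 / bar) with asymmetric +err_plus / -err_minus uncertainties."""
    return 10.0 ** (log10_p0 - err_minus), 10.0 ** (log10_p0 + err_plus)

# WASP-124b values quoted in the text (Exoretrievals and PLATON).
for label, (val, ep, em) in {
    "Exoretrievals": (-2.8, 3.7, 3.0),
    "PLATON": (-0.7, 2.2, 3.3),
}.items():
    lo, hi = pressure_interval(val, ep, em)
    print(f"{label:13s}: {lo * 1e6:.1f} microbar to {hi:.0f} bar")

# Taking the union of the two intervals gives roughly 1.6 microbar - 32 bar,
# the 1-sigma range quoted for WASP-124b.
```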
These planets (WASP-31b, WASP-96b, WASP-6b, and WASP-110b) were chosen because they underwent the same retrieval analysis, minimizing differences that may arise from varying model assumptions and priors (e.g., Kirk et al., 2019; Barstow et al., 2020)13. The upper bounds of the pressure ranges for both targets are consistent with the 68% interval of the WASP-96b fit, which strongly indicates the absence of aerosols in the observed wavelength range of 0.4–1.24 \(\mu\)m. However, their lower bounds are also consistent with the cloud top pressure found for WASP-110b, which has the highest retrieved cloud top altitude (i.e., the largest amount of high-altitude aerosols) of the four planets. Thus, we reaffirm that further observations are needed to constrain the atmospheres of WASP-25b and WASP-124b. Footnote 13: Barstow et al. (2020) generally find consistency amongst the models and data tested, but do find cases where different models retrieve different parameters. Figure 3: The corner plot obtained for the PLATON best fit of WASP-25. The best fit was one with activity and no additional scatterers. However, it was only slightly favored over other models. As the posteriors outline, the lack of significant features makes it difficult to strongly retrieve properties of the planetary atmosphere. When attempting to interpret the relatively featureless spectra, we can deduce that it is unlikely that hazes are prominently present in the upper atmospheres of the planets, because no scattering slope is observed. Strong scattering slopes in the optical have been suggested to signify hazes and could have signals as high as 15 scale heights (Ohno and Kawashima, 2020), well within the precision of the data. Therefore, the more likely cause of the observed flatness is either high-altitude clouds or observational limitations due to the lower precision, with one scenario not necessarily explaining both atmospheres. To obtain a better understanding of these atmospheres, higher precision optical observations with HST and longer wavelength observations, ideally with JWST, are required. ## 5 Similar Seven McGruder et al. (2023) proposed that there is a tentative trend between observed high-altitude aerosols and the host star metallicity for a group of seven planets with very similar system properties, which includes WASP-25b and WASP-124b. They claim that, if this trend is real, WASP-25b and WASP-124b would sit on opposite ends of it, where WASP-25b would be obscured by aerosols (like WASP-6b and WASP-110b) and WASP-124b would be relatively clear (like WASP-96b). We explore how well this trend holds here, using the sodium signal as a proxy for aerosol levels, as was done in McGruder et al. (2023). We find a small hint of Na in the spectrum of WASP-124b, which is stronger than for WASP-25b, even though the data precision of WASP-25b probes deeper. However, when looking at the retrieval analysis results of the WASP-124b data (see Table 4), we see no strong favoring of the Exoretrievals model which includes Na. Furthermore, the log mixing ratio of Na found with the Exoretrievals fit that included it is not well constrained and suggests marginal amounts of Na (-9.7\({}^{+6.8}_{-14.1}\)). As such, we have no detection of Na in WASP-124b. This seems inconsistent with the proposed trend, but the WASP-124b data only probe down to 5.71 scale heights. This is nearly three times higher than what could be probed with WASP-6b, WASP-96b, and WASP-110b (2.004, 1.998, and 2.212 scale heights, respectively), which were used to identify the tentative trend. 
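The statement that a strong haze slope would be detectable can be quantified with numbers already quoted in this paper; the following sketch derives the transit-depth signal of one scale height from the stated precisions and scale-heights-probed values and compares a ~15 scale-height scattering slope against the data precision (no values beyond those quoted in the text are assumed).

```python
# Depth precision of the combined spectra (ppm) and the corresponding number
# of scale heights probed, both as quoted earlier in the text.
precision_ppm = {"WASP-25b": 841.0, "WASP-124b": 1238.0}
n_scale_heights_probed = {"WASP-25b": 4.03, "WASP-124b": 5.71}

for planet in precision_ppm:
    # Transit-depth signal corresponding to one scale height.
    one_H_ppm = precision_ppm[planet] / n_scale_heights_probed[planet]
    # A strong optical scattering slope can reach ~15 scale heights
    # (Ohno & Kawashima 2020), so its expected amplitude would be:
    haze_slope_ppm = 15.0 * one_H_ppm
    print(f"{planet}: 1 H ~ {one_H_ppm:.0f} ppm, 15 H slope ~ "
          f"{haze_slope_ppm:.0f} ppm vs. precision {precision_ppm[planet]:.0f} ppm")
```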
The precision of the other targets (WASP-6b, WASP-96b, and WASP-110b) is likely higher because they include observations from larger (VLT) or space-based (HST) telescopes. In Figure 5 we plot a linear aerosol-metallicity trend, where the Na feature was used as a proxy for aerosols, similar to what was done by McGruder et al. (2023). However, here we divide the Na amplitude values by their theoretical Na signal when no aerosols are present in the atmosphere, \(\Delta\)R\({}_{p}\)/R\({}_{s}\) (see equation 10 of Heng, 2016). This was done because, though the planets are twin-like, they are not exactly the same and would have slightly different maximum possible signals. Yet, because of the planets' similarity, this modification had little effect on the previous trend found by McGruder et al. (2023), as shown in Figure 5. In Figure 5, we plot a linear fit with and without the WASP-25b and WASP-124b Na signals, and find that, given the errors, both WASP-25b and WASP-124b are consistent with the trend found using just WASP-6b, WASP-96b, and WASP-110b. In Figure 5 our linear trend is fit with scipy.odr (Virtanen et al., 2020), where we used the inverse of each Na signal's errors as weights. The regression score, R\({}^{2}\), of the weighted linear fit with the Na signals from WASP-6b, WASP-96b, and WASP-110b is 0.755. When including the Na signals from WASP-25b and WASP-124b, the score was 0.754, showing that the trend continues to hold. We also compare the data to a flat-line fit, i.e., no trend, and find a mean Na amplitude of 0.083 and an R\({}^{2}\) of -0.051, emphasizing that a linear trend is much more appropriate given the data. Though there is no strong support for WASP-124b having high-altitude aerosols, if it does and is inconsistent with the tentative trend found by McGruder et al. (2023), a possible explanation might be its difference in equilibrium temperature. All the planets' equilibrium temperatures lie around \(\sim\)1250 K, and though the equilibrium temperature of WASP-124b (1481\(\pm\)123 K) is less than 2\(\sigma\) from that of the coolest planet in our sample (WASP-6b, T = 1167\(\pm\)96 K), it is possible that this temperature difference is important. Many studies find that equilibrium temperature is important in aerosol formation (e.g., Stevenson, 2016; Fu et al., 2017; Gao et al., 2020; Estrela et al., 2022), but this literature is not consistent on whether an increase in temperature for this class of planets would produce more or less aerosols. Thus, there is no strong support in the literature that the \(\sim\)300 K difference in equilibrium temperature of WASP-124b is an important factor in the tentative trend found. Still, if metallicity does not strongly correlate with aerosol formation rates, then perhaps an unobserved parameter, such as high-energy emission, is correlated to the higher aerosol rates observed in some of the _Similar Seven_ planets. Alternatively, the complex nature of aerosol formation in these extreme environments might make it difficult to correlate one or two parameters to the observed aerosol rates. To have a better grasp of whether the aerosol-metallicity trend truly holds, at minimum, more optical observations of WASP-124b are needed to precisely probe its atmosphere. This is achievable with HST and/or larger ground-based telescope observations. Further Magellan/IMACS observations would also improve precision, especially if such observations were of good quality (i.e., a full transit, sufficient baselines, and good night conditions). 
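The weighted linear fit described above can be reproduced in a few lines with scipy.odr; the metallicity and Na-amplitude arrays below are placeholders (the actual measurements are those plotted in Figure 5 and tabulated in the cited papers), and the weighted R\({}^{2}\) shown is one common definition of the regression score.

```python
import numpy as np
from scipy import odr

# Placeholder data: host-star [Fe/H] and normalized Na amplitude with errors
# (illustrative values only, standing in for the points plotted in Figure 5).
feh = np.array([-0.15, 0.14, -0.06])
feh_err = np.array([0.09, 0.19, 0.10])
na_amp = np.array([0.17, 0.02, 0.10])
na_err = np.array([0.05, 0.03, 0.04])

def linear(beta, x):
    return beta[0] * x + beta[1]

data = odr.RealData(feh, na_amp, sx=feh_err, sy=na_err)
fit = odr.ODR(data, odr.Model(linear), beta0=[-0.5, 0.1]).run()
slope, intercept = fit.beta

# One common weighted R^2 definition, using inverse errors as weights.
w = 1.0 / na_err
pred = linear(fit.beta, feh)
ybar = np.average(na_amp, weights=w)
r2 = 1.0 - np.sum(w * (na_amp - pred) ** 2) / np.sum(w * (na_amp - ybar) ** 2)

print(f"slope = {slope:.3f}, intercept = {intercept:.3f}, weighted R^2 = {r2:.3f}")
```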
## 6 Summary and Conclusions We observed four transits of WASP-25b, one with NTT/EFOSC2 and three with Magellan/IMACS, and seven transits of WASP-124b with Magellan/IMACS. We combined the transmission spectra from each night for each target (excluding the partial transit of WASP-124b on June 15th 2019) to produce near-continuous final transmission spectra from 4200–9100 Å for WASP-25b and from 4570–9940 Å for WASP-124b. Our transmission spectra have an average precision in depth of 841 ppm for WASP-25b and 1238 ppm for WASP-124b, corresponding to 4.03 and 5.71 scale heights, respectively. The spectra of both targets lacked significant features. Nevertheless, we ran a set of retrieval models utilizing Exoretrievals and PLATON on each final transmission spectrum. In doing so, we found that the retrievals' most favored model (with \(\Delta\ln Z<5\)) for WASP-25b is one that included a \(\sim\)20% covering fraction of unocculted cold spots at \(\sim\)2000 K cooler than the surrounding photosphere, but no molecular/atomic features. For WASP-124b there is no model strongly favored over another, but there are marginal hints of Na and K features and low levels of activity in the host star. Figure 4: The final transmission spectra of WASP-25b (top) and WASP-124b (bottom). The final weighted-average spectra are shown in violet for WASP-25 and blue for WASP-124, with the individual transmission spectra used for the combined spectra plotted in transparent colors. The best-fit PLATON retrieval models are plotted as a black line with the 1\(\sigma\) confidence interval highlighted in light blue. For both targets the best-fit models are the ones that just include activity, but in the case of WASP-124b, this model is not significantly preferred over any other model. The retrieved atmospheric parameters from the best-fit models have wide uncertainties for both planets, but the retrieved limb temperatures are consistent with the calculated equilibrium temperatures. Given that there are no strong atomic or molecular features in either spectrum, the pressure levels where the atmosphere is optically thick and the atmospheric metallicities are poorly constrained. The lack of features is possibly due to low precision being unable to probe the depths required for feature detection and/or high-altitude clouds obscuring the spectra. We then put these planets' atmospheres in context with the aerosol-metallicity trend proposed by McGruder et al. (2023), and plot the sodium signal of these targets relative to their host stars' metallicities. We find that the uncertainties of the Na signal, caused by the lower data precision, make it difficult to provide clear insight towards the existence of such a trend. We believe that further observations with higher quality data in the optical are necessary to confirm a trend. This could be done with HST and/or further ground-based telescopes. JWST observations would provide broader context on the nature of these planets' atmospheres. Confirming this trend has the potential to drastically direct theoretical understanding of aerosol formation and could yield more efficient target selection criteria. ## Appendix A Light Curves The detrending steps for the spectrophotometric bins are shown in Figures 6–11, where all WASP-25b transits were detrended with the CMC+Poly detrending routine, transits UT180915, UT190615, UT210905, and UT220605 for WASP-124b were detrended with the PCA+GP detrending method, and the remaining WASP-124b transits were detrended with CMC+Poly. 
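As a rough, generic illustration of the common-mode-correction plus polynomial (CMC+Poly) idea referred to above (the actual routine used here is the one described in Section 3 and in McGruder et al. 2022, 2023), the following sketch divides a spectrophotometric bin by the white-light common mode and removes a low-order polynomial in time; the function signature, the out-of-transit mask, and the polynomial order are assumptions made only for this example.

```python
import numpy as np

def cmc_poly_detrend(time, bin_flux, white_flux, white_model, poly_order=2):
    """Toy common-mode correction + polynomial detrending of one wavelength bin.

    time        : exposure mid-times
    bin_flux    : normalized light curve of the spectrophotometric bin
    white_flux  : normalized white-light curve from the same night
    white_model : best-fit transit model to the white-light curve
    """
    # Wavelength-independent ("common-mode") systematics estimated from the
    # white-light residuals.
    common_mode = white_flux / white_model
    corrected = bin_flux / common_mode

    # Remove any remaining smooth trend with a low-order polynomial in time,
    # fitted to out-of-transit points only (crudely identified from the model).
    oot = white_model >= white_model.max() - 1e-8
    coeffs = np.polyfit(time[oot], corrected[oot], poly_order)
    return corrected / np.polyval(coeffs, time)
```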
Table 6 has the combined transmission spectra of WASP-25b (left) and WASP-124b (right). Figures Figure 5: Sodium amplitude from the transmission spectra of WASP-6b (red diamond Nikolov et al., 2015; Carter et al., 2020), WASP-96b (blue star Nikolov et al., 2018; McGruder et al., 2022), and WASP-110b (green circle Nikolov et al., 2021) versus their host star metallicities. In McGruder et al. (2023), ”Na I Amplitude” "was the depth of the bin centered around the Na feature minus the average depth of all bins in the surrounding continuum. Here we take that value and divide by the theoretical difference of the peak depth from Na and the continuum, in order to incorporate slight differences in the planets’ scale heights and temperature. scipy.odr was used to obtain a weighted linear fit with these three planetary signals and is shown as a dashed blue line. Its regression score, R\({}^{2}\), is 0.75. The 1\(\sigma\) interval of the fit is shaded around the dashed line, and was calculated including the uncertainties in both metallicity and Na signal. Using the same method for WASP-6b and WASP-110b (i.e., R\({}_{p}\)/R\({}_{s}\) bin centered at 5892.9Å minus average R\({}_{p}\)/R\({}_{s}\) within 5340–5820Å and 5960–6440Å), we calculated the Na signals of WASP-25b (purple square) and WASP-124b (purple triangle). A weighted linear fit with all 5 planet signals is shown as a purple dash-dotted line, with a R\({}^{2}\) of 0.71. Although our new measurements have larger uncertainties they are consistent with the trend identified in McGruder et al. (2023). The metallicity range of WASP-55b and HATS-29b are plotted as yellow shaded regions. \begin{table} \begin{tabular}{|l|c|c|c|c|c|c|c|c|c|} \hline \hline \multicolumn{1}{|c|}{**Exorettrevadis**} & \multicolumn{3}{|c|}{**PLATON**} \\ \hline & _WASP-25b_ & _WASP-124b_ & _WASP-25b_ & _WASP-124b_ \\ \hline T\({}_{\rm p}\) & \(1350^{+300}_{-34}\) & \(1160^{+430}_{-340}\) & T\({}_{\rm p}\) & \(1520^{+150}_{-150}\) & \(1090^{+240}_{-160}\) \\ \hline \(\log_{10}(P_{0})\) & \(-3.0^{+2.8}_{-3.4}\) & \(-2.8^{+3.7}_{-3.0}\) & \(\log_{10}(P_{0})\) & \(-3.1^{+3.4}_{-2.0}\) & \(-0.7^{+2.2}_{-3.3}\) \\ \hline \(\log_{10}(K)\) & \(-17.1^{+6.0}_{-8.5}\) & \(-12.3^{+8.0}_{-12.0}\) & & & & \\ \hline \(\log_{10}(Na)\) & \(-18.4^{+8.5}_{-7.6}\) & \(-9.7^{+6.8}_{-7.1}\) & \(\log_{10}(Z/Z_{\odot})\) & \(1.6^{+0.88}_{-1.60}\) & \(1.2^{+1.0}_{-1.3}\) \\ \hline \(\Delta T_{het}\) & \(-2382^{+372}_{-350}\) & \(810^{+1280}_{-1700}\) & \(\Delta T_{het}\) & \(-2001^{+1473}_{-1373}\) & \(820^{+1000}_{-1830}\) \\ \hline \(t_{het}\) & \(0.249^{+0.057}_{-0.067}\) & \(0.023^{+0.028}_{-0.015}\) & f\({}_{het}\) & \(0.168^{+0.08}_{-0.087}\) & \(0.025^{+0.026}_{-0.048}\) \\ \hline \end{tabular} * For WASP-124b the heterogeneity parameters with Exorretrievals were obtained using the model that only included activity and the other parameters were obtained using the plain model that included K and Na. According to the evidences, both of those models were indistinguishable from each other (see Table 4). We used the model with activity, sodium, and potassium to obtain best fit parameters for WASP-25b. This model was used because all models including activity were indistinguishable from each other and this model obtained elemental mixing ratios. The difference in obtained overlapping parameters (i.e. T\({}_{\rm p}\), \(\log_{10}(P_{0})\), \(\Delta T_{het}\), \(t_{het}\)) with that model and the one with just activity and water, were well within their uncertainties. 
The obtained water abundance was not constrained, given that there are no water features in the data, so we do not show its best fit here. Given that there are no carbon- or oxygen-bearing species in the wavelength coverage of our data, we do not report the C/O ratios retrieved by PLATON. The pressure in \(\log_{10}(P_{0})\) is given in bars for both retrievals. \end{table} Table 5: Parameters obtained by the best-fit retrievals for each system and retrieval code. \begin{table} \begin{tabular}{|l|c|c|c|c|c|c|c|l|c|} \hline \hline \multicolumn{8}{|c|}{**Exoretrievals**} & \multicolumn{2}{c|}{**PLATON**} \\ \hline & flat & \(H_{2}O\) & \(K\) & \(Na\) & \(K+Na\) & \(H_{2}O+Na\) & \(H_{2}O+K+Na\) & **Model:** & \\ \hline **WASP-25b:** & & & & & & & & & \\ plain & \(0.0\) & \(-0.81\) & \(-0.94\) & \(-1.01\) & \(-1.11\) & \(-1.0\) & \(-1.34\) & plain & \(0.0\) \\ scattering & \(---\) & \(-4.15\) & \(-3.91\) & \(-3.8\) & \(-3.78\) & \(-3.85\) & \(-3.94\) & scattering & \(2.89\) \\ activity & \(5.0\) & \(\bf{6.28}\) & \(6.25\) & \(6.16\) & \(5.81\) & \(5.85\) & \(5.61\) & activity & \(\bf{4.13}\) \\ Both & \(---\) & \(-1.82\) & \(0.55\) & \(-0.35\) & \(-0.13\) & \(-2.07\) & \(-0.93\) & Both & \(3.75\) \\ \hline **WASP-124b:** & & & & & & & & & \\ plain & \(0.0\) & \(-0.83\) & \(-0.55\) & \(-0.34\) & \(-0.1\) & \(-0.35\) & \(-0.48\) & plain & \(0.0\) \\ scattering & \(---\) & \(-3.17\) & \(-3.01\) & \(-3.12\) & \(-3.38\) & \(-3.41\) & \(-3.62\) & scattering & \(-0.22\) \\ activity & \(\bf{0.52}\) & \(-0.95\) & \(-0.88\) & \(-0.62\) & \(-0.64\) & \(-0.88\) & \(-1.01\) & activity & \(\bf{0.09}\) \\ scattering+activity & \(---\) & \(-3.42\) & \(-3.64\) & \(-3.63\) & \(-3.9\) & \(-3.93\) & \(-4.3\) & Both & \(-0.27\) \\ \hline \end{tabular} * The \(\Delta\)ln Z values are relative to a plain (and, in Exoretrievals' case, flat) spectrum, for the combined WASP-25b (**top**) spectrum and the combined WASP-124b (**bottom**) spectrum. For WASP-25b the retrievals with activity were favored with both Exoretrievals and PLATON, whereas for WASP-124b no model had evidences strongly favoring it, but the plain models or the models with activity had higher evidences. The models with the highest evidences are highlighted in bold. In these models the ln Z values for the flat (Exoretrievals, \(0\)\(\Delta\)ln Z) and plain (PLATON, \(0\)\(\Delta\)ln Z) models are -238 and 192 for WASP-25b, and -250 and 206 for WASP-124. We include those values for completeness, though the \(\Delta\)ln Z is what is needed for model selection. \end{table} Table 4: \(\Delta\)ln Z for Exoretrievals (left) and PLATON (right) retrievals. Figures 6–11 and the transmission spectra of each individual night, including the unused partial transit of UT190615, can be obtained via zenodo.org/record/8047731. ## Appendix B Atmospheric Retrieval Priors Table 7 has the priors used for each retrieval model. We thank the anonymous referee for helpful comments that improved the manuscript. This work has been supported by the National Aeronautics and Space Administration's Exoplanet Research Program via grant No. 20-XRP20_2.0091. AJ acknowledges support from ANID - Millennium Science Initiative - ICN12_009 and from FONDECYT project 1210718. JK acknowledges financial support from Imperial College London through an Imperial College Research Fellowship grant. KNOC acknowledges support from a Ford Foundation Predoctoral Fellowship. NHA acknowledges support by the National Science Foundation Graduate Research Fellowship under Grant No. DGE1746891. 
B.V.R thanks the Heising-Simons Foundation for support. _Facilities:_ Magellan:Baade (IMACS), Smithsonian Institution High Performance Cluster (SI/HPC), and the New Technology Telescope (EFOSC2) \begin{table} \begin{tabular}{|c|c|c|} \hline \hline \multicolumn{2}{|c|}{**WASP-25b**} & \multicolumn{2}{c|}{**WASP-124b**} \\ \hline **Wavelength (Å)** & \(\mathbf{R_{p}/R_{a}}\) & **Wavelength (Å)** & \(\mathbf{R_{p}/R_{a}}\) \\ \hline \(4200.0-4410.0\) & \(0.1392^{+0.0025}_{-0.0026}\) & \(4570.0-4730.0\) & \(0.1251^{+0.0047}_{-0.0048}\) \\ \hline \(4410.0-4570.0\) & \(0.1332^{+0.0028}_{-0.0024}\) & \(4730.0-4890.0\) & \(0.1279^{+0.0043}_{-0.0047}\) \\ \hline \(4570.0-4730.0\) & \(0.1405\pm 0.0022\) & \(4890.0-5050.0\) & \(0.1276\pm 0.0006\) \\ \hline \(4730.0-4890.0\) & \(0.1402^{+0.0011}_{-0.0012}\) & \(5050.0-5210.0\) & \(0.1269^{+0.0008}_{-0.0007}\) \\ \hline \(4890.0-5050.0\) & \(0.1409\pm 0.0003\) & \(5210.0-5370.0\) & \(0.1277^{+0.0013}_{-0.0012}\) \\ \hline \(5050.0-5210.0\) & \(0.1409^{+0.0022}_{-0.0024}\) & \(5370.0-5530.0\) & \(0.1273\pm 0.0018\) \\ \hline \(5210.0-5370.0\) & \(0.1402^{+0.0012}_{-0.0013}\) & \(5530.0-5690.0\) & \(0.1269\pm 0.0016\) \\ \hline \(5370.0-5530.0\) & \(0.1399\pm 0.0021\) & \(5690.0-5847.9\) & \(0.128^{+0.0021}_{-0.0021}\) \\ \hline \(5530.0-5690.0\) & \(0.1409^{+0.0019}_{-0.0018}\) & \(5847.9-5937.9\) & \(0.13\pm 0.001\) \\ \hline \(5690.0-5847.9\) & \(0.1401^{+0.002}_{-0.0019}\) & \(5937.9-6097.9\) & \(0.1922^{+0.0016}_{-0.0017}\) \\ \hline \(5847.9-5937.9\) & \(0.1403\pm 0.0008\) & \(6097.9-6317.0\) & \(0.1283\pm 0.0005\) \\ \hline \(5937.9-6082.9\) & \(0.1401^{+0.002}_{-0.0021}\) & \(6317.0-6424.0\) & \(0.1283\pm 0.0019\) \\ \hline \(6082.9-6227.9\) & \(0.14\pm 0.0006\) & \(6424.0-6542.86\) & \(0.1282^{+0.0023}_{-0.0023}\) \\ \hline \(6227.9-6372.87\) & \(0.1403^{+0.001}_{-0.0011}\) & \(6542.86-6662.86\) & \(0.1298^{+0.0022}_{-0.0023}\) \\ \hline \(6372.87-6517.86\) & \(0.1408\pm 0.0015\) & \(6662.86-6752.86\) & \(0.1299\pm 0.002\) \\ \hline \(6517.86-6662.86\) & \(0.1403\pm 0.0016\) & \(6752.86-6872.86\) & \(0.1281\pm 0.0019\) \\ \hline \(6662.86-6752.86\) & \(0.1404^{+0.0012}_{-0.0013}\) & \(6872.86-6992.86\) & \(0.1305^{+0.0022}_{-0.0022}\) \\ \hline \(6752.86-6872.86\) & \(0.1407^{+0.0013}_{-0.0014}\) & \(6992.86-7113.0\) & \(0.1293^{+0.002}_{-0.0019}\) \\ \hline \(6872.86-6992.86\) & \(0.1398^{+0.0013}_{-0.0012}\) & \(7113.0-7723.0\) & \(0.1279^{+0.0012}_{-0.0014}\) \\ \hline \(6992.86-7113.0\) & \(0.1412\pm 0.0017\) & \(7273.0-7433.0\) & \(0.1267^{+0.0017}_{-0.0016}\) \\ \hline \(7113.0-7273.0\) & \(0.1402\pm 0.0002\) & \(7433.0-7597.0\) & \(0.1265^{+0.0021}_{-0.0012}\) \\ \hline \(7273.0-7433.0\) & \(0.1398^{+0.0016}_{-0.0017}\) & \(7636.5-7726.5\) & \(0.135^{+0.0032}_{-0.0031}\) \\ \hline \(7433.0-7597.0\) & \(0.1391^{+0.0014}_{-0.0012}\) & \(7726.5-7886.5\) & \(0.1289\pm 0.0023\) \\ \hline \(7636.5-7726.5\) & \(0.1419^{+0.0025}_{-0.0020}\) & \(7886.5-8046.5\) & \(0.128\pm 0.0011\) \\ \hline \(7726.5-7886.5\) & \(0.1376\pm 0.0008\) & \(8046.5-8206.5\) & \(0.1273^{+0.0017}_{-0.0018}\) \\ \hline \(7886.5-8046.5\) & \(0.1388\pm 0.0008\) & \(8206.5-8366.5\) & \(0.1282^{+0.0019}_{-0.0018}\) \\ \hline \(8046.5-8206.5\) & \(0.1396\pm 0.0023\) & \(8366.5-8566.0\) & \(0.1288^{+0.0014}_{-0.0023}\) \\ \hline \(8206.5-8366.5\) & \(0.1393^{+0.0016}_{-0.0017}\) & \(8566.0-8800.0\) & \(0.1269\pm 0.0029\) \\ \hline \(8366.5-8566.0\) & \(0.1376\pm 0.001\) & \(8800.0-9100.0\) & \(0.1251^{+0.0011}_{-0.0043}\) \\ \hline \(8566.0-8800.0\) & 
\(0.1379^{+0.0014}_{-0.0013}\) & \(9100.0-9225.0\) & \(0.1292^{+0.0042}_{-0.0033}\) \\ \hline \(8800.0-9100.0\) & \(0.1389\pm 0.0016\) & \(9225.0-9425.0\) & \(0.1239^{+0.0081}_{-0.0088}\) \\ \hline & & \(9425.0-9640.0\) & \(0.127 Astropy (Astropy Collaboration et al., 2013), corner (Foreman-Mackey, 2016), Matplotlib (Hunter, 2007), NumPy (Harris et al., 2020), Multinest (Ferroz et al., 2009), PyMultiNest (Buchner et al., 2014), SciPy (Virtanen et al., 2020), batman (Kreidberg, 2015), george (Ambikasaran et al., 2015) dynesty (Speagle, 2020), PLATON (Zhang et al., 2019)
2303.03069
Hidden scale invariance in the Gay-Berne model. II. Smectic B phase
This paper complements a previous study of the isotropic and nematic phases of the Gay-Berne liquid-crystal model [Mehri et al., Phys. Rev. E 105, 064703 (2022)] with a study of its smectic B phase found at high density and low temperatures. We find also in this phase strong correlations between the virial and potential-energy thermal fluctuations, reflecting hidden scale invariance and implying the existence of isomorphs. The predicted approximate isomorph invariance of the physics is confirmed by simulations of the standard and orientational radial distribution functions, the mean-square displacement as a function of time, as well as the force, torque, velocity, angular velocity, and orientational time-autocorrelation functions. The regions of the Gay-Berne model that are relevant for liquid-crystal experiments can thus fully be simplified via the isomorph theory.
Saeed Mehri, Jeppe C. Dyre, Trond S. Ingebrigtsen
2023-03-06T12:30:06Z
http://arxiv.org/abs/2303.03069v2
# Hidden scale invariance in the Gay-Berne model. II. ###### Abstract This paper complements a previous study of the isotropic and nematic phases of the Gay-Berne liquid-crystal model [Mehri _et al_., Phys. Rev. E **105**, 064703 (2022)] with a study of its smectic B phase found at high density and low temperatures. We find also in this phase strong correlations between the virial and potential-energy thermal fluctuations, reflecting hidden scale invariance and implying the existence of isomorphs. The predicted approximate isomorph invariance of the physics is confirmed by simulations of the standard and orientational radial distribution functions, the mean-square displacement as a function of time, as well as the force, torque, velocity, angular velocity, and orientational time-autocorrelation functions. The regions of the Gay-Berne model that are relevant for liquid-crystal experiments can thus fully be simplified via the isomorph theory. ## I Introduction Liquid crystals involve molecules with a high degree of shape anisotropy [1; 2]. This interesting state of matter is relevant in many different contexts, ranging from display applications to biological systems [3; 4; 5]. Depending on temperature and pressure, the molecular anisotropy leads to different structural phases, e.g., nematic and smectic phases with long-range orientational ordering [1]. Gay-Berne (GB) models describe molecules of varying shape anisotropy spanning from elongated ellipsoids to thin disks, and GB models have become standard liquid-crystal models [6]. The GB pair potential depends on four dimensionless parameters. This is reflected in the notation GB(\(\kappa,\kappa^{\prime},\mu,\nu\)) in which the four parameters quantify the shape of the molecules and the strength of their interactions. A previous paper studied the isotropic and nematic phases of a GB model with parameters corresponding to rod-shaped elongated molecules [7]. It was found that this model has isomorphs in the isotropic and nematic phases, which are curves in the thermodynamic phase diagram along which the physics is approximately invariant. This paper presents a study of the same GB model in its smectic B phase, demonstrating that isomorphs exist also here. ## II The Gay-Berne potential and simulation details The GB(\(\kappa,\kappa^{\prime},\mu,\nu\)) pair potential is characterized by the following four dimensionless parameters: \(\kappa\equiv\sigma_{e}/\sigma_{s}\) where \(\sigma_{e}\) and \(\sigma_{s}\) are lengths, \(\kappa^{\prime}\equiv\varepsilon_{ss}/\varepsilon_{ee}\) where \(\varepsilon_{ss}\) and \(\varepsilon_{ee}\) are energies, and two exponents \(\mu\) and \(\nu\). The GB pair potential \(v_{\rm GB}\) is defined as follows [6] \[v_{\rm GB}({\bf r}_{ij},\hat{\bf e}_{i},\hat{\bf e}_{j}) =4\varepsilon(\hat{\bf r},\hat{\bf e}_{i},\hat{\bf e}_{j})\left[ (\sigma_{s}/\rho_{ij})^{12}-(\sigma_{s}/\rho_{ij})^{6}\right], \tag{1a}\] \[\rho_{ij} =r_{ij}-\sigma(\hat{\bf r},\hat{\bf e}_{i},\hat{\bf e}_{j})+ \sigma_{s}\,. \tag{1b}\] Here, \(r_{ij}\) is the distance between molecules \(i\) and \(j\), \(\hat{\bf r}\equiv{\bf r}_{ij}/r_{ij}\) is the unit vector from molecule \(i\) to molecule \(j\), and \(\hat{\bf e}_{i}\) and \(\hat{\bf e}_{j}\) are unit vectors along the major axes of the molecules. The GB molecule mimics an ellipsoid of two diameters \(\sigma_{s}\) and \(\sigma_{e}\). 
Specifically, one defines \[\sigma(\hat{\bf r},\hat{\bf e}_{i},\hat{\bf e}_{j}) =\sigma_{s}\biggl{[}1-\frac{\chi}{2}\biggl{(}\frac{(\hat{\bf e}_{i }\cdot\hat{\bf r}+\hat{\bf e}_{j}\cdot\hat{\bf r})^{2}}{1+\chi(\hat{\bf e}_{i }\cdot\hat{\bf e}_{j})}+\frac{(\hat{\bf e}_{i}\cdot\hat{\bf r}-\hat{\bf e}_{j} \cdot\hat{\bf r})^{2}}{1-\chi(\hat{\bf e}_{i}\cdot\hat{\bf e}_{j})}\biggr{)} \biggr{]}^{-1/2}, \tag{2a}\] \[\chi =\frac{\kappa^{2}-1}{\kappa^{2}+1}\,. \tag{2b}\] Here \(\chi\) is a shape anisotropy parameter and \(\kappa\) quantifies the molecular asymmetry such that \(\kappa=1\) (\(\chi=0\)) represents spherical molecules, \(\kappa\rightarrow\infty\) (\(\chi\to 1\)) corresponds to very long rods, and \(\kappa\to 0\) (\(\chi\rightarrow-1\)) corresponds to very thin disks. The energy term is given by \[\varepsilon(\hat{\mathbf{r}},\hat{\mathbf{e}}_{i},\hat{\mathbf{e}}_{j})= \varepsilon_{0}\,\left(\varepsilon_{1}(\hat{\mathbf{e}}_{i},\hat{\mathbf{e}}_{j })\right)^{\nu}\left(\varepsilon_{2}(\hat{\mathbf{r}},\hat{\mathbf{e}}_{i}, \hat{\mathbf{e}}_{j})\right)^{\mu}\] (3a) in which \[\varepsilon_{1}(\hat{\mathbf{e}}_{i},\hat{\mathbf{e}}_{j}) =\left(1-\chi^{2}(\hat{\mathbf{e}}_{i}\cdot\hat{\mathbf{e}}_{j}) ^{2}\right)^{-1/2}, \tag{3b}\] \[\varepsilon_{2}(\hat{\mathbf{r}},\hat{\mathbf{e}}_{i},\hat{ \mathbf{e}}_{j}) =1-\frac{\chi^{\prime}}{2}\bigg{(}\frac{(\hat{\mathbf{e}}_{i} \cdot\hat{\mathbf{r}}+\hat{\mathbf{e}}_{j}\cdot\hat{\mathbf{r}})^{2}}{1+\chi^ {\prime}(\hat{\mathbf{e}}_{i}\cdot\hat{\mathbf{e}}_{j})}+\frac{(\hat{\mathbf{ e}}_{i}\cdot\hat{\mathbf{r}}-\hat{\mathbf{e}}_{j}\cdot\hat{\mathbf{r}})^{2}}{1- \chi^{\prime}(\hat{\mathbf{e}}_{i}\cdot\hat{\mathbf{e}}_{j})}\bigg{)}\,. \tag{3c}\] Here \[\chi^{\prime}=\frac{\kappa^{\prime 1/\mu}-1}{\kappa^{\prime 1/\mu}+1} \tag{3d}\] is an energy anisotropy parameter. The energies \(\varepsilon_{ss}\) and \(\varepsilon_{ee}\) are the well depths of the potential in the side-side and end-end configurations, respectively. Unless otherwise stated, \(\sigma_{s}\) defines the length and \(\varepsilon_{0}\) the energy units used below. We simulated a system of 1372 particles of the GB\((3,5,2,1)\) model studied previously in Ref. [7]. The GB pair potential was cut and shifted at \(r_{c}=4.0\) and the time step used was \(\Delta t=0.001\). The standard \(NVT\) Nose-Hoover algorithm was used for the center-of-mass motion and the Fincham algorithm was used for the rotational motion [8; 9]. Different thermostats were applied for the translational and the rotational motions [7]. The molecular moment of inertia was set to unity. A home-made code for GPU computing was used; at each simulated state point 20 million time steps were taken to equilibrate the system before the production run of 67 million time steps. If \(\mathbf{R}\equiv(\mathbf{r}_{1},...,\mathbf{r}_{N})\) is the vector of particle coordinates and \(\rho\equiv N/V\) is the particle density, the microscopic virial \(W(\mathbf{R})\) is defined as \(W(\mathbf{R})\equiv\partial U(\mathbf{R})/\partial\ln\rho\) in which the density is changed by a uniform scaling all particle coordinates. For an inverse power-law pair potential, \(v(r)=\varepsilon(r/\sigma)^{-n}\), it is easy to see that this implies that \(W(\mathbf{R})\) is a sum of pair virial contributions equal to \((n/3)v(r)\). Because the vectors \(\hat{\mathbf{r}}\), \(\hat{\mathbf{e}}_{i}\), and \(\hat{\mathbf{e}}_{j}\) do not change under a uniform expansion, a related result applies for the GB pair potential. 
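Collecting Eqs. (1)–(3) in code form may help readers reproduce the model; the sketch below evaluates the GB pair energy for a given geometry together with the pair virial expression quoted next in the text. It is a plain NumPy illustration using \(\sigma_{s}=\varepsilon_{0}=1\) and the GB(3,5,2,1) parameters, not the GPU production code used for the simulations.

```python
import numpy as np

# GB(kappa, kappa', mu, nu) = GB(3, 5, 2, 1) parameters; sigma_s = eps_0 = 1.
kappa, kappa_p, mu, nu = 3.0, 5.0, 2.0, 1.0
sigma_s, eps_0 = 1.0, 1.0
chi = (kappa**2 - 1.0) / (kappa**2 + 1.0)                          # Eq. (2b)
chi_p = (kappa_p**(1.0 / mu) - 1.0) / (kappa_p**(1.0 / mu) + 1.0)  # Eq. (3d)

def _angular(rhat, ei, ej, x):
    """Angular factor (x/2)[...] appearing in Eqs. (2a) and (3c)."""
    a, b, c = ei @ rhat, ej @ rhat, ei @ ej
    return 0.5 * x * ((a + b)**2 / (1.0 + x * c) + (a - b)**2 / (1.0 - x * c))

def gb_pair(r_ij, ei, ej):
    """Pair energy and pair virial for two GB molecules.

    r_ij : separation vector from molecule i to j; ei, ej : unit axis vectors.
    """
    r = np.linalg.norm(r_ij)
    rhat = r_ij / r
    sigma = sigma_s / np.sqrt(1.0 - _angular(rhat, ei, ej, chi))   # Eq. (2a)
    eps1 = 1.0 / np.sqrt(1.0 - chi**2 * (ei @ ej)**2)              # Eq. (3b)
    eps2 = 1.0 - _angular(rhat, ei, ej, chi_p)                     # Eq. (3c)
    eps = eps_0 * eps1**nu * eps2**mu                              # Eq. (3a)
    rho = r - sigma + sigma_s                                      # Eq. (1b)
    sr6 = (sigma_s / rho)**6
    energy = 4.0 * eps * (sr6**2 - sr6)                            # Eq. (1a)
    virial = 4.0 * eps * (4.0 * sr6**2 - 2.0 * sr6) * (r / rho)    # pair virial
    return energy, virial

# Example: side-by-side pair at separation 1.1 sigma_s.
u, w = gb_pair(np.array([1.1, 0.0, 0.0]),
               np.array([0.0, 0.0, 1.0]), np.array([0.0, 0.0, 1.0]))
```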
Specifically, the GB pair virial is equal to \(4\varepsilon(\hat{\mathbf{r}},\hat{\mathbf{e}}_{i},\hat{\mathbf{e}}_{j})[4 \left(\sigma_{s}/\rho_{ij}\right)^{12}-2\left(\sigma_{s}/\rho_{ij}\right)^{6}] (r/\rho_{ij})\), and the total microscopic virial \(W(\mathbf{R})\) is calculated as the sum of all pair virials. The GB(3,5,2,1) phase diagram is shown in Fig. 3 of Ref. [7]. Figure 1 shows a snapshot of the system at equilibrium in the smectic B phase. Figure 1: Snapshot of the smectic B phase at density 0.4 and temperature 1.2. A color coding is introduced here to visualize the individual planes. ## III Properties studied The quantities evaluated numerically in this study are: the standard radial distribution function \(g(r)\)[10; 11], the below defined orientational radial distribution function \(G_{l}(r)\) (\(l=2\)) [11; 12; 13; 14], and a number of single-molecule time-autocorrelation functions [15; 16]. The latter two observables are defined by \[G_{l}(r)\equiv\langle P_{l}(\hat{\bf e}_{i}\cdot\hat{\bf e}_{j})\rangle, \tag{4}\] \[\phi_{A}(t)=\langle{\bf A}(t_{0})\cdot{\bf A}(t_{0}+t)\rangle. \tag{5}\] Here \(P_{l}\) is the \(l\)'th Legendre polynomial, \({\bf A}(t)\) is a vector defined for each molecule, and the angular brackets denote an ensemble and particle average, which in the case of \(G_{l}(r)\) is restricted to pairs of particles the distance \(r\) apart. We study the cases of \({\bf A}\) being the velocity, angular velocity, force, and torque. We also study the first- and second-order molecular orientational order parameter time-autocorrelation functions defined by \[\phi_{l}(t)=\langle P_{l}(\hat{\bf e}_{i}(t_{0})\cdot\hat{\bf e}_{i}(t_{0}+t)) \rangle\,. \tag{6}\] ## IV R-simple systems and isomorphs The virial \(W\) quantifies the part of the pressure \(p\) that derives from molecular interactions via the defining identity \(pV=Nk_{B}T+W\). Liquids and solids may be classified according to the degree of correlation between the constant-volume thermal-equilibrium fluctuations of virial \(W\) and potential energy \(U\)[17]. "R-simple systems" are those with strong \(WU\) correlations; such systems are simple because their thermodynamic phase diagram is basically one-dimensional in regard to structure and dynamics [17; 18; 19; 20]. The "isomorph theory" of R-simple systems was developed over the last decade [21; 22]. The \(WU\) Pearson correlation coefficient (which depends on the state point in question) is defined by \[R=\frac{\langle\Delta W\Delta U\rangle}{\sqrt{\langle(\Delta W)^{2}\rangle \langle(\Delta U)^{2}\rangle}}\,. \tag{7}\] Here \(\Delta\) gives the deviation from the equilibrium mean value. Many systems, including the standard Lennard-Jones and Yukawa fluids, have strong \(WU\) correlations in their liquid and solid phases, whereas \(R\) usually decreases significantly for densities below the critical density [23]. A system is considered to be R-simple whenever \(R>0.9\) at the state points of interest [21]. This is a pragmatic criterion, however, and, e.g., the simulations presented in this paper go below this value at high temperatures without significantly affecting the degree of isomorph invariance. As mentioned, R-simple systems have curves in the phase diagram along which structure and dynamics are approximately invariant. These curves are termed _isomorphs_. Isomorph invariance applies when data are presented in so-called reduced units. 
These units, which in contrast to ordinary units are state-point dependent, are given by letting the density \(\rho\) define the length unit \(l_{0}\), the temperature define the energy unit \(e_{0}\), and density and thermal velocity define the time unit \(t_{0}\), \[l_{0}=\rho^{-1/3},\ \ e_{0}=k_{\rm B}T,\ \ t_{0}=\rho^{-1/3}\sqrt{m/k_{\rm B }T}\,. \tag{8}\] Here \(m\) is the molecule mass. Quantities made dimensionless by application of these units are termed "reduced" and marked with a tilde. Strong virial potential-energy correlations arise whenever hidden scale invariance applies. This is the condition that the potential-energy ordering of same-density configurations is maintained under a uniform scaling of all coordinates [24]. This is formally expressed as follows \[U({\bf R}_{\rm a})<U({\bf R}_{\rm b})\Rightarrow U(\lambda{\bf R}_{\rm a})<U (\lambda{\bf R}_{\rm b}) \tag{9}\] in which \(\lambda\) is a scaling factor. Consider two configurations with the same potential energy, i.e., \(U({\bf R}_{\rm a})=U({\bf R}_{\rm b})\). After a uniform scaling one has by Eq. (9) \(U(\lambda{\bf R}_{\rm a})=U(\lambda{\bf R}_{\rm b})\). By taking the derivative of this with respect to \(\lambda\) one derives \(W({\bf R}_{\rm a})=W({\bf R}_{\rm b})\)[24]. Thus same potential energy implies same virial, resulting in a 100% correlation between the \(W\) and \(U\) constant-volume fluctuations. For realistic systems Eq. (9) is fulfilled only approximately, however, and in practice one rarely experiences perfect virial potential-energy correlations (this only applies when \(U({\bf R})\) is an Euler-homogeneous function). Recall that a system's entropy \(S\) is equal to that of an ideal gas at the same density and temperature plus an "excess" term deriving from the intermolecular interactions: \(S=S_{\rm id}+S_{\rm ex}\). It can be shown that Eq. (9) implies that the reduced structure and dynamics are invariant along the lines of constant excess entropy; these are by definition the system's isomorphs [24]. The so-called density-scaling exponent \(\gamma\) is defined by \[\gamma\equiv\left(\frac{\partial\ln T}{\partial\ln\rho}\right)_{S_{\rm ex}}= \frac{\langle\Delta W\Delta U\rangle}{\langle(\Delta U)^{2}\rangle}\,. \tag{10}\] The second equality here is a general identity [22], which is useful when the system is R-simple because Eq. (10) can then be applied for tracing out isomorphs without knowing the equation of state. For the simple Euler algorithm this is done by proceeding as follows. At a given state point \((\rho_{1},T_{1})\), by means of Eq. (10) one calculates \(\gamma\) from the equilibrium fluctuations of potential energy and virial. From Eq. (10) one then predicts the temperature \(T_{2}\) with the property that \((\rho_{2},T_{2})\) is on the same isomorph as \((\rho_{1},T_{1})\). If \(\gamma=7\), for instance, for a one percent density increase a seven percent temperature increase will ensure that the new state point is on the same isomorph. In the simulations of this paper, however, in order to increase the accuracy of the generated isomorph, following Ref. [25] we used instead the fourth-order Runge-Kutta algorithm for solving numerically Eq. (10) (involving density changes of order 1% ). The resulting isomorph state points are given in Table 1. We note that the density-scaling exponent is generally significantly larger than for point-particle Lennard-Jones models where it is usually in the range 5-6. 
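As an illustration of the tracing procedure just described, the following minimal sketch estimates \(R\) and \(\gamma\) from sampled potential-energy and virial fluctuations (Eqs. (7) and (10)) and advances \((\rho,T)\) one step along an isomorph with a classical fourth-order Runge–Kutta step in the variables \((\ln\rho,\ln T)\); the function sample_UW, which must return equilibrium \(U\) and \(W\) samples from a simulation at a given state point, is a placeholder.

```python
import numpy as np

def wu_statistics(U, W):
    """Correlation coefficient R (Eq. 7) and density-scaling exponent gamma
    (Eq. 10) from equilibrium samples of potential energy U and virial W."""
    dU, dW = U - U.mean(), W - W.mean()
    R = np.mean(dW * dU) / np.sqrt(np.mean(dW**2) * np.mean(dU**2))
    gamma = np.mean(dW * dU) / np.mean(dU**2)
    return R, gamma

def rk4_isomorph_step(rho, T, dlnrho, sample_UW):
    """One RK4 step of d(ln T)/d(ln rho) = gamma(rho, T) along an isomorph.

    sample_UW(rho, T) is a placeholder: it should run an equilibrium NVT
    simulation at (rho, T) and return arrays of U and W samples."""
    def gamma_at(lnrho, lnT):
        return wu_statistics(*sample_UW(np.exp(lnrho), np.exp(lnT)))[1]

    x, y, h = np.log(rho), np.log(T), dlnrho
    k1 = gamma_at(x, y)
    k2 = gamma_at(x + h / 2, y + h * k1 / 2)
    k3 = gamma_at(x + h / 2, y + h * k2 / 2)
    k4 = gamma_at(x + h, y + h * k3)
    return np.exp(x + h), np.exp(y + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4))
```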
This must be a consequence of the spherical asymmetry because the same increase has been seen, e.g., for the asymmetric dumbbell and Lewis-Wahnstrom ortho-terphenyl models built of Lennard-Jones particles [18; 21; 26]. A quantitative explanation of this is missing, however, because a full isomorph theory of molecules is still not available. \begin{table} \begin{tabular}{c c c c} \hline \(\rho\) & \(T\) & \(R\) & \(\gamma\) \\ \hline \hline 0.400 & 0.400 & 0.956 & 9.46 \\ 0.416 & 0.578 & 0.946 & 9.04 \\ 0.433 & 0.823 & 0.936 & 8.74 \\ 0.451 & 1.160 & 0.925 & 8.50 \\ 0.469 & 1.619 & 0.905 & 8.28 \\ 0.488 & 2.240 & 0.887 & 8.06 \\ 0.508 & 3.079 & 0.868 & 7.92 \\ 0.529 & 4.211 & 0.854 & 7.85 \\ 0.550 & 5.770 & 0.854 & 8.00 \\ \hline \end{tabular} \end{table} Table 1: Variation of density \(\rho\), temperature \(T\), virial potential-energy correlation coefficient \(R\) (Eq. (7)), and density-scaling exponent \(\gamma\) (Eq. (10)) for nine state points on the isomorph generated from the reference state point \((\rho,T)=(0.4,0.4)\). ## V Structure and dynamics monitored along an isochore and an isomorph We begin the study by presenting results for the mean-square displacement as a function of time, which is predicted to be isomorph invariant in reduced units. Figure 2 shows the results along the \(\rho=0.4\) isochore (upper panel) and along the isomorph generated from the reference state point \((\rho,T)=(0.4,0.4)\) (lower panel), in both cases for the same nine temperatures. The isomorph data involve state points of more than a third density change and more than a factor of ten temperature change (Table 1). Note that the smectic B phase of the GB(5,3,2,1) model is found at higher densities than those of the isotropic and nematic phases studied in Ref. [7]. The low-temperature state points along the isochore of Fig. 2 are in the solid state as evident from the fact that the long-time mean-square displacement is constant. The high-temperature isochore state points, on the other hand, show diffusive long-time behavior and are consequently liquid. The fact that all mean-square displacement data collapse at short times in the ballistic regime for both the isochore and the isomorph is a straightforward consequence of the use of reduced units, because this leads to a reduced-unit thermal velocity that is the same at all state points. For the isomorph data, we see a fairly good collapse at all times, not just at short times. The minor deviations from perfect collapse are consistent with the fact that the virial potential-energy correlation coefficient \(R\) is not very close to unity; in fact, \(R\) goes below 0.9 at the four highest temperatures, compare Table 1. This feature might have to do with the short-time librational motion of the rods, which as shown below does not scale well in the isomorph sense. Figure 3 shows reduced-unit data for the radial distribution function \(g(r)\) and the orientational radial distribution function \(G_{2}(r)\) (Eq. (4)) along the same isochore and isomorph. Figure 3 shows no invariance along the isochore, but fair invariance along the isomorph. An exception to this is the highest temperature isomorph radial distribution function that deviates notably from the eight others. We have found that at this (and higher) temperatures, the smectic B phase undergoes a further transition likely involving a tilt of the average molecular orientation with respect to the smectic layers, similar to what has been reported by de Miguel _et al._[11]. 
Interestingly, this does not affect the isomorph invariance of other quantities than the radial distribution function, compare the \(G_{2}(r)\) data of Fig. 3, as well as the data of later figures. Figure 2: Reduced mean-square displacement as a function of reduced time along the \(\rho=0.4\) isochore and along the isomorph generated from the reference state point \((\rho,T)=(0.4,0.4)\) (Table 1). The colors here for the different temperatures are also used in Figs. 3-6. Returning to dynamic properties, the normalized force and torque time-autocorrelation functions, i.e., the functions \(\phi_{A}(t)/\phi_{A}(0)\) of Eq. (5) for \(\mathbf{A}\) equal to the force and torque on the individual particles, respectively, are shown in Fig. 4 as functions of the reduced time. Near-perfect scaling is observed for both functions along the isomorph, but not along the isochore. Figure 5 shows the first- and second-order orientational time-autocorrelation functions along the isochore and the isomorph. These functions both decay to zero at the highest density studied on the isochore, which is not the case for the isomorph along which invariant dynamics is observed. Figure 4: Normalized force (upper panels) and torque (lower panels) time-autocorrelation functions along the same isochore and isomorph as in the previous figures, plotted as functions of reduced time \(\tilde{t}\). Figure 3: Structure along the isochore and the isomorph probed via the standard radial distribution function (upper panels) and the orientational radial distribution function defined in Eq. (4) (lower panels), in both cases plotted as a function of the reduced pair distance \(\tilde{r}\). The colors used here and henceforth for the different temperatures are the same as those of Fig. 2. We finish the study by showing the normalized velocity and angular velocity time-autocorrelation functions in Fig. 6. Again, good isomorph invariance is observed at all times, though with minor deviation at intermediate times for the velocity time-autocorrelation function. ## VI Summary We have shown that the isomorph theory can be used to understand GB liquid crystals in the smectic B phase, because the thermodynamic phase diagram is here effectively one-dimensional in the sense that the reduced-unit structure and dynamics are approximately invariant along the isomorphs. Our previous paper [7] showed that the same applies for the isotropic and nematic phases of the GB(3,5,2,1) model. This means that most of the GB(3,5,2,1) phase diagram is effectively one-dimensional in regard to structure and dynamics. We note that this property is not limited to a particular GB model; thus an earlier publication demonstrated the existence of isomorphs in the GB(0.345,0.2,1,2) model that forms a discotic liquid-crystal phase at low temperatures [27]. - The GB potential is unique in the field of liquid-crystal models in that through a gradual reduction of the parameters \(\chi\) and \(\chi^{\prime}\) of Eq. (2) and Eq. (3), the Lennard-Jones potential is recovered. It is an interesting question whether one would find isomorph invariance behavior in other models of rods, such as a rigid line of Lennard-Jones interaction centers. We demonstrated above that the GB(3,5,2,1) model exhibits good invariance of the reduced-unit structure and dynamics along the studied isomorph. 
Figure 5: First- and second-order orientational order parameter time-autocorrelation functions along the isochore and isomorph, plotted as functions of reduced time. Figure 6: Normalized velocity and angular velocity time-autocorrelation functions along the isochore and isomorph, plotted as functions of reduced time. In conjunction with our previous study [7], the existence of isomorphs in the GB model can now be used to explain the observed behavior of liquid crystals, for instance the so-called density scaling, which is the fact that the reduced dynamics is invariant along lines of constant \(\rho^{\gamma}/T\) [28; 29]. It remains to investigate whether other smectic phases of the GB model also exhibit strong virial potential-energy correlations and thus the existence of isomorphs. It would be interesting, in particular, to investigate the effect of varying the moment of inertia, given that fixing this quantity upon a density change formally violates isomorph invariance of the dynamics, although this was found above to have little effect in practice. Also, it would be interesting to investigate systematically the vast parameter space of the GB potential from the hidden-scale-invariance perspective. ###### Acknowledgements. This work was supported by the VILLUM Foundation's _Matter_ grant (16515).
2303.14515
Specific investments under negotiated transfer pricing: effects of different surplus sharing parameters on managerial performance: An agent-based simulation with fuzzy Q-learning agents
This paper focuses on a decentralized profit-center firm that uses negotiated transfer pricing as an instrument to coordinate the production process. Moreover, the firm's headquarters gives its divisions full authority over operating decisions and it is assumed that each division can additionally make an upfront investment decision that enhances the value of internal trade. On early works, the paper expands the number of divisions by one downstream division and relaxes basic assumptions, such as the assumption of common knowledge of rationality. Based on an agent-based simulation, it is examined whether cognitively bounded individuals modeled by fuzzy Q-learning achieve the same results as fully rational utility maximizers. In addition, the paper investigates different constellations of bargaining power to see whether a deviation from the recommended optimal bargaining power leads to a higher managerial performance. The simulation results show that fuzzy Q-learning agents perform at least as well or better than fully individual rational utility maximizers. The study also indicates that, in scenarios with different marginal costs of divisions, a deviation from the recommended optimal distribution ratio of the bargaining power of divisions can lead to higher investment levels and, thus, to an increase in the headquarters' profit.
Christian Mitsch
2023-03-25T16:45:32Z
http://arxiv.org/abs/2303.14515v1
Specific investments under negotiated transfer pricing: effects of different surplus sharing parameters on managerial performance ###### Abstract This paper focuses on a decentralized profit-center firm that uses negotiated transfer pricing as an instrument to coordinate the production process. Moreover, the firm's headquarters gives its divisions full authority over operating decisions and it is assumed that each division can additionally make an upfront investment decision that enhances the value of internal trade. On early works, the paper expands the number of divisions by one downstream division and relaxes basic assumptions, such as the assumption of common knowledge of rationality. Based on an agent-based simulation, it is examined whether cognitively bounded individuals modeled by fuzzy Q-learning achieve the same results as fully rational utility maximizers. In addition, the paper investigates different constellations of bargaining power to see whether a deviation from the recommended optimal bargaining power leads to a higher managerial performance. The simulation results show that fuzzy Q-learning agents perform at least as well or better than fully individual rational utility maximizers. The study also indicates that, in scenarios with different marginal costs of divisions, a deviation from the recommended optimal distribution ratio of the bargaining power of divisions can lead to higher investment levels and, thus, to an increase in the headquarters' profit. Keywords:Agent-based simulation Fuzzy Q-learning Hold-up problem Negotiated transfer pricing Specific investments ## 1 Introduction In decentralized firms, the headquarters gives its divisions more authority in making decisions. Transfer pricing and managerial performance evaluation are two key instruments for managing potential conflicts between divisions and the headquarters and guiding the internal trade between divisions (e.g., Anctil and Dutta 1999; Baldenius et al. 1999). Internal transfers, especially in multi-stage production processes, are often performed under conditions of asymmetric information (Baldenius 2000). To induce effort and reduce the risk of moral hazard, divisional profit-based compensation schemes are often applied (e.g., Edlin and Reichelstein 1995; Vaysman 1998). The divisions' coordination problem increases when divisions can additionally make specific investments that enhance the value of internal trade. The main problem with specific investments is that they are irreversible and are of little or no value in the divisions' external lines of business (Edlin and Reichelstein 1995). Since those investments are usually made under uncertainty, each division tends to underinvest. This problem is also known as an investment "hold-up" problem (Schmalenbach 1908/1909; Williamson 1979, 1985). While various methods of transfer pricing to mitigate the hold-up problem have been investigated extensively in the economic transfer pricing literature (e.g., Baldenius et al. 1999; Edlin and Reichelstein 1995; Hofmann and Pfeiffer 2006; Lengsfeld and Schiller 2003; Pfeiffer et al. 2011; Wagner 2008), it seems less clear how well the literature's recommendations actually work in practice. Moreover, the solutions of transfer pricing problems are often based on game theory approaches, such as the concept of subgame perfect equilibrium. 
Such concepts require demanding assumptions, e.g., of rational decision-making behavior or common knowledge, and, in practice, these assumptions are often not met (Axtell 2007; Simon 1979). Assumptions on rationality are questionable in models that reflect human behavior and, therefore, most of such models quickly lose practical relevance (Young 2001). Against this background, this paper focuses on negotiated transfer pricing and, specifically, addresses and extends the simulation model introduced in Mitsch (2023). In particular, the simulation study analyzes a decentralized firm in which divisions produce highly specialized intermediate products. The firm's headquarters applies the concept of profit centers to determine the profit for internal divisions of responsibility. In addition, divisions can make upfront specific investments and have private information regarding their area of responsibility. A linear divisional profit-based compensation scheme is applied in order to guarantee that the divisions act in the headquarters' interest. Furthermore, the study relaxes basic assumptions, e.g., the assumption of common knowledge of rationality for the divisions, and examines whether cognitively bounded individuals (modeled with fuzzy Q-learning agents) achieve the same results as fully rational utility maximizers. In addition, different constellations of bargaining power are examined to see whether a deviation from the recommended optimal bargaining power leads to a higher divisional performance. The findings of Mitsch's (2023) simulation study show that cognitively bounded individuals can achieve the same results as fully rational utility maximizers, but, in certain cases where divisional investment costs differ widely from each other, cognitively bounded individuals can generate even higher profits, if they obtain approximately the same level of bargaining power from the headquarters for their negotiation process. In this study, the number of divisions is increased by one. Concretely, there is one supplying division and two buying divisions. As a consequence, two negotiations over transfer prices and quantities have to be carried out at the same time. Since the upfront investment decisions depend on the anticipated outcome of the negotiations, it is more difficult for the headquarters to determine the optimal bargaining power for each division. Simultaneously, the investments made by the divisions cannot be individually separated out due to their nature. Hence, the management decision problem to be solved cannot be divided into two or more disjoint parts. This paper distinguishes between an "all-knowing" headquarters (serve as a benchmark for optimal decision-making behavior) and a headquarters that does not have all the information to find the optimal bargaining power for its divisions and, therefore, relies on reference values or simple rules to determine the bargaining power of each division. The first research question in this contribution is to examine whether cognitively bounded individuals achieve the same results as fully rational utility maximizers on the investment hold-up problem described above. In addition, the study also seeks to examine the impact of bargaining power on managerial performance. For this purpose, this paper conducts an agent-based computer simulation with individual learning agents which are modeled by fuzzy Q-learning. 
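To fix ideas about the learning mechanism named above, the following is a minimal, generic sketch of a fuzzy Q-learning step in the spirit of the standard Glorennec–Jouffe formulation; it is not the simulation code of this paper, and the one-dimensional state, the triangular membership functions, the candidate action set, and the parameter values are placeholders chosen only for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Triangular fuzzy sets over a one-dimensional, normalized state variable
# (placeholder centers) and a set of candidate actions (e.g., investment levels).
centers = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
actions = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
q = np.zeros((len(centers), len(actions)))  # one q-vector per fuzzy rule

def memberships(s):
    """Normalized triangular membership degrees of state s in each fuzzy set."""
    width = centers[1] - centers[0]
    m = np.maximum(0.0, 1.0 - np.abs(s - centers) / width)
    return m / m.sum()

def act(s, epsilon=0.1):
    """Each rule picks an action (epsilon-greedy); the executed action is the
    firing-strength-weighted combination of the rules' choices."""
    alpha = memberships(s)
    choice = np.array([rng.integers(len(actions)) if rng.random() < epsilon
                       else np.argmax(q[i]) for i in range(len(centers))])
    a_global = np.sum(alpha * actions[choice])
    q_global = np.sum(alpha * q[np.arange(len(centers)), choice])
    return a_global, choice, alpha, q_global

def update(alpha, choice, q_global, reward, s_next, eta=0.1, gamma=0.9):
    """Fuzzy Q-learning update of the per-rule q-values from one transition."""
    v_next = np.sum(memberships(s_next) * q.max(axis=1))
    td_error = reward + gamma * v_next - q_global
    q[np.arange(len(centers)), choice] += eta * td_error * alpha

# One learning step (illustrative): observe state, act, receive a payoff,
# observe the next state, then update.
a, chosen, alpha, q_glob = act(0.3)
update(alpha, chosen, q_glob, reward=1.0, s_next=0.35)
```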
The use of fuzzy Q-learning offers many advantages, including a high degree of heterogeneity with respect to the structure and the dynamic interactions between agents and their environment and, in particular, it is a feasible way to deal with the divisions' bounded rationality and cognitive limitations. Moreover, an agent-based simulation also makes it possible to observe the agents' behavior as well as the system's behavior on the macro-level over time, which otherwise cannot be derived as a "functional relationship" from the individual behaviors of those agents (e.g., Epstein 2006; Wall 2016). The remainder of this paper is organized as follows: In Sec. 2, the extended negotiated transfer pricing model is introduced and the resulting solutions are discussed. Section 3 introduces the simulation model with fuzzy Q-learning agents. The parameter settings for the agent-based simulation are explained in Sec. 4 and, in Sec. 5, the simulation results are presented and discussed. Section 6 contains concluding remarks and suggests possible directions for future research.

## 2 Negotiated transfer pricing model

### Specification of the firm

In this paper, a decentralized firm is examined that consists of one headquarters and three divisions - one supplying division (the "upstream" division) and two buying divisions (the "downstream" divisions). Suppose that the supplying division purchases raw materials from a commodity market in order to produce a highly specialized intermediate product that, in particular, cannot be bought from an external market (e.g., Anctil and Dutta 1999; Edlin and Reichelstein 1995; Wagner 2008). On the downstream side, each buying division independently refines the intermediate products in order to improve their quality. Lastly, the buying divisions sell the refined products separately on an outlet market. Beyond that, all three divisions can additionally make specific investments that increase the value of internal trade. However, these investment decisions have to be made before the negotiations over transfer prices and transfer quantities take place. Figure 1 schematically represents the negotiated transfer pricing model investigated here. Corresponding to the well-known negotiated transfer pricing models by Eccles and White (1988), Edlin and Reichelstein (1995), Gox and Schiller (2006), Pfeiffer and Wagner (2007), Vaysman (1998), and Wagner (2008), it is assumed that all three divisions are treated as profit centers. Therefore, divisional performance evaluations are based on profits and each division is allowed to determine the amount of specific investments, the level of the transfer price, and the amount of intermediate products autonomously. In the following, the supplying division, the 1st buying division, the 2nd buying division, and the headquarters are abbreviated to \(S\), 1, 2, and \(HQ\), respectively. To keep the analysis simple and, especially, to ensure that the negotiated transfer pricing model has a unique subgame perfect equilibrium, it is assumed that the supplying division's costs of manufacturing \(q_{j}\in\mathbb{R}^{+}\), \(j\in\{1,2\}\), units of the intermediate product are given by \[C_{S}(q_{1},q_{2},\theta_{S},I_{S})=(\theta_{S}-I_{S})\cdot(q_{1}+q_{2})\, \tag{1}\] where \(\theta_{S}\in\mathbb{R}^{+}\) is a state variable which reflects the purchase price of raw materials and \(I_{S}\in\mathbb{R}^{+}\) stands for the amount of specific investment carried out by the supplying division.
In contrast, each buying division's, \(j\in\{1,2\}\), net revenue is \[R_{j}(q_{j},\theta_{j},I_{j})=(\theta_{j}-\frac{1}{2}\;b\;q_{j}+I_{j})\;q_{j}\, \tag{2}\] where \(\theta_{j}\in\mathbb{R}^{+}\) is a state variable which represents the constant term in the inverse demand function for the selling product and \(I_{j}\in\mathbb{R}^{+}\) denotes the amount of specific investment carried by the \(j\)th buying division. For the sake of simplicity, it is assumed that \(b_{1}=b_{2}=b\in\mathbb{R}^{+}\), where \(b\) describes the slope of the inverse demand function. In the decentralized setting examined here, it is assumed that the supplying division has private information about purchase prices of raw materials on the commodity market, while the buying divisions have private information about selling prices of the refined products on the outlet market. Therefore, the supplying division and the \(j\)th buying division know the distribution of the state variable \(\theta_{S}\) and \(\theta_{j}\), respectively. In order to solve the transfer price problem analytically, it is required that all divisions have at least access to information about the expected values of the markets. For the sake of simplicity, it is assumed that the state variables \(\theta_{S}\), \(\theta_{1}\), and \(\theta_{2}\) are stochastically independent random variables. Apart from that, the analysis assumes that the headquarters cannot observe the state variables nor the undertaken investments; the headquarters only knows the expected values of the state variables. Furthermore, the headquarters' accounting system receives costs and revenues after the negotiation phase and, subsequently, the headquarters' profit is calculated by \[\Pi_{HQ}(q_{1},q_{2},\theta_{S},\theta_{1},\theta_{2},I_{S},I_{1},I_{2})=R_{1}+ R_{2}-C_{S}-w_{S}-w_{1}-w_{2}. \tag{3}\] The supplying division's investments as well as each buying division's investments cause divisional investment costs (or divisional capital expenditure) \(w_{S}(I_{S})\) and \(w_{j}(I_{j})\), respectively. As in the study of Baldenius et al. (1999), Edlin and Reichelstein (1995), Hofmann and Pfeiffer (2006), Pfeiffer and Wagner (2007), and Wagner (2008), the divisional investment costs have the following quadratic cost structure. \[w_{j}(I_{j})=\frac{1}{2}\ \lambda_{j}\ I_{j}^{2}\ \ \ \mbox{for}\ j\in\{S,1,2\} \tag{4}\] Figure 1: Schematic representation of the decentralized firm. In addition, the sequence of events within the negotiated transfer pricing model is illustrated. Source: Based on Wagner (2008) and, further, modified here for the case of two buying divisions. The marginal cost parameter is denoted by \(\lambda_{j}\in\mathbb{R}^{+}\) and the analysis assumes that the parameters \(w_{S}\), \(w_{1}\), \(w_{2}\), \(\lambda_{S}\), \(\lambda_{1}\), \(\lambda_{2}\), and \(b\) are known to the headquarters and each division. The term, one half, is only for reasons of expediency. The sequence of events within the negotiated transfer pricing model (see Fig. 1) can be summarized as follows: at date one, all three divisions have to make an investment decision independently of each other. Subsequently, the supplying division and each buying division independently observe the state variables \(\theta_{S}\) and \(\theta_{j}\), respectively. At date three, the amount of intermediate products is determined by the divisions and, finally, profits are realized. ### First-best solution In order to provide a benchmark solution (first-best solution or ex post efficient solution), Eq. 
3 is maximized with respect to investment and quantity. Since investment and quantity are determined at different times, the analysis starts by backward induction on date three, on which the quantity is set.1 Footnote 1: A step-by-step solution of the two-stage decision problem for \(j=1\) (only one buying division) is presented, e.g., in Wagner’s (2008) PhD thesis. \[(q_{1}^{*},q_{2}^{*})\in\operatorname*{arg\,max}_{(q_{1},q_{2})\in\mathbb{R}_{+}^{2}}\Pi_{HQ}(q_{1},q_{2},\theta_{S},\theta_{1},\theta_{2},I_{S},I_{1},I_{2}) \tag{5}\] In this paper, first-best solutions are indexed by a superscript \({}^{*}\). On date three, the first order condition for maximizing the headquarters' profit \(\Pi_{HQ}\) with respect to \(q_{j}\) is equal to \[\frac{\partial\Pi_{HQ}}{\partial q_{j}}=\theta_{j}-b\ q_{j}+I_{j}-\theta_{S}+I_{S}=0 \tag{6}\] and, hence, the profit maximizing quantity is given by \[q_{j}^{*}=\frac{\theta_{j}-\theta_{S}+I_{S}+I_{j}}{b}\,\ \text{for}\ j\in\{1,2\}. \tag{7}\] On date one, the investment decisions are made under uncertainty and, therefore, the expected headquarters' profit \(E[\Pi_{HQ}]\) is maximized with respect to \(I_{S}\), \(I_{1}\), and \(I_{2}\). \[(I_{S}^{*},I_{1}^{*},I_{2}^{*})\in\operatorname*{arg\,max}_{(I_{S},I_{1},I_{2})\in\mathbb{R}_{+}^{3}}E[\Pi_{HQ}(q_{1},q_{2},\theta_{S},\theta_{1},\theta_{2},I_{S},I_{1},I_{2})] \tag{8}\] Differentiating the expected headquarters' profit with respect to investments yields \[I_{S}^{*}=\frac{E[q_{1}^{*}+q_{2}^{*}]}{\lambda_{S}}\,, \tag{9}\] \[I_{j}^{*}=\frac{E[q_{j}^{*}]}{\lambda_{j}}\,,\ \text{for}\ j\in\{1,2\}. \tag{10}\] Now, substituting Eq. 7 into Eq. 9 and 10, one gets the following first-best expected investments. \[I_{S}^{*}=\frac{E[\theta_{1}-\theta_{S}]\cdot(b\;\lambda_{1}\;\lambda_{2}-\lambda_{1})+E[\theta_{2}-\theta_{S}]\cdot(b\;\lambda_{1}\;\lambda_{2}-\lambda_{2})}{b^{2}\;\lambda_{1}\;\lambda_{2}\;\lambda_{S}-b\;(\lambda_{1}\;\lambda_{S}+\lambda_{2}\;\lambda_{S}+2\;\lambda_{1}\;\lambda_{2})+\lambda_{1}+\lambda_{2}+\lambda_{S}} \tag{11}\] \[I_{1}^{*}=\frac{E[\theta_{1}-\theta_{S}]\cdot(b\;\lambda_{2}\;\lambda_{S}-\lambda_{2}-\lambda_{S})+E[\theta_{2}-\theta_{S}]\cdot\lambda_{2}}{b^{2}\;\lambda_{1}\;\lambda_{2}\;\lambda_{S}-b\;(\lambda_{1}\;\lambda_{S}+\lambda_{2}\;\lambda_{S}+2\;\lambda_{1}\;\lambda_{2})+\lambda_{1}+\lambda_{2}+\lambda_{S}} \tag{12}\] \[I_{2}^{*}=\frac{E[\theta_{2}-\theta_{S}]\cdot(b\;\lambda_{1}\;\lambda_{S}-\lambda_{1}-\lambda_{S})+E[\theta_{1}-\theta_{S}]\cdot\lambda_{1}}{b^{2}\;\lambda_{1}\;\lambda_{2}\;\lambda_{S}-b\;(\lambda_{1}\;\lambda_{S}+\lambda_{2}\;\lambda_{S}+2\;\lambda_{1}\;\lambda_{2})+\lambda_{1}+\lambda_{2}+\lambda_{S}} \tag{13}\] Finally, substituting Eq. 11 - 13 into Eq. 7 implies the first-best expected quantities.
\[q_{1}^{*} = \frac{\theta_{1}-\theta_{S}}{b}+\frac{E[\theta_{1}-\theta_{S}](b \lambda_{1}\lambda_{2}+b\lambda_{2}\lambda_{S}-\lambda_{1}-\lambda_{2}-\lambda _{S})+E[\theta_{2}-\theta_{S}]\;b\lambda_{1}\lambda_{2}}{b(b^{2}\;\lambda_{1} \;\lambda_{2}\;\lambda_{S}-b(\lambda_{1}\;\lambda_{S}+\lambda_{2}\;\lambda_{S} +2\;\lambda_{1}\;\lambda_{2})+\lambda_{1}+\lambda_{2}+\lambda_{S})} \tag{14}\] \[q_{2}^{*} = \frac{\theta_{2}-\theta_{S}}{b}+\frac{E[\theta_{2}-\theta_{S}](b \lambda_{1}\lambda_{2}+b\lambda_{1}\lambda_{S}-\lambda_{1}-\lambda_{2}-\lambda _{S})+E[\theta_{1}-\theta_{S}]\;b\lambda_{1}\lambda_{2}}{b(b^{2}\;\lambda_{1} \;\lambda_{2}\;\lambda_{S}-b(\lambda_{1}\;\lambda_{S}+\lambda_{2}\;\lambda_{S} +2\;\lambda_{1}\;\lambda_{2})+\lambda_{1}+\lambda_{2}+\lambda_{S})} \tag{15}\] With the first-best solutions of the two-stage decision problem, the first-best expected profit is given by the following expression. \[\Pi_{HQ}^{*}\!=\!\frac{E[\theta_{1}\!-\!\theta_{S}]^{2}(b\;\lambda_{1}\lambda_ {2}\lambda_{S}\!-\!\lambda_{1}\lambda_{S})\!+\!E[\theta_{2}\!-\!\theta_{S}]^{2 }(b\;\lambda_{1}\lambda_{2}\lambda_{S}\!-\!\lambda_{2}\lambda_{S})\!-\!E[ \theta_{1}\!-\!\theta_{2}]^{2}\lambda_{1}\lambda_{2}}{2\;(b^{2}\;\lambda_{1} \;\lambda_{2}\;\lambda_{S}-b(\lambda_{1}\;\lambda_{S}+\lambda_{2}\;\lambda_{S} +2\;\lambda_{1}\;\lambda_{2})+\lambda_{1}+\lambda_{2}+\lambda_{S})} \tag{16}\] The headquarters' profit in Eq. 16 can be seen as a benchmark for the highest feasible profit that can be achieved, if investments and quantities are set according to Eq. 11 - 13 and Eq. 14 - 15, respectively. ### Second-best solution Assuming that each division acts in its own interest, the first-best solutions are usually not achieved. However, the headquarters could provide the following linear divisional profit-based compensation schemes to ensure that the negotiations over transfer prices and quantities result in ex post efficient transfer prices and quantities.2 Footnote 2: In general, incentives for efficient investments do not necessarily provide incentives for an efficient quantity and verse versa. For instance, in full-cost transfer pricing models, both divisions are rewarded for efficient investments, but the negotiation does not lead to an ex post efficient quantity (Baldenius et al. 1999). \[\Pi_{S}(q_{1},q_{2},\theta_{S},\theta_{1},\theta_{2},I_{S},I_{1},I _{2},\Gamma_{1},\Gamma_{2}) = \Gamma_{1}\;M_{1}+\Gamma_{2}\;M_{2}-w_{S} \tag{17}\] \[\Pi_{1}(q_{1},\theta_{S},\theta_{1},I_{S},I_{1},\Gamma_{1}) = (1-\Gamma_{1})\;M_{1}-w_{1}\] (18) \[\Pi_{2}(q_{2},\theta_{S},\theta_{2},I_{S},I_{2},\Gamma_{2}) = (1-\Gamma_{2})\;M_{2}-w_{2} \tag{19}\] \(\Pi_{S}\), \(\Pi_{1}\), and \(\Pi_{2}\) denote the profit of the supplying division, the profit of the 1st buying division, and the profit of the 2nd buying division, respectively. \(M_{j}\) stands for the headquarters' contribution margin with regard to the internal trade between supplying division and the \(j\)th buying division, i.e., \(M_{j}=(\theta_{j}-\frac{1}{2}\,b\,q_{j}+I_{j})\,q_{j}-(\theta_{S}-I_{S})\,q_{j}\), \(j\in\{1,2\}\). \(\Gamma_{j}\in[0,1]\) represents the share of the contribution margin achieved between the supplying division and the \(j\)th buying division (also known as the \(\Gamma\)-surplus sharing rule (Edlin and Reichelstein 1995), the supplying division's bargaining power (Baldenius et al. 1999), or the surplus sharing parameter (Wielenberg 2000)). As in the preceding section, the headquarters can apply the concept of subgame perfect equilibrium. 
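Before turning to the backward induction for the second-best case, the first-best benchmark of Sec. 2.2 can be checked numerically. The following minimal sketch maximizes the expected headquarters' profit of Eq. 3 over the investments, with quantities set according to Eq. 7; the parameter values are illustrative (those used later in the simulation experiments for scenario I), and all function and variable names are assumptions made for this sketch rather than code from the original study. The output can then be compared with the closed-form solutions in Eq. 11 - 16.

```python
# Minimal numerical sketch of the first-best benchmark of Sec. 2.2.
# Illustrative parameters: b = 12, lambda_S = 1, lambda_1 = lambda_2 = 0.5,
# E[theta_S] = 60, E[theta_1] = E[theta_2] = 100 (deterministic case, sigma = 0).
import numpy as np
from scipy.optimize import minimize

b, lam_S, lam_1, lam_2 = 12.0, 1.0, 0.5, 0.5
E_th_S, E_th_1, E_th_2 = 60.0, 100.0, 100.0

def q_star(th_j, I_S, I_j):
    """Profit-maximizing quantity of Eq. 7."""
    return (th_j - E_th_S + I_S + I_j) / b

def expected_hq_profit(I):
    """Expected headquarters' profit (Eq. 3) with quantities set by Eq. 7.
    With sigma = 0 the expectation reduces to plugging in the expected values."""
    I_S, I_1, I_2 = I
    q1, q2 = q_star(E_th_1, I_S, I_1), q_star(E_th_2, I_S, I_2)
    R1 = (E_th_1 - 0.5 * b * q1 + I_1) * q1            # net revenue, Eq. 2
    R2 = (E_th_2 - 0.5 * b * q2 + I_2) * q2
    C_S = (E_th_S - I_S) * (q1 + q2)                    # production costs, Eq. 1
    w = 0.5 * (lam_S * I_S**2 + lam_1 * I_1**2 + lam_2 * I_2**2)  # investment costs, Eq. 4
    return R1 + R2 - C_S - w

# First-best investments: maximize the expected profit over (I_S, I_1, I_2), Eq. 8.
res = minimize(lambda I: -expected_hq_profit(I), x0=np.zeros(3), bounds=[(0, None)] * 3)
print("first-best investments :", np.round(res.x, 2))
print("first-best exp. profit :", round(expected_hq_profit(res.x), 2))
```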
Applying this concept, the analysis of negotiated transfer pricing again starts by backward induction on date three. Starting with the quantity decision on date three, one gets \[q_{j}^{sb}=\frac{\theta_{j}-\theta_{S}+I_{S}+I_{j}}{b}\,\ \mbox{for}\ j\in\{1,2\}. \tag{20}\] Note that differentiating the divisional profits with respect to \(q_{j}\), \(j\in\{1,2\}\), leads to the same profit maximizing quantity given by Eq. 7, but the first order conditions for maximizing the expected divisional profits \(E[\Pi_{j}]\) with respect to \(I_{j}\), \(j\in\{S,1,2\}\), lead to second-best investments (labelled by the superscript \(sb\)). \[I_{S}^{sb}=\frac{\Gamma_{1}\ E[q_{1}^{sb}]+\Gamma_{2}\ E[q_{2}^{sb}]}{\lambda_{S}} \tag{21}\] \[I_{j}^{sb}=\frac{(1-\Gamma_{j})\ E[q_{j}^{sb}]}{\lambda_{j}}\,\ \mbox{for}\ j\in\{1,2\} \tag{22}\] Since the headquarters has to specify how the contribution margins are shared among the three divisions, the headquarters has to solve the following additional decision problem before the decision-making process for the divisions takes place. \[(\Gamma_{1}^{sb},\Gamma_{2}^{sb})\in\mathop{\arg\max}_{(\Gamma_{1},\Gamma_{2})\in[0,1]^{2}}E[\Pi_{HQ}(q_{1},q_{2},\theta_{S},\theta_{1},\theta_{2},I_{S},I_{1},I_{2},\Gamma_{1},\Gamma_{2})] \tag{23}\] Differentiating the expected headquarters' profit with respect to the surplus sharing parameter \(\Gamma_{j}\), for \(j\in\{1,2\}\), \[\frac{\partial}{\partial\Gamma_{j}}\Big{(}(E[\theta_{1}]-\tfrac{1}{2}bq_{1}^{sb}+I_{1}^{sb})q_{1}^{sb}-(E[\theta_{S}]-I_{S}^{sb})q_{1}^{sb}+(E[\theta_{2}]-\tfrac{1}{2}bq_{2}^{sb}+I_{2}^{sb})q_{2}^{sb}-(E[\theta_{S}]-I_{S}^{sb})q_{2}^{sb}-\tfrac{1}{2}\lambda_{S}(I_{S}^{sb})^{2}-\tfrac{1}{2}\lambda_{1}(I_{1}^{sb})^{2}-\tfrac{1}{2}\lambda_{2}(I_{2}^{sb})^{2}\Big{)} \tag{24}\] and setting the resulting expression to zero yields the following two expressions. \[\Gamma_{1}^{sb}=\frac{E[\theta_{1}-\theta_{S}]\lambda_{S}(\lambda_{1}+\lambda_{2}-b\lambda_{1}\lambda_{2})-\lambda_{1})+E[\theta_{2}-\theta_{S}]\lambda_{S}(\lambda_{1}\lambda_{2}(2-b\lambda_{1})-\lambda_{2})}{E[\theta_{1}-\theta_{S}]\lambda_{1}\lambda_{S}(b(\lambda_{1}\lambda_{2}+\lambda_{S})-b\lambda^{2}\lambda_{2}(\lambda_{1}+\lambda_{2}+\lambda_{S})-\lambda)+E[\theta_{2}-\theta_{S}]\lambda_{2}\lambda_{S}(b(\lambda_{1}-1)+E[\theta_{1}-\theta_{2}]\lambda_{1}\lambda_{2}(b\lambda_{1}+b\lambda_{2}-2)} \tag{25}\] \[\Gamma_{2}^{sb}=\frac{E[\theta_{2}-\theta_{S}]\lambda_{S}(b\lambda_{2}+\lambda_{1}\lambda_{2}-b\lambda_{1})-\lambda_{2})+E[\theta_{1}-\theta_{S}]\lambda_{S}(b\lambda_{1}\lambda_{2}(b-b\lambda_{2})-\lambda_{1})}{E[\theta_{2}-\theta_{S}]\lambda_{2}\lambda_{S}(b(\lambda_{2}+\lambda_{1}+\lambda_{S})-b\lambda^{2}\lambda_{1}(\lambda_{1}+\lambda_{2}+\lambda_{S})-\lambda)+E[\theta_{1}-\theta_{S}]\lambda_{1}\lambda_{S}(b\lambda_{2}(b\lambda_{1})+b\lambda_{2}-2)} \tag{26}\] Substituting back, as in Sec. 2.2, one obtains the following second-best solutions.
\[I_{S}^{sb}=\frac{(\Gamma_{1}E[\theta_{1}-\theta_{S}]+\Gamma_{2}E[\theta_{2}- \theta_{S}]b)\lambda_{1}\lambda_{2}-E[\theta_{1}-\theta_{S}]\Gamma_{1}(-\Gamma _{2})\lambda_{1}-E[\theta_{2}-\theta_{S}](\Gamma_{1}-\Gamma_{2})\lambda_{2}}{ \lambda_{1}\lambda_{2}\lambda_{2}\lambda_{S}-b((1-\Gamma_{1})\lambda_{2}\lambda _{S}+(1-\Gamma_{2})\lambda_{1}+(1-\Gamma_{2})\lambda_{1}+(1-\Gamma_{1})\Gamma_ {2}\lambda_{2}} \tag{27}\] \[I_{1}^{ab}=\frac{E[\theta_{1}-\theta_{S}](1-\Gamma_{1})b\lambda_{2}\lambda_{S}- E[\theta_{1}-\theta_{S}](1-\Gamma_{1})(1-\Gamma_{2})\lambda_{S}-E[\theta_{1}- \theta_{2}](1-\Gamma_{1})\Gamma_{2}\lambda_{2}}{\lambda_{1}\lambda_{2}\lambda_ {S}-b((1-\Gamma_{1})\lambda_{2}\lambda_{S}+(1-\Gamma_{2})\lambda_{1}\lambda_{S} +(1+\Gamma_{2})\lambda_{1})+(1-\Gamma_{1})(1-\Gamma_{2})\lambda_{S}+F_{1}(1- \Gamma_{2})\lambda_{1}+(1-\Gamma_{1})\Gamma_{2}\lambda_{2}} \tag{28}\] \[I_{2}^{sb}=\frac{E[\theta_{2}-\theta_{S}](1-\Gamma_{2})b\lambda_{1}\lambda_{S}- E[\theta_{2}-\theta_{S}](1-\Gamma_{1})(1-\Gamma_{2})\lambda_{S}-E[\theta_{2}- \theta_{1}]\Gamma_{1}(-\Gamma_{2})\lambda_{1}}{\lambda_{2}\lambda_{S}-b((1- \Gamma_{1})\lambda_{2}\lambda_{S}+(1-\Gamma_{2})\lambda_{1}\lambda_{S}+(1+ \Gamma_{2})\lambda_{1})+(1-\Gamma_{1})(1-\Gamma_{2})\lambda_{S}+F_{1}(1-\Gamma _{2})\lambda_{1}+(1-\Gamma_{1})\Gamma_{2}\lambda_{2}} \tag{29}\] \[q_{1}^{ab}=\frac{\theta_{1}-\theta_{S}}{b}\frac{E[\theta_{1}-\theta_{S}](b \lambda_{2}((1-\Gamma_{1})\lambda_{S}+\Gamma_{1}\lambda_{1})-\Gamma_{1}(- \Gamma_{2})\lambda_{1}-(1-\Gamma_{1})\Gamma_{2}\lambda_{2}-(1-\Gamma_{1})(1- \Gamma_{2})\lambda_{S})+E[\theta_{2}-\theta_{S}]\Gamma_{2}b\lambda_{1}\lambda_{2 }}{b(b^{2}\lambda_{1}\lambda_{2}\lambda_{S}-b((1-\Gamma_{1})\lambda_{2}\lambda_ {S}+(1-\Gamma_{2})\lambda_{1}\lambda_{S}+(1+\Gamma_{1})\Gamma_{2})\lambda_{1}+(1 -\Gamma_{1})(1-\Gamma_{2})\lambda_{S}+F_{1}(1-\Gamma_{2})\lambda_{1}+(1-\Gamma_ {1})\Gamma_{2}\lambda_{2}} \tag{30}\] \[q_{2}^{ab}=\frac{\theta_{2}-\theta_{S}}{b}\frac{E[\theta_{2}-\theta_{S}](b \lambda_{1}((1-\Gamma_{2})\lambda_{S}+\Gamma_{2}\lambda_{2})-(1-\Gamma_{1}) \Gamma_{2}\lambda_{2}-(1-\Gamma_{2})\lambda_{1}\lambda_{1}-(1-\Gamma_{1})(1- \Gamma_{2})\lambda_{S})+E[\theta_{1}-\theta_{S}]\Gamma_{1}b\lambda_{1}\lambda_{2 }}{b(b^{2}\lambda_{1}\lambda_{2}\lambda_{S}-b((1-\Gamma_{1})\lambda_{2}\lambda_ {S}+(1-\Gamma_{2})\lambda_{1}\lambda_{S}+(1+\Gamma_{2})\lambda_{1}\lambda_{2})+ (1-\Gamma_{1})(1-\Gamma_{2})\lambda_{S}+(1-\Gamma_{2})\lambda_{1}+(1-\Gamma_ {1})\Gamma_{2}\lambda_{2}} \tag{31}\] \[\Pi_{QQ}^{ab}=E[\theta_{1}-\theta_{S}]^{2}\Big{(}b^{2}(\lambda_{1} \lambda_{2}\lambda_{2}^{2}+\lambda_{1}\lambda_{2}^{2}\lambda_{S}+\lambda_{1}^{2} \lambda_{2}\lambda_{S})-b(\lambda_{1}\lambda_{S}^{2}+3\lambda_{1}\lambda_{2} \lambda_{S}+\lambda_{1}^{2}\lambda_{S})+2\lambda_{1}\lambda_{S})\Big{)}+ \tag{32}\] \[E[\theta_{2}-\theta_{S}]^{2}\Big{(}b^{2}(\lambda_{1}\lambda_{2} \lambda_{2}^{2}+\lambda_{1}\lambda_{2}^{2}\lambda_{S}+\lambda_{1}^{2}\lambda_{2} \lambda_{S})-b(\lambda_{2}\lambda_{S}^{2}+3\lambda_{1}\lambda_{2}\lambda_{S}+ \lambda_{2}^{2}\lambda_{S})+2\lambda_{2}\lambda_{S})\Big{)}-\] \[E[\theta_{1}-\theta_{2}]^{2}\Big{(}b(\lambda_{1}\lambda_{2}^{2}+ \lambda_{1}^{2}\lambda_{2})-2\lambda_{1}\lambda_{2})/\] \[2\big{(}b^{3}(\lambda_{1}\lambda_{2}\lambda_{S}^{2}+\lambda_{1} \lambda_{2}^{2}\lambda_{S}+\lambda_{1}^{2}\lambda_{2}\lambda_{S})-b^{2}( \lambda_{1}+\lambda_{2}+\lambda_{S})(\lambda_{1}\lambda_{S}+2\lambda_{1}\lambda_ {2}+\lambda_{2}\lambda_{S})+\] 
\[b((\lambda_{1}+\lambda_{2}+\lambda_{S})^{2}+4\lambda_{1}\lambda_{2}+\lambda_{1}\lambda_{S}+\lambda_{2}\lambda_{S})-2(\lambda_{1}+\lambda_{2}+\lambda_{S})\big{)}\] Note that if \(E[\theta_{1}]=E[\theta_{2}]\), \(\lambda_{S}=1\), and \(\lambda_{1}=\lambda_{2}=0.5\), then each surplus sharing parameter \(\Gamma_{j}\) equals one half, which means that the contribution margin \(M_{j}\) achieved is divided between the supplying division and the \(j\)th buying division in equal shares. But independently of how the headquarters determines \(\Gamma_{j}\), each division tends to underinvest, since each division has to bear the investment costs on its own. As a consequence, the first-best quantities resulting from the negotiation process cannot be reached either, which implies that the resulting headquarters' profit \(\Pi_{HQ}^{sb}\) is smaller than the headquarters' profit in the first-best case \(\Pi_{HQ}^{*}\). A headquarters that has all the information required to solve the \(\Gamma\)-choice problem and knows exactly how its divisions will operate would choose \(\Gamma_{1}^{sb}\) and \(\Gamma_{2}^{sb}\) according to Eq. 25 and Eq. 26, respectively. But how could a headquarters anticipate how its divisions will respond in the future? A headquarters with a lack of knowledge therefore needs a rule of thumb or alternative approaches that are easier to implement in order to determine the optimal share of the contribution margin achieved between divisions.

## 3 Simulation model with fuzzy Q-learning agents

### Overview of the simulation model

The negotiated transfer pricing model described in the preceding section presumes that all three divisions make decisions on the basis of the information they can anticipate. However, in decentralized business organizations, it is common that not all parties have access to the information that is required to make optimal decisions. Therefore, the common knowledge assumption is largely relaxed in the simulation model (see Tab. 1 for the most important adaptations regarding the common knowledge assumption and, correspondingly, how the decision variables are determined). Since the headquarters applies compensation schemes that set incentives for ex post efficient quantities (cf. Eq. 17 - 19), all three divisions have an incentive to truthfully share their private information with each other during the negotiation process. In the agentized version of the negotiated transfer pricing model, the headquarters' profit function \(\Pi_{HQ}\) is known to each division, but the division's profit function \(\Pi_{j}\) as well as the expected value of the state variable \(E[\theta_{j}]\), for \(j\in\{S,1,2\}\), remain private for each division. In the agent-based simulation, the choice of different \(\Gamma_{j}\) values, \(j\in\{1,2\}\), is examined. \(\Gamma_{j}\) is a scenario-based exogenous parameter, which is varied in small steps (see Tab. 2), in order to investigate whether \(\Gamma_{j}^{sb}\), as chosen by a headquarters which has access to all required information, leads to higher profit-effectiveness than other \(\Gamma_{j}\) constellations chosen by a headquarters which cannot solve the three-stage decision problem analytically due to a lack of knowledge and uses, e.g., a fifty-fifty surplus sharing rule. In addition, the study assumes that each division has no beliefs about the rationality of the other divisions, nor about how the other divisions approach their maximization problems.
Therefore, \begin{table} \begin{tabular}{|c|l|l|l|} \hline Parameter & Description & Negotiated transfer pricing & Agent-based variant of negotiated transfer pricing \\ & & with fully rational agents (see Sec. 2) & with cognitively bounded agents (see Sec. 3) \\ \hline \(\Pi_{HQ}\) & headquarters’ profit & common knowledge & common knowledge \\ \(\Pi_{j}\) & division’s profit & common knowledge & private information for entire duration \\ \(E[\theta_{j}]\) & expected value of state variable & common knowledge & private information for entire duration \\ \(C_{j}\) & division’s costs & common knowledge & private information until negotiation \\ \(R_{j}\) & division’s net revenue & common knowledge & private information until negotiation \\ \(\lambda_{j}\) & division’s marginal cost & common knowledge & private information until negotiation \\ \(w_{j}\) & division’s investment costs & common knowledge & private information until negotiation \\ \(b\) & slope of the inv. demand func. & common knowledge & private information until negotiation \\ \hline \(\Gamma_{j}\) & surplus sharing parameter & is set optimally & is a scenario-based exogenous parameter, which \\ & & & varies in small steps \\ \(I_{j}\) & amount of specific investment & is set optimally given \(\Gamma_{j}\) & is chosen by an exploration policy, which mainly \\ & & & depends on the learned Q-function \\ \(\theta_{j}\) & state variable & private information until negotiation \\ \(q_{j}\) & quantity & is set optimally given \(I_{j}\) and \(\theta_{j}\) & is set optimally given \(I_{j}\) and \(\theta_{j}\) \\ \hline \end{tabular} \end{table} Table 1: Comparison between negotiated transfer pricing with fully rational agents and the agent-based variant with cognitively bounded agents. the divisions cannot apply the concept of subgame perfect equilibrium as in Sec. 2 to solve the multi-stage decision-making process and, in particular, determine the optimal values for \(I_{j}\) according to Eq. 27 - 29. Thus, each division has first to learn which level of investment leads to which consequence and, in doing so, the study assumes that the divisions behave like fuzzy Q-learning agents. Given that the divisions no longer instantaneously see the consequences of their actions, an exploration policy for the amount of specific investment \(I_{j}\) has to be chosen, which is discussed in Sec. 3.3 in more detail. Since the simulation study assumes that all divisions have no prior knowledge about the consequences of their decisions, the sequence of events within the negotiated transfer pricing model is run through several times. This additional time dimension is characterized by a time subscript \(t\in\mathbb{N}\), which describes the "inner loop" of the simulation. A flow diagram of the agent-based simulation is given in Fig. 9 in Appendix A. ### Fuzzy Q-learning In this paper, fuzzy Q-learning proposed by Glorennec (1994) is applied to describe the before-mentioned divisions, hereinafter referred to as agents.3 Note that fuzzy logic is based on the premise that the key elements in human thinking are not numbers, but labels of fuzzy sets (Zadeh 1973). Since human thinking is tolerant of imprecision, fuzzy logic is suitable for representing human decision-making in a natural way (e.g., Zadeh 1973; Zimmermann 2011). By doing so, the fuzzy conditional statements are expressions of the form, IF \(A\) THEN \(B\), where \(A\) and \(B\) have fuzzy meaning. 
The fuzzy rule-based system in this simulation study has the following form, which can be regarded as a zero order Takagi-Sugeno fuzzy system. For each fuzzy rule \(i\): Footnote 3: Readers unfamiliar with reinforcement learning methods should refer to Sutton and Barto (2018). For agent-based simulations with fuzzy Q-learning agents acting in an economic context see, e.g., Kofinas et al. (2018); Rahimiyan and Mashhadi (2006). IF \(s_{S,t}\) IS \(L_{S}^{i}\) AND \(s_{1,t}\) IS \(L_{1}^{i}\) AND \(s_{2,t}\) IS \(L_{2}^{i}\) THEN (33) \[Q_{j,t}(\boldsymbol{s}_{t},\boldsymbol{a}_{j,t}) =\sum_{i=1}^{N}\alpha_{i}(\boldsymbol{s}_{t})\;q_{j,t}[i,a_{j,t,i}] \tag{34}\] \[A_{j,t}(\boldsymbol{s}_{t},\boldsymbol{a}_{j,t}) =\sum_{i=1}^{N}\alpha_{i}(\boldsymbol{s}_{t})\;a_{j,t}[i,a_{j,t,i}] \tag{35}\] where the parameter \(i\in\{1,...,N\}\) describes the index of fuzzy rules, \(N\in\mathbb{N}\) is the number of fuzzy rules, \(\boldsymbol{s}_{t}=(s_{S,t},s_{1,t},s_{2,t})\in\mathcal{S}\subset\mathbb{R}^{3}\) represents the state vector, \(\boldsymbol{a}_{j,t}=(a_{j,t,1},...,a_{j,t,N})\in\{1,...,K\}^{N}\) denotes the index vector of stored actions, \(a_{j,t}[i,a_{j,t,i}]\in\mathcal{A}\subset\mathbb{R}\) are the stored actions, \(K\in\mathbb{N}\) is the number of stored actions in each fuzzy rule, \(A_{j,t}(\boldsymbol{s}_{t},\boldsymbol{a}_{j,t})\in\mathbb{R}\) refers to the inferred action, and \(q_{j,t}[i,a_{j,t,i}]\in\mathbb{R}\) denotes the stored q-values, whereby \(a_{j,t}[.,.]\) and \(q_{j,t}[.,.]\) are stored in look-up tables (indicated by square brackets). The number of fuzzy rules \(N\) and the number of stored actions \(K\) rely on the fuzzy partition of the state space \(\mathcal{S}\) and on the discretization of the action space \(\mathcal{A}\), respectively. For the sake of simplicity, it is assumed that each fuzzy rule has exactly \(K\) possible actions. Note, in fuzzy Q-learning, \(\mathcal{S}\) and \(\mathcal{A}\) are continuous spaces and the inferred action \(A_{j,t}(\boldsymbol{s}_{t},\boldsymbol{a}_{j,t})\) of agent \(j\) corresponds to the investment decision \(I_{j,t}\), for \(j\in\{S,1,2\}\). Further, the fuzzy sets \(L^{i}_{j}\) are characterized by linguistic labels and the function \(\alpha_{i}(\boldsymbol{s}_{t})\) denotes the truth value of rule \(i\) given the state vector \(\boldsymbol{s}_{t}\). Q-values \(Q_{j,t}(\boldsymbol{s}_{t},\boldsymbol{a}_{j,t})\) and actions \(A_{j,t}(\boldsymbol{s}_{t},\boldsymbol{a}_{j,t})\) are inferred from Eq. 34 and Eq. 35, respectively. The fuzzy rule-based system in this paper assumes that the "weights" \(\alpha_{i}(\boldsymbol{s}_{t})\) are generated by the T-norm product \[\alpha_{i}(\boldsymbol{s}_{t})=\mu_{L^{i}_{S}}(s_{S,t})\cdot\mu_{L^{i}_{1}}(s _{1,t})\cdot\mu_{L^{i}_{2}}(s_{2,t})\;, \tag{36}\] where \(\mu_{L^{i}_{j}}(s_{j,t})\in[0,1]\) denotes the membership function (or membership grade) of rule \(i\) in state \(s_{j,t}\), \(j\in\{S,1,2\}\). The T-norm (or triangular norm, see, e.g., Zimmermann (2011)) is a type of binary operation that is often used in fuzzy logic to model the AND operator in Eq. 33. The membership functions considered here are defined on the interval \([0,1]\), which implies that if an object has a membership grade of one (zero) in a fuzzy set, then the object is absolutely (not) in that fuzzy set. In addition, the membership functions are set in such a way that the strong fuzzy partition is fulfilled, i.e., \(\sum_{i=1}^{N}\alpha_{i}(\boldsymbol{s}_{t})=1\) for each \(\boldsymbol{s}_{t}\in\mathcal{S}\). 
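To make the inference step of Eq. 33 - 36 concrete, the following minimal sketch implements triangular membership functions as in Fig. 2 (five sets per state dimension with centres at 0, 12.5, ..., 50) and combines them with the product T-norm; all function names, the toy state, and the toy tables are illustrative assumptions made for this sketch, not code from the original study.

```python
# Minimal sketch of the fuzzy inference step of Eqs. 33-36, assuming the
# triangular membership functions of Fig. 2.
import numpy as np

CENTRES = np.array([0.0, 12.5, 25.0, 37.5, 50.0])  # one centre per linguistic label
WIDTH = 12.5

def membership(s):
    """Membership grades of a scalar state s in the five triangular fuzzy sets.
    The sets form a strong fuzzy partition, i.e. the grades sum to one."""
    return np.clip(1.0 - np.abs(s - CENTRES) / WIDTH, 0.0, 1.0)

def truth_values(state):
    """alpha_i(s_t) for all N = 5**3 rules via the product T-norm (Eq. 36)."""
    mu_S, mu_1, mu_2 = (membership(s) for s in state)
    return np.einsum("a,b,c->abc", mu_S, mu_1, mu_2).ravel()

def infer(state, q_table, a_table, chosen):
    """Inferred Q-value and action (Eqs. 34-35) for one agent.
    q_table, a_table: arrays of shape (N, K); chosen: index of the selected
    stored action per rule, shape (N,)."""
    alpha = truth_values(state)
    rows = np.arange(len(alpha))
    Q = np.sum(alpha * q_table[rows, chosen])
    A = np.sum(alpha * a_table[rows, chosen])
    return Q, A

# Toy usage: N = 125 rules, K = 11 stored actions (0, 5, ..., 50) per rule.
N, K = 125, 11
a_table = np.tile(np.arange(0, 51, 5, dtype=float), (N, 1))
q_table = np.zeros((N, K))                 # q-values initialised to zero
state = (10.0, 30.0, 42.0)                 # (s_S, s_1, s_2)
chosen = np.zeros(N, dtype=int)            # e.g. every rule proposes its first stored action
print(truth_values(state).sum())           # 1.0 (strong fuzzy partition)
print(infer(state, q_table, a_table, chosen))
```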
Finally, the stored q-values are updated by \[q_{j,t+1}[i,a_{j,t,i}]=q_{j,t}[i,a_{j,t,i}]+\alpha_{i}(\boldsymbol{s}_{t})\; \Delta Q_{j,t}(\boldsymbol{s}_{t},\boldsymbol{a}_{j,t})\;, \tag{37}\] where \(\Delta Q_{j,t}(\boldsymbol{s}_{t},\boldsymbol{a}_{j,t})\) is the temporal difference error which is given by \[\alpha\Big{(}r_{j,t}(\boldsymbol{s}_{t},A_{j,t}(\boldsymbol{s}_{t}, \boldsymbol{a}_{j,t}))+\gamma\sum_{i=1}^{N}\alpha_{i}(\boldsymbol{s}_{t+1}) \underset{k\in\{1,\ldots,K\}}{\max}q_{j,t}[i,k]-Q_{j,t}(\boldsymbol{s}_{t}, \boldsymbol{a}_{j,t})\Big{)}\;. \tag{38}\] In Eq. 38, \(r_{j,t}\in\mathbb{R}\) describes the reward of agent \(j\) which is the division's profit \(\Pi_{j}\) at time \(t\in\mathbb{N}\), \(j\in\{S,1,2\}\). \(\alpha\in(0,1]\) and \(\gamma\in[0,1)\) denote the learning rate and the discount factor, respectively. For the sake of simplicity, it is assumed that all three agents have the same learning rate as well as the same discount factor. ### Exploration policy When using a reinforcement learning method, a trade-off is required between choosing the currently optimal action and choosing a varied action with the prospect of a higher reward in the future (Sutton and Barto 2018). Thus, the fuzzy Q-learning agents, which are situated in a non-stationary environment, need an exploration policy which ensures that all actions are performed frequently enough so that the fuzzy Q-function converges to an optimum. In this simulation study, three commonly used exploration policies are applied to verify whether the results are robust to changes in the exploration policy. #### 3.3.1 Boltzmann exploration policy The Boltzmann, Gibbs, or softmax exploration policy (Cesa-Bianchi et al. 2017) is a classic strategy for sequential decision-making under uncertainty. The probability of choosing an action is proportional to an exponential function of the empirical mean of the reward of that action. The Boltzmann exploration policy is given by the probability mass function \[P_{j,t,i,k}(a_{t}[i,k])=\frac{exp\big{(}q_{j,t}[i,k]/\beta_{j}(t)\big{)}}{\sum_ {l=1}^{K}exp\big{(}q_{j,t}[i,l]/\beta_{j}(t)\big{)}}\;,\;\text{for}\;k\in\{1,...,K\}\;, \tag{39}\] where \(\beta_{j}(t)\in\mathbb{R}^{+}\), \(j\in\{S,1,2\}\), controls the degree of randomness for exploration (Powell and Ryzhov 2012). Note that, for each agent \(j\), for each time step \(t\), and for each fuzzy rule \(i\), it holds \(\sum_{k=1}^{K}P_{j,t,i,k}=1\) and, moreover, that actions with higher q-values are more likely to be selected than actions with lower q-values. After all probabilities have been calculated according to Eq. 39, the index of each stored action is drawn from \[a_{j,t,i}\thicksim\left(\begin{array}{c}1\\ P_{j,t,i,1}\end{array},...,\begin{array}{c}K\\ P_{j,t,i,K}\end{array}\right)\;. \tag{40}\] #### 3.3.2 \(\epsilon\)-greedy exploration policy A very easy and commonly used exploration policy in reinforcement learning is the \(\epsilon\)-greedy exploration policy. It chooses the action with the highest q-value in the current state with probability \(1-\epsilon\) and a random action otherwise (Tijsma et al. 2016). The \(\epsilon\)-greedy exploration policy is given by \[a_{j,t,i}=\begin{cases}\operatorname*{arg\,max}_{k\in\{1,...,K\}}q_{j,t}[i,k]& \text{with probability }1-\epsilon(t)\\ \\ \thicksim Unif\Big{(}\{1,...,K\}\Big{)}&\text{with probability }\epsilon(t)\end{cases} \tag{41}\] where \(\epsilon(t)\in[0,1]\) determines the degree of randomness for exploration which decreases over time. 
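A compact sketch of one learning step, combining the per-rule action selection of Eqs. 39 - 41 with the q-value update of Eqs. 37 - 38, might look as follows; it reuses the `truth_values` helper from the sketch in Sec. 3.2, and all names are illustrative assumptions rather than code from the original study.

```python
# Minimal sketch of one fuzzy Q-learning step: per-rule action selection via
# the Boltzmann rule (Eqs. 39-40) or epsilon-greedy (Eq. 41), followed by the
# update of Eqs. 37-38.
import numpy as np

rng = np.random.default_rng(0)

def select_boltzmann(q_table, beta):
    """Draw one stored-action index per rule from the softmax of Eq. 39."""
    z = q_table / beta
    z -= z.max(axis=1, keepdims=True)                 # numerical stability
    p = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    return np.array([rng.choice(q_table.shape[1], p=pi) for pi in p])

def select_epsilon_greedy(q_table, eps):
    """Greedy per rule with probability 1 - eps, uniformly random otherwise (Eq. 41)."""
    greedy = q_table.argmax(axis=1)
    random = rng.integers(q_table.shape[1], size=q_table.shape[0])
    explore = rng.random(q_table.shape[0]) < eps
    return np.where(explore, random, greedy)

def fuzzy_q_update(q_table, chosen, state, next_state, reward, alpha_lr, gamma):
    """Update the stored q-values of the chosen actions (Eqs. 37-38);
    alpha_lr is the learning rate, gamma the discount factor."""
    alpha_now = truth_values(state)                    # from the previous sketch
    alpha_next = truth_values(next_state)
    rows = np.arange(q_table.shape[0])
    Q_now = np.sum(alpha_now * q_table[rows, chosen])                  # Eq. 34
    target = reward + gamma * np.sum(alpha_next * q_table.max(axis=1))
    delta = alpha_lr * (target - Q_now)                                # Eq. 38
    q_table[rows, chosen] += alpha_now * delta                         # Eq. 37
    return q_table
```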
Note that the \(\epsilon\)-greedy exploration policy can be seen as a benchmark for other more sophisticated exploration policies (Tijsma et al. 2016).

#### 3.3.3 Upper confidence bound exploration policy

The third and last exploration policy is the so-called upper confidence bound exploration policy (hereafter abbreviated to UCB policy). The idea of the UCB policy is that the square-root expression is a measure of the uncertainty in the estimate of the action in the current state (Sutton and Barto 2018). After an action is chosen, its uncertainty is reduced by incrementing its selection count by one. For each agent \(j\in\{S,1,2\}\), the UCB policy is given by \[a_{j,t,i}=\begin{cases}\operatorname*{arg\,max}_{k\in\{1,...,K\}}q_{j,t}[i,k]+c_{j}\sqrt{\frac{\ln(t)}{N_{j,t}[i,k]}}&\text{if }\forall k\;N_{j,t}[i,k]>0\\ \thicksim Unif\Big{(}\{k\in\{1,...,K\}\mid N_{j,t}[i,k]=0\}\Big{)}&\text{else}\end{cases} \tag{42}\] where \(c_{j}\in\mathbb{R}\) controls the degree of randomness for exploration and \(N_{j,t}[i,k]\in\mathbb{N}\) is the number of times that the index of the stored action \(k\) in fuzzy rule \(i\) has been chosen prior to time \(t\) (Sutton and Barto 2018).

## 4 Parameter settings

This simulation study focuses on settings in which the learning behavior of the three agents is modeled by fuzzy Q-learning. It is assumed that each division has an individual constant marginal cost parameter \(\lambda_{j}\), \(j\in\{S,1,2\}\). For reasons of simplicity, \(\lambda_{S}\) is set to one in all examined scenarios, while \(\lambda_{1}\) and \(\lambda_{2}\) are varied in small steps (the corresponding \(\lambda_{j}\) values are listed in Tab. 2). Besides \(\lambda_{j}\), the surplus sharing parameter \(\Gamma_{j}\), \(j\in\{1,2\}\), is also a scenario-based exogenous parameter which is set in small increments. In addition, the discount factor \(\gamma\) is varied between 0 and 0.9 in steps of 0.1. Note that a discount factor of zero indicates that the agent is very myopic, which implies that the agent does not take future rewards into account, while, as \(\gamma\) approaches one, the agent becomes more non-myopic, which means that the agent will place more emphasis on future rewards. To describe the turbulence of the market environment in which the agents operate, the standard deviations of the state variables \(\sigma\) are set to 0, 5, and 10, which can be interpreted as a deterministic market environment, a market environment with minor stochastic fluctuations, and an environment with considerable volatility on the markets. Additionally, the state variables \(\theta_{S}\) and \(\theta_{j}\), \(j\in\{1,2\}\), are modeled as normally distributed random variables with means 60 and 100, respectively. Finally, three different exploration policies are applied to check the robustness of the simulation results. In the following, the parameter settings in Tab. 2 are discussed in more detail. In Eq. 2, the slope of the inverse demand function \(b\) is set to 12, because then, in the first scenario examined, the first- and second-best solutions take integer values (see Tab. 3 in Appendix A). The action space \(\mathcal{A}\) and the state space \(\mathcal{S}\) are chosen such that the agents can always find the first- and second-best solutions in all studied scenarios.
In this sense, the stored actions \(a_{j,t}[i,a_{j,t,i}]\in\mathcal{A}\) vary between 0 and 50 in steps of 5, while the states \(\boldsymbol{s}_{t}\in\mathcal{S}\) are set at least in such a way that the agents can always find the first- and second-best solutions in all studied scenarios. \begin{table} \begin{tabular}{|l|l|} \hline **Fixed exogenous parameters** & \multicolumn{1}{c|}{**Values**} \\ \hline Number of simulation runs & 10,000 \\ Number of times steps per simulation run & \(T=2,\)100 \\ Time steps to learn the Q-function & \(T_{L}=2,\)000 \\ Time steps to evaluate the outcome & \(T_{E}=100\) \\ Slope of the inverse demand function & \(b=12\) \\ Expected values of the state variables & \(E[\theta_{S}]=60\), \(E[\theta_{1}]=E[\theta_{2}]=E[\theta_{B}]=100\) \\ Action space and state space & \(A=\{0,5,...,50\}\) and \(\mathcal{S}=\{0,12.5,...,50\}\) \\ Number of fuzzy rules & \(N=125\) \\ Number of stored actions in each fuzzy rule & \(K=11\) \\ Learning rate & \(\alpha=0.5\) \\ Boltzmann exploration policy & \(\beta_{S,1}=49975\), \(\beta_{1,1}=\beta_{2,1}=24987.5\), \(\beta_{S,2}=\beta_{1,2}=\beta_{2,2}=498.75\) \\ \(e\)-greedy and UCB exploration policy & \(\epsilon_{1}=1+1/1999\), \(\epsilon_{2}=-1/1999\), and \(c_{S}=60\), \(c_{1}=c_{2}=30\) \\ \hline \hline **Scenario-based exogenous parameters** & \multicolumn{1}{c|}{**Symmetric scenarios**} \\ & **I** & **II** & **III** \\ \hline \(\lambda_{S}\) marginal cost of the supplying division & 1 & 1 & 1 \\ \(\lambda_{1}\) marginal cost of the 1st buying division & 0.5 & 0.222 & (0.5, 0.222) & (0.222, 0.262, 0.307, 0.361, 0.424, 0.5) \\ \(\lambda_{2}\) marginal cost of the 2nd buying division & 0.5 & 0.25 & (0.25, 0.30,..., 0.55) \\ \(T_{1}\) surplus sharing parameter between “S and 1” & 0.5 & 0.25 & (0.25, 0.30,..., 0.55) \\ \(T_{2}\) surplus sharing parameter between “S and 2” & 0 & (0, 0.9) & (0, 0.1,..., 0.9) \\ \(\sigma\) standard deviation of the state variables & 0 & 0 & (0, 5, 10) \\ Exploration policy & BM & BM & (Greedy, UCB, BM) \\ \hline \hline **Scenario-based exogenous parameters** & \multicolumn{1}{c|}{**V**} & \multicolumn{1}{c|}{**Vf**} \\ \hline \(\lambda_{S}\) marginal cost of the supplying division & 1 & 1 \\ \(\lambda_{1}\) marginal cost of the 1st buying division & (0.534, 0.621) & (0.5, 0.534, 0.564, 0.590, 0.609, 0.621) \\ \(\lambda_{2}\) marginal cost of the 2nd buying division & (0.463, 0.301) & (0.5, 0.463, 0.425, 0.385, 0.343, 0.301) \\ \(T_{1}\) surplus sharing parameter between “S and 1” & (0.5, 0.55,..., 0.75) & (0.5, 0.550, 0.600, 0.650, 0.700, 0.750) \\ \(T_{2}\) surplus sharing parameter between “S and 2” & (0.5, 0.45,..., 0.25) & (0.5, 0.450, 0.400, 0.350, 0.300, 0.250) \\ \(\gamma\) discount factor & (0, 0.1,..., 0.9) & (0, 0.1,..., 0.9) \\ \(\sigma\) standard deviation of the state variables & 0 & (0, 5, 10) \\ Exploration policy & BM & (Greedy, UCB, BM) \\ \hline \end{tabular} \end{table} Table 2: Parameter settings and scenario overview range from 0 to 50 in steps of 12.5. Consequently, there are 11 stored actions for each fuzzy rule \(i\in\{1,...,N\}\), 5 states, and the number of fuzzy rules \(N\) is \(5^{3}=125\) (the fuzzy partition of the state space raised to the power of the number of agents). According to Zimmermann's (2011) suggestions, triangular membership functions are applied (cf. Fig. 2) because they are simple, have low computing power per iteration, and are easy to interpret. 
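The fixed exogenous parameters of Tab. 2 can be gathered in a small configuration object; the following sketch is purely illustrative (the names and the structure are assumptions, while the values are those listed in Tab. 2).

```python
# Minimal sketch collecting the fixed exogenous parameters of Tab. 2 in one
# place; an assumed configuration object, not code from the original study.
from dataclasses import dataclass, field
import numpy as np

@dataclass
class SimulationConfig:
    runs: int = 10_000          # number of simulation runs
    T: int = 2_100              # time steps per simulation run
    T_L: int = 2_000            # time steps used to learn the Q-function
    T_E: int = 100              # time steps used to evaluate the outcome
    b: float = 12.0             # slope of the inverse demand function
    E_theta_S: float = 60.0     # expected purchase price of raw materials
    E_theta_B: float = 100.0    # expected value for both buying divisions
    N: int = 125                # number of fuzzy rules (5**3)
    K: int = 11                 # stored actions per fuzzy rule
    alpha: float = 0.5          # learning rate
    actions: np.ndarray = field(default_factory=lambda: np.arange(0, 51, 5, dtype=float))
    state_centres: np.ndarray = field(default_factory=lambda: np.arange(0, 51, 12.5))

cfg = SimulationConfig()
print(cfg.actions)        # [ 0.  5. 10. ... 50.]
print(cfg.state_centres)  # [ 0.  12.5 25.  37.5 50. ]
```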
For example, if the state of the supplying division \(s_{t,S}\) is 10, then the membership grade \(\mu_{L_{S}^{1}}\) of fuzzy rule 1 is 0.2, whereas the membership grade \(\mu_{L_{S}^{2}}\) of fuzzy rule 2 is 0.8, and all other membership grades \(\mu_{L_{S}^{3}}\), \(\mu_{L_{S}^{4}}\), and \(\mu_{L_{S}^{5}}\) are zero, or in other words, the supplying division is in the "very low investment range" with a fraction of 20% and in the "low investment range" with a share of 80%. Incidentally, increasing the number of fuzzy rules means that the agents need more time to learn their environment. With only 5 membership functions, the decision-making behavior can be exhibited quite well with the given parameter settings. Furthermore, the learning rate \(\alpha\) is set to 0.5, because it can be assumed that, for long-term decisions, old stored information is just as important as new received information. Regardless of the selection of the learning rate, the Q-function converges (sometimes faster and sometimes slower) to an optimum with the parameter settings giving in Tab. 2. The number of time steps to learn the Q-function \(T_{L}\) is set to 2,000 as this is a sufficient number to guarantee that the Q-function converges in all investigated scenarios. Note that this simulation study assumes that all agents have no prior knowledge of the consequences of their actions (all q-values are initialized to zero). After determining the number of time steps required for learning the Q-function, further 100 time steps are simulated to evaluate the agents' outputs, because, if \(\sigma\) is not zero, the stochastic fluctuations of the markets affect the divisions' profits. Consequently, the number of time steps per simulation run \(T\) is set to 2,100. Since the choice of action is based on stochastic exploration policies, each scenario is carried out 10,000 times and, due to the coefficient of variation Figure 2: The fuzzy sets considered here consist of 5 triangular membership functions in each state space dimension \(j\in\{S,1,2\}\), which can be characterized by linguistic labels ranging from “Very low” to “Very high”. (the ratio of the standard deviation to the mean), 10,000 simulation runs are sufficient to express the precision and repeatability of the simulation study. Finally, the used exploration policies are discussed. Inspired by the work of Tijsma et al. (2016), this simulation study focuses on the Boltzmann, \(\epsilon\)-greedy, and upper confidence bound exploration policy. For the introduced Boltzmann exploration policy in Sec. 3.3.1, \(\beta_{j}(t)\) should slowly decrease over time in order to reduce exploration (Dearden et al. 1998). In this sense, \(\beta_{j}(t)\) is calculated by a simple rational function \[\beta_{j}(t)=\frac{\beta_{j,1}}{\beta_{j,2}+t}\,\ \text{for}\ j\in\{S,1,2\}, \tag{43}\] where \((\beta_{j,1},\beta_{j,2})\in\mathbb{R}^{2}\) are set in such a way that the learning agents have a long time to explore for good policies, but also time to exploit them.4 Note that rational functions, like Eq. 43, have the advantage over linear functions that the degree of randomness for exploration decreases faster at the beginning of a simulation run, but it never reaches zero. According to pre-generated simulations, \(\beta_{S}(t)\) should be about 100 for the supplying division at the very beginning of a simulation run, while, at the end, a value of 20 is sufficient for the convergence of the Q-function. 
It should be mentioned that balancing the parameters of an exploration policy is one of the most challenging tasks in reinforcement learning (Tijsma et al. 2016). Footnote 4: Technical note: The experimentation tendency \(\beta_{j}(t)\) should not become too small during a simulation run, otherwise the expression \(exp(q_{j,t}[i,a_{j,t,i}]/\beta_{j}(t))\) becomes too large and this could endanger the numerical stability of the simulation. For the \(\epsilon\)-greedy exploration policy (see Sec. 3.3.2), \(\epsilon(t)\) is a simple decreasing linear function of time \[\epsilon(t)=\begin{cases}\epsilon_{1}+\epsilon_{2}\ t&\text{for}\ t\leq T_{L}\\ 0&\text{for}\ t>T_{L}\end{cases} \tag{44}\] with the two properties that, at the beginning of a simulation run, \(\epsilon(1)\) is one, which indicates that the action selection is totally random (pure exploration), and, at the end, \(\epsilon(T_{L})\) is zero (pure exploitation). The associated \(\epsilon\)-greedy exploration parameters \(\epsilon_{1}\) and \(\epsilon_{2}\) are reported in Tab. 2. Finally, for the UCB exploration policy in Sec. 3.3.3, the pre-generated simulations reveal that a value of about 60 is a good choice for \(c_{S}\), while \(c_{1}\) and \(c_{2}\) only need to be half as large.

## 5 Results and discussion

The analysis of results is structured in two parts: (1) results for symmetric marginal cost parameters \(\lambda_{1}=\lambda_{2}\) (called symmetric scenarios) and (2) results for non-symmetric marginal cost parameters (called non-symmetric scenarios). For a better overview, symmetric and non-symmetric scenarios are additionally divided into four and two sub-scenarios, respectively. An overview of the examined scenarios is given in Tab. 2. In the following, the supplying division, the 1st buying division, and the 2nd buying division are referred to as seller, first buyer, and second buyer, respectively. In addition, this simulation study distinguishes between an all-knowing headquarters which sets the surplus sharing parameters according to Eq. 25 and Eq. 26, and a headquarters which cannot solve the \(\Gamma\)-choice problem analytically and, therefore, chooses \(\Gamma_{j}\), \(j\in\{1,2\}\), according to Tab. 2.

### Results for symmetric marginal cost parameters

In scenarios I-IV, \(\lambda_{S}=1\) and \(\lambda_{1}=\lambda_{2}\), which yields \(\Gamma_{1}=\Gamma_{2}\). In scenarios I-III, the exploration policy is the Boltzmann exploration policy and the underlying market environment is deterministic. Finally, scenario IV performs an extensive sensitivity analysis in order to verify whether the results for symmetric marginal cost parameters are robust to changes in market volatility and in the exploration policy.

#### 5.1.1 Scenario I

In the first scenario, high investment costs are assumed for the downstream divisions. Concretely, \(\lambda_{1}\) and \(\lambda_{2}\) are set to 0.5 so that the simulation results can also be compared to Mitsch's (2023) previous simulation study where only one buyer is involved.
If \(\lambda_{1}=\lambda_{2}=0.5\), then an all-knowing headquarters would set \(\Gamma_{1}\) and \(\Gamma_{2}\) to 0.5, according to Eq. 25 and Eq. 26. To get an understanding of the decision-making behavior of fuzzy Q-learning agents, the simulation model studies investment decisions with two extreme levels of no (\(\gamma=0\)) and high (\(\gamma=0.9\)) foresight of future rewards. The simulation results for the investment decisions and for the profits the headquarters can expect are presented with modified boxplots (see Fig. 3). In addition, first- and second-best solutions are also displayed. Be aware that the thick black line in the modified boxplots represents the median of 10,000 observations per time step and, because the mean value is not robust against outliers, the mean value of the headquarters' profit lies below the median value in the case of a negative skewness (roughly speaking, the mass of the distribution is concentrated on the "right side"). Additionally, note that, in the modified boxplots used here, the distance between the 25th and 75th percentile is visualized in gray, the outer black lines denote the whiskers, and, for improved readability, outliers beyond the whiskers are not depicted. Figure 3: Modified boxplots of \(I_{S}\), \(I_{1}\), \(I_{2}\), and \(\Pi_{HQ}\) for myopic (\(\gamma=0\)) and non-myopic (\(\gamma=0.9\)) fuzzy Q-learning agents with \(\lambda_{S}=1\), \(\lambda_{1}=\lambda_{2}=0.5\), \(\Gamma_{1}=\Gamma_{2}=0.5\), \(\sigma=0\), and exploration policy is Boltzmann. For myopic fuzzy Q-learning agents (see left plots in Fig. 3), the simulation results indicate that the investment decisions of all divisions converge towards the second-best solution. In the case where all divisions seek high future rewards (see right plots in Fig. 3), the investment decisions converge towards the first-best solution. Similarly, the headquarters' profit approaches the second-best solution and the first-best solution for myopic and non-myopic agents, respectively. Looking more closely at the last 100 investment decisions over 10,000 simulation runs, the investment decisions of myopic (non-myopic) agents are normally distributed with mean 5.13 (10.02) and standard deviation 0.44 (2.62). Moreover, the headquarters' profit for myopic (non-myopic) agents is nearly normally distributed with mean 177.9 (189.8) and standard deviation 2.12 (6.37), whereby the distribution of the headquarters' profit has a slightly negative skewness of \(-0.21\) (\(-0.98\)) for myopic (non-myopic) agents. The findings of scenario I show that myopic fuzzy Q-learning agents invest about as much as fully rational utility maximizers in the classic hold-up problem (cf. Tab. 3 in Appendix A), whereas non-myopic agents invest optimally. With a high level of foresight, i.e., a strong striving for high future rewards, both the investment level of each division and the headquarters' profit increase.

#### 5.1.2 Scenario II

In the second scenario, the marginal cost parameters \(\lambda_{j}\), \(j\in\{1,2\}\), are set to 0.222. In the case that both buying divisions have low investment costs, an all-knowing headquarters would choose \(\Gamma_{1}=\Gamma_{2}=0.25\). The simulation results are depicted in the left plot for myopic agents and in the right plot for non-myopic agents in Fig. 4. As the output shows, myopic fuzzy Q-learning agents achieve second-best results, while non-myopic agents realize better results, but the first-best solution is not reached.
A closer analysis reveals that the investment decisions of myopic (non-myopic) sellers are nearly normally distributed with mean 4.27 (6.82) and standard deviation 0.5 (1.78), with a positive (negative) skewness of 0.48 (\(-0.25\)). On the buyers' side, the investment decisions of myopic (non-myopic) buyers are normally distributed with mean 18.41 (25.24) and standard deviation 1.53 (5.02). In the case of myopic (non-myopic) agents, the headquarters' profit is (almost) normally distributed with mean 233.7 (258.5) and standard deviation 3.58 (11.5), with a negative skewness of \(-0.02\) (\(-0.63\)). The findings of scenario II indicate that the investment decisions of myopic fuzzy Q-learning agents are close to the second-best solutions (cf. Tab. 3 in Appendix A), but, compared to scenario I, non-myopic agents generate lower profits for themselves as well as for their headquarters. A high degree of foresight favors the divisions' investment decisions; however, on average, first-best investment decisions are not made. Figure 4: Modified boxplots of \(I_{S}\), \(I_{1}\), \(I_{2}\), and \(\Pi_{HQ}\) for myopic (\(\gamma=0\)) and non-myopic (\(\gamma=0.9\)) fuzzy Q-learning agents with \(\lambda_{S}=1\), \(\lambda_{1}=\lambda_{2}=0.222\), \(\Gamma_{1}=\Gamma_{2}=0.25\), \(\sigma=0\), and exploration policy is Boltzmann.

#### 5.1.3 Scenario III

In order to get a general understanding of the effects of different values of the surplus sharing parameters and the discount factor, \(\Gamma_{j}\), \(j\in\{1,2\}\), and \(\gamma\) are varied from 0.25 to 0.55 in 0.05 increments and from 0 to 0.9 in 0.1 increments, respectively. Note that, in the symmetric case, it is sufficient to vary \(\Gamma_{j}\), \(j\in\{1,2\}\), from 0.25 to 0.55 to adequately study the divisions' decision-making behavior. The simulation results are summarized in Fig. 5 and Fig. 6 with four subplots for \(\lambda_{1}=\lambda_{2}=0.5\) (high investment costs) and for \(\lambda_{1}=\lambda_{2}=0.222\) (low investment costs), respectively. In the following, the meaning of the subplots is explained. The top left subplot displays contours representing the headquarters' profits resulting from the simulation. The bottom left subplot depicts the "first-best performance indicator" for fuzzy Q-learning agents, which is defined by \(\Pi_{HQ}/\Pi_{HQ}^{*}\), i.e., the headquarters' profit obtained from the simulation is normalized by the highest feasible profit that can be achieved. The first-best performance indicator can serve as a relative indicator for the profit-effectiveness of the decision-making behavior of fuzzy Q-learning agents; the higher the value, the better the profitability for the headquarters. The bottom right subplot shows the "second-best performance indicator", which reflects how much better fuzzy Q-learning agents perform than fully rational utility maximizers. This second-best performance indicator is formalized as the relative change between \(\Pi_{HQ}\) and \(\Pi_{HQ}^{sb}\), divided by \(\Pi_{HQ}^{sb}\); the higher the relative change, the higher the performance of fuzzy Q-learning agents. Lastly, and most importantly, there is the so-called "baseline performance indicator", which is displayed in the top right subplot.
The baseline performance indicator is based on the relative change between \(\Pi_{HQ}^{\Gamma_{j}}\) and \(\Pi_{HQ}^{baseline}\) dividing by \(\Pi_{HQ}^{baseline}\), where \(\Pi_{HQ}^{\Gamma_{j}}\) denotes the headquarters' profit resulting from scenario where the headquarters uses \(\Gamma_{j}\), while \(\Pi_{HQ}^{baseline}\) describes the headquarters' profit resulting from scenario where the headquarters applies \(\Gamma_{j}^{sb}\). The scenario in which the headquarters uses the optimal surplus sharing parameter \(\Gamma_{j}^{sb}\) is called the "baseline scenario". Note that in a baseline scenario, the headquarters knows all information from its divisions to determine the optimal level of bargaining power and, therefore, the headquarters can set the bargaining power \(\Gamma_{1}\) and \(\Gamma_{2}\) according to Eq. 25 and 26, respectively. So, this indicator provides information about how much better fuzzy Q-learning agents given \(\Gamma_{j}\) perform than fuzzy Q-learning agents given \(\Gamma_{j}^{sb}\) or, in other words, whether the profit of a headquarters that does not know the exact optimal surplus sharing parameters is higher than the profit of a headquarters that applies the optimal surplus sharing parameters. In order to find out whether the baseline performance indicator is significant (i.e., if there is a significant difference between \(\Pi_{HQ}^{\Gamma_{j}}\) and \(\Pi_{HQ}^{baseline}\)), Welch's t-tests and Wilcoxon rank-sum tests are applied.5 Since the p-values of hypothesis tests go quickly to zero when very large samples (10,000 observations and more) are evaluated, the null hypotheses become statistically significant because the standard errors become extremely small (Lin et al. 2013). To mitigate this phenomenon, one-tailed Welch's t-tests and Wilcoxon rank-sum tests are used with a positive hypothesized mean difference (hereafter abbreviated to \(d_{H}\)). The null hypotheses to be tested are given by \(mean(\Pi_{HQ}^{\Gamma_{j}})-mean(\Pi_{HQ}^{baseline})\geq d_{H}\). Be aware that the Welch's t-test is designed for normally distributed samples but, due to the central limit theorem, it can be assumed that this also holds for the headquarters' profits which are nearly normally distributed. Footnote 5: Welch’s (1947) t-test and Wilcoxon rank-sum test (Mann and Whitney 1947) are widely common statistical hypothesis tests. In the default case, the first one tests the equality of the means for two independent normally distributed samples with unequal and unknown variances, while the second one, a nonparametric test, tests the null hypothesis that the distributions of two independent samples differ by a location shift of zero. In order to find out, if the headquarters' profits differ significantly in terms of their means (see Welch's t-tests) or their distributions (see Wilcoxon rank-sum tests), the hypothesized mean difference \(d_{H}\) is set to \(\Pi_{HQ}^{baseline}/100\), which can be interpreted as 1% of the headquarters' profit achieved in the baseline scenario. Therefore, whenever the t-test result is statistically significant, fuzzy Q-learning agents perform at least 1% better in scenario with \(\Gamma_{j}\) than fuzzy Q-learning agents in the baseline scenario with \(\Gamma_{j}^{sb}\) given a standard significance level of 0.05. To better distinguish the test results graphically, the baseline performance indicator is colored as follows: If the p-value of the t-test (rank-sum test) is greater than 0.05, the cell turns yellow (blue). 
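The testing procedure described above can be sketched in a few lines; the sample data below are synthetic and purely illustrative, and the variable names are assumptions made for this sketch rather than code from the original study. Shifting the first sample by \(d_{H}\) tests whether it exceeds the baseline by at least \(d_{H}\).

```python
# Minimal sketch of the one-tailed Welch's t-test and Wilcoxon rank-sum test
# with a positive hypothesized mean difference d_H.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
profit_gamma = rng.normal(262.0, 11.0, size=10_000)       # HQ profit, scenario with Gamma_j (synthetic)
profit_baseline = rng.normal(258.5, 11.5, size=10_000)    # HQ profit, baseline scenario (synthetic)
d_H = profit_baseline.mean() / 100                        # 1% of the baseline profit

t_res = stats.ttest_ind(profit_gamma - d_H, profit_baseline,
                        equal_var=False, alternative="greater")   # Welch's t-test
u_res = stats.mannwhitneyu(profit_gamma - d_H, profit_baseline,
                           alternative="greater")                  # Wilcoxon rank-sum test
significant = (t_res.pvalue < 0.05) and (u_res.pvalue < 0.05)      # both tests reject at the 5% level
print(t_res.pvalue, u_res.pvalue, significant)
```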
If both tests are statistically significant, the cell is green; otherwise the cell is white. The simulation results on the performance of fuzzy Q-learning agents with \(\lambda_{1}=\lambda_{2}=0.5\) and \(\lambda_{1}=\lambda_{2}=0.222\) are presented in Fig. 5 and Fig. 6, respectively. According to subplot (a) in Fig. 5, non-myopic fuzzy Q-learning agents tend to generate higher profits for their headquarters compared to myopic fuzzy Q-learning agents. In cases where seller and buyer do not have equal bargaining power, i.e., \(\Gamma_{j}\neq 0.5\), \(j\in\{1,2\}\), the headquarters' profit decreases. In the extreme case of a non-symmetric \(\Gamma\)-surplus sharing rule, \(\Gamma_{j}=0.25\), \(j\in\{1,2\}\), non-myopic agents perform worse than myopic agents. The efficiency of fuzzy Q-learning agents can be seen in the bottom contour plots (c) and (d). According to subplot (c) in Fig. 5, the first-best performance indicator shows that the profit-effectiveness of the headquarters is relatively high, especially for non-myopic agents. The second-best performance indicator, on the other hand, points out that fuzzy Q-learning agents slightly outperform fully rational utility maximizers in all investigated scenarios. This results from the agents' learning phase, in which learning begins without prior knowledge of the consequences of the agents' actions (all Q-values are initialized to zero). The timelines in all modified boxplots show the same picture: agents start with a moderate level of investment, which slowly decreases to its new equilibrium level (sometimes faster and sometimes slower). While learning the Q-values, the agents obtain higher rewards as they make higher investments; on the other hand, if investments are too high, profits decrease. Also, if only one-sided investments are made, the agent's own profit is reduced while the other agents benefit from these investments. Overall, fuzzy Q-learning agents tend to invest more than the benchmark of the second-best solution indicates. Note that preliminary simulations reveal that, regardless of the starting level of investments, the Q-functions converge to their final values, which are slightly higher than the expected values resulting from fully rational utility maximizers. Finally, to test whether the headquarters' profit is significantly different for fuzzy Q-learning agents with \(\Gamma_{j}\) or \(\Gamma_{j}^{sb}\), the baseline performance indicator is calculated.

Figure 5: Results for symmetric marginal cost parameters with \(\sigma=0\) and the Boltzmann exploration policy: (a) absolute performance of the headquarters' profit resulting from the simulation, (b) relative performance change and statistical hypothesis testing of \(\Pi_{HQ}\) compared to the simulation output from the baseline scenario with \(\Gamma_{1}^{sb}=0.5\), (c) relative performance of \(\Pi_{HQ}\) compared to \(\Pi_{HQ}^{*}\), and (d) relative performance change of \(\Pi_{HQ}\) compared to \(\Pi_{HQ}^{sb}\).

Based on the baseline performance indicator (see subplot (b) in Fig. 5), it may be inferred that, for symmetric marginal cost parameters with \(\Gamma_{j}=0.5\), \(j\in\{1,2\}\), a deviation from the recommended optimal solution of the subgame perfect equilibrium leads to worse results. Therefore, the headquarters should follow the theory of subgame perfection and should give all divisions the same bargaining power, or, in other words, the headquarters should use a fifty-fifty surplus sharing rule.
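For concreteness, the following minimal Python sketch shows how the profit-based indicators and the one-tailed tests described above can be computed; the variable names, the use of SciPy, and the data handling are illustrative assumptions and do not reproduce the authors' implementation.

```python
import numpy as np
from scipy import stats

def performance_indicators(profit_sim, profit_fb, profit_sb, profit_baseline):
    """Relative indicators as defined in Scenario III (illustrative).

    profit_sim      : simulated headquarters' profits for a given Gamma_j (array)
    profit_fb       : first-best profit Pi_HQ^* (scalar)
    profit_sb       : second-best profit Pi_HQ^sb (scalar)
    profit_baseline : simulated profits from the baseline scenario Gamma_j^sb (array)
    """
    first_best = profit_sim.mean() / profit_fb                              # Pi_HQ / Pi_HQ^*
    second_best = (profit_sim.mean() - profit_sb) / profit_sb               # vs. Pi_HQ^sb
    baseline = (profit_sim.mean() - profit_baseline.mean()) / profit_baseline.mean()
    return first_best, second_best, baseline

def baseline_significance(profit_gamma, profit_baseline, alpha=0.05):
    """One-tailed Welch's t-test and Wilcoxon rank-sum (Mann-Whitney U) test with a
    positive hypothesized mean difference d_H = 1% of the baseline profit."""
    d_h = profit_baseline.mean() / 100.0
    # Shifting the first sample by d_H means that rejecting H0 supports
    # mean(profit_gamma) - mean(profit_baseline) > d_H.
    _, p_t = stats.ttest_ind(profit_gamma - d_h, profit_baseline,
                             equal_var=False, alternative="greater")
    _, p_u = stats.mannwhitneyu(profit_gamma - d_h, profit_baseline,
                                alternative="greater")
    return {"welch_p": p_t, "ranksum_p": p_u,
            "both_significant": p_t < alpha and p_u < alpha}
```

With this convention, a cell in subplot (b) would be colored green exactly when `both_significant` is true.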
The simulation results for \(\lambda_{1}=\lambda_{2}=0.222\) (low investment costs) show a similar picture. The absolute performance of the headquarters' profit rises with an increasing degree of the agents' foresight (cf. subplot (a) in Fig. 6). First- and second-best performance indicators indicate that fuzzy Q-learning agents generate relatively high profits, although, again, a high level of foresight favors the agents' investment decisions. According to subplot (b) in Fig. 6, the results indicate that, for agents with a low discount factor \(\gamma\) between 0 and 0.4, the headquarters should set \(\Gamma_{j}\), \(j\in\{1,2\}\), to \(\Gamma_{j}^{sb}\) for maximum profits. However, for agents with a high foresight greater than 0.4, the headquarters should empower the seller with greater bargaining power in order to achieve higher profits.

Figure 6: Results for symmetric marginal cost parameters with \(\sigma=0\) and the Boltzmann exploration policy: (a) absolute performance of the headquarters' profit resulting from the simulation, (b) relative performance change and statistical hypothesis testing of \(\Pi_{HQ}\) compared to the simulation output from the baseline scenario with \(\Gamma_{1}^{sb}=0.25\), (c) relative performance of \(\Pi_{HQ}\) compared to \(\Pi_{HQ}^{*}\), and (d) relative performance change of \(\Pi_{HQ}\) compared to \(\Pi_{HQ}^{sb}\).

A closer look reveals that, in 19 cases, Welch's t-test and the Wilcoxon rank-sum test show a significant difference between \(\Pi_{HQ}^{\Gamma_{j}}\) and \(\Pi_{HQ}^{baseline}\) when the seller obtains a bargaining power that is greater than the recommended optimal surplus sharing rule resulting from solving the subgame perfect equilibrium. In order to get a rule of thumb for determining the ideal surplus sharing rule, the weighted arithmetic mean of \(\Gamma_{j}\) may be used. For \(j\in\{1,2\}\), the weighted arithmetic mean of \(\Gamma_{j}\) is given by the double sum over all pairs of \(\Gamma_{j}\) and \(\gamma\) for which the baseline performance indicator \(BPI\) is significant: \(\sum_{\Gamma_{j}=0.25}^{0.55}\sum_{\gamma=0}^{0.9}\Gamma_{j}\cdot BPI[\Gamma_{j},\gamma]/\sum_{\Gamma_{j}=0.25}^{0.55}\sum_{\gamma=0}^{0.9}BPI[\Gamma_{j},\gamma]\). According to subplot (b) in Fig. 6, the weighted arithmetic mean of \(\Gamma_{j}\) is 0.386, which can be seen as a good choice for agents with a high degree of foresight between 0.5 and 0.9. A 1:2 surplus sharing rule for seller and buyer may therefore be a good choice for non-myopic agents when the headquarters cannot determine \(\Gamma_{j}\) analytically.

#### 5.1.4 Scenario IV

To verify whether the simulation results are robust in terms of volatility on the markets and the implementation of other exploration policies, an extensive sensitivity analysis is performed (see figures in Appendix B). To evaluate the robustness against stochastic fluctuations on the markets, the standard deviation of the state variables \(\sigma\) is varied from 0 to 10 in increments of 5. In addition to the Boltzmann exploration policy, the \(\epsilon\)-greedy and the upper confidence bound exploration policies, which are introduced in Sec. 3.3, are applied. In the case of symmetric marginal cost parameters, the learning behavior of fuzzy Q-learning agents seems to be robust against volatilities on the markets. Also, all three exploration policies show a similar picture, whereby fuzzy Q-learning agents using the \(\epsilon\)-greedy exploration policy perform worst, while agents applying the Boltzmann policy perform best.
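A minimal sketch of the weighted arithmetic mean of \(\Gamma_{j}\) defined in Scenario III above; the grid spacing follows the text, while the array layout and variable names are illustrative assumptions.

```python
import numpy as np

# Grid of surplus sharing parameters and discount factors as used in Scenario III.
gammas_j  = np.round(np.arange(0.25, 0.55 + 1e-9, 0.05), 2)   # Gamma_j from 0.25 to 0.55
discounts = np.round(np.arange(0.0, 0.9 + 1e-9, 0.1), 1)      # gamma from 0 to 0.9

def weighted_mean_gamma(bpi, significant):
    """Weighted arithmetic mean of Gamma_j over all (Gamma_j, gamma) cells in which
    the baseline performance indicator (BPI) is statistically significant.

    bpi         : array of shape (len(gammas_j), len(discounts)) with BPI values
    significant : boolean array of the same shape marking significant cells
    """
    weights = np.where(significant, bpi, 0.0)
    return (gammas_j[:, None] * weights).sum() / weights.sum()
```

Applied to the significant cells of subplot (b) in Fig. 6, this computation would yield the reported value of approximately 0.386.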
In scenarios where \(\lambda_{1}=\lambda_{2}=0.5\) (see Fig. 16), a symmetric \(\Gamma\)-surplus sharing rule leads to high profits; a deviation from it results in lower profits and should therefore not be pursued. More importantly, the simulation results for \(\lambda_{1}=\lambda_{2}<0.361\) (low investment costs) suggest that the seller should have a greater bargaining power in order to increase the headquarters' profit, especially when agents have a high foresight of future rewards. In such cases, the headquarters should allocate the distribution ratio of the bargaining power at a ratio of \(1:2\) for seller and buyer.

### Results for non-symmetric marginal cost parameters

Again, \(\lambda_{S}\) is set to one, the exploration policy is the Boltzmann exploration policy, and the underlying market environment is deterministic. In scenario V, the marginal cost parameter \(\lambda_{1}\) is systematically increased from 0.534 to 0.621 in small increments, while \(\lambda_{2}\) is decreased from 0.463 to 0.301 (cf. Tab. 2). Also the surplus sharing parameters \(\Gamma_{1}\) and \(\Gamma_{2}\) differ accordingly. Finally, scenario VI conducts an extensive sensitivity analysis in order to verify whether the results for non-symmetric marginal cost parameters are robust to changes in the volatility of the markets and in the exploration policy.

#### 5.2.1 Scenario V

Due to symmetry considerations, it is sufficient to vary \(\Gamma_{1}\) from 0.5 to 0.75 in order to study the agents' decision-making behavior in the case of non-symmetric marginal cost parameters (cf. Tab. 2). Note that, in scenarios I-IV, \(\Gamma_{1}=\Gamma_{2}\), while, in scenarios V-VI, \(\Gamma_{2}=1-\Gamma_{1}\). For \(\lambda_{1}=0.534\) and \(\lambda_{2}=0.463\) (small difference between investment costs), the simulation results on the performance of fuzzy Q-learning agents are summarized in Fig. 7. For \(\lambda_{1}=0.621\) and \(\lambda_{2}=0.301\) (large difference between investment costs), the simulation outputs are depicted in Fig. 8.

Figure 7: Results for non-symmetric marginal cost parameters with \(\sigma=0\) and the Boltzmann exploration policy: (a) absolute performance of the headquarters' profit resulting from the simulation, (b) relative performance change and statistical hypothesis testing of \(\Pi_{HQ}\) compared to the simulation output from the baseline scenario with \(\Gamma_{1}^{sb}=0.55\), (c) relative performance of \(\Pi_{HQ}\) compared to \(\Pi_{HQ}^{*}\), and (d) relative performance change of \(\Pi_{HQ}\) compared to \(\Pi_{HQ}^{sb}\).

If the marginal cost parameters differ only slightly (cf. Fig. 7), a deviation from the theory of subgame perfection leads to a lower investment level and, thus, to lower profits. As discussed in the symmetric case, the headquarters should set the surplus sharing parameters \(\Gamma_{1}\) and \(\Gamma_{2}\) according to Eq. 25 and Eq. 26, respectively, in order to maximize profits. Furthermore, the simulation results show that the higher the agents' level of foresight, the higher the generated profits. Moreover, the headquarters' profit as well as the divisions' investment decisions are almost normally distributed in all parameter constellations investigated. According to Fig. 8, a deviation from \(\Gamma_{1}^{sb}\) can actually lead to higher profits, especially if divisions have a high degree of foresight. The simulation results support the intuition that all divisions should receive approximately the same share of the contribution margin.
In particular, in situations in which the headquarters has only poor information, e.g., about the costs of manufacturing, the headquarters relies on simple methods for solving decision problems in transfer pricing. According to the weighted arithmetic mean of \(\Gamma_{j}\) (\(\Gamma_{1}=0.586\) and \(\Gamma_{2}=0.414\)), a \(3:2\) surplus sharing rule between the seller and the first buyer and a \(2:3\) distribution ratio of the bargaining power between the seller and the second buyer may be good choices for determining the surplus sharing parameters when the divisions' marginal cost parameters differ widely.

Figure 8: Results for non-symmetric marginal cost parameters with \(\sigma=0\) and the Boltzmann exploration policy: (a) absolute performance of the headquarters' profit resulting from the simulation, (b) relative performance change and statistical hypothesis testing of \(\Pi_{HQ}\) compared to the simulation output from the baseline scenario with \(\Gamma_{1}^{sb}=0.75\), (c) relative performance of \(\Pi_{HQ}\) compared to \(\Pi_{HQ}^{*}\), and (d) relative performance change of \(\Pi_{HQ}\) compared to \(\Pi_{HQ}^{sb}\).

#### 5.2.2 Scenario VI

As in scenario IV, an extensive sensitivity analysis is conducted to check whether the simulation results for non-symmetric marginal cost parameters are robust in terms of volatility on the markets and the implementation of other exploration policies. According to the figures in Appendix C, the findings appear to be robust against the turbulence of the market environment and different exploration policies. The simulation results of the baseline performance indicator in Fig. 28-33 suggest that, for non-myopic fuzzy Q-learning agents, there is an advantage in allocating equal bargaining power to all three divisions when marginal costs differ. Adjusting the distribution ratio of bargaining power in the direction of an equal level leads to higher investments and, thus, to a significant improvement in the agents' performance.

## 6 Summary and conclusive remarks

The starting point of the simulation study is the well-known negotiated transfer pricing model by Edlin and Reichelstein (1995). The authors extend the neoclassical model of Schmalenbach (1908/1909) and Williamson (1985) by assuming that each division can simultaneously make an upfront specific investment that enhances the value of internal trade. However, in negotiated transfer pricing models with specific investments (e.g., Eccles and White 1988; Edlin and Reichelstein 1995; Vaysman 1998), it is generally supposed that divisions operate like fully rational individual utility maximizers. In reality, such assumptions are quite demanding: how, for example, could a division expect that the other divisions will behave optimally at a later point in time, as required by the (Nash) equilibrium concept? For this purpose, an agent-based variant of negotiated transfer pricing with individual utility maximizers who are subject to limitations is examined. To deal with the divisions' bounded rationality, asymmetric information, and cognitive limitations, fuzzy Q-learning is applied. Moreover, divisions are not able to find the best investment decision instantaneously; instead, they explore their decision space stepwise in order to find good policies. In addition, the study widely relaxes the common knowledge assumption and also increases the complexity of the divisions' coordination problem by increasing the number of downstream divisions by one.
The paper makes two main contributions: on the one hand, the paper derives closed-form expressions for the subgame perfect equilibrium of the investment hold-up problem with one supplying division and two buying divisions and, on the other hand, the computer simulation provides some important insights into the dynamics of negotiated transfer pricing. According to the simulation outputs, fuzzy Q-learning agents perform at least as well as or better than fully rational individual utility maximizers. In addition, the simulation results show that, if the headquarters applies the concept of subgame perfect equilibrium and allocates the distribution ratio of the bargaining power according to the optimal (Nash) solutions, then fuzzy Q-learning agents generate profits that are at least in the range of the second-best solution or even higher (however, first-best solutions are generally not achieved). In cases where both buying divisions have marginal cost parameters that are only half as large as that of the supplying division, the headquarters is well advised to give all of its divisions the same bargaining power. But if the marginal cost parameters of both buying divisions are less than half, then the headquarters should assign the supplying division a greater bargaining power than the theory of subgame perfection recommends. In such a case, a \(1:2\) surplus sharing rule for seller and buyer may be a good choice for non-myopic divisions. This finding is especially important when the headquarters cannot optimally set the bargaining power due to a lack of information. On the other hand, if the marginal cost parameters of the buying divisions differ widely from each other, then \(2:3\) surplus sharing rules may favor the divisions' investment decisions and, thus, can lead to significantly higher profits. The findings also show that the performance of fuzzy Q-learning agents depends crucially on the level of foresight. According to an extensive sensitivity analysis, the simulation outputs appear to be robust against the turbulence of the market environment and different exploration policies. Future research could address the following points: (1) The simulation study investigated here focuses on fuzzy Q-learning. It would be interesting to know to what extent the simulation results change when other reinforcement learning approaches are applied instead. (2) The extension is not limited to two buying divisions. Besides, it is also interesting to examine other firm settings, e.g., a supply chain in which each division can improve the quality of the intermediate product by upfront investments. (3) Future research could also study negotiated transfer pricing under capacity constraints. This is especially important when the number of divisions increases. For example, Gavous (1999) and Wielenberg (2000) examine a two-divisional firm where the term capacity describes the level of production that is achievable with the resources currently available and that could be raised by investing in specific resources such as labor or machinery, or simply by overtime. Model extensions with several consecutive downstream divisions would certainly be interesting, especially how well cognitively bounded agents perform under these terms. (4) More research work could also consider changes in the timeline. For example, the assumption that the state variables are realized before the negotiation process takes place serves only to simplify the backward induction.
In reality, stochastic influences also take place after the negotiation phase and should therefore be taken into consideration.

## Appendix A Flow diagram of the agent-based simulation

Figure 9: Flow diagram of the agent-based simulation using the Boltzmann exploration policy. For improved readability, most for-loops are omitted.

**First-best solutions for non-symmetric scenarios V-VI**

| \(\Gamma_{1}\) | \(\Gamma_{2}\) | \(\lambda_{S}\) | \(\lambda_{1}\) | \(\lambda_{2}\) | \(I_{S}^{fb}\) | \(I_{1}^{fb}\) | \(I_{2}^{fb}\) | \(q_{1}^{fb}\) | \(q_{2}^{fb}\) | \(\Pi_{S}^{fb}\) | \(\Pi_{1}^{fb}\) | \(\Pi_{2}^{fb}\) | \(\Pi_{HQ}^{fb}\) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0.50 | 0.50 | 1 | 0.500 | 0.500 | 10 | 10 | 10 | 5 | 5 | 100 | 50 | 50 | 200 |
| 0.55 | 0.45 | 1 | 0.534 | 0.463 | 10.02 | 9.25 | 10.98 | 4.94 | 5.08 | 100.02 | 42.96 | 57.48 | 200.46 |
| 0.60 | 0.40 | 1 | 0.564 | 0.425 | 10.09 | 8.68 | 12.22 | 4.90 | 5.19 | 100.12 | 36.32 | 65.36 | 201.80 |
| 0.65 | 0.35 | 1 | 0.590 | 0.385 | 10.21 | 8.26 | 13.87 | 4.87 | 5.34 | 100.32 | 29.73 | 74.21 | 204.26 |
| 0.70 | 0.30 | 1 | 0.609 | 0.343 | 10.42 | 7.99 | 16.18 | 4.87 | 5.55 | 100.56 | 23.18 | 84.61 | 208.35 |
| 0.75 | 0.25 | 1 | 0.621 | 0.301 | 10.73 | 7.86 | 19.42 | 4.88 | 5.85 | 101.05 | 16.36 | 97.16 | 214.57 |

**Second-best solutions for the symmetric scenarios**

| \(\Gamma_{1}\) | \(\Gamma_{2}\) | \(\lambda_{S}\) | \(\lambda_{1}\) | \(\lambda_{2}\) | \(I_{S}^{sb}\) | \(I_{1}^{sb}\) | \(I_{2}^{sb}\) | \(q_{1}^{sb}\) | \(q_{2}^{sb}\) | \(\Pi_{S}^{sb}\) | \(\Pi_{1}^{sb}\) | \(\Pi_{2}^{sb}\) | \(\Pi_{HQ}^{sb}\) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0.50 | 0.50 | 1 | 0.500 | 0.500 | 4 | 4 | 4 | 4 | 4 | 88 | 44 | 44 | 176 |
| 0.45 | 0.45 | 1 | 0.424 | 0.424 | 3.67 | 5.29 | 5.29 | 4.08 | 4.08 | 83.14 | 49.02 | 49.02 | 181.18 |
| 0.40 | 0.40 | 1 | 0.361 | 0.361 | 3.35 | 6.97 | 6.97 | 4.19 | 4.19 | 78.78 | 54.55 | 54.55 | 187.89 |
| 0.35 | 0.35 | 1 | 0.307 | 0.307 | 3.04 | 9.23 | 9.23 | 4.36 | 4.36 | 74.92 | 61.01 | 61.01 | 196.93 |
| 0.30 | 0.30 | 1 | 0.262 | 0.262 | 2.75 | 12.24 | 12.24 | 4.58 | 4.58 | 71.85 | 68.56 | 68.56 | 208.97 |
| 0.25 | 0.25 | 1 | 0.222 | 0.222 | 2.46 | 16.65 | 16.65 | 4.93 | 4.93 | 69.67 | 78.46 | 78.46 | 226.59 |

## Appendix B Sensitivity analysis conducted according to scenario IV

Figure 12: Abs. performance of the headquarters' profit for \(\Gamma_{1}^{sb}=0.4\) of different exploration policies and standard deviations of state variables given \(\lambda_{S}=1\) and \(\lambda_{1}=\lambda_{2}=0.361\).

Figure 13: Abs. performance of the headquarters' profit for \(\Gamma_{1}^{sb}=0.35\) of different exploration policies and standard deviations of state variables given \(\lambda_{S}=1\) and \(\lambda_{1}=\lambda_{2}=0.307\).

Figure 14: Abs. performance of the headquarters' profit for \(\Gamma_{1}^{sb}=0.3\) of different exploration policies and standard deviations of state variables given \(\lambda_{S}=1\) and \(\lambda_{1}=\lambda_{2}=0.262\).

Figure 15: Abs. performance of the headquarters' profit for \(\Gamma_{1}^{sb}=0.25\) of different exploration policies and standard deviations of state variables given \(\lambda_{S}=1\) and \(\lambda_{1}=\lambda_{2}=0.222\).

Figure 16: Results of the baseline performance indicator for \(\Gamma_{j}^{sb}=0.5\) of different exploration policies and standard deviations of state variables given \(\lambda_{S}=1\) and \(\lambda_{1}=\lambda_{2}=0.424\).

Figure 17: Results of the baseline performance indicator for \(\Gamma_{j}^{sb}=0.45\) of different exploration policies and standard deviations of state variables given \(\lambda_{S}=1\) and \(\lambda_{1}=\lambda_{2}=0.424\).

Figure 18: Results of the baseline performance indicator for \(\Gamma_{j}^{sb}=0.4\) of different exploration policies and standard deviations of state variables given \(\lambda_{S}=1\) and \(\lambda_{1}=\lambda_{2}=0.361\).

Figure 19: Results of the baseline performance indicator for \(\Gamma_{j}^{sb}=0.35\) of different exploration policies and standard deviations of state variables given \(\lambda_{S}=1\) and \(\lambda_{1}=\lambda_{2}=0.307\).

Figure 20: Results of the baseline performance indicator for \(\Gamma_{j}^{sb}=0.3\) of different exploration policies and standard deviations of state variables given \(\lambda_{S}=1\) and \(\lambda_{1}=\lambda_{2}=0.262\).

Figure 21: Results of the baseline performance indicator for \(\Gamma_{j}^{sb}=0.25\) of different exploration policies and standard deviations of state variables given \(\lambda_{S}=1\) and \(\lambda_{1}=\lambda_{2}=0.222\).

## Appendix C Sensitivity analysis conducted according to scenario VI

Figure 24: Abs. performance of the headquarters' profit for \(\Gamma_{1}^{sb}=0.6\) of different exploration policies and standard deviations of state variables given \(\lambda_{S}=1\), \(\lambda_{1}=0.564\), and \(\lambda_{2}=0.425\).

Figure 25: Abs. performance of the headquarters' profit for \(\Gamma_{1}^{sb}=0.65\) of different exploration policies and standard deviations of state variables given \(\lambda_{S}=1\), \(\lambda_{1}=0.59\), and \(\lambda_{2}=0.385\).

Figure 26: Abs. performance of the headquarters' profit for \(\Gamma_{1}^{sb}=0.7\) of different exploration policies and standard deviations of state variables given \(\lambda_{S}=1\), \(\lambda_{1}=0.609\), and \(\lambda_{2}=0.343\).

Figure 27: Abs. performance of the headquarters' profit for \(\Gamma_{1}^{sb}=0.75\) of different exploration policies and standard deviations of state variables given \(\lambda_{S}=1\), \(\lambda_{1}=0.621\), and \(\lambda_{2}=0.301\).

Figure 28: Results of the baseline performance indicator for \(\Gamma_{1}^{sb}=0.5\) of different exploration policies and standard deviations of state variables given \(\lambda_{S}=1\), \(\lambda_{1}=0.5\), and \(\lambda_{2}=0.463\).

Figure 29: Results of the baseline performance indicator for \(\Gamma_{1}^{sb}=0.55\) of different exploration policies and standard deviations of state variables given \(\lambda_{S}=1\), \(\lambda_{1}=0.534\), and \(\lambda_{2}=0.463\).

Figure 30: Results of the baseline performance indicator for \(\Gamma_{1}^{sb}=0.6\) of different exploration policies and standard deviations of state variables given \(\lambda_{S}=1\), \(\lambda_{1}=0.564\), and \(\lambda_{2}=0.425\).

Figure 31: Results of the baseline performance indicator for \(\Gamma_{1}^{sb}=0.65\) of different exploration policies and standard deviations of state variables given \(\lambda_{S}=1\), \(\lambda_{1}=0.59\), and \(\lambda_{2}=0.385\).

Figure 32: Results of the baseline performance indicator for \(\Gamma_{1}^{sb}=0.7\) of different exploration policies and standard deviations of state variables given \(\lambda_{S}=1\), \(\lambda_{1}=0.609\), and \(\lambda_{2}=0.343\).

Figure 33: Results of the baseline performance indicator for \(\Gamma_{1}^{sb}=0.75\) of different exploration policies and standard deviations of state variables given \(\lambda_{S}=1\), \(\lambda_{1}=0.621\), and \(\lambda_{2}=0.301\).
2305.16932
A Neural State-Space Model Approach to Efficient Speech Separation
In this work, we introduce S4M, a new efficient speech separation framework based on neural state-space models (SSM). Motivated by linear time-invariant systems for sequence modeling, our SSM-based approach can efficiently model input signals into a format of linear ordinary differential equations (ODEs) for representation learning. To extend the SSM technique into speech separation tasks, we first decompose the input mixture into multi-scale representations with different resolutions. This mechanism enables S4M to learn globally coherent separation and reconstruction. The experimental results show that S4M performs comparably to other separation backbones in terms of SI-SDRi, while having a much lower model complexity with significantly fewer trainable parameters. In addition, our S4M-tiny model (1.8M parameters) even surpasses attention-based Sepformer (26.0M parameters) in noisy conditions with only 9.2% of the multiply-accumulate operations (MACs).
Chen Chen, Chao-Han Huck Yang, Kai Li, Yuchen Hu, Pin-Jui Ku, Eng Siong Chng
2023-05-26T13:47:11Z
http://arxiv.org/abs/2305.16932v1
# A Neural State-Space Model Approach to Efficient Speech Separation

###### Abstract

In this work, we introduce S4M, a new efficient speech separation framework based on neural state-space models (SSM). Motivated by linear time-invariant systems for sequence modeling, our SSM-based approach can efficiently model input signals into a format of linear ordinary differential equations (ODEs) for representation learning. To extend the SSM technique into speech separation tasks, we first decompose the input mixture into multi-scale representations with different resolutions. This mechanism enables S4M to learn globally coherent separation and reconstruction. The experimental results show that S4M performs comparably to other separation backbones in terms of SI-SDRi, while having a much lower model complexity with significantly fewer trainable parameters. In addition, our S4M-tiny model (1.8M parameters) even surpasses attention-based Sepformer (26.0M parameters) in noisy conditions with only 9.2% of the multiply-accumulate operations (MACs).

Chen Chen\({}^{1}\), Chao-Han Huck Yang\({}^{2}\), Kai Li\({}^{3}\), Yuchen Hu\({}^{1}\), Pin-Jui Ku\({}^{3}\), Eng Siong Chng\({}^{1}\) \({}^{1}\)Nanyang Technological University, Singapore \({}^{2}\)Georgia Institute of Technology, USA \({}^{3}\)Tsinghua University, China [email protected]

**Index Terms**: Speech separation, state-space model, ordinary differential equations

## 1 Introduction

Speech separation (SS) aims to separate target speech from overlapping speech signal sources [1], also known as the _cocktail party problem_. SS widely serves as a pre-processor for speech applications [2, 3], e.g., automatic speech recognition [4, 5] and speaker verification [6]. Recently, SS has gained remarkable progress driven by the power of deep learning [7, 8, 9], where the clean speech of individual speakers serves as ground truth to supervise the training of the neural network [10]. Developing an efficient SS architecture with low model complexity is challenging due to the high-dimensional input of speech signals, which contains tens of thousands of time steps per second and exhibits long-range behaviors at multiple timescales. In order to handle this challenge, previous deep learning-based attempts have tailored standard sequence modeling approaches like CNNs [11], RNNs [12, 13], and Transformers [14] to predict clean speech from a mixture. However, these works have different limitations. CNNs are constrained by the size of the receptive field, making it difficult to achieve global coherence [15]. RNNs lack computational efficiency because they cannot be parallelized during training. While Transformer-based [16] architectures achieve impressive performance on a public dataset, their vast network size (e.g., Sepformer with 26.0M parameters [14]) results in high computational costs for training and inference, hampering the application of the trained model in practical scenarios. To improve the efficiency for SS, we are inspired by the recent advances in neural state-space models (SSM) [17], which have shown outstanding performance in high-rate audio generation tasks [15]. The globally coherent generation of SSM is similar to the self-attention mechanism in Transformers, but significantly fewer trainable parameters are required in SSM. Consequently, we believe that SSM offers a solution to reduce the model complexity of SS, thus improving the separation efficiency for both training and inference.
In this paper, we introduce an efficient SS method called S4M (speech separation using state-space model), which follows the mainstream encoder-decoder pipeline. Specifically, the encoder in S4M extracts multiple features with varying resolutions from a flat input mixture, and then feeds them into S4 blocks to capture the representation with global long-range dependencies. Similarly, an S4 layer is also employed in the decoder for feature reconstruction. The main strengths of S4M are summarized as follows:

* S4M offers significant advantages over mainstream SS methods in terms of model complexity and computational cost.
* S4M effectively captures long-range dependencies for high-rate waveforms, which benefits separated feature reconstruction, especially in noisy conditions.

To demonstrate these strengths of S4M, we conducted experiments on the clean datasets WSJ0-2Mix and LibriMix, as well as the noisy dataset LRS2-Mix. The experimental results show that S4M achieves comparable performance with other competitive baselines in clean conditions, and achieves state-of-the-art performance on LRS2-Mix, which includes practical noise and reverberation in the mixture. Furthermore, we compared the model complexity of S4M with other models, and the results show that S4M has remarkable superiority in terms of computational cost and inference time, making it one potential solution for streaming-based speech separation [18].

## 2 Background: State-Space Models

Given the input mixture \(x\in\mathbb{R}^{1\times T}\), the goal of speech separation is to separate and predict clean speech \(y^{n}\in\mathbb{R}^{1\times T}\) for each speaker, where \(n\) is the number of speakers. CNNs and RNNs are the most widely used models for speech separation, each with its own advantages and limitations during training and inference. Specifically, a CNN layer computes a convolution with parameterized kernels \[K=(k_{0},\cdots,k_{w-1})\qquad y^{n}=K*x \tag{1}\] where \(w\) is the width of the kernel. The receptive field or context size of a CNN is determined by the sum of kernel widths across all layers. As the duration \(T\) is usually large for speech signals, this results in increased computational complexity. To address this, a variant of CNNs called dilated convolution (DCNN) is widely used in SS, where each kernel \(K\) is non-zero only at its endpoints [19]. On the other hand, RNNs sequentially compute a hidden state \(h_{t}\) from a previous history state \(h_{t-1}\) and the current input \(x\). The output \(y\) is modeled as: \[h_{t}=f(h_{t-1},x)\qquad y=g(h_{t}) \tag{2}\] \(f\) is also known as an RNN cell, such as the popular LSTM. The recently proposed deep neural state-space model (SSM) advances speech tasks by combining the properties of both CNNs and RNNs. The SSM [17] is defined in continuous time using the following equations: \[h^{\prime}(t)=Ah(t)+Bx(t) \tag{3}\] \[y(t)=Ch(t)+Dx(t) \tag{4}\] To operate on discrete-time sequences sampled with a step size of \(\Delta\), the SSM can be computed with a recurrence as follows: \[h_{k}=\overline{A}h_{k-1}+\overline{B}x_{k}\quad y_{k}=\overline{C}h_{k}+\overline{D}x_{k} \tag{5}\] \[\overline{A}=(I-\Delta/2\cdot A)^{-1}(I+\Delta/2\cdot A)\qquad\overline{B}=(I-\Delta/2\cdot A)^{-1}\Delta\cdot B \tag{6}\] where \(\overline{A},\overline{B},\overline{C},\overline{D}\) are the discretized state matrices (under this bilinear discretization, \(\overline{C}=C\) and \(\overline{D}=D\)).
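To make the discretization in Eq. (6) and the recurrence in Eq. (5) concrete, here is a minimal NumPy sketch of a single-input single-output SSM; the randomly chosen matrices and step size are illustrative assumptions (not the parameters S4M actually learns), and the last lines numerically check the equivalent convolutional view that is derived next.

```python
import numpy as np

rng = np.random.default_rng(0)
S, L, dt = 4, 16, 1.0            # state size, sequence length, step size (all illustrative)

# Continuous-time parameters of Eq. (3)-(4), drawn at random for the sketch.
A = -np.eye(S) + 0.1 * rng.standard_normal((S, S))
B = rng.standard_normal((S, 1))
C = rng.standard_normal((1, S))
D = np.zeros((1, 1))

# Bilinear discretization of Eq. (6).
I = np.eye(S)
A_bar = np.linalg.solve(I - dt / 2 * A, I + dt / 2 * A)
B_bar = np.linalg.solve(I - dt / 2 * A, dt * B)

# Recurrence of Eq. (5).
x = rng.standard_normal(L)
h = np.zeros((S, 1))
y_rec = np.zeros(L)
for k in range(L):
    h = A_bar @ h + B_bar * x[k]
    y_rec[k] = (C @ h + D * x[k]).item()

# Equivalent convolution: y = K_bar * x with K_bar[k] = C_bar A_bar^k B_bar (D = 0 here).
K_bar = np.array([(C @ np.linalg.matrix_power(A_bar, k) @ B_bar).item() for k in range(L)])
y_conv = np.convolve(x, K_bar)[:L]
assert np.allclose(y_rec, y_conv)
```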
According to [15], Eq. (5) can be rewritten as a discrete convolution: \[y_{k}=\overline{C}\,\overline{A}^{k}\overline{B}x_{0}+\overline{C}\,\overline{A}^{k-1}\overline{B}x_{1}+\cdots+\overline{C}\,\overline{B}x_{k} \tag{7}\] \[y=\overline{K}*x\qquad\overline{K}=(\overline{C}\,\overline{B},\;\overline{C}\,\overline{A}\,\overline{B},\;\ldots,\;\overline{C}\,\overline{A}^{L-1}\overline{B}) \tag{8}\] \(\overline{K}\) is the SSM convolution kernel. Eq. (8) is a single (non-circular) convolution and can be computed very efficiently with the Fast Fourier Transform, provided that \(\overline{K}\) is known. In order to calculate \(\overline{K}\), we employ a specific instantiation of SSM, known as the **S4 layer**[17], which parametrizes \(A\) as a diagonal plus low-rank (DPLR) matrix: \(A=\Lambda-pq^{*}\). This parameterization has three advantages: 1) Faster computation. The kernel \(\overline{K}\) in Eq. (8) can be computed very quickly in this setting. 2) Improved capture of long-range dependencies. This parameterization includes HiPPO matrices [20], which theoretically and empirically allow SSM to better capture global correspondence from the input. 3) Better stability. SSM involves the spectrum of the state matrix \(A\), which is more easily controlled since \(-pp^{*}\) is always a negative semi-definite matrix [15]. Given any time step \(\Delta\), the computation of the SSM convolution kernel \(\overline{K}\) requires \(\mathcal{O}(S+L)\) operations and \(\mathcal{O}(S+L)\) space, where \(S\) is the state size and \(L\) is the length of the input.

## 3 S4M: State-Space Speech Separation Model

The overview structure of S4M is shown in Fig. 1, where an encoder-decoder pipeline with S4 blocks is employed for speech separation. As a time-domain method, S4M first converts the input waveform \(x\in\mathbb{R}^{1\times T}\) to 2D features \(F_{0}\in\mathbb{R}^{C\times L}\) using a 1-D convolutional layer, where \(C\) and \(L\) represent the channel number and feature length, respectively.

### Encoder and S4 Block

Prior works [21, 22] have demonstrated the advantages of using multi-scale representations with different resolutions for speech tasks. Consequently, we stack three down-sampling encoders (red blocks in Fig. 1), each consisting of a 1-D dilated convolution layer followed by a global normalization layer. The dilation factor is set as 2 to gradually increase the receptive field. In this way, the length dimension \(L\) of the feature is squeezed layer by layer, shown as the grey chunks in Fig. 1. Subsequently, a set of representations \(F=\{F_{i}\in\mathbb{R}^{C\times\frac{L}{2^{i}}}\,|\,i=0,1,\cdots\}\) with the same channel number \(C\) but different lengths is extracted from the input, where the number of scales is set as 4 in this paper. To integrate the information from the multi-scale representations, we perform average pooling on the features from the shallower layers to reshape them, and then add them to obtain the feature \(F_{m}\in\mathbb{R}^{C\times\frac{L}{8}}\). To capture global correspondence from \(F_{m}\), a residual S4 block is employed (the blue box in Fig. 1). Specifically, it contains a normalization layer, an S4 layer with a GELU activation function [23], and a linear layer. We also use additional point-wise linear layers in the style of the feed-forward network in Transformers, along with a residual connection to avoid the vanishing gradient problem. Notably, the S4 block does not change the shape of the feature; therefore, a feature \(F^{\prime}_{m}\) with shape \(C\times\frac{L}{8}\) is obtained after the S4 block.
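The multi-scale encoder fusion described above can be sketched as follows in PyTorch. This is only an illustration: the paper does not specify exactly how the length reduction is implemented, so stride-2 dilated convolutions, GroupNorm as the global normalization, and the kernel size are assumptions made for the sketch.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiScaleEncoder(nn.Module):
    """Sketch of the down-sampling encoders and average-pooling fusion (illustrative)."""

    def __init__(self, channels=512, num_down=3):
        super().__init__()
        self.down = nn.ModuleList(
            nn.Sequential(
                nn.Conv1d(channels, channels, kernel_size=5, stride=2,
                          dilation=2, padding=4),      # halves the length dimension
                nn.GroupNorm(1, channels),              # stand-in for global normalization
            )
            for _ in range(num_down)
        )

    def forward(self, f0):                              # f0: (batch, C, L)
        feats = [f0]
        for layer in self.down:                         # F_1 ... F_3 with lengths L/2, L/4, L/8
            feats.append(layer(feats[-1]))
        target_len = feats[-1].shape[-1]
        # Average-pool the shallower features to the coarsest resolution and sum them.
        f_m = sum(F.adaptive_avg_pool1d(f, target_len) for f in feats)
        return feats, f_m                               # f_m is what feeds the residual S4 block

x = torch.randn(2, 512, 1600)
feats, f_m = MultiScaleEncoder()(x)
print(f_m.shape)                                        # (2, 512, 200)
```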
### Decoder

The decoder of S4M progressively reshapes the separated features to maintain symmetry with the encoder. As shown in Fig. 1, two decoder inputs \(F^{\prime}_{i-1}\) and \(F^{\prime}_{i}\) are obtained by the element-wise multiplication between \(F_{i-1}\) and \(F^{\prime}_{m}\), as well as between \(F_{i}\) and \(F^{\prime}_{m}\). This mask-based operation is commonly used in speech separation tasks. In addition, up-sampling by nearest-neighbour interpolation is required for \(F^{\prime}_{m}\) due to the shape mismatch.

Figure 1: The block diagram of the (A) S4M model, (B) S4 Block, and (C) Decoder. "\(\sigma\)" denotes the Sigmoid function. The grey chunks denote the hidden features after each layer.

Given \(F^{\prime}_{i-1}\) and \(F^{\prime}_{i}\), the decoder first employs a light local attention mechanism [22] using adaptive parameters \(\rho\) and \(\tau\), which are respectively denoted as: \[\tau=f_{2}(\phi(F^{\prime}_{i}))\ \ \ \ \ \rho=\sigma(f_{1}(\phi(F^{\prime}_{i}))) \tag{9}\] where \(f_{1}\) and \(f_{2}\) are two 1-D convolutional layers followed by a normalization layer, \(\phi\) denotes the nearest-neighbor interpolation along the time dimension \(L\) for up-sampling (\(C\times\frac{L}{2^{i}}\to C\times\frac{L}{2^{i-1}}\)), and \(\sigma\) denotes the Sigmoid function. As \(F^{\prime}_{i-1}\), \(\rho\), and \(\tau\) have the same shape, the local attention process is formulated by: \[F^{\prime}_{i-1}=\rho\odot F^{\prime}_{i-1}+\tau \tag{10}\] Then the same S4 block is employed for globally coherent generation after local attention. As shown in Fig. 1, the output of the decoder is recursively multiplied by the corresponding encoder output to get \(F^{\prime}_{i-2}\), which is then fed into the next decoder layer until the output shape is restored to \(C\times L\). We adopt the unfolding scheme for the network as proposed in A-FRCNN [24]. Concretely, the structure shown in Fig. 1 (A) is repeated B times (with weight sharing), such that the output of each previous repetition is also added to the input of the current one.

### Training objective

The objective of training the end-to-end S4M is to maximize the scale-invariant source-to-noise ratio (SI-SNR), which is commonly used as the evaluation metric for source separation. The SI-SNR loss is defined as: \[\mathcal{L}_{si-snr}=-\sum_{n=1}^{N}10\log_{10}\left(\frac{\left\|\frac{\hat{y}_{n}^{T}y_{n}}{\|y_{n}\|^{2}}y_{n}\right\|^{2}}{\left\|\frac{\hat{y}_{n}^{T}y_{n}}{\|y_{n}\|^{2}}y_{n}-\hat{y}_{n}\right\|^{2}}\right) \tag{11}\] where \(y_{n}\) is the ground-truth signal for speaker \(n\), and \(\hat{y}_{n}\) is the estimated time-domain speech produced by S4M. Furthermore, utterance-level permutation invariant training (uPIT) is applied during training to address the source permutation problem [25].

## 4 Experiment

### Database

We evaluate S4M and other competitive methods on both clean and noisy datasets, including WSJ0-2Mix [36], LibriMix [37] and LRS2-Mix [22]. To ensure generality, the mixtures in the test sets are generated from speakers that are not seen during training. **WSJ0-2Mix** is the most common speech separation dataset, derived from the Wall Street Journal (WSJ0) corpus. It consists of a 30-hour training set (20k utterances), an 8-hour validation set (5k utterances), and a 5-hour test set (3k utterances). All utterances are re-sampled to 8 kHz for comparison with other works. **LibriMix**. Considering the limited data amount of WSJ0-2Mix, we further employ the LibriMix dataset to evaluate performance in clean conditions.
The target speech in LibriMix is randomly drawn from the train-100 subset of the LibriSpeech dataset with an 8 kHz sampling rate. Each mixture uniformly samples Loudness Units relative to Full Scale (LUFS) between -25 and -33 dB. The training set contains 13.9k utterances with a duration of 58 hours, while the validation set and test set both contain 3k utterances with a duration of 11 hours. **LRS2-Mix**. The source of LRS2-Mix is the LRS2 dataset [38], which includes thousands of video clips from the BBC. It contains practical noise and reverberation interference, which is closer to reality. We randomly select utterances of 16 kHz from different scenes and mix them with signal-to-noise ratios sampled between -5 dB and 5 dB. In practice, we utilize the same mixing script as WSJ0-2Mix, in which the training set, validation set and test set contain 20k, 5k and 3k utterances, respectively.

Table 1: SI-SDRi and SDRi results on WSJ0-2Mix. "# Para." denotes the number of trainable parameters for each model. Best results are in bold.

| Model | SI-SDRi (dB) | SDRi (dB) | # Para. (M) |
|---|---|---|---|
| ADANet [26] | 9.1 | 10.4 | 9.1 |
| WA-MISI-5 [27] | 12.6 | 13.1 | 32.9 |
| SPN [28] | 15.3 | 15.6 | 56.6 |
| Conv-TasNet [11] | 15.3 | 15.6 | 5.1 |
| Deep CASA [29] | 17.7 | 18.0 | 12.8 |
| FurcaNeXt [30] | - | 18.4 | 51.4 |
| TDANet [22] | 18.6 | 18.9 | 2.3 |
| DPRNN [12] | 18.8 | 19.0 | 2.6 |
| SUDO RM-RF [31] | 18.9 | - | 6.4 |
| Gated DPRNN [32] | 20.1 | 20.4 | 7.5 |
| Sepformer [14] | 20.4 | 20.5 | 26.0 |
| Wavesplit [33] | 21.0 | 21.2 | 29.0 |
| SFSRNet [34] | 22.0 | 22.1 | 59.0 |
| TF-GridNet [35] | **23.4** | **23.5** | 14.4 |
| S4M-tiny | 19.4 | 19.7 | **1.8** |
| S4M | 20.5 | 20.7 | 3.6 |

Table 2: SI-SDRi and SDRi results on LibriMix and LRS2-Mix.

| Model | LibriMix SI-SDRi | LibriMix SDRi | LRS2-Mix SI-SNRi | LRS2-Mix SDRi | # Para. (M) |
|---|---|---|---|---|---|
| BLSTM-TasNet | 7.9 | 8.7 | 6.1 | 6.8 | 23.6 |
| Conv-TasNet | 12.2 | 12.7 | 10.6 | 11.0 | 5.6 |
| DPRNN | 16.1 | 16.6 | 12.7 | 13.0 | 2.7 |
| SuDoRM-RF | 14.0 | 14.4 | 11.3 | 11.7 | 6.4 |
| Sepformer | 16.5 | 17.0 | 13.5 | 13.8 | 26.0 |
| WaveSplit | 16.6 | 17.2 | 13.1 | 13.4 | 29.0 |
| A-FRCNN | 16.7 | 17.2 | 13.0 | 13.3 | 6.1 |
| TDANet | **17.4** | **17.9** | 14.2 | 14.5 | 2.3 |
| S4M-tiny | 16.2 | 16.6 | 14.2 | 14.5 | **1.8** |
| S4M | 16.9 | 17.4 | **15.3** | **15.5** | 3.6 |

### S4M Setup

The kernel size of the convolutional layer that processes the time-domain signal is set as 4 ms and the stride size is set as 1 ms. The number of channels in the dilated convolution layers of the encoder and the number of hidden units in all linear layers are both set as 512. For the S4 layer, we found that the model performs best when the number of channels is set as 16. Furthermore, we develop a lighter version of S4M called _S4M-tiny_, which removes the S4 layers (dark blue block in Fig. 1-C) in the decoder. It is worth noting that S4M-tiny contains only 1.8M trainable parameters. Both S4M and S4M-tiny are trained for 200 epochs with a learning rate of 0.001. An early-stopping strategy is adopted when the validation loss does not decrease for 5 epochs. To avoid gradient explosion, we apply gradient clipping with a maximum L2 norm of 5 during training.
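To make the training objective of Eq. (11) and the uPIT criterion concrete, here is a minimal PyTorch sketch of the negative SI-SNR loss with a brute-force permutation search; it adds the common zero-mean convention and is an illustrative re-implementation, not the authors' code.

```python
import itertools
import torch

def si_snr(est, ref, eps=1e-8):
    """Scale-invariant SNR in dB for signals of shape (batch, T), zero-meaned first."""
    est = est - est.mean(dim=-1, keepdim=True)
    ref = ref - ref.mean(dim=-1, keepdim=True)
    proj = (est * ref).sum(-1, keepdim=True) / (ref.pow(2).sum(-1, keepdim=True) + eps) * ref
    noise = proj - est
    return 10 * torch.log10(proj.pow(2).sum(-1) / (noise.pow(2).sum(-1) + eps) + eps)

def upit_si_snr_loss(est, ref):
    """est, ref: (batch, n_spk, T). Negative SI-SNR under the best speaker permutation
    (utterance-level PIT), averaged over the batch."""
    n_spk = est.shape[1]
    losses = []
    for perm in itertools.permutations(range(n_spk)):
        snr = torch.stack([si_snr(est[:, p], ref[:, s]) for s, p in enumerate(perm)], dim=1)
        losses.append(-snr.mean(dim=1))                      # (batch,) loss for this permutation
    return torch.stack(losses, dim=1).min(dim=1).values.mean()

est = torch.randn(4, 2, 8000)      # e.g. two estimated sources, 1 s at 8 kHz
ref = torch.randn(4, 2, 8000)
print(upit_si_snr_loss(est, ref))
```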
### Evaluation Metric We asses the clarity of separated audios based on scale-invariant signal-to-distortion ratio improvement (SI-SDRi) and signal-to-distortion ratio improvement (SDRi). To evaluate model efficiency, we measure the processing time consumption per second for all models, indicated by real-time factor (RTF) in the tables. RTF is calculated by processing ten audio tracks of 1 second in length and 16 kHz in sample rate on CPU and GPU (total processing time / 10), represented as "CPU-RTF" and "GPU-RTF" respectively. The numbers are then averaged after running 1000 times. Also, we use the parameter size and the number of multiply-accumulate operations (MACs) to measure the model size. MACs are calculated using the open-source tool PyTorch-OpCounter4 under the MIT license. For both SI-SDRi and SDRi, higher score indicates better quality of separated signal. For all efficiency metrics, lower value means lower complexity of model. ## 5 Result and Analysis ### Main results We report our main results on the test set of WSJ0-2Mix in Table 1, as well as LibriMix and LRS3-Mix in Table 2. On WSJ0-2Mix dataset, S4M surpasses CNN-based Conv-TasNet and RNN-based DPRNN, and achieves comparable SI-SDRi performance with Transformer-based Sepformer, with far lower model complexity. In addition, S4M-tiny achieves 19.4 SI-SDRi performance with only 1.8M parameters, which demonstrates the efficiency of state-space model. For LibriMix dataset, we observe that S4M surpasses the Sepformer by 2.4% on SI-SDRi performance when training data doubles (from 30 hours to 58 hours). We notice that S4M performs particularly well on LRS2-Mix which contains background noise and reverberation in the mixture. S4M-tiny even surpasses Sepformer by 13.3% (13.5 dB \(\rightarrow\) 15.3 dB) in terms of SI-SDRi, with only 6.9% parameters. In addition, S4M achieves the best performance on LRS2-Mix in terms of both SI-SDRi and SDRi. This phenomenon indicates that S4M is effective to capture long-range dependencies for specific speaker, resulting in better noise-robustness in a more realistic environment. ### Ablation Study on S4 We conduct ablation study on S4 module which serves as our main contribution. The results are summarized in Table 3. "Mid." denotes whether S4 block exists between the encoder and decoder (Fig. 1-B), and "\(S\)" represents the dimension of the state in S4 block. "Dec." denotes whether S4 layer is inserted in the decoder after local attention (Fig. 1-C), where the dimension of state is uniformly set as 16. We observe that: 1) System 2 outperforms System 1 by a significant margin, highlighting the importance of S4 block which captures global correspondence from the multi-scale representations produced by the encoder and benefits subsequent separation. 2) The system achieves the best performance when the dimension of the state is set as 16. Increasing the value of "\(S\)" leads to an increase in the number of parameters and a degradation in speech separation performance. 3) S4 layer is also effective for feature reconstruction in the decoder, but it inevitably increases the number of parameters. ### Analysis of model complexity We analyze the model complexity of S4M, which also indicates the separation efficiency. Using RTF and MACs as metrics, we report the performance of S4M and its comparison with other models in Table 4. 
GPU-RTF-\(f\) and GPU-RTF-\(b\) indicate the training time for each model on GPU devices, while CPU-RTF-\(f\) denotes the inference speed on CPU devices when GPU resources is unavailable in some practical conditions. In addition, the "TDANet (own)" is reproduced without accelerated Transformer by Pytorch. Table 4 shows that S4M-tiny consistently requires the least training and inference time on both GPU and CPU devices. Moreover, S4M-tiny can achieve better performance than Sepformer on LRS2-Mix dataset, with only 9.2% of MACs of Sepformer. For S4M, its model complexity is 4.8 times higher than S4M-tiny, but still significantly lower than Sepformer. ## 6 Conclusion In this paper, we propose a efficient speech separation method (S4M) that achieves competitive performance while maintaining low model complexity. S4M utilizes state-space model to capture long-range dependencies from multi-scale representations, and integrates it into separated feature reconstruction. Experimental results show that S4M achieves comparable separation performance with significantly fewer trainable parameters in comparison with other mainstream methods. Furthermore, we analyze the model complexity using computing time and MACs, which shows that S4M provides a potential solution for streaming-based speech separation on mobile devices or streaming applications [39]. \begin{table} \begin{tabular}{c|c c c c} \hline \hline Model & \begin{tabular}{c} GPU-RTF-\(f\) \\ (ms) \\ \end{tabular} & \begin{tabular}{c} GPU-RTF-\(b\) \\ (ms) \\ \end{tabular} & \begin{tabular}{c} CPU-RTF-\(f\) \\ (s) \\ \end{tabular} & \begin{tabular}{c} MACs \\ (G/s) \\ \end{tabular} \\ \hline BLSTM-TasNet & 233.85 & 654.14 & 5.90 & 43.0 \\ SuDRM-RF & 64.70 & 228.57 & 1.73 & 10.1 \\ DPRNN & 88.79 & 241.54 & 8.13 & 85.3 \\ A-FRCNN & 61.16 & 183.65 & 5.32 & 125.3 \\ TDANet & 23.77 & 97.92 & 1.78 & 9.1 \\ TDANet (own) & 61.25 & 368.54 & 5.97 & 9.1 \\ Sepformer & 65.61 & 184.91 & 7.55 & 86.9 \\ TF-GridNet & 100.52 & 285.37 & 86.4 & 128.7 \\ \hline S4M-tiny & **18.19** & **73.62** & **1.34** & **8.0** \\ S4M & 40.15 & 132.83 & 2.57 & 38.7 \\ \hline \hline \end{tabular} \end{table} Table 4: Comparison of inference time and MACs on LRS2-Mix dataset. “\(f\)” and “\(b\)” respectively stand for “feed-forward” and “backward” processes. For all metrics, lower is better. \begin{table} \begin{tabular}{c|c c|c|c c|c} \hline \hline ID & Mid. & \(S\) & Dec. & SI-SDRi & SDRi & \# Para. \\ \hline 1 & ✗ & - & ✗ & 10.5 & 10.9 & 0.23 \\ 2 & ✓ & 8 & ✗ & 13.9 & 14.3 & 1.82 \\ 3 & ✓ & 16 & ✗ & 14.2 & 14.5 & 1.84 \\ 4 & ✓ & 32 & ✗ & 14.0 & 14.4 & 1.88 \\ 5 & ✓ & 16 & ✓ & 15.3 & 15.5 & 3.59 \\ \hline \hline \end{tabular} \end{table} Table 3: Ablation study of S4 on LRS2-Mix dataset. “✓” denotes S4 layer exist in corresponding module. “✗” indicates the opposite.
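As a rough illustration of how the efficiency numbers in Table 4 can be obtained, the sketch below measures a forward real-time factor on CPU and counts MACs with the PyTorch-OpCounter (thop) package; the dummy model, audio length, and number of repetitions are placeholders, and timings will of course differ from the hardware used in the paper.

```python
import time
import torch
import torch.nn as nn
from thop import profile        # PyTorch-OpCounter (pip install thop)

model = nn.Sequential(           # stand-in for a separation model
    nn.Conv1d(1, 64, kernel_size=16, stride=8), nn.ReLU(),
    nn.Conv1d(64, 2, kernel_size=16, stride=8),
).eval()

audio = torch.randn(10, 1, 16000)        # ten 1-second tracks at 16 kHz

# Forward real-time factor on CPU: total processing time divided by the 10 tracks.
with torch.no_grad():
    runs = []
    for _ in range(100):                 # the paper averages over 1000 runs
        start = time.perf_counter()
        for track in audio:
            model(track.unsqueeze(0))
        runs.append((time.perf_counter() - start) / 10)
cpu_rtf = sum(runs) / len(runs)

# Multiply-accumulate operations for one second of audio.
macs, params = profile(model, inputs=(audio[:1],), verbose=False)
print(f"CPU-RTF-f: {cpu_rtf:.4f} s, MACs: {macs / 1e9:.2f} G/s, params: {params / 1e6:.2f} M")
```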
2301.05507
Correlation-Based And-Operations Can Be Copulas: A Proof
In many practical situations, we know the probabilities $a$ and $b$ of two events $A$ and $B$, and we want to estimate the joint probability ${\rm Prob}(A\,\&\,B)$. The algorithm that estimates the joint probability based on the known values $a$ and $b$ is called an and-operation. An important case when such a reconstruction is possible is when we know the correlation between $A$ and $B$; we call the resulting and-operation correlation-based. On the other hand, in statistics, there is a widely used class of and-operations known as copulas. Empirical evidence seems to indicate that the correlation-based and-operation derived in https://doi.org/10.1007/978-3-031-08971-8_64 is a copula, but until now, no proof of this statement was available. In this paper, we provide such a proof.
Enrique Miralles-Dolz, Ander Gray, Edoardo Patelli, Scott Ferson, Vladik Kreinovich, Olga Kosheleva
2023-01-13T12:17:40Z
http://arxiv.org/abs/2301.05507v1
# Correlation-Based And-Operations Can Be Copulas: A Proof ###### Abstract In many practical situations, we know the probabilities \(a\) and \(b\) of two events \(A\) and \(B\), and we want to estimate the joint probability \(\operatorname{Prob}(A\,\&\,B)\). The algorithm that estimates the joint probability based on the known values \(a\) and \(b\) is called an and-operation. An important case when such a reconstruction is possible is when we know the correlation between \(A\) and \(B\); we call the resulting and-operation correlation-based. On the other hand, in statistics, there is a widely used class of and-operations known as copulas. Empirical evidence seems to indicate that the correlation-based and-operation derived in [4] is a copula, but until now, no proof of this statement was available. In this paper, we provide such a proof. ## 1 Formulation of the problem **Correlation-based "and"-operation.** In many practical situations, we know the probabilities \(a\) and \(b\) of two events \(A\) and \(B\), and we need to estimate the joint probability \(\operatorname{Prob}(A\,\&\,B)\). An algorithm \(f_{\&}(a,b)\) that transforms the known values \(a\) and \(b\) into such an estimate is usually called an _and-operation_. One important case when such an estimate is possible is when, in addition to the probabilities \(a\) and \(b\), we also know the correlation \(\rho\) between the corresponding two random events. It is known (see, e.g., [3, 4]) that in this case, we can uniquely determine the probability of \(\operatorname{Prob}(A\,\&\,B)\) as \[a\cdot b+\rho\cdot\sqrt{a\cdot(1-a)\cdot b\cdot(1-b)}. \tag{1}\] While this formula is true whenever the correlation is known, this formula does not lead to an everywhere defined and-operation. For example, for \(a=b=0.1\) and \(\rho=-1\), this formula leads to a meaningless negative probability \[0.1\cdot 0.1+(-1)\cdot\sqrt{0.1\cdot 0.9\cdot 0.1\cdot 0.9}=0.01-0.09=-0.08<0.\] To avoid such meaningless estimates, we need to take into account that the joint probability \(\operatorname{Prob}(A\,\&\,B)\) must satisfy Frechet inequalities (see, e.g., [2]): \[\max(a+b-1,0)\leq\operatorname{Prob}(A\,\&\,B)\leq\min(a,b). \tag{2}\] So, if an expert claims to know the correlation \(\rho\) and the estimate for \(\operatorname{Prob}(A\,\&\,B)\) based on this value \(\rho\) is smaller than the lower bound \(\max(a+b-1,0)\) - which cannot be - a reasonable idea is to take the closest possible value of the joint probability, i.e., the value \(\max(a+b-1,0)\). Similarly, if the estimate for \(\operatorname{Prob}(A\,\&\,B)\) based on the expert-provided value \(\rho\) is larger than the upper bound \(\min(a,b)\) - which also cannot be - a reasonable idea is to take the closest possible value of the joint probability, i.e., the value \(\min(a,b)\). Thus, we arrive at the following and-operation - which we will call _correlation-based and-operation_: \[f_{\rho}(a,b)=T_{a,b}\left(a\cdot b+\rho\cdot\sqrt{a\cdot(1-a)\cdot b\cdot(1- b)}\right), \tag{3}\] where \[T_{a,b}(c)=\max(a+b-1,0)\text{ if }c<\max(a+b-1,0);\] \[T_{a,b}(c)=c\text{ if }\max(a+b-1,0)\leq c\leq\min(a,b);\text{ and} \tag{4}\] \[T_{a,b}(c)=\min(a,b)\text{ if }\min(a,b)<c.\] **Question: is this and-operation a copula?** In probability theory, there is a known class of and-operations known as _copulas_ (see, e.g., [5, 6]). 
These are functions \(C(a,b)\) for which, for some random 2-D vector \((X,Y)\), the joint cumulative distribution function \(F_{XY}(x,y)\stackrel{{\text{def}}}{{=}}\operatorname{Prob}(X\leq x \,\&\,Y\leq y)\) has the form \(F_{XY}(x,y)=C(F_{X}(x),F_{Y}(y))\), where \(F_{X}(x)\stackrel{{\text{def}}}{{=}}\operatorname{Prob}(X\leq x)\) and \(F_{Y}(y)\stackrel{{\text{def}}}{{=}}\operatorname{Prob}(Y\leq y)\) are known as _marginals_. One important aspect of (3)-(4) is that these formulas can be expressed as a copula (2-copula) family as described in [4], allowing us to operate not only with precise probabilities, but also with interval probabilities and probability boxes. A 2-copula must satisfy the following properties: 1. Grounded: \(C(0,b)=C(a,0)=0\) 2. Uniform margins: \(C(a,1)=a;C(1,b)=b\) 3. 2-increasing: \(C(\overline{a},\overline{b})+C(\underline{a},\underline{b})-C(\overline{a}, \underline{b})-C(\underline{a},\overline{b})\geq 0\) for all \(\underline{a}<\overline{a}\) and \(\underline{b}<\overline{b}\) It is easy to see that (3)-(4) satisfies the two first properties. In [4] the third property was checked for a dense set of tuples \((\underline{a},\overline{a},\underline{b},\overline{b},\rho)\), and for all these tuples, the inequality was satisfied. However, at that moment, we could not prove that the correlation-based and-operation is indeed a 2-copula. In this paper we provide the missing proof. ## 2 Main result **Proposition**.: _For every \(\rho\in[-1,1]\), the correlation and-operation \(f_{\rho}(a,b)\) described by the formulas (3)-(4) is a copula._ **Proof**.: \(1^{\circ}\). It is known that the desired inequality has the following property - if we represent a box \([\underline{a},\overline{a}]\times[\underline{b},\overline{b}]\) as a union of several sub-boxes, then the left-hand side of the desired inequality is equal to the sum of the left-hand sides corresponding to sub-boxes. Indeed, as one can easily check, there is the following _additivity_ property: for each box consisting of several sub-boxes, the left-hand side of the inequality (4a) that corresponds to the larger box is equal to the sum of expressions (4a) corresponding to sub-boxes. Thus, if the expressions corresponding to sub-boxes are non-negative, then the expression (4a) corresponding to the larger box is also non-negative. In general, the and-operation described by the formula (4) has three different expressions. So, to prove that the expression (4a) corresponding to this expression is also non-negative, we need to consider cases when at different vertices of the box, we may have different expressions. Good news is that every box whose vertices are described by different expressions can be represented as the union of sub-boxes in which: * either all vertices are described by the same expression * or two vertices are on the boundary between the areas of different expressions. This is easy to see visually: the following box, in which the slanted line represents the boundary between the areas can be represented as the union of sub-boxes with the desired property: Thus, to prove that our and-operation is a copula, it is sufficient to consider only boxes of the following type: * boxes for which all four vertices belong to the same area, and * boxes for which two vertices belong to the boundary between two areas. The functions \(\max(a+b-1,0)\) and \(\min(a,b)\) are known to be copulas, so if all four vertices belong to one of these areas, then the desired inequality (4a) is satisfied. 
So, it is sufficient to consider: * boxes for which all four vertices belong to the new area, in which the and-operation is described by the expression (1); we will consider such boxes in Parts 2-4 of this proof, and * boxes for which two vertices belong to the boundary between two areas; these boxes will be considered in the following Parts of the proof. \(2^{\circ}\). Let us start by considering boxes for which all four vertices belongs to the area in which the and-operation is described by the formula (1). It is known [1] - and it is easy to prove by considering infinitesimal differences \(\overline{x}-\underline{x}\) and \(\overline{y}-\underline{y}\) - that for smooth functions, the desired inequality is equivalent to the fact that the partial derivative \[\frac{\partial C}{\partial a}\] is non-decreasing in \(b\), i.e., equivalently, that the mixed derivative is non-negative: \[d\stackrel{{\rm def}}{{=}}\frac{\partial^{2}C}{\partial a\, \partial b}\geq 0.\] Thus, to prove that \(f_{\rho}(a,b)\) is a copula, it is sufficient to prove that its mixed derivative is non-negative everywhere where the new formula is applied. Indeed, at the points where the formula (1) is applied, the derivative of \(f_{\rho}(a,b)\) with respect to \(a\) has the has the form \[\frac{\partial f_{\rho}}{\partial a}=b+\rho\cdot\frac{1-2\cdot a}{2\cdot \sqrt{a\cdot(1-a)}}\cdot\sqrt{b\cdot(1-b)}, \tag{4b}\] and thus, the mixed derivative has the following form: \[d=\frac{\partial}{\partial b}\left(\frac{\partial f_{\rho}}{\partial a}\right) =1+\rho\cdot\frac{(1-2\cdot a)\cdot(1-2\cdot b)}{4\cdot\sqrt{a\cdot(1-a)\cdot b \cdot(1-b)}}. \tag{5}\] Since the expression (1) does not change if we swap \(a\) and \(b\), it is sufficient to consider the case when \(a\leq b\). When \(\rho=0\), we get a known copula \(f_{0}(a,b)=a\cdot b\). So, it is sufficient to consider cases when \(\rho\neq 0\). This can happen when \(\rho>0\) and when \(\rho<0\). Let us consider these cases one by one. \(3^{\circ}\). Let us first consider the case when \(\rho>0\). In this case, since \(a\leq b\), we have \(\min(a,b)=a\) and thus, the condition (4) takes the form \[a\cdot b+\rho\cdot\sqrt{a\cdot(1-a)\cdot b\cdot(1-b)}\leq a, \tag{6}\] i.e., equivalently, \[\rho\cdot\sqrt{a\cdot(1-a)\cdot b\cdot(1-b)}\leq a-a\cdot b=a\cdot(1-b) \tag{7}\] and thus, \[\rho\leq\frac{a\cdot(1-b)}{\sqrt{a\cdot(1-a)\cdot b\cdot(1-b)}}=\frac{\sqrt{ a\cdot(1-b)}}{\sqrt{(1-a)\cdot b}}. \tag{8}\] For all such \(\rho\), we need to prove that the expression (5) is non-negative. When both \(a\) and \(b\) are larger than \(0.5\) or both are smaller than \(0.5\), the differences \(1-2a\) and \(1-2b\) have the same sign and thus, their product is non-negative and the expression (5) is non-negative. So, the only case when we need to check that \(d\geq 0\) is when one of the two values \(a\) and \(b\) is smaller than \(0.5\) and another one is larger than \(0.5\). Since \(a\leq 0.5\), this means that \(a<0.5<b\). In this case, the condition \(d\geq 0\) takes the form \[1-\rho\cdot\frac{(1-2\cdot a)\cdot(2\cdot b-1)}{4\cdot\sqrt{a\cdot(1-a)\cdot b \cdot(1-b)}}\geq 0, \tag{9}\] i.e., equivalently, \[\rho\cdot\frac{(1-2\cdot a)\cdot(2\cdot b-1)}{4\cdot\sqrt{a\cdot(1-a)\cdot b \cdot(1-b)}}\leq 1, \tag{10}\] and \[\rho\leq\frac{4\cdot\sqrt{a\cdot(1-a)\cdot b\cdot(1-b)}}{(1-2\cdot a)\cdot(2 \cdot b-1)}. \tag{11}\] So, to prove that we always have \(d\geq 0\), we need to prove that every \(\rho\) that satisfies the inequality (8) also satisfies the inequality (11). 
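Before giving the algebraic argument, this reduction can also be probed numerically, in the spirit of the dense-grid checks mentioned in Section 1 (a sanity check only, not a substitute for the proof that follows; the code and its names are ours):

```python
import math

def mixed_derivative(a, b, rho):
    """The mixed derivative d from (5), valid where formula (1) applies."""
    return 1.0 + rho * (1 - 2 * a) * (1 - 2 * b) / (
        4 * math.sqrt(a * (1 - a) * b * (1 - b)))

# check d >= 0 at the largest rho allowed by (8), on a grid with a < 0.5 < b;
# for such (a, b), increasing rho decreases d, so the largest rho is the worst case
worst, steps = float("inf"), 200
for i in range(1, steps):
    a = 0.5 * i / steps            # a ranges over (0, 0.5)
    for j in range(1, steps):
        b = 0.5 + 0.5 * j / steps  # b ranges over (0.5, 1)
        rho_max = math.sqrt(a * (1 - b)) / math.sqrt((1 - a) * b)  # right-hand side of (8)
        worst = min(worst, mixed_derivative(a, b, rho_max))
print(worst >= -1e-9)  # expected: True
```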
Clearly, if some value \(\rho\) satisfies the inequality (11), then every smaller value \(\rho\) also satisfies this inequality. Thus, to prove the desired implication, it is sufficient to check that the inequality (11) is satisfied for the largest possible value \(\rho\) that satisfies the inequality (8), i.e., for the value \(\rho\) which is equal to the right-hand side of the inequality (8). For this \(\rho\), the desired inequality (11) takes the form \[\frac{\sqrt{a\cdot(1-b)}}{\sqrt{(1-a)\cdot b}}\leq\frac{4\cdot\sqrt{a\cdot(1- a)\cdot b\cdot(1-b)}}{(1-2\cdot a)\cdot(2\cdot b-1)}. \tag{12}\] Dividing both sides by \(\sqrt{a\cdot(1-b)}\), we get an equivalent inequality \[\frac{1}{\sqrt{(1-a)\cdot b}}\leq\frac{4\cdot\sqrt{(1-a)\cdot b}}{(1-2\cdot a) \cdot(2\cdot b-1)}. \tag{13}\] Multiplying both sides by both denominators, we get the following equivalent inequality: \[(1-2\cdot a)\cdot(2\cdot b-1)\leq 4\cdot(1-a)\cdot b. \tag{14}\] If we open parentheses, this inequality takes the equivalent form \[2\cdot b-4\cdot a\cdot b-1+2\cdot a\leq 4\cdot b-4\cdot a\cdot b, \tag{15}\] i.e., by adding \(4\cdot a\cdot b-2\cdot b\) to both sides, the form \[-1+2\cdot a\leq 2\cdot b. \tag{16}\] We are considering the case when \(a\leq b\) - since, as we have mentioned earlier, it is sufficient to only consider this case. Thus, the equivalent inequality (12) is also true and hence, for the case when \(\rho>0\), we indeed have \(d\geq 0\). \(4^{\circ}\). To complete the proof, it is now sufficient to consider the case when \(\rho<0\). In this case, if one of the values \(a\) and \(b\) is smaller than \(0.5\) and another one is larger than \(0.5\), then the differences \(1-2\cdot a\) and \(1-2\cdot b\) have different signs, so the right-hand side of the expression (5) for \(d\) is larger than \(1\) and thus, non-negative. Thus, it is sufficient to consider the cases when: * either both \(a\) and \(b\) are larger than \(0.5\) * or both \(a\) and \(b\) are smaller than \(0.5\). Let us consider these two cases one by one. \(4.1^{\circ}\). Let us first consider the case when \(a>0.5\) and \(b>0.5\). In this case, \(a+b-1>0\), so the inequality (4) takes the form \[a\cdot b+\rho\cdot\sqrt{a\cdot(1-a)\cdot b\cdot(1-b)}\geq a+b-1, \tag{17}\] i.e., equivalently, that \[|\rho|\cdot\sqrt{a\cdot(1-a)\cdot b\cdot(1-b)}\leq a\cdot b-a-b+1=(1-a)\cdot( 1-b), \tag{18}\] or that \[|\rho|\leq\frac{(1-a)\cdot(1-b)}{\sqrt{a\cdot(1-a)\cdot b\cdot(1-b)}}=\frac{ \sqrt{(1-a)\cdot(1-b)}}{\sqrt{a\cdot b}}. \tag{19}\] In this case, the condition \(d\geq 0\) that the value (5) is non-negative takes the form \[1-|\rho|\cdot\frac{(2\cdot a-1)\cdot(2\cdot b-1)}{4\cdot\sqrt{a\cdot(1-a) \cdot b\cdot(1-b)}}, \tag{20}\] i.e., equivalently, \[|\rho|\cdot\frac{(2\cdot a-1)\cdot(2\cdot b-1)}{4\cdot\sqrt{a\cdot(1-a)\cdot b \cdot(1-b)}}\leq 1 \tag{21}\] and \[|\rho|\leq\frac{4\cdot\sqrt{a\cdot(1-a)\cdot b\cdot(1-b)}}{(2\cdot a-1)\cdot(2 \cdot b-1)}. \tag{22}\] Similarly to the case when \(\rho>0\), to check that all values \(|\rho|\) satisfying the inequality (19) also satisfies the inequality (22), it is sufficient to check that the largest possible value \(|\rho|\) satisfying the inequality (19) satisfies the inequality (22), i.e., that \[\frac{\sqrt{(1-a)\cdot(1-b)}}{\sqrt{a\cdot b}}\leq\frac{4\cdot\sqrt{a\cdot(1-a )\cdot b\cdot(1-b)}}{(2\cdot a-1)\cdot(2\cdot b-1)}. 
\tag{23}\] If we divide both sides by \(\sqrt{(1-a)\cdot(1-b)}\), we get the following equivalent inequality \[\frac{1}{\sqrt{a\cdot b}}\leq\frac{4\cdot\sqrt{a\cdot b}}{(2\cdot a-1)\cdot(2\cdot b-1)}. \tag{24}\] Multiplying both sides by both denominators, we get the following equivalent inequality \[(2\cdot a-1)\cdot(2\cdot b-1)\leq 4\cdot a\cdot b. \tag{25}\] Opening parentheses, we get \[4\cdot a\cdot b-2\cdot a-2\cdot b+1\leq 4\cdot a\cdot b. \tag{26}\] Adding \(2\cdot a+2\cdot b-4\cdot a\cdot b\) to both sides, we get an equivalent inequality \[1\leq 2\cdot a+2\cdot b, \tag{27}\] which is true since we consider the case when \(a+b>1\). So, in this case, we indeed have \(d\geq 0\). \(4.2^{\circ}\). Let us now consider the case when \(a<0.5\) and \(b<0.5\). In this case, \(a+b-1<0\), so the inequality (4) takes the form \[a\cdot b+\rho\cdot\sqrt{a\cdot(1-a)\cdot b\cdot(1-b)}\geq 0, \tag{28}\] i.e., equivalently, that \[|\rho|\cdot\sqrt{a\cdot(1-a)\cdot b\cdot(1-b)}\leq a\cdot b, \tag{29}\] or that \[|\rho|\leq\frac{a\cdot b}{\sqrt{a\cdot(1-a)\cdot b\cdot(1-b)}}=\frac{\sqrt{a\cdot b}}{\sqrt{(1-a)\cdot(1-b)}}. \tag{30}\] In this case, the condition \(d\geq 0\) that the value (5) is non-negative takes the form \[1-|\rho|\cdot\frac{(1-2\cdot a)\cdot(1-2\cdot b)}{4\cdot\sqrt{a\cdot(1-a)\cdot b\cdot(1-b)}}, \tag{31}\] i.e., equivalently, \[|\rho|\cdot\frac{(1-2\cdot a)\cdot(1-2\cdot b)}{4\cdot\sqrt{a\cdot(1-a)\cdot b\cdot(1-b)}}\leq 1 \tag{32}\] and \[|\rho|\leq\frac{4\cdot\sqrt{a\cdot(1-a)\cdot b\cdot(1-b)}}{(1-2\cdot a)\cdot(1-2\cdot b)}. \tag{33}\] Similarly to the cases when \(\rho>0\) and when \(a+b>1\), to check that all values \(|\rho|\) satisfying the inequality (30) also satisfy the inequality (33), it is sufficient to check that the largest possible value \(|\rho|\) satisfying the inequality (30) satisfies the inequality (33), i.e., that \[\frac{\sqrt{a\cdot b}}{\sqrt{(1-a)\cdot(1-b)}}\leq\frac{4\cdot\sqrt{a\cdot(1-a)\cdot b\cdot(1-b)}}{(1-2\cdot a)\cdot(1-2\cdot b)}. \tag{34}\] If we divide both sides by \(\sqrt{a\cdot b}\), we get the following equivalent inequality \[\frac{1}{\sqrt{(1-a)\cdot(1-b)}}\leq\frac{4\cdot\sqrt{(1-a)\cdot(1-b)}}{(1-2\cdot a)\cdot(1-2\cdot b)}. \tag{35}\] Multiplying both sides by both denominators, we get the following equivalent inequality \[(1-2\cdot a)\cdot(1-2\cdot b)\leq 4\cdot(1-a)\cdot(1-b). \tag{36}\] Opening parentheses, we get \[1-2\cdot a-2\cdot b+4\cdot a\cdot b\leq 4-4\cdot a-4\cdot b+4\cdot a\cdot b. \tag{37}\] Adding \(4\cdot a+4\cdot b-4\cdot a\cdot b-1\) to both sides, we get an equivalent inequality \[2\cdot a+2\cdot b\leq 3, \tag{38}\] which is true since we consider the case when \(a+b<1\). So, in this case, we indeed have \(d\geq 0\). In all cases, we have \(d\geq 0\); thus, the mixed derivative of the and-operation \(f_{\rho}(a,b)\) is non-negative wherever the expression (1) applies. Thus, for boxes in which all four vertices belong to the area described by the expression (1), the inequality (4a) is always satisfied. \(5^{\circ}\). Let us now consider the boxes in which two vertices belong to the boundary between two areas. First, we will consider the case when \(\rho>0\) and then, we will consider the case when \(\rho<0\). \(6^{\circ}\). Let us first consider the case when \(\rho>0\). For this case, let us first describe the boundaries between the areas. \(6.1^{\circ}\). Let us analyze which of the three areas listed in formula (4) are possible in this case.
When \(\rho>0\), we have \[a\cdot b+\rho\cdot\sqrt{a\cdot(1-a)\cdot b\cdot(1-b)}\geq a\cdot b,\] and since it is known that we always have \(a\cdot b\geq\max(a+b-1,0)\), we have \[a\cdot b+\rho\cdot\sqrt{a\cdot(1-a)\cdot b\cdot(1-b)}\geq\max(a+b-1,0).\] So, for \(\rho>0\), we cannot have the first of the three cases described by the formula (4). So, we only have two areas: * the area where the and-operation is described by the formula (1), and * the area where the and-operation is described by the formula \(\min(a,b)\). \(6.2^{\circ}\). Let us describe the two possible areas and the boundary between these two areas. The first area is characterized by the inequality \[a\cdot b+\rho\cdot\sqrt{a\cdot(1-a)\cdot b\cdot(1-b)}\leq\min(a,b). \tag{39}\] Similarly to the previous part of the proof, without loss of generality, we can consider the case when \(a\leq b\). In this case, the inequality (39) describing the first area takes the following form: \[a\cdot b+\rho\cdot\sqrt{a\cdot(1-a)\cdot b\cdot(1-b)}\leq a. \tag{40}\] If we subtract \(a\cdot b\) from both sides of this inequality, we get the following equivalent inequality: \[\rho\cdot\sqrt{a\cdot(1-a)\cdot b\cdot(1-b)}\leq a\cdot(1-b). \tag{41}\] Both sides of this inequality are non-negative, so we get an equivalent inequality if we square both sides: \[\rho^{2}\cdot a\cdot(1-a)\cdot b\cdot(1-b)\leq a^{2}\cdot(1-b)^{2}. \tag{42}\] The cases when \(a\) or \(b\) are equal to \(0\) or \(1\) can be obtained by taking a limit from the cases when both \(a\) and \(b\) are located inside the interval \((0,1)\). For such values, we can divide both sides of the inequality by the positive numbers \(a^{2}\), \(b\), and \(1-b\), and get the following equivalent inequality: \[\rho^{2}\cdot\frac{1-a}{a}\leq\frac{1-b}{b}, \tag{43}\] i.e., equivalently, \[\rho^{2}\cdot\frac{1-a}{a}\leq\frac{1}{b}-1. \tag{44}\] By adding 1 to both sides of this inequality, we get \[\frac{a+\rho^{2}\cdot(1-a)}{a}\leq\frac{1}{b}, \tag{45}\] i.e., equivalently, that \[b\leq\frac{a}{a+\rho^{2}\cdot(1-a)}. \tag{46}\] This inequality describes the first area, in which the and-operation is described by the formula (1). Thus, the boundary between the two areas is described by the equality \[b=\frac{a}{a+\rho^{2}\cdot(1-a)}. \tag{47}\] _Comment._ One can see that for \(a=0\) we get \(b=0\), and for \(a=1\), we get \(b=1\). \(6.3^{\circ}\). Let us prove that for all \(a\), the corresponding boundary value \(b\) is greater than or equal to \(a\) - i.e., that for all the points \((a,b)\) on this boundary, we have \(a\leq b\). Indeed, for the expression (47), the desired inequality \(a\leq b\) takes the form \[a\leq\frac{a}{a+\rho^{2}\cdot(1-a)}. \tag{48}\] If we divide both sides by \(a\) and multiply both sides by the denominator of the right-hand side, we get the following equivalent inequality \[a+\rho^{2}\cdot(1-a)\leq 1. \tag{49}\] If we move all the terms to the right-hand side, we get an equivalent inequality \[0\leq 1-a-\rho^{2}\cdot(1-a)=(1-\rho^{2})\cdot(1-a). \tag{50}\] This inequality is always true, since \(\rho^{2}\leq 1\) and \(a\leq 1\), so indeed, for all boundary points, we have \(a\leq b\). \(6.4^{\circ}\). Let us prove that the boundary describes \(b\) as an increasing function of \(a\).
By applying to the equality (47) that describes the boundary the same transformations that show the equivalence of the inequalities (43) and (46), we can conclude that the equality (47) is equivalent to \[\rho^{2}\cdot\frac{1-a}{a}=\frac{1-b}{b}, \tag{51}\] i.e., to \[\rho^{2}\cdot\left(\frac{1}{a}-1\right)=\frac{1}{b}-1. \tag{52}\] The left-hand side is a decreasing function of \(a\), and the right-hand side is a decreasing function of \(b\). Thus, as \(a\) increases, the left-hand side decreases, so the right-hand side also decreases and hence, the value \(b\) increases as well. \(6.5^{\circ}\). For \(\rho=1\), the condition (46) describing the first area takes the form \(b\leq a\). Since we have \(a\leq b\), this means that this condition is only satisfied for \(a=b\). For these values, the expression (1) (with \(\rho=1\)) is equal to \[a\cdot a+\sqrt{a\cdot(1-a)\cdot a\cdot(1-a)}=a^{2}+a\cdot(1-a)=a^{2}+a-a^{2}=\] \[a=\min(a,b), \tag{53}\] which means that our and-operation is always equal to \(\min(a,b)\). The expression \(\min(a,b)\) is known to be a copula. So, we only need to prove the fact that our and-operation is a copula for the case when \(\rho<1\). This is the case we will consider from now on. \(6.6^{\circ}\). Let us prove that for \(\rho<1\), the only boundary points for which \(a=b\) are the points for which \(a=b=0\) and \(a=b=1\). Indeed, as we have mentioned, the points \((0,0)\) and \((1,1)\) are boundary points. Let us prove, by contradiction, that there are no other boundary points for which \(a=b\). Indeed, when \(a=b\), the equality (52) that describes the boundary takes the form: \[\rho^{2}\cdot\left(\frac{1}{a}-1\right)=\frac{1}{a}-1. \tag{54}\] Dividing both sides of this equality by the non-zero right-hand side, we get \(\rho^{2}=1\). This contradicts the fact that we are considering the case when \(\rho<1\) and thus, \(\rho^{2}<1\). This contradiction shows that other boundary points with \(a=b\) are not possible. \(6.7^{\circ}\). The boundary consists of a curved line that is separate from the line \(a=b\) - except for the endpoints. So, if we limit ourselves to a sub-box \([\varepsilon,1-\varepsilon]\times[\varepsilon,1-\varepsilon]\) for some small \(\varepsilon>0\), the boundary line is separated from the line \(a=b\): there is a smallest distance \(\delta>0\) between points of these two lines. So, if we have a box that includes both points with \(a\leq b\) and points with \(a\geq b\), we can divide this box into sub-boxes of linear size \(<\delta/2\) and thus make sure that every sub-box that contains boundary points cannot contain any points with \(a=b\) - and therefore, only contains points with \(a\leq b\). So, due to additivity, it is sufficient to prove the inequality (4a) for boxes for which: * two vertices lie on the boundary, and * we have \(a\leq b\) for all the points from this sub-box. This will allow us to prove the inequality (4a) for all sub-boxes of the square \([\varepsilon,1-\varepsilon]\times[\varepsilon,1-\varepsilon]\). We can do it for any \(\varepsilon\) and thus, in the limit, get the desired inequality for all sub-boxes of the original square \([0,1]\times[0,1]\) as well. So, suppose that we have a box for which: * two vertices lie on the boundary, and * we have \(a\leq b\) for all the points from this box.
Since the boundary describes \(b\) as an increasing function of \(a\), the corresponding box is positioned so that: * the two vertices \((\underline{a},\underline{b})\) and \((\overline{a},\overline{b})\) are on the boundary, * the vertex \((\overline{a},\underline{b})\) is in the first area, i.e., for this point, we have the expression (1), and * the vertex \((\underline{a},\overline{b})\) is in the second area, i.e., here \(C(\underline{a},\overline{b})=\min(\underline{a},\overline{b})\). The desired inequality (4a) has the form \[C(\overline{a},\underline{b})-C(\underline{a},\underline{b})\leq C(\overline{a},\overline{b})-C(\underline{a},\overline{b}). \tag{55}\] The points \((\overline{a},\overline{b})\) and \((\underline{a},\overline{b})\) both satisfy \(C(a,b)=\min(a,b)\): the second of these points is in the second area, where \(C(a,b)=\min(a,b)\) by definition, and the first of these points is on the boundary, where the expression (1) coincides with \(\min(a,b)\). For all the points from the box, \(a\leq b\), so we have \[C(\overline{a},\overline{b})-C(\underline{a},\overline{b})=\min(\overline{a},\overline{b})-\min(\underline{a},\overline{b})=\overline{a}-\underline{a}. \tag{56}\] On the other hand, for the difference in the left-hand side of the formula (55), we have \[C(\overline{a},\underline{b})-C(\underline{a},\underline{b})=\int_{\underline{a}}^{\overline{a}}\frac{\partial C}{\partial a}\,da. \tag{57}\] So, if we prove that the partial derivative \(\partial C/\partial a\) is always smaller than or equal to \(1\), we will indeed be able to conclude that \[C(\overline{a},\underline{b})-C(\underline{a},\underline{b})\leq\int_{\underline{a}}^{\overline{a}}1\,da=\overline{a}-\underline{a}, \tag{58}\] which, by (56), is exactly the desired inequality (55). For the points \((\overline{a},\underline{b})\) and \((\underline{a},\underline{b})\) - and the points from the interval connecting these two points - the expression \(C(a,b)\) is described by the formula (1). Thus, the partial derivative of \(C(a,b)\) with respect to \(a\) is described by the formula (4b). Thus, the inequality \[\frac{\partial C}{\partial a}(a,b)\leq 1, \tag{59}\] takes the form \[b+\rho\cdot\frac{1-2\cdot a}{2\cdot\sqrt{a\cdot(1-a)}}\cdot\sqrt{b\cdot(1-b)}\leq 1. \tag{60}\] Subtracting \(b\) from both sides of (60), we get an equivalent inequality \[\rho\cdot\frac{1-2\cdot a}{2\cdot\sqrt{a\cdot(1-a)}}\cdot\sqrt{b\cdot(1-b)}\leq 1-b. \tag{61}\] To separate the variables, we can divide both sides by \(\sqrt{b\cdot(1-b)}\); then we get an equivalent inequality \[\rho\cdot\frac{1-2\cdot a}{2\cdot\sqrt{a\cdot(1-a)}}\leq\sqrt{\frac{1-b}{b}}. \tag{62}\] By taking the square root of both sides of the inequality (43) (which is equivalent to (46)), we conclude that: \[\rho\cdot\sqrt{\frac{1-a}{a}}\leq\sqrt{\frac{1-b}{b}}. \tag{63}\] Thus, if we prove that the left-hand side of the inequality (62) is smaller than or equal to the left-hand side of the inequality (63), i.e., that \[\rho\cdot\frac{1-2\cdot a}{2\cdot\sqrt{a\cdot(1-a)}}\leq\rho\cdot\sqrt{\frac{1-a}{a}}, \tag{64}\] this will prove the inequality (62) and thus, the desired upper bound (60) on the partial derivative. We can simplify the inequality (64) by dividing both sides by \(\rho\) and multiplying both sides by \(2\cdot\sqrt{a\cdot(1-a)}\). Then, we get an equivalent inequality \[1-2\cdot a\leq 2\cdot(1-a)=2-2\cdot a, \tag{65}\] which is equivalent to \(1\leq 2\) and is, thus, always true. Thus, (55) holds, so the inequality (4a) is true for all the boxes in which two vertices are located on the boundary.
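As an independent sanity check of this boundary-box case (again, an illustration only; the function names are ours), one can sample boxes whose lower-left and upper-right vertices lie on the curve (47) and evaluate the left-hand side of (4a) directly:

```python
import math

def f_rho(a, b, rho):
    """Clamped and-operation (3)-(4)."""
    raw = a * b + rho * math.sqrt(a * (1 - a) * b * (1 - b))
    return min(max(raw, max(a + b - 1.0, 0.0)), min(a, b))

def boundary_b(a, rho):
    """The boundary curve (47) between the two areas (case rho > 0)."""
    return a / (a + rho ** 2 * (1 - a))

rho, steps, worst = 0.6, 100, float("inf")
for i in range(1, steps):
    for j in range(i + 1, steps):
        a_lo, a_hi = i / steps, j / steps
        b_lo, b_hi = boundary_b(a_lo, rho), boundary_b(a_hi, rho)
        volume = (f_rho(a_hi, b_hi, rho) + f_rho(a_lo, b_lo, rho)
                  - f_rho(a_hi, b_lo, rho) - f_rho(a_lo, b_hi, rho))
        worst = min(worst, volume)
print(worst >= -1e-9)  # expected: True, in line with (55)
```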
This completes the proof of the Proposition for the case when \(\rho>0\). \(7^{\circ}\). Let us now consider the case when \(\rho<0\). For this case, let us first describe the boundaries between the areas. \(7.1^{\circ}\). Let us analyze which of the three areas listed in formula (4) are possible in this case. When \(\rho<0\), we have \[a\cdot b+\rho\cdot\sqrt{a\cdot(1-a)\cdot b\cdot(1-b)}\leq a\cdot b,\] and since it is known that we always have \(a\cdot b\leq\min(a,b)\), we have \[a\cdot b+\rho\cdot\sqrt{a\cdot(1-a)\cdot b\cdot(1-b)}\leq\min(a,b).\] So, for \(\rho<0\), we cannot have the third of the three cases described by the formula (4). So, we only have two areas: * the area where the and-operation is described by the formula (1), and * the area where the and-operation is described by the formula \[\max(a+b-1,0).\] \(7.2^{\circ}\). Let us describe the two possible areas and the boundary between these two areas. The first area is characterized by the inequality \(C(a,b)\geq\max(a+b-1,0)\), i.e., equivalently, by two inequalities \[a\cdot b-|\rho|\cdot\sqrt{a\cdot(1-a)\cdot b\cdot(1-b)}\geq 0 \tag{66}\] and \[a\cdot b-|\rho|\cdot\sqrt{a\cdot(1-a)\cdot b\cdot(1-b)}\geq a+b-1. \tag{67}\] Let us consider these two inequalities one by one. \(7.2.1^{\circ}\). The inequality (66) is equivalent to: \[a\cdot b\geq|\rho|\cdot\sqrt{a\cdot(1-a)\cdot b\cdot(1-b)}. \tag{68}\] To separate the variables, let us divide both sides of this inequality by \[a\cdot\sqrt{b\cdot(1-b)},\] then we get an equivalent inequality \[\sqrt{\frac{b}{1-b}}\geq|\rho|\cdot\sqrt{\frac{1-a}{a}}. \tag{69}\] Both sides of this inequality are non-negative, thus if we square both sides, we get an equivalent inequality \[\frac{b}{1-b}\geq\rho^{2}\cdot\frac{1-a}{a}. \tag{70}\] Reversing both sides, we get an equivalent inequality \[\frac{1-b}{b}\leq\frac{a}{\rho^{2}\cdot(1-a)}, \tag{71}\] i.e., equivalently, \[\frac{1}{b}-1\leq\frac{a}{\rho^{2}\cdot(1-a)}. \tag{72}\] By adding 1 to both sides, we get \[\frac{1}{b}\leq\frac{\rho^{2}\cdot(1-a)+a}{\rho^{2}\cdot(1-a)}, \tag{73}\] i.e., equivalently, \[b\geq\frac{\rho^{2}\cdot(1-a)}{\rho^{2}\cdot(1-a)+a}. \tag{74}\] 7.2.2\({}^{\circ}\). The inequality (67) is equivalent to \[a\cdot b-a-b+1\geq|\rho|\cdot\sqrt{a\cdot(1-a)\cdot b\cdot(1-b)}, \tag{75}\] i.e., \[(1-a)\cdot(1-b)\geq|\rho|\cdot\sqrt{a\cdot(1-a)\cdot b\cdot(1-b)}. \tag{76}\] To separate the variables, let us divide both sides by \((1-a)\cdot\sqrt{b\cdot(1-b)}\), then we get an equivalent inequality \[\sqrt{\frac{1-b}{b}}\geq|\rho|\cdot\sqrt{\frac{a}{1-a}}. \tag{77}\] Both sides of this inequality are non-negative, thus if we square both sides, we get an equivalent inequality \[\frac{1-b}{b}\geq\rho^{2}\cdot\frac{a}{1-a}, \tag{78}\] i.e., equivalently, \[\frac{1}{b}-1\geq\rho^{2}\cdot\frac{a}{1-a}. \tag{79}\] By adding 1 to both sides, we get \[\frac{1}{b}\geq\frac{\rho^{2}\cdot a+(1-a)}{1-a}, \tag{80}\] i.e., equivalently, \[b\leq\frac{1-a}{\rho^{2}\cdot a+(1-a)}. \tag{81}\] 7.2.3\({}^{\circ}\). By combining the inequalities (74) and (81), we get the following description of the area in which the and-operation is described by the formula (1): \[\frac{\rho^{2}\cdot(1-a)}{\rho^{2}\cdot(1-a)+a}\leq b\leq\frac{1-a}{\rho^{2} \cdot a+(1-a)}. \tag{82}\] Thus, the boundary between the two areas consists of the following two curves: \[b=\frac{\rho^{2}\cdot(1-a)}{\rho^{2}\cdot(1-a)+a} \tag{83}\] and \[b=\frac{1-a}{\rho^{2}\cdot a+(1-a)}. \tag{84}\] 7.3\({}^{\circ}\). 
Let us prove that: * the curve (83) lies in the area where \(a+b\leq 1\), and * the curve (84) lies in the area where \(a+b\geq 1\). \(7.3.1^{\circ}\). Let us first prove that for each value \(b\) described by the formula (83), we have \(a+b\leq 1\). We need to prove the inequality \[a+\frac{\rho^{2}\cdot(1-a)}{\rho^{2}\cdot(1-a)+a}\leq 1. \tag{85}\] Subtracting \(a\) from both sides, we get an equivalent inequality \[\frac{\rho^{2}\cdot(1-a)}{\rho^{2}\cdot(1-a)+a}\leq 1-a. \tag{86}\] Dividing both sides by \(1-a\) and multiplying both sides by the denominator of the left-hand side, we get the following equivalent inequality: \[\rho^{2}\leq\rho^{2}\cdot(1-a)+a=\rho^{2}+(1-\rho^{2})\cdot a, \tag{87}\] which is, of course, always true, since \(\rho^{2}\leq 1\) and \(a\geq 0\). The statement is proven. \(7.3.2^{\circ}\). Let us now prove that for each value \(b\) described by the formula (84), we have \(a+b\geq 1\). We need to prove the inequality \[a+\frac{1-a}{\rho^{2}\cdot a+(1-a)}\geq 1. \tag{88}\] Subtracting \(a\) from both sides, we get an equivalent inequality \[\frac{1-a}{\rho^{2}\cdot a+(1-a)}\geq 1-a. \tag{89}\] Dividing both sides by \(1-a\) and multiplying both sides by the denominator of the left-hand side, we get the following equivalent inequality: \[1\geq\rho^{2}\cdot a+(1-a)=1-(1-\rho^{2})\cdot a, \tag{90}\] which is, of course, always true, since \((1-\rho^{2})\cdot a\geq 0\). The statement is proven. \(7.4^{\circ}\). Similarly to Part 6 of this proof, it is sufficient to prove the inequality (4a) for boxes in which two vertices are on the boundary and for which: * either we have \(a+b\leq 1\) for all the points from the box, * or we have \(a+b\geq 1\) for all the points from the box. Let us consider the two parts of the boundary one by one. \(7.4.1^{\circ}\). Let us first consider the case when we have \(a+b\leq 1\) for all the points from the box. In this case, the corresponding part of the boundary is described by the formula (83). By reformulating this expression in the equivalent form \[b=\frac{1}{1+\frac{a}{\rho^{2}\cdot(1-a)}}=\frac{1}{1+\frac{1}{\rho^{2}}\cdot\frac{1}{\frac{1}{a}-1}}, \tag{91}\] we can see that \(b\) is a decreasing function of \(a\). Thus, the corresponding box is positioned so that: * the two vertices \((\underline{a},\overline{b})\) and \((\overline{a},\underline{b})\) are on the boundary, * the vertex \((\overline{a},\overline{b})\) is in the first area, i.e., for this point, we have the expression (1), and * the vertex \((\underline{a},\underline{b})\) is in the second area, i.e., here \(C(\underline{a},\underline{b})=\max(\underline{a}+\underline{b}-1,0)\). So, for three vertices, we have \(C(a,b)=\max(a+b-1,0)\). Since for all the points from the box, we have \(a+b\leq 1\), this means that for three vertices, we have \(C(a,b)=0\). In this case, the inequality (4a) is clearly true. \(7.4.2^{\circ}\). Let us now consider the case when we have \(a+b\geq 1\) for all the points from the box. In this case, the corresponding part of the boundary is described by the formula (84). By reformulating this expression in the equivalent form \[b=\frac{1}{1+\rho^{2}\cdot\frac{a}{1-a}}=\frac{1}{1+\rho^{2}\cdot\frac{1}{\frac{1}{a}-1}}, \tag{91}\] we can see that \(b\) is also a decreasing function of \(a\).
Thus, the corresponding box has the same form as in the case \(a+b\leq 1\). So, in the corresponding box: * the two vertices \((\underline{a},\overline{b})\) and \((\overline{a},\underline{b})\) are on the boundary, * the vertex \((\underline{a},\underline{b})\) is in the first area, i.e., for this point, we have the expression (1), and * the vertex \((\overline{a},\overline{b})\) is in the second area, i.e., here \(C(\overline{a},\overline{b})=\max(\overline{a}+\overline{b}-1,0)\). Similarly to Part 6 of the proof, we can show that the desired inequality (4a) is satisfied if the corresponding partial derivative is smaller than or equal to 1, i.e., if \[\frac{\partial C}{\partial a}=b-|\rho|\cdot\frac{1-2\cdot a}{2\cdot\sqrt{a\cdot(1-a)}}\cdot\sqrt{b\cdot(1-b)}\leq 1. \tag{92}\] Subtracting \(b\) from both sides, we get an equivalent inequality \[-|\rho|\cdot\frac{1-2\cdot a}{2\cdot\sqrt{a\cdot(1-a)}}\cdot\sqrt{b\cdot(1-b)}\leq 1-b. \tag{93}\] We can separate the variables if we divide both sides by \(\sqrt{b\cdot(1-b)}\); then we get an equivalent inequality \[-|\rho|\cdot\frac{1-2\cdot a}{2\cdot\sqrt{a\cdot(1-a)}}\leq\sqrt{\frac{1-b}{b}}. \tag{94}\] We know a lower bound on the expression in the right-hand side - it is provided by the inequality (77). Thus, to prove the inequality (94), it is sufficient to prove that the left-hand side of the formula (94) is smaller than or equal to this lower bound, i.e., that \[-|\rho|\cdot\frac{1-2\cdot a}{2\cdot\sqrt{a\cdot(1-a)}}\leq|\rho|\cdot\sqrt{\frac{a}{1-a}}. \tag{95}\] Let us prove this inequality. Dividing both sides of (95) by \(|\rho|\) and multiplying both sides by \(2\cdot\sqrt{a\cdot(1-a)}\), we get an equivalent inequality \(-(1-2\cdot a)\leq 2\cdot a\), i.e., \(2\cdot a-1\leq 2\cdot a\), which is always true. Thus, the inequality (94) holds, hence the inequality (92) also holds, and therefore, in this case, the inequality (4a) that describes a copula is also true. \(8^{\circ}\). We have considered all possible cases, and in all these cases, we have shown that the inequality (4a) - that defines a copula - is true. Thus, our and-operation is indeed a copula. The proposition is proven. ## Acknowledgments This research was partly funded by the EPSRC and ESRC CDT in Risk and Uncertainty (EP/L015927/1), established within the Institute for Risk and Uncertainty at the University of Liverpool. This work has been carried out within the framework of the EUROfusion Consortium, funded by the European Union via the Euratom Research and Training Programme (Grant Agreement No 101052200 - EUROfusion). Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or the European Commission. Neither the European Union nor the European Commission can be held responsible for them. V.K. was supported in part by the National Science Foundation grants 1623190 (A Model of Change for Preparing a New Generation for Professional Practice in Computer Science), and HRD-1834620 and HRD-2034030 (CAHSI Includes), and by the AT&T Fellowship in Information Technology. He was also supported by the program of the development of the Scientific-Educational Mathematical Center of Volga Federal District No. 075-02-2020-1478, and by a grant from the Hungarian National Research, Development and Innovation Office (NRDI).
2305.00188
New Characterizations and Efficient Local Search for General Integer Linear Programming
Integer linear programming (ILP) models a wide range of practical combinatorial optimization problems and significantly impacts industry and management sectors. This work proposes new characterizations of ILP with the concept of boundary solutions. Motivated by the new characterizations, we develop a new local search algorithm Local-ILP, which is efficient for solving general ILP validated on a large heterogeneous problem dataset. We propose a new local search framework that switches between three modes, namely Search, Improve, and Restore modes. Two new operators are proposed, namely the tight move and the lift move operators, which are associated with appropriate scoring functions. Different modes apply different operators to realize different search strategies and the algorithm switches between three modes according to the current search state. Putting these together, we develop a local search ILP solver called Local-ILP. Experiments conducted on the MIPLIB dataset show the effectiveness of our algorithm in solving large-scale hard ILP problems. In the aspect of finding a good feasible solution quickly, Local-ILP is competitive and complementary to the state-of-the-art commercial solver Gurobi and significantly outperforms the state-of-the-art non-commercial solver SCIP. Moreover, our algorithm establishes new records for 6 MIPLIB open instances. The theoretical analysis of our algorithm is also presented, which shows our algorithm could avoid visiting unnecessary regions.
Peng Lin, Shaowei Cai, Mengchuan Zou, Jinkun Lin
2023-04-29T07:22:07Z
http://arxiv.org/abs/2305.00188v4
# New Characterizations and Efficient Local Search for General Integer Linear Programming ###### Abstract Integer linear programming (ILP) models a wide range of practical combinatorial optimization problems and has significant impacts in industry and management sectors. This work proposes new characterizations of ILP with the concept of boundary solutions. Motivated by the new characterizations, we develop an efficient local search solver, which is the first local search solver for general ILP validated on a large heterogeneous problem dataset. We propose a new local search framework that switches between three modes, namely \(Search\), \(Improve\), and \(Restore\) modes. We design tailored operators adapted to different modes, thus improving the quality of the current solution according to different situations. For the \(Search\) and \(Restore\) modes, we propose an operator named _tight move_, which adaptively modifies variables' values, trying to make some constraint tight. For the \(Improve\) mode, an efficient operator _lift move_ is proposed to improve the quality of the objective function while maintaining feasibility. Putting these together, we develop a local search solver for integer linear programming called Local-ILP. Experiments conducted on the MIPLIB dataset show the effectiveness of our solver in solving large-scale hard integer linear programming problems within a reasonably short time. Local-ILP is competitive and complementary to the state-of-the-art commercial solver Gurobi and significantly outperforms the state-of-the-art non-commercial solver SCIP. Moreover, our solver establishes new records for 6 MIPLIB open instances. The theoretical analysis of our algorithm is also presented, which shows our algorithm could avoid visiting unnecessary regions and also maintain good connectivity of targeted solutions. integer linear programming, local search, combinatorial optimization, mathematical programming solver ## 1 Introduction Integer linear programming (ILP) is an important class of mathematical programming, where the goal is to optimize a linear objective function under linear constraints, while variables must take integer values. Integer linear programming is a versatile model that could be used to describe a wide range of real-world problems in logistics, economics, social science, and politics (Genova and Guliashki 2011). Many combinatorial optimization problems, such as the knapsack problem, travelling salesman problem, warehouse location problem, decreasing costs and machinery selection problem, network and graph problems (maximum flow, set covering, matching, weighted matching, spanning trees, etc.), and many scheduling problems, can all be formulated and solved as integer linear programming problems (Junger et al. 2009, Chen et al. 2011, Taha 2014, Sierksma and Zwols 2015, Wolsey 2020). Along with the powerful ability of ILP to model combinatorial optimization problems, much effort has been devoted to solving ILP problems. As the general ILP problem is NP-hard (KARP 1972), there is no polynomial algorithm that could solve the general ILP to an exact optimal solution, unless P = NP. Nonetheless, different exact and heuristic algorithms have been proposed. The best-known approach is branch-and-bound (BnB), which is a classical approach for solving combinatorial optimization problems by iteratively dividing the feasible region and bounding the objective function (Land and Doig 1960, Lawler and Wood 1966). 
The majority of work on BnB for ILP focuses on methods for calculating lower bounds on the value of the objective function. When the lower bound is greater than or equal to the upper bound, the branch can be pruned because no better solution can be found by extending the current node. In addition, other methods have been proposed, such as cutting plane (Gomory 1958) and domain propagation (Achterberg 2007), which are often integrated into the BnB process. However, exact algorithms suffer from exponential time complexity as problem size increases, which makes them impractical for large-scale instances. Heuristic algorithms, in contrast, try to find high-quality feasible solutions within a reasonable time, although they do not guarantee the optimality of the final solution. They perform well on problems with binary variables and specific structures but they are not yet competent in general ILP (Genova and Guliashki 2011). Hybrid approaches are promising because they leverage the strengths of different methods in a complementary mode (Bertacco et al. 2007, Hansen et al. 2006, Puchinger 2005, Luo et al. 2001). Hybrid approaches are the infrastructure of state-of-the-art commercial and non-commercial ILP solvers such as Gurobi (2022) and SCIP (Achterberg 2009), both of which are based on the BnB algorithm, combining heuristics, cutting planes, etc. Hybrid solvers are all designed for mixed integer linear programming (MILP) solving. We distinguish ILP from MILP solvers for two reasons: (1) The hybrid MILP frameworks strongly rely on solving LP relaxations, which is a computationally heavy procedure and depends on the underlying LP solver. (2) The MILP framework, while accepting continuous variables, does not make use of the characteristics of ILP and might be less efficient for ILP problems. Moreover, in practical industry applications, a large portion of problems belong to ILP, making it significant to build ILP solvers. Local search is a non-exhaustive method that plays an important role in solving combinatorial optimization problems (Hoos and Stutzle, 2004). Local search algorithms iteratively explore the neighborhood of a solution and move towards a better one. Its variants have been efficiently applied as subcomponents to BnB solvers as primal heuristics (Berthold, 2006; Achterberg, 2009; Hendel, 2018; Song et al., 2020). Despite many works on local search for specific problems (Jacobs and Brusco, 1995; Vaessens et al., 1996; Merz and Freisleben, 1997; Dorne and Hao, 1998; Stutzle, 2006), research on end-to-end local search algorithms for general ILP is pretty rare. Up to our knowledge, we only found (Prestwich and Verachi, 2008) that studies local search for general ILP, but is in fact tested on one special type of problem (the template design problem). Some attempts on sub-classes of ILP have been proposed, such as over-constrained ILP (or integer optimization) (Walser, 1998, 1999) and 0-1 ILP (Walser et al., 1998; Souza Brito et al., 2014; Umetani, 2017; Lei et al., 2021). Nevertheless, to the best of our knowledge, there is no result on general ILP solving tested on a broad range of heterogeneous problem benchmarks. We develop a new characterization of the solution space of ILP that makes use of the linearity of the objective function and constraints, namely the boundary solutions. We show that searching in boundary solutions is complete to the original ILP in the sense that all optimal solutions belong to boundary solutions. 
We also derive structural properties for designing local search algorithms. Motivated by the derived properties, we design our new operators, and develop an efficient local search solver for ILP. As far as we know, this is the first local search solver for general ILP validated on a variety of different problems (i.e., the MIPLIB dataset). We propose a local search framework that switches between three modes, namely \(Search\), \(Improve\), and \(Restore\) modes. Depending on the state of the best-found solution and the current solution, the framework chooses to execute the process of the appropriate mode, in which tailored operators are leveraged to improve the quality of the current solution according to different situations. For the \(Search\) and \(Restore\) modes, we propose an operator named _tight move_, that jointly considers variables and constraints' information and tries to make some constraint tight. To distinguish important constraints and help guide the search to find feasible and high quality solutions, we design a tailored weighting scheme and scoring function to select operations. For the \(Improve\) mode, an efficient operator _lift move_ is proposed to improve the quality of the objective function while maintaining feasibility. To drive _lift move_ operation, we propose a way to compute a variable's new candidate values called local domain reduction. Additionally, we also design a specialized scoring function for _lift move_ to pick a good operation. By putting these together, we develop a local search solver for integer linear programming called Local-ILP. Experiments are conducted to evaluate Local-ILP on the MIPLIB benchmark, in which the ILP instances labeled hard and open are selected. We compare our solver with the state-of-the-art non-commercial integer linear programming solver SCIP, as well as the state-of-the-art commercial solver Gurobi, and we use both the exact version and the heuristic version of Gurobi. Experimental results show that, within a reasonable time, Local-ILP is competitive and complementary with Gurobi, and significantly better than SCIP. Moreover, Local-ILP establishes six new records for MIPLIB instances by finding the new best solutions. We perform theoretical analysis of our operators and our algorithm based on the concept of boundary solutions, which are more explicit and closely related to the original form of ILP than the integral hull in polyhedral theory. We show that all feasible solutions visited by our algorithm are boundary solutions, efficiently avoiding unnecessary regions. On the other side, we show that the strategy adopted by our algorithm still maintains good connectivity, that the boundary and optimal solutions are of good connectivity in our algorithm. The remainder of the paper is organized as follows: Section 2 introduces the integer linear programming problem and basic concepts for the local search algorithm. Section 3 presents some basic characterizations of ILP we leveraged to design our local search algorithm. Section 4 proposes our local search framework for solving general integer linear programming. In sections 5 and 6, we introduce two new strategies that are key techniques for different modes of the framework. Section 7 presents detailed descriptions of how the Local-ILP algorithm implements the framework. The experimental results of our algorithm on public benchmark instances are reported in Section 8. The analysis of our algorithm is presented in Section 9, followed by the conclusion in Section 10. 
## 2 Preliminary In this section, we present some fundamental integer linear programming and local search concepts pertinent to the paper. ### Integer Linear Programming Problem An instance of generalized ILP has the following form: \[\begin{array}{ll}Minimize&c^{T}x\\ subject\ to&Ax\leq b\\ &x^{l}\leq x\leq x^{u}\\ &x\in\mathbb{Z}^{n}\end{array} \tag{1}\] over integer variables \(x\), where \(A\in\mathbb{R}^{m\times n}\), \(b\in\mathbb{R}^{m}\), \(c\in\mathbb{R}^{n}\), \(x^{l}\in(\mathbb{Z}\cup-\infty)^{n}\) and \(x^{u}\in(\mathbb{Z}\cup+\infty)^{n}\) are given inputs. We use \(F(x)=c^{T}x\) to denote the objective function. The goal of ILP is to minimize the value of the objective function, while satisfying all constraints. We denote the \(ith\) constraint in the constraint system \(Ax\leq b\) by \(con_{i}\): \(A_{i}x\leq b_{i}\), where \(A_{i}\in\mathbb{R}^{n}\), \(b_{i}\in\mathbb{R}\). The coefficient of \(x_{j}\) in \(con_{i}\) is \(A_{ij}\) and \(con_{i}\) contains \(x_{j}\) if \(A_{ij}\neq 0\). The variables' bounds are denoted by \(x^{l}\leq x\leq x^{u}\); they are indeed parts of \(Ax\leq b\), but will be treated separately from the coefficient matrix \(A\) in practical algorithms. The infinite value of \(x^{l}\) or \(x^{u}\) indicates that there is no lower or upper bound on the corresponding variable. A complete assignment (assignment for short) \(\alpha\) for an ILP instance \(Q\) is a mapping that assigns to each variable an integer, and \(\alpha(x_{j})\) denotes the value of \(x_{j}\) under \(\alpha\). An assignment \(\alpha\) of \(Q\) satisfies \(con_{i}\) in \(Ax\leq b\) if \(A_{i}\cdot\alpha(x)\leq b_{i}\). An assignment \(\alpha\) of \(Q\) is feasible if and only if it satisfies all constraints in \(Q\). The value of the objective function of a solution \(\alpha\) is denoted as \(obj(\alpha)\). ### Local Search Algorithm The local search algorithm is well used in solving combinatorial optimization problems. It explores the search space that is comprised of all assignments, each of which is a candidate solution. Normally, a local search algorithm starts with an assignment and iteratively alters the assignment by modifying the value of one variable, in order to find a feasible solution with a high-quality objective function value. The key to a local search algorithm is to decide, under different circumstances, which variables to modify and to what value, to get a new assignment. An **operator** defines how the candidate solution is modified, i.e., given variables to be modified, how to fix them to new values. When an operator is instantiated by specifying the variable to operate on, we obtain an **operation**. Given an operation \(op\), the scoring function \(score(op)\) is used to measure how good \(op\) is. An operation \(op\) is said to be **positive** if \(score(op)>0\), which indicates that performing \(op\) can improve the quality of the current assignment. ## 3 A Glance at New Characterizations of ILP for Local Search To facilitate the analysis of local search algorithms for ILP, we introduce some new characterizations of ILP. We briefly present here the intuitions that we leveraged to design our local search algorithm and will give precise formulations and theoretical arguments in Section 9. 
Currently, the most common theoretical tool to analyze ILPs is the polyhedral theory Schrijver (1998), of which the key component is the convex hull of all feasible solutions of an ILP (i.e., the integral hull), and it has established the equivalence between searching for the optimal solution and finding this integral hull. This equivalence became the motivation for the cutting plane method, which is widely used in commercial solvers, although the integral hull is also hard to compute and could be done in exponential time. However, the characterizations by integral hull are difficult to use for analyzing local search algorithms for ILP, as there is no explicit form of this integral hull and also, no direct relations with constraints in the original form of ILP. Thus, we develop another way to characterize solutions of ILP based on its original form, and show our operators and algorithm are suited for ILPs. An important characteristic of ILP, is its linearity of objective function and constraints, although the ILP does not have true convexity, it still has some properties of this nature, such as all feasible solutions staying in the feasible domain of its LP relaxation, which is convex. From this aspect, some simple facts could be observed, let \(J=\{1,...,n\}\), \(\boldsymbol{e}_{j}\) be the unit vector with 1 in \(j\)-th coordinate and 0 in all other places. There is: **Fact 1**: _If for \(\mathbf{x}_{1},\mathbf{x}_{2}\in\mathbb{Z}^{n}\), s.t. \(\exists j\in J,k\in\mathbb{Z},k>0\), \(\mathbf{x}_{2}=\mathbf{x}_{1}+k\mathbf{e}_{j}\), \(\mathbf{x}_{1},\mathbf{x}_{2}\) are both feasible for an ILP instance, then for \(\mathbf{x}^{\prime}=\mathbf{x}_{1}+t\mathbf{e}_{j}\), \(t\in\{0,...,k\}\), \(\mathbf{x}^{\prime}\) is also feasible._ **Fact 2**: _If for \(\mathbf{x}_{1},\mathbf{x}_{2}\in\mathbb{Z}^{n}\), s.t. \(\exists j\in J,k\in\mathbb{Z},k>0\), \(\mathbf{x}_{2}=\mathbf{x}_{1}+k\mathbf{e}_{j}\). Let \(F(\mathbf{x})\) be the objective function of an ILP instance. Then if \(F(\mathbf{x}_{1})\leq F(\mathbf{x}_{2})\) then for \(\mathbf{x}^{\prime}=\mathbf{x}_{1}+t\mathbf{e}_{j}\), \(t\in\{0,...,k\}\), \(F(\mathbf{x}_{1})\leq F(\mathbf{x}^{\prime})\leq F(\mathbf{x}_{2})\)._ In other words, solutions that lie between two solutions that are different in only one dimension have the objective value that is also in between (or could be equal to) the two solutions. So we can have some intuitions about ILP feasible solutions and optimal solutions: (1) all feasible solutions lie within a region (the integral hull) (2) all optimal solutions lie in the "boundary" of the region Furthermore, we call a set of solutions a **complete** search space to an ILP if all optimal solutions are contained in this space. We will formalize the concept of "boundary" in Section 9 and show that boundary solutions are complete for an ILP. For a search algorithm, we can distinguish three different stages: (1) find the location of the integral hull (= find a feasible solution) (2) find good solutions within the integral hull (= improve the quality of a feasible solution) (3) get back to the integral hull when jumping out of it (= from an infeasible solution to get a good feasible solution) Our algorithm and operators are designed based on these observations. We will first give descriptions of our algorithm in the following sections and present the formal definitions and analysis of our algorithm in Section 9. 
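Before turning to the framework, it may help to make the Section 2 notation concrete. The sketch below (Python; an illustration only - these are not the data structures used inside Local-ILP) shows one way to represent an instance and to evaluate constraint violation, feasibility, and the objective value of an assignment:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class ILPInstance:
    """Minimal container for: minimize c^T x s.t. Ax <= b, xl <= x <= xu, x integer."""
    A: List[List[float]]
    b: List[float]
    c: List[float]
    xl: List[int]
    xu: List[int]

    def violation(self, i: int, x: List[int]) -> float:
        """How much constraint con_i: A_i x <= b_i is violated (0 if satisfied)."""
        lhs = sum(self.A[i][j] * x[j] for j in range(len(x)))
        return max(0.0, lhs - self.b[i])

    def is_feasible(self, x: List[int]) -> bool:
        in_bounds = all(self.xl[j] <= x[j] <= self.xu[j] for j in range(len(x)))
        return in_bounds and all(self.violation(i, x) == 0.0 for i in range(len(self.b)))

    def objective(self, x: List[int]) -> float:
        return sum(cj * xj for cj, xj in zip(self.c, x))

# toy instance: minimize -x1 - x2  s.t.  2 x1 + x2 <= 4,  x1 + 3 x2 <= 6,  0 <= x <= 5
inst = ILPInstance(A=[[2, 1], [1, 3]], b=[4, 6], c=[-1, -1], xl=[0, 0], xu=[5, 5])
print(inst.is_feasible([1, 1]), inst.objective([1, 1]))  # True -2
print(inst.is_feasible([2, 1]))                          # False (2*2 + 1 > 4)
```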
## 4 A New Local Search Framework for ILP We propose a new local search framework that takes advantage of adopting three different modes, namely \(Search\), \(Improve\), and \(Restore\) modes. In each mode, we use different operators to explore new assignments and thus diversify different search behaviors during different periods. \(\bullet\)**Search mode**. The algorithm initially works in \(Search\) mode and tries to find a feasible assignment. When a feasible assignment is found, the algorithm goes into the next phase and switches between \(Improve\) and \(Restore\) modes. * **Improve mode**. Based on a known feasible solution, the \(Improve\) mode attempts to improve the value of the objective function while maintaining feasibility. If no such operation is found, the algorithm will break feasibility, obtain an infeasible solution, and enter \(Restore\) mode. * **Restore mode**. The restore mode is to repair an infeasible solution to a good feasible solution. For an infeasible solution, the \(Restore\) mode concentrates on searching for a new high-quality feasible solution by considering objective function and repairing the feasibility, and then returns to \(Improve\) mode after success. As depicted in Figure 1, after initialization, which generates an assignment \(\alpha\), the algorithm works in three modes: \(Search\) mode, \(Improve\) mode and \(Restore\) mode. In each mode \(X\) (\(X\) is \(Search\), \(Improve\) or \(Restore\)), an \(X\) operation is iteratively picked to modify \(\alpha\), where an \(X\) operation refers to an operation that is customized for mode \(X\). When the feasibility of \(\alpha\) is changed in the current mode, the algorithm shifts to another mode as we explained. An outline of the framework is presented in Algorithm 1. In the beginning, the algorithm constructs an assignment \(\alpha\), and initializes the best-found solution \(\alpha^{*}\) as \(\emptyset\) and its objective value to \(+\infty\). The core of the algorithm consists of a loop (lines 2-7) in which assignment \(\alpha\) is modified iteratively until a given time limit is reached. Depending on the state of the assignments \(\alpha\) and \(\alpha^{*}\), it chooses, _Search_, _Improve_, or _Restore_ modes to perform operations on variables (lines 4-6). Once a better feasible solution is discovered during the search, \(\alpha^{*}\) and \(obj^{*}\) are updated correspondingly (line 3). The search is restarted if \(\alpha^{*}\) is not updated within a sufficient number of steps (line 7). When the time limit is reached, the algorithm returns the best-found solution \(\alpha^{*}\), and its objective value \(obj^{*}\). Note that, if the algorithm fails to find any feasible solution during the search, then \(\alpha^{*}\!=\!\emptyset\) and \(obj^{*}\!=\!+\infty\). 
Figure 1: A Local Search Framework for ILP ``` Input: ILP instance \(Q\), cut-off time \(cut\)\(off\) Output: A solution \(\alpha\) of \(Q\) and its objective value 1\(\alpha\) := an initial solution ; \(\alpha^{*}\) := \(\emptyset\) ; \(obj^{*}\) := +\(\infty\) ; 2while\(running\)\(time\) < \(cut\)\(off\)do 3if\(\alpha\) is feasible and\(obj(\alpha)\) < \(obj^{*}\)then\(\alpha^{*}\) := \(\alpha\); \(obj^{*}\) := \(obj(\alpha)\) ; 4if\(\alpha^{*}\) := \(\emptyset\)then perform operation for \(Search\) Mode ; 5elseif\(\alpha\) is feasiblethen perform operation for \(Improve\) Mode ; 6else perform operation for \(Restore\) Mode ; 7ifenough steps to not improvethen restart ; 8 return\(\alpha^{*}\) and \(obj^{*}\) ; ``` **Algorithm 1**Local Search Framework for ILP Now we introduce the process inside each mode. All three modes adopt a general procedure as described in Algorithm 2. It prefers to pick a positive operation (according to some heuristic) if one exists. If no such operation exists, at which point the algorithm is stuck in a local optimum, a random perturbation operation on \(\alpha\) is performed. Our three different modes adopt different operators to generate new assignments. In the following two sections, we present those operations (also called \(X\) operations) in each mode, which contain two new strategies raised by us, namely _tight move_ and _lift move_, and section 7 presents the whole Local-ILP algorithm that implements the above framework. ``` Input: ILP instance \(Q\), a solution: \(\alpha\) 1if\(\exists\) positive \(X\) operationsthen\(op\) := a positive \(X\) operation ; 2else\(op\) := a random perturbation \(X\) operation to escape local optima ; 3 perform \(op\) to modify \(\alpha\); ``` **Algorithm 2**Process for Mode \(X\) ## 5 Tight Move Operator The tight move (_tm_) operator is used in \(Search\) mode and \(Restore\) mode, with an efficient weighting scheme and a scoring function we designed that will be presented later. ### Tight Move Our tight move operator for integer linear programming is defined below. **Definition 1**: Given an assignment \(\alpha\), the **tight move operator**, denoted as \(tm(x_{j},con_{i})\), assigns an integer variable \(x_{j}\) to the value making the constraint \(con_{i}\) as tight as possible and keeping the \(x_{j}\)'s bounds satisfied. Here, \(con_{i}\) contains \(x_{j}\) and could be either violated or satisfied. Precisely, let \(\Delta=b_{i}-A_{i}\cdot\alpha(x)\), a \(tm\) operation is: * if \(\Delta<0\): \(con_{i}\) is violated. There exists an _tm_ operation \(tm(x_{j},con_{i})\) for variable \(x_{j}\): --if \(A_{ij}<0\), then \(tm(x_{j},con_{i})\) increases \(\alpha(x_{j})\) by \(min(\left\lceil\frac{\Delta}{A_{ij}}\right\rceil,x_{j}^{u}-\alpha(x_{j}))\) --if \(A_{ij}>0\), then \(tm(x_{j},con_{i})\) decreases \(\alpha(x_{j})\) by \(min(\left\lfloor\frac{\Delta}{A_{ij}}\right\rfloor,\left\lvert x_{j}^{l}- \alpha(x_{j})\right\rvert)\) * if \(\Delta\geq 0\): \(con_{i}\) is satisfied. There exists an _tm_ operation \(tm(x_{j},con_{i})\) for variable \(x_{j}\): --if \(A_{ij}<0\), then \(tm(x_{j},con_{i})\) decreases \(\alpha(x_{j})\) by \(min(\left\lvert\left\lceil\frac{\Delta}{A_{ij}}\right\rfloor,\left\lvert x_{j}^ {l}-\alpha(x_{j})\right\rvert)\) --if \(A_{ij}>0\), then \(tm(x_{j},con_{i})\) increases \(\alpha(x_{j})\) by \(min(\left\lfloor\frac{\Delta}{A_{ij}}\right\rfloor,x_{j}^{u}-\alpha(x_{j}))\) Our tight move operator has two properties while keeping the \(x_{j}\)'s bounds satisfied: 1. 
If \(con_{i}\) is violated, it makes \(con_{i}\) as close to being satisfied as possible while minimally influencing other constraints to be violated, as it selects the minimal change of \(x_{j}\) to make \(con_{i}\) do so. 2. If \(con_{i}\) is satisfied, it can push \(x_{j}\) to its extreme value, which keeps \(con_{i}\) satisfied. This explores the influence of the maximal change of \(x_{j}\) to the objective function and also helps to jump out of the local optimum. Remember, our local search always tries to find a positive operation to continuously improve \(\alpha\). When there is no positive \(tm\) operation in \(\alpha\), we may use another form of the tight move operator, the **paired tight move**, which executes two \(tm\) operations consecutively. We apply the paired tight move operator in _Search_ mode. **Definition 2**: The **paired tight move**, denoted as \(ptm(x_{j},con_{i},x_{j^{\prime}},con_{i^{\prime}})\), performs \(tm(x_{j},con_{i})\) and \(tm(x_{j^{\prime}},con_{i^{\prime}})\), where \(con_{i^{\prime}}\) is a constraint satisfied before performing \(tm(x_{j},con_{i})\) and violated after, \(x_{j^{\prime}}\) is a variable contained in \(con_{i^{\prime}}\) and \(tm(x_{j^{\prime}},con_{i^{\prime}})\) is constructed after performing \(tm(x_{j},con_{i})\). Since after performing the first operation \(tm(x_{j},con_{i})\) there may be many \(tm(x_{j^{\prime}},con_{i^{\prime}})\) to choose as the second operation. If we consider all such operations, the construction and score calculation of \(ptm(x_{j},con_{i},x_{j^{\prime}},con_{i^{\prime}})\) will be very time-consuming. To avoid such situations, we designed a two-level strategy to filter the \(con_{i^{\prime}}\) for choosing the second operation: 1. First, we consider "fragile" constraints that are exactly satisfied (\(A_{i^{\prime}}\cdot\alpha(x)=b_{i^{\prime}}\)) before performing \(tm(x_{j},con_{i})\) and violated after. 2. If no fragile constraint exists, we consider secure constraints that are overly satisfied (\(A_{i^{\prime}}\cdot\alpha(x)<b_{i^{\prime}}\)) before performing \(tm(x_{j},con_{i})\) and violated after. We only consider these candidate operations and use the scoring function to select them. ### Scoring Function for Tight Move During the local search process, scoring functions are used to compare different operations and pick one to execute in each step. For the _tight move_, we propose a weighted scoring function. Our scoring function has two ingredients: a weighting scheme to distinguish important constraints, and score computations to select operations. #### 5.2.1 Weighting Scheme In order to guide the search process, weighting techniques are widely applied in local search algorithms, and are used primarily to increase the weight of violated constraints, hence guiding the search process towards satisfying violated constraints. Here we present the weighting scheme we utilize, which is based on the probabilistic version of the PAWS scheme (Cai and Su 2013, Thornton et al. 2004) and will be further used in the scoring function for selecting operations. We assign an integral weight to each constraint and also the objective function, which are denoted as \(w(con_{i})\) and \(w(obj)\), respectively. At the beginning of the search, these weights are initialized to 1. 
To prevent too large values, we set an upper limit \(ul_{con}\) and \(ul_{obj}\) to them, respectively: * for the weight of the constraints, \(ul_{con}=max(IL,nCons)\), where \(IL\) is a parameter and \(nCons\) denotes the number of constraints in the instance. * for the weight of the objective function, \(ul_{obj}=ul_{con}/wf\), where \(wf\in(1,+\infty)\) is a parameter. The values of all parameters mentioned here and in the following sections are set in Table 1 in our main algorithm. For weight setting, only when a solution is feasible do we consider it meaningful to improve the quality of the objective function. With this consideration, \(w(obj)\) should not be too large compared to the weights of the constraints. When the weighting scheme is activated, the weights of constraints and objective function are updated according to a parameter \(sp\) as follows: * Once the weighting scheme is activated, the weights of the constraints are updated: -- with probability \(1-sp\), for each violated constraint \(con_{i}\), if \(w(con_{i})<ul_{con}\), \(w(con_{i})\) is increased by one. -- with probability \(sp\), for each satisfied constraint \(con_{i}\), if \(w(con_{i})>1\), \(w(con_{i})\) is decreased by one. * The weight of the objective function is updated only if any feasible solution is found: -- with probability \(1-sp\), if \(c^{T}\cdot\alpha(x)\geq obj^{*}\) and \(w(obj)<ul_{obj}\), \(w(obj)\) is increased by one. -- with probability \(sp\), if \(c^{T}\cdot\alpha(x)<obj^{*}\) and \(1<w(obj)\), \(w(obj)\) is decreased by one. We see next that the weights are used in score functions to select operations. By changing the weights of constraints and thus focusing on constraints that are often violated in local optima, we help the local search process find feasible solutions. The weight updates for the objective function help guide the search towards solutions with better objective values. #### 5.2.2 Score Computations Based on the weighting scheme, we now introduce the scoring function for _tight move_, which helps the local search algorithm pick a \(tm\) operation to execute for the next step. Specifically, it has three parts: score for reducing the overall violated penalty, score for reducing the violation degree of constraints, and score for improving the objective function. * **score for reducing the overall violation penalty**. If a constraint \(con_{i}\) is violated, we think it incurs a penalty of \(w(con_{i})\). The score of reducing the violated penalty of a \(tm\) operation \(op\), denoted by \(ROP(op)\), is the quantity of the total violated penalty decreased by performing \(op\). * **score for reducing the violation degree of constraints**. If a violated constraint \(con_{i}\) is still violated after performing \(op\), but \(A_{i}\cdot\alpha(x)\) is reduced, i.e., \(con_{i}\) is closer to being satisfied, it incurs a reward of \(w(con_{i})\); otherwise, if \(A_{i}\cdot\alpha(x)\) is increased and exacerbates \(con_{i}\)'s violation, the reward is \(-w(con_{i})\). The score of this impact of a \(tm\) operation \(op\), denoted by \(RVD(op)\), is set as the total reward obtained by performing \(op\). * **score for improving the objective function**. The score for improving the objective function of a \(tm\) operation \(op\) is denoted by \(IOF(op)\). If the value of the objective function after performing \(op\) is smaller than \(obj^{*}\), \(IOF(op)=w(obj)\); otherwise, \(IOF(op)=-w(obj)\). 
The **tight move score** of a \(tm\) operation \(op\) is defined as \[score_{tm}(op)=ROP(op)+RVD(op)+IOF(op)\] ### Discussion on Tight Move When local search algorithms are applied to classical combinatorial optimization problems, their operators are usually designed to flip 0-1 variables (e.g., knapsack), change the value of a variable in a finite and usually small domain (e.g., graph coloring problem), or change the order of some elements in a permutation (e.g., TSP, scheduling). In MILP primal heuristics, local search primarily performs neighborhood search on the optimal solution of LP relaxations. However, these designs of operators do not consider the characteristics of general ILP problems, i.e., the general linear form of constraints rather than specific types, and variables' ranges are some integral intervals. For solving integer programming, which may have infinite domains, a naive operator of local search is to modify the value of a variable \(x_{j}\) by a fixed increment \(inc\), i.e., \(\alpha(x_{j})\!:=\!\alpha(x_{j})\pm inc\). However, the \(inc\) is hard to choose and needs to be fine tuned: if \(inc\) is too small, it may take many iterations before making any violated constraints satisfied; if \(inc\) is too large, the algorithm may even become so troublesome that it can never satisfy some constraints, and therefore be hard to find a feasible solution. Our tight move operator modifies variable values according to the current solution and constraints and thus automatically adapts to different situations. The tight move operator, which considers both the information of variables and constraints, takes the idea of the Simplex method to modify a variable's value to make a constraint tight. The tight move is inspired by an operator named critical move, which is proposed in the context of solving Satisfiability Modulo Integer Arithmetic Theories (Cai et al., 2022). The critical move operator assigns a variable \(x_{j}\) to the threshold value making a violated literal (similar to a constraint in ILP) true, while our tight move operator keeps variables satisfying their global bounds, and takes the possibility to modify variables from satisfied constraints, thus expanding the range of operations to choose from. We will show in Section 9 that although ILPs do not have classical convexity, our tight move operator still has good theoretical properties to produce promising solutions and avoid visiting unnecessary search space. ## 6 Lift Move Operator In this section, we introduce the lift move operator, which is the key technique of the \(Improve\) mode. The property of lift move operator is that it can improve the quality of the objective function while maintaining feasibility. For this purpose, we propose a way to compute a variable's new candidate values called local domain reduction, and a specialized scoring function for lift move. ### Local Domain Reduction In order to maintain the feasibility of a feasible solution, we must ensure that it satisfies every constraint. Therefore, we propose the local domain reduction to compute such a range to change a variable. For a variable \(x_{j}\) in a feasible solution \(\alpha\), its **local feasible domain**, denoted as \(ld(x_{j},\alpha)\), is an interval for \(x_{j}\) that when \(x_{j}\) varies within this interval and all other variables stay unchanged, the satisfiability of all constraints will be kept unchanged. We call the local domain reduction the process to compute this local feasible domain \(ld(x_{j},\alpha)\). 
In order to compute \(ld(x_{j},\alpha)\), we consider the feasible domain of \(x_{j}\) in \(con_{i}\), which is denoted as \(ldc(x_{j},con_{i},\alpha)\) meaning the \(x_{j}\) can vary within this interval while keeping the satisfiability of \(con_{i}\), assuming other variables in \(\alpha\) keep unchanged, where \(con_{i}\) is a constraint containing \(x_{j}\). Specifically, let \(\Delta=b_{i}-A_{i}\cdot\alpha(x)\), \(ldc(x_{j},con_{i},\alpha)\) is derived according to the sign of \(A_{ij}\). If \(A_{ij}\;<\;0\), then \(ldc(x_{j},con_{i},\alpha)=\left[\alpha(x_{j})+\left\lceil\frac{\Delta}{A_{ij}} \right\rceil,+\infty\right)\), otherwise \(ldc(x_{j},con_{i},\alpha)=\left(-\infty,\alpha(x_{j})+\left\lfloor\frac{\Delta }{A_{ij}}\right\rfloor\right]\). Then with \(ldc(x_{j},con_{i},\alpha)\), we calculate the local feasible domain of \(x_{j}\) as follows: \[ld(x_{j},\alpha)=\left(\cap_{i}ldc(x_{j},con_{i},\alpha)\right)\cap bd(x_{j})\] where \(bd(x_{j})\) is the global bound of \(x_{j}\), i.e., \(x_{j}^{l}\leq x_{j}\leq x_{j}^{u}\). ### Lift Move Clearly, according to the above, moving \(x_{j}\) within integers of \(ld(x_{j},\alpha)\) does not break the feasibility of \(\alpha\) as long as the other variables are kept constant. So once the current solution \(\alpha\) is feasible, we can choose a reasonable integer in \(ld(x_{j},\alpha)\) for updating \(x_{j}\) to improve the quality of the objective function. We create the lift move operator for this purpose, which is based on the local domain reduction. For a feasible solution \(\alpha\), the **lift move operator**, denoted as \(lm(x_{j},\alpha)\), assigns \(x_{j}\) to the upper or lower bound of \(ld(x_{j},\alpha)\) to improve the objective function at most. Specifically, let \(c_{j}\) denote the coefficient of \(x_{j}\) in \(c^{T}x\), an \(lm\) operation is described as follows: * If \(c_{j}<0\), then \(lm(x_{j},\alpha)\) assign \(\alpha(x_{j})\) to the upper bound of \(ld(x_{j},\alpha)\). * If \(c_{j}>0\), then \(lm(x_{j},\alpha)\) assign \(\alpha(x_{j})\) to the lower bound of \(ld(x_{j},\alpha)\). ### Scoring Function for Lift Move There are multiple variables in the objective function that could be used to construct a \(lm\) operation. To guide the search in \(Improve\) mode, we customize a scoring function to select a \(lm\) operation. Since all \(lm\) operations will maintain the feasibility of the solution, we propose **lift score** to measure the improvement of the objective function. Definition 6.: The lift score of an \(lm\) operation \(op\) is defined as \[score_{lm}(op)=obj(\alpha)-obj(\alpha^{\prime})\] where \(\alpha\) and \(\alpha^{\prime}\) denotes the assignment before and after performing \(op\). In the _Improve_ mode, we pick the operation with the best \(score_{lm}(op)\). ## 7 Local-ILP Algorithm Based on the ideas in previous sections, we develop a local search solver for integer linear programming called Local-ILP. As described in Section 3, after initialization, the algorithm works in three modes, namely \(Search\), \(Improve\), and \(Restore\) mode to iteratively modify \(\alpha\) until a given time limit is reached. This section is dedicated to the details of the initialization and the three modes of local search, as well as other optimization techniques. **Initialization**: Local-ILP generates an assignment \(\alpha\), by assigning the variables one by one until all variables are assigned. 
As for a variable \(x_{j}\): if \(x_{j}^{l}>0\), it is assigned \(x_{j}^{l}\); if \(x_{j}^{u}<0\), it is assigned \(x_{j}^{u}\); otherwise, the variable is set to 0.

### Search Mode

In \(Search\) mode (Algorithm 3), the goal of the algorithm is to find the first feasible solution. If there exist positive \(tm\) operations or _ptm_ operations in violated constraints, the algorithm chooses an operation by a two-level heuristic that decides whether to apply \(tm\) or \(ptm\): it first searches for a positive \(tm\) operation with the greatest \(tm\) score (line 1-2); if no such operation exists, it searches for a positive \(ptm\) operation with the greatest \(tm\) score (line 3-4). If the algorithm fails to find any positive operation, it first updates the weights of constraints according to the weighting scheme described in Section 5 (line 6). Then, it picks a random violated constraint and chooses from it a \(tm\) operation with the greatest \(tm\) score (line 7).

### Improve Mode

The \(Improve\) mode (Algorithm 4) seeks to improve the objective value while maintaining the feasibility of a feasible solution. If no such improving operation is found, the algorithm will break feasibility, obtain an infeasible solution, and enter \(Restore\) mode.

```
Input: ILP instance \(Q\), a feasible solution \(\alpha\)
1  if \(\exists\) positive \(lm\) operation then
2      \(op\) := such an operation with the greatest \(score_{lm}\) ;
3  else
4      \(op\) := a unit incremental move in the objective function within variables' global bounds ;
5  perform \(op\) to modify \(\alpha\) ;
```
**Algorithm 4** Process for Improve Mode

If there exist positive \(lm\) operations, the algorithm selects the one with the highest \(score_{lm}\) (line 1-2). If the algorithm fails to find any positive \(lm\) operation, it randomly picks a variable \(x_{j}\) in the objective function and performs a simple **unit incremental move** on \(x_{j}\) according to its coefficient \(c_{j}\) (line 3-4): specifically, if \(c_{j}<0\), then \(\alpha(x_{j})=\alpha(x_{j})+1\); otherwise \(\alpha(x_{j})=\alpha(x_{j})-1\). If a unit incremental move would break the global bound of a variable, we randomly select another variable. A unit incremental move that keeps all global bounds must exist; otherwise, every variable already sits at the bound value that improves the objective function, which means the current assignment is the optimal solution and we could finish the search.
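Before turning to the \(Restore\) mode, a minimal C++ sketch may help make the \(Improve\)-mode step concrete: it shows how the local feasible domain \(ld(x_{j},\alpha)\) of Section 6.1 and the value chosen by an \(lm\) operation (Section 6.2) could be computed. The names and the dense-matrix representation are ours for illustration, not the actual Local-ILP implementation.

```
#include <vector>
#include <cmath>
#include <algorithm>

// Hypothetical sketch of Sections 6.1-6.2 (illustrative, not the authors' code).
struct ILP {
    std::vector<std::vector<double>> A;  // constraint coefficients, A[i][j]; A x <= b
    std::vector<double> b;               // right-hand sides
    std::vector<double> c;               // objective coefficients
    std::vector<long long> lb, ub;       // global variable bounds
};

// Given a feasible assignment alpha, return the value lm(x_j, alpha) would assign to x_j.
// Assumes c[j] != 0, since lm operations are only built for objective variables.
long long lift_move_value(const ILP& p, const std::vector<long long>& alpha, int j) {
    long long lo = p.lb[j], hi = p.ub[j];            // start from bd(x_j)
    for (size_t i = 0; i < p.A.size(); ++i) {
        double a_ij = p.A[i][j];
        if (a_ij == 0.0) continue;                    // con_i does not contain x_j
        double slack = p.b[i];                        // Delta = b_i - A_i * alpha(x)
        for (size_t k = 0; k < p.A[i].size(); ++k) slack -= p.A[i][k] * alpha[k];
        if (a_ij > 0)                                 // ldc = (-inf, alpha(x_j) + floor(Delta/a_ij)]
            hi = std::min(hi, alpha[j] + (long long)std::floor(slack / a_ij));
        else                                          // ldc = [alpha(x_j) + ceil(Delta/a_ij), +inf)
            lo = std::max(lo, alpha[j] + (long long)std::ceil(slack / a_ij));
    }
    // lm assigns x_j to the end of ld(x_j, alpha) that improves c^T x the most
    return (p.c[j] < 0) ? hi : lo;
}
```

Since the assignment is feasible and all other variables stay unchanged, every integer in the resulting interval keeps all constraints satisfied, which is exactly why \(lm\) operations preserve feasibility.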
### Restore Mode

For an infeasible solution obtained from the \(Improve\) mode, the \(Restore\) mode (Algorithm 5) focuses on repairing feasibility to obtain a new high-quality feasible solution.

```
Input: ILP instance \(Q\), an infeasible solution \(\alpha\)
1   if \(\exists\) positive \(tm\) operation in violated constraints then
2       \(op\) := such an operation with the greatest \(score_{tm}\) ;
3   else if \(\exists\) positive \(tm\) operation in satisfied constraints then
4       \(op\) := such an operation with the greatest \(score_{tm}\) ;
5   else
6       update constraint weights ;
7       if \(\exists\) positive \(tm\) operation in a random violated constraint then
8           \(op\) := such an operation with the greatest \(score_{tm}\) ;
9       else \(op\) := move a random variable in the largest-weighted violated constraint to its corresponding global bound value ;
10  perform \(op\) to modify \(\alpha\) ;
```
**Algorithm 5** Process for Restore Mode

If there exist positive \(tm\) operations, the algorithm uses a two-level heuristic to choose one: it first searches for a positive \(tm\) operation with the greatest \(tm\) score in violated constraints (line 1-2); if no such operation exists, it searches for a positive \(tm\) operation with the greatest \(tm\) score in satisfied constraints (line 3-4). If the algorithm fails to find any positive operation, it first updates weights similarly to the \(Search\) mode (line 6). Then, it picks a random violated constraint and searches for a positive \(tm\) operation with the greatest \(tm\) score in it (line 7-8). If no such operation exists, let \(con_{k}\) denote the violated constraint with the largest weight; the algorithm randomly selects a variable \(x_{j}\) in \(con_{k}\) and moves its value to its global bound value based on the coefficient: if \(A_{kj}<0\), then \(\alpha(x_{j})=x_{j}^{u}\); otherwise \(\alpha(x_{j})=x_{j}^{l}\).

### Optimization Techniques

The classical techniques of local search are applied in our algorithm, including random sampling, the tabu forbidding strategy (Pappalardo and Ozkok 2013), and restart.

**Random Sampling**: We use a sampling method called Best from Multiple Selections, dubbed BMS (Cai 2015). For all constraints that need to be considered in \(Search\) mode and \(Restore\) mode, the algorithm randomly samples a certain number of them, and among all operations derived from the sampled constraints, it randomly samples a certain number of operations and selects the one with the highest \(score_{tm}\). There are in total seven parameters in the algorithm that control the number of samples:

* \(c_{v}\) for violated constraints, and \(o_{v}\) for \(tm\) operations in sampled violated constraints
* \(c_{s}\) for satisfied constraints, and \(o_{s}\) for \(tm\) operations in sampled satisfied constraints
* \(c_{p}\) for \(tm\) operations in sampled violated constraints used to construct \(ptm\) operations, and \(o_{p}\) for \(ptm\) operations constructed from the sampled \(tm\) operations
* \(o_{r}\) for \(tm\) operations in a random violated constraint.

**Forbidding Strategy**: We employ a forbidding strategy, the tabu strategy, to address the cycle phenomenon (i.e., revisiting some searched regions). This is a popular practice in local search algorithms. After an operation is executed, the tabu strategy forbids the reverse operation in the following \(tt\) iterations, where \(tt\) is a parameter usually called the tabu tenure. The tabu strategy is directly applied to Local-ILP. If a \(tm\) operation that increases (decreases, resp.) the value of an integer variable \(x_{j}\) is performed, then it is forbidden to decrease (increase, resp.)
the value of \(x_{j}\) by \(tm\) operation in the following \(tt\) iterations. **Restart Mechanism:** The search is restarted when \(\alpha^{*}\) is not updated for \(MNI\) iterations, where \(MNI\) is a parameter. At each restart, we hybridize \(\alpha^{*}\) and random integers within variables' bounds to reset \(\alpha\): \(x_{j}\) is assigned to \(\alpha^{*}(x_{j})\) or a random integer within \([x_{j}^{l},x_{j}^{u}]\) with 50% probability of each, respectively. Additionally, all weights are restored to 1 when restarting. ## 8 Experiments We carry out experiments to evaluate Local-ILP on the MIPLIB data set ([https://miplib.zib.de/](https://miplib.zib.de/)), which is a standard data set including a broad range of types of problems. We compare our Local-ILP with state-of-the-art ILP solvers in terms of their performance on the quality of the best-found solution and the ability to find a feasible solution, and we analyze the critical difference between the considered solvers. Also, experiments are conducted to analyze the effectiveness of the proposed new operators. Additionally, the MIPLIB dataset records the best known solutions for its instances, and we've established new records for 6 instances in the MIPLIB by Local-ILP. ### Experiment Preliminaries #### 8.1.1 Implementation Local-ILP is implemented in C++ and compiled by g++ with '-O3' option. There are four types of parameters: the steps to restart, the tabu tenure for the tabu scheme, the updating parameters for the weighting scheme, and the sampling numbers for the BMS heuristic. They are set as Table 1 for all instances. We use the mersenne twister algorithm (Matsumoto and Nishimura 1998) to generate random numbers and always use the default random seed 5489 of the GNU ISO C++ Library. #### 8.1.2 Competitors We compare Local-ILP with the latest state-of-the-art commercial and non-commercial ILP solvers, namely Gurobi 10.0.0 (2022) and SCIP 8.0.1(2021). For Gurobi, we use both its exact and heuristic versions. The binaries of all competitors are downloaded from their websites. For all of the competitors, we always use their default parameter settings. \begin{table} \begin{tabular}{c|c|c|c c|c c c c c c c} \hline Module & Restart & Tabu & \multicolumn{4}{c|}{Weighting} & \multicolumn{4}{c}{BMS Heuristic} \\ \hline Parameter & \(MNI\) & \(tt\) & \(IL\) & \(wf\) & \(sp\) & \(c_{v}\) & \(o_{v}\) & \(c_{s}\) & \(o_{s}\) & \(c_{p}\) & \(o_{p}\) & \(o_{r}\) \\ \hline Value & 1500000 & 3+rand(10) & 1000 & 10 & 0.03\% & 4 & 63750 & 82 & 305 & 60 & 70 & 150 \\ \hline \end{tabular} \end{table} Table 1: Parameter setting for all instances. #### 8.1.3 Benchmarks MIPLIB is widely recognized as the standard benchmark set for integer linear programming problems, and is the most commonly used dataset in the current research literature. Our experiments are carried out with the union of MIPLIB 2003 (Achterberg et al., 2006), MIPLIB 2010 (Koch et al., 2011) and MIPLIB 2017 (Gleixner et al., 2021), selecting the ILP instances tagged hard and open, where the open instances are those that no solver has yet reported having successfully solved, and the hard instances are those that the tested solvers were unable to solve in one hour. We set them as the test set because they represent the problems that are the most challenging and hard to solve in MIPLIB. As Local-ILP is not an exhaustive solver, infeasible instances are excluded, resulting in a benchmark consisting of 121 instances. 
#### 8.1.4 Experiment Setup

All experiments are carried out on a server with an AMD EPYC 7763 64-core 2.45GHz CPU and 2048G RAM, running Ubuntu 20.04.4. For each instance, each solver is executed with one thread and time limits of 10, 60, and 300 s. Each solver outputs the best-found solution at the end of the execution.

We evaluate two abilities of a solver: finding a high-quality feasible solution within a reasonable time, which is very meaningful in practice and a key metric for measuring solvers, and finding any feasible solution within a reasonable time, which is another crucial measure of solver performance and ensures the usability of a solver. For each given time limit, we count _#win_, the number of instances where a solver finds the best solution among all solutions output by the tested solvers, and _#feas_, the number of instances where a solver finds a feasible solution within the time limit. Note that, for each instance, if the best solution is found by more than one solver (e.g., multiple solvers find the optimal solution), the instance is counted as \(\#win\) for all of these solvers; if all solvers find no solution, the instance is not counted as \(\#win\) for any solver. We organize the overall results into two types (\(\#win\) and \(\#feas\)) and report the number of each type for each solver. For each time limit setting, the best results are marked in **bold**.

### Results on MIPLIB

In the MIPLIB dataset, each instance may contain multiple types of constraints ([https://miplib.zib.de/statistics.html](https://miplib.zib.de/statistics.html)), such as knapsack constraints, set covering constraints, and so on. We categorize all instances by their main constraint class, i.e., the constraint type that appears most frequently in the instance. There might be more than one main constraint class: if an instance contains multiple main constraint classes (i.e., several constraint types appear equally most often), we mark it as hybrid. The results of the comparison with state-of-the-art solvers in terms of the quality of the best-found solution and the ability to find a feasible solution are shown in Tables 2 and 3, respectively.

#### 8.2.1 The Quality of the Best Found Solution

For each solver, we present the number of instances where it found the best solution among all candidate solvers. As shown in Table 2, Local-ILP performs best for 9 of the 15 main constraint classes in the 10s time limit, 8 classes in the 60s time limit, and 4 classes in the 300s time limit. Local-ILP wins the most classes in the 10s and 60s time limits, and the second most in the 300s time limit. In particular, Local-ILP exhibits the best performance for the hybrid, mixed binary, and variable bound main constraint classes over all time limits. Overall, for the ability to find high-quality solutions (\(\#win\)) across all instances, SCIP performs the worst and wins only 6 instances of the dataset in each time limit setting. Local-ILP and Gurobi perform much better than SCIP.
For the comparison between Local-ILP and Gurobi, Local-ILP consistently performs best in the 10s and 60s time limits, but \begin{table} \begin{tabular}{l|l|l l l|l l l|l l l l} \hline \multirow{2}{*}{Main Constraint Class} & \multirow{2}{*}{\#inst} & \multicolumn{6}{c|}{\#win} \\ \cline{3-11} & & \multicolumn{3}{c|}{10 s} & \multicolumn{3}{c|}{60 s} & \multicolumn{3}{c}{300 s} \\ \cline{3-11} & & Local-ILP & SCIP & Gurobi & \multicolumn{3}{c|}{Local-ILP} & SCIP & Gurobi & \multicolumn{3}{c}{Local-ILP} & SCIP & Gurobi \\ \cline{3-11} & & & exact & heur & \multicolumn{3}{c}{exact} & \multicolumn{3}{c}{heur} & \multicolumn{3}{c}{exact} & \multicolumn{3}{c}{heur} & \multicolumn{3}{c}{exact} & \multicolumn{3}{c}{heur} \\ \hline Aggregations & 2 & 0 & 0 & **1** & 0 & 0 & 0 & **1** & 0 & 0 & **1** \\ Bin Packing & 2 & **1** & 0 & **1** & **1** & 0 & 0 & **2** & **2** & 0 & 0 & 1 & **2** \\ Singleton & 2 & 0 & **1** & 0 & **1** & 0 & 0 & **1** & **1** & 0 & 0 & **1** & **1** \\ Equation Knapsack & 3 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ Knapsack & 4 & **2** & 0 & 0 & **2** & 2 & 0 & 0 & **3** & 1 & 0 & 1 & **4** \\ Set Packing & 5 & 1 & 0 & **3** & 2 & **2** & 0 & 1 & **2** & 0 & 1 & **2** \\ Cardinality & 6 & **1** & 0 & **1** & **1** & **2** & 0 & 0 & 1 & 0 & 0 & 1 & **2** \\ Hybrid & 7 & **3** & 0 & 1 & 1 & **4** & 0 & 1 & 1 & **3** & 0 & 1 & 2 \\ Mixed Binary & 8 & **3** & 1 & 2 & **2** & **3** & 0 & 2 & 0 & **3** & 1 & 1 & 0 \\ Set Partitioning & 9 & 3 & 0 & 3 & **4** & **3** & 1 & **3** & 2 & 1 & 0 & **4** & **4** \\ Set Covering & 11 & **4** & 0 & **4** & 3 & 2 & 1 & 3 & **5** & 1 & 1 & 5 & **6** \\ Precedence & 13 & **7** & 0 & 5 & 6 & **6** & 1 & 2 & 3 & 4 & 0 & 4 & **5** \\ General Linear & 15 & 1 & 2 & **6** & **6** & 0 & 1 & 6 & **8** & 0 & 2 & 6 & **8** \\ Variable Bound & 16 & **10** & 0 & 3 & 2 & **11** & 0 & 3 & 3 & **7** & 0 & 3 & **7** \\ Invariant Knapsack & 18 & **8** & 2 & 4 & 4 & **7** & 2 & 6 & 6 & 6 & 2 & **8** & **8** \\ \hline Total & 121 & **44** & 6 & 34 & 35 & **42** & 6 & 30 & 38 & 28 & 6 & 37 & **52** \\ \hline \end{tabular} \end{table} Table 2: Empirical results on comparing Local-ILP with SCIP and Gurobi in terms of the quality of the best-found solution, with 10s, 60s and 300s time limits. #inst denotes the number of instances in each class. in the 300s time limits, Gurobi wins more instances than Local-ILP, especially its heuristic version. #### 8.2.2 The Ability to Find a Feasible Solution Here we present the results of the number of instances in which a feasible solution could be found by each solver. As shown in Table 3, Local-ILP performs best with 13 types of all main constraint classes in the 10s and 60s time limits, and 12 types in the 300 time limit. Most importantly, Local-ILP performs best in the most types over all time limits. In general, for the ability to find a feasible solution( \(\#feas\) ) in all instances, SCIP is still the worst solver with the least \(\#feas\). It is obvious that Local-ILP and Gurobi perform better than SCIP. More encouragingly, Local-ILP consistently outperforms Gurobi in all the time limit settings. ### Critical Difference Analysis In this subsection, we use the critical difference analysis to evaluate the statistical differences between the considered solvers on MIPLIB for each time limit. First, the Friedman Test (Friedman 1937) was conducted under the null hypothesis that all algorithms performances are equal. 
After rejecting the null hypothesis, the Nemenyi post-hoc test was applied to all pairwise comparisons. Finally, the results are shown in Figure 2 in the form of a critical difference diagram (Garcia and Herrera 2008). Note that \begin{table} \begin{tabular}{l|l|l l l l|l l l l|l l l l} \hline \hline \multirow{2}{*}{Main Constraint Class} & \multirow{2}{*}{\#inst} & \multicolumn{6}{c|}{60 s} & \multicolumn{6}{c}{300 s} \\ \cline{3-13} & & local-ILP & SCIP & Gurobi & local-ILP & SCIP & Gurobi & local-ILP & SCIP & Gurobi & \\ & & & \multicolumn{3}{c}{exact heur} & \multicolumn{3}{c}{hear} & exact & heur & \multicolumn{3}{c}{exact heur} & \multicolumn{3}{c}{exact heur} \\ \hline Aggregations & 2 & **1** & 0 & **1** & **1** & **1** & **1** & **1** & **1** & **1** & **1** & **1** & **1** \\ Bin Packing & 2 & **1** & 0 & **1** & **1** & **2** & 1 & **2** & **2** & 1 & **2** & **2** \\ Singleton & 2 & **2** & **2** & **2** & **2** & **2** & **2** & **2** & **2** & **2** & **2** & **2** & **2** \\ Equation Knapsack & 3 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ Knapsack & 4 & **4** & 3 & 3 & 3 & **4** & 3 & 3 & **4** & **4** & 3 & 3 & **4** \\ Set Packing & 5 & **4** & 3 & **4** & **4** & **4** & **4** & **4** & **4** & **4** & **4** & **4** & **4** \\ Cardinality & 6 & **2** & 1 & **2** & **2** & **3** & 1 & 2 & 2 & **3** & **2** & **3** & **3** \\ Hybrid & 7 & 4 & 2 & **4** & **4** & **5** & 3 & 4 & 4 & **5** & 4 & 4 & 4 \\ Mixed Binary & 8 & **4** & 1 & 2 & 2 & **5** & 1 & 3 & 3 & **5** & 2 & 3 & 3 \\ Set Partitioning & 9 & 5 & 3 & **7** & **7** & **7** & 4 & **7** & **7** & 5 & **7** & **7** \\ Set Covering & 11 & **9** & 8 & **9** & **9** & **9** & 8 & **9** & **9** & **9** & 8 & **9** & **9** \\ Precedence & 13 & **12** & 11 & **12** & **12** & **12** & **12** & **12** & **12** & **12** & **12** & **12** & **12** \\ General Linear & 15 & **9** & 6 & 8 & 8 & 9 & 7 & **11** & **11** & 11 & 9 & **12** & **12** \\ Variable Bound & 16 & **14** & 10 & 13 & 13 & **14** & 12 & **14** & 13 & **14** & 12 & **14** & **14** & **14** \\ Invariant Knapsack & 18 & **12** & 9 & **12** & **12** & **13** & 11 & 12 & 12 & 13 & 13 & **14** & 13 \\ \hline Total & 121 & **83** & 59 & 80 & 80 & **90** & 70 & 86 & 86 & **92** & 78 & 90 & 90 \\ \hline \hline \end{tabular} \end{table} Table 3: Empirical results on comparing Local-ILP with SCIP and Gurobi in terms of the ability to find a feasible solution, with 10s, 60s and 300s time limits. #inst denotes the number of instances in each class. we utilize a R package called scmamp (Calvo and Santafe Rodrigo 2016), which is available at [https://github.com/b0rxa/scmamp](https://github.com/b0rxa/scmamp). The top line of the diagram is the axis on which the average ranks of algorithms are plotted; the lower the average ranks, the better the algorithm. The critical difference is displayed above each subfigure, and algorithms that are not significantly different at the 0.05 level of significance are connected. From Figure 2, in the setting of 10s and 60s time limits, Local-ILP and Gurobi are connected, indicating that the performances of Local-ILP and Gurobi (both versions) are very close.Moreover, Local-ILP outperforms other competitors in the 10s time limit. With the 300s time limit, Local-ILP is worse than Gurobi, while still significantly better than SCIP. 
Figure 2: Critical difference diagram about Local-ILP, SCIP, the exact version of Gurobi, and the heuristic version of Gurobi on the benchmark with time limits of 10, 60 and 300 seconds.

### Effectiveness of Proposed New Techniques

To analyze the effectiveness of the proposed new techniques in Local-ILP, we tested 6 variations of the Local-ILP algorithm as follows:

* To analyze the effectiveness of the tight move, we modify Local-ILP by replacing the \(tm\) operator with an operator that directly modifies an integer variable by a fixed increment \(inc\), leading to two versions v_fix_1 and v_fix_5, where \(inc\) is set to 1 and 5, respectively.
* To analyze the effectiveness of the _lift move_ and the \(Improve\) mode, we modify Local-ILP by removing the \(Improve\) mode from the framework and using only the \(Search\) and \(Restore\) modes, leading to the version v_no_improve. To analyze the effectiveness of the \(Restore\) mode, we modify Local-ILP by removing the \(Restore\) mode from the framework and restarting from random assignments when the \(Improve\) mode is stuck in a local optimum, leading to the version v_no_restore.
* To compare different ways to escape from local optima in the \(Improve\) mode, we modify Local-ILP by replacing the unit incremental move with operators that have larger step sizes, leading to two versions v_per_bound and v_per_random, where the step size is set to the distance to the bound of the variable and to a random size between 1 and the bound, respectively.

We compare Local-ILP with these modified versions on the benchmark, with 10s, 60s, and 300s time limits. The results are presented in Tables 4, 5, and 6. Local-ILP outperforms all other variations, confirming the effectiveness of the proposed strategies.

\begin{table}
\begin{tabular}{l|l l l}
\hline
\multicolumn{1}{l|}{TimeLimit} & \multicolumn{3}{c}{\#win} \\
\cline{2-4}
\multicolumn{1}{c|}{} & Local-ILP & v\_per\_bound & v\_per\_random \\
\hline
10s & **59** & 24 & 39 \\
60s & **61** & 27 & 46 \\
300s & **59** & 27 & 48 \\
\hline
\end{tabular}
\end{table}
Table 6: Empirical results on comparing Local-ILP with v_per_bound and v_per_random.

### New Records to Open Instances

During our experiments, Local-ILP established new best known objective values for 6 instances. In this test, we use the same data set as in Section 8.1.3, but include the instances marked as "infeasible". These six instances have different main constraint types, which simultaneously demonstrates the strong solving power of Local-ILP and its applicability to diverse types of problems. They are shown in Table 7.

## 9 Theoretical Aspects of our Algorithm

In this section, we provide theoretical aspects of our algorithm, including the underlying properties, theoretical analysis of the operators, and explanations of the different behaviors of the three modes. We briefly explained the observations used to design our algorithm in Section 3; here we formalize these ideas and show how our operators take advantage of them.

We assume an instance of ILP of the form:

\[\begin{array}{ll} Minimize & c^{T}x\\ subject\ to & Ax\leq b\\ & x\in\mathbb{Z}^{n} \end{array} \tag{2}\]

As before, we use \(F(x)=c^{T}x\) to denote the objective function. For the convenience of analysis, here we do not distinguish variables' bounds from other constraints. We assume at least one of the coefficients in the objective function is non-zero: \(\exists j\in\{1,...,n\}\), \(c_{j}\neq 0\)
\begin{table} \begin{tabular}{l|l l l} \hline Instance & Local-ILP & Previous Objective & Constraint Classification \\ \hline sorrell7 & **-197** & -196 & variable bound \\ supportcase22 & **117** & N/A & set covering, aggregations, bin packing and mixed binary \\ cdc7-4-3-2\({}^{1}\) & **-294** & -289 & set packing \\ ns1828997 & **8** & 9 & precedence, invariant knapsack, variable bound and cardinality \\ scpm1 & **544** & 554 & set covering \\ scpm2 & **490** & 501 & set covering \\ \hline \end{tabular} \end{table} Table 7: New records to open instances otherwise this ILP instance is trivial and has optimal objective value \(0\). We denote the index set \(I=\{1,...,m\}\), \(J=\{1,...,n\}\), \(J_{\neq 0}=\{j|c_{j}\neq 0,j=1,...,n\}\), \(J\neq\emptyset\). Moreover, we assume that there is no variable that is free (does not appear in any constraint, has both infinite upper/lower bounds, and does not appear in the objective function). We assume the ILP has a finite optimal solution \(\boldsymbol{x}^{*}\) and a finite optimal objective value \(opt\). We assume \(\boldsymbol{x}^{*}\) exists (while it does not have to be unique), as our algorithm aims to find good solutions and does not try to prove an ILP is infeasible. The assumption of a finite optimal objective value is natural for ILP since otherwise there is no meaningful optimal objective value. ### Boundary Solutions Now we present the concept of boundary solutions that will be used to analyze our operators, and show that all optimal solutions are boundary solutions. For an ILP instance in the form of Formula (2), let polyhedron \(P=\{\boldsymbol{x}\in\mathbb{R}^{n}|A\boldsymbol{x}\leq b\}\), then the set of feasible solutions of the ILP could be described as \(P\cap\mathbb{Z}^{n}\), and all feasible solutions belong to the **integer hull**\(P_{I}=conv(P\cap\mathbb{Z}^{n})\) where \(conv(S)\subseteq\mathbb{R}^{n}\) for \(S\subseteq\mathbb{Z}^{n}\) denotes the convex hull of a set of points \(S\). Let \(U=\cup\{\boldsymbol{e}_{j},-\boldsymbol{e}_{j}\}\), \(j\in J\). We call \(\boldsymbol{x}+\boldsymbol{d},\boldsymbol{d}\in U\) the **neighbors** of \(\boldsymbol{x}\). Given \(P=\{\boldsymbol{x}\in\mathbb{R}^{n}|A\boldsymbol{x}\leq b\}\), we define the set of boundary points of \(P\): Definition 7.: \(\boldsymbol{x}\in\mathbb{Z}^{n}\) is a **boundary point** of \(P\) if \(\boldsymbol{x}\in P\) and \(\exists\boldsymbol{d}\in U\), \(\boldsymbol{x}+\boldsymbol{d}\notin P\). The set of boundary points of \(P\) is denoted by \(\delta(P)\). This definition says a boundary point has at least one neighbor that is out of feasible region, which is similar to the definition of boundary in topology. We call a solution \(x\) of Formula (2) a **boundary solution** if and only if \(\boldsymbol{x}\) is a boundary point of the polyhedron of its LP relaxation \(P\). Then we exhibit the first property of ILP we use: every optimal solution is a boundary solution. Proposition 1.: _For an ILP instance as the Formula (2), its any optimal solution is a boundary solution, and also_ \[\min_{\boldsymbol{x}\in P\cap\mathbb{Z}^{n}}F(\boldsymbol{x})=\min_{ \boldsymbol{x}\in\delta(P)\cap\mathbb{Z}^{n}}F(\boldsymbol{x})\] Proof of Proposition 1.: Let's consider an optimal solution \(\boldsymbol{x}^{*}\) of Formula (2), as we assumed, \(\boldsymbol{x}^{*}\) exists and is finite. For a \(j\in J_{\neq 0}\), let \(\boldsymbol{d}=\boldsymbol{e}_{j}\) if \(c_{j}<0\), and \(\boldsymbol{d}=-\boldsymbol{e}_{i}\) if \(c_{j}>0\). 
Assume \(\boldsymbol{x}^{*}+\boldsymbol{d}\in P\), then since \(\boldsymbol{x}^{*}+\boldsymbol{d}\in\mathbb{Z}^{n}\), \(\boldsymbol{x}^{*}+\boldsymbol{d}\) is a feasible solution of Formula (2) and \[F(\boldsymbol{x}^{*}+\boldsymbol{d})=\boldsymbol{c}^{T}\boldsymbol{x}^{*}+ \boldsymbol{c}^{T}\boldsymbol{d}=F(\boldsymbol{x}^{*})-|c_{j}|<F(\boldsymbol{ x}^{*})\] then there is another feasible solution with strictly smaller objective, contradiction with the assumption that \(\mathbf{x}^{*}\) is optimal, so \(\mathbf{x}^{*}+\mathbf{d}\notin P\) and \(\mathbf{x}^{*}\) is a boundary solution. Q.E.D. This proposition states that if we restrict our search space to \(\delta(P)\) we could still obtain an optimal solution. Remark 1. The search space of an ILP instance in the form of Formula (2) can be reduced to \(\delta(P)\) and all optimal solutions are kept. Then \(\delta(P)\) can be considered a **complete** search space for ILP, in the sense that it would not miss the optimal solution. Corollary 1: _Let \(\mathbf{x}^{*}\) be an optimal solution of (2), for any \(\mathbf{d}\in U\) s.t. \(\mathbf{c}^{T}\mathbf{d}<0\), there is \(\mathbf{x}^{*}+\mathbf{d}\notin P\)._ We also show that the concept of \(\delta(P)\) is significant: it is not a definition that is so general that it naturally contains all optimal solutions. We consider a polyhedral \(P=\{\mathbf{x}\in\mathbb{R}^{n}|A\mathbf{x}\leq b\}\), there could be different ILP instances specified by \(P\) with different objective functions. We show a simple fact that for given \(P\) and \(\delta(P)\), any smaller subset of \(\delta(P)\) may miss an optimal solution for some ILP instances consisting of \(P\) with some objective function. **Fact 3**: _For polyhedral \(P=\{\mathbf{x}\in\mathbb{R}^{n}|A\mathbf{x}\leq b\}\), for any boundary point \(\mathbf{x}\in\delta(P)\), there is an ILP instance consist of \(P\) with some objective function such that \(\mathbf{x}\) is the optimal solution._ Since \(\mathbf{x}\in P\) and \(\mathbf{x}+\mathbf{e}_{j}\notin P\) for some \(j\), it is easy to obtain such an instance by just setting the objective function \(F(\mathbf{x})=\mathbf{e}_{j}^{T}\mathbf{x}\). It's trivial to see that \(\mathbf{x}\) is optimal. For an ILP with multiple optimal solutions, missing an optimal solution does not mean producing the wrong optimal solution, but for an ILP that has a unique optimal solution, this issue does. So \(\delta(P)\) is a significant characterization to ensure the optimal solution exists. ### Search Spaces We just showed that restricting the search space to \(\delta(P)\) would not miss any optimal solution. Now we show how our operators make use of this property. We first analyze tight move operator and lift move operator individually, and then show that all feasible solutions obtained by our algorithm are boundary solutions. #### 9.2.1 Tight Move Operator We show a property of the tight move operator: if a solution obtained by the tight move operator is feasible, then it is a boundary solution. We introduce some notions to facilitate the analysis of operators. Given \(\boldsymbol{x}\in\mathbb{Z}^{n},\ \forall j\in J,i\in I,A_{ij}\neq 0\), let \(\phi_{ji}:\mathbb{Z}^{n}\rightarrow\mathbb{Z}^{n}\) s.t. \(\phi_{ji}(\boldsymbol{x})=\boldsymbol{x}^{\prime}\) if \(\boldsymbol{x}^{\prime}\) is obtained from \(\boldsymbol{x}\) by \(tm(x_{j},con_{i})\). 
Proposition 2: _Any feasible solution obtained by \(tm\) operator is a boundary solution: for \(\boldsymbol{x}\in\mathbb{Z}^{n}\), \(\forall j\in J,i\in I\), if \(\boldsymbol{x}^{\prime}=\phi_{ji}(\boldsymbol{x})\) and \(\boldsymbol{x}^{\prime}\in P\), then \(\boldsymbol{x}^{\prime}\in\delta(P)\)._ Proof of Proposition 2. Let \(\Delta=b_{i}-(A_{i})\cdot\boldsymbol{x}\), W.L.O.G. we assume \(A_{ij}>0\), and \(\Delta<0\) the other cases of \(A_{ij}\) and \(\Delta\) could be showed similarly. From Definition 1 we know \(\boldsymbol{x}^{\prime}=\boldsymbol{x}-min(\left|\left\lfloor\frac{\Delta}{A _{ij}}\right\rfloor\right|,\left|x_{j}^{l}-x_{j}\right|)\cdot\boldsymbol{e}_ {j}\). Since \(A_{ij}>0\), we consider \(\boldsymbol{x}^{\prime}+\boldsymbol{e}_{j}\): (1)If \(min(\left|\left\lfloor\frac{\Delta}{A_{ij}}\right\rfloor\right|,\left|x_{j}^ {l}-x_{j}\right|)=\left|\left\lfloor\frac{\Delta}{A_{ij}}\right\rfloor\right|\), since \(\Delta<0,A_{ij}>0\), \(\left|\left\lfloor\frac{\Delta}{A_{ij}}\right\rfloor\right|=-\left\lfloor \frac{\Delta}{A_{ij}}\right\rfloor\) then \[A_{i}\cdot(\boldsymbol{x}^{\prime}+\boldsymbol{e}_{j})=A_{i}\cdot\boldsymbol{ x}+A_{i}(\left\lfloor\frac{\Delta}{A_{ij}}\right\rfloor+1)\boldsymbol{e}_{j}>A_{i} \cdot\boldsymbol{x}+A_{ij}(\frac{\Delta}{A_{ij}})=b_{i}\] that is \(A_{i}\cdot(\boldsymbol{x}^{\prime}+\boldsymbol{e}_{j})>b_{i}\), which means \(\boldsymbol{x}^{\prime}+\boldsymbol{e}_{j}\notin P\), since assumed \(\boldsymbol{x}^{\prime}\in P\), then \(\boldsymbol{x}^{\prime}\in\delta(P)\). (2) If \(min(\left|\left\lfloor\frac{\Delta}{A_{ij}}\right\rfloor\right|,\left|x_{j}^ {l}-x_{j}\right|)=\left|x_{j}^{l}-x_{j}\right|\), \(\boldsymbol{x}^{\prime}=\boldsymbol{x}-(\left|x_{j}^{l}-x_{j}\right|)\cdot \boldsymbol{e}_{j}\), we consider \(\boldsymbol{x}^{\prime}-\boldsymbol{e}_{j}\). Since \(x_{j}\) always satisfy its global bound(by definition, all our algorithm will not break the global bound), \(x_{j}^{l}-x_{j}\leq 0\), then \[(\boldsymbol{x}^{\prime}-\boldsymbol{e}_{j})_{j}=x_{j}-\left|x_{j}^{l}-x_{j} \right|-1=x_{j}+(x_{j}^{l}-x_{j})-1<x_{j}^{l}\] (\(\boldsymbol{x}^{\prime}-\boldsymbol{e}_{j}\)) violated the global bound of the variable \(x_{j}\), so \(\boldsymbol{x}^{\prime}-\boldsymbol{e}_{j}\notin P\), since assumed \(\boldsymbol{x}^{\prime}\in P\) then \(\boldsymbol{x}^{\prime}\in\delta(P)\). Q.E.D. #### 9.2.2 Lift Move Operator For the lift move operator, we show another property, that it maps a feasible solution to a boundary solution: Given \(\boldsymbol{x}\in\mathbb{Z}^{n}\), \(\forall j\in J\), let \(\chi_{j}:\mathbb{Z}^{n}\rightarrow\mathbb{Z}^{n}\) s.t. \(\chi_{j}(\boldsymbol{x})=\boldsymbol{x}^{\prime}\) if \(\boldsymbol{x}^{\prime}\) is obtained from \(\boldsymbol{x}\) by \(lm(x_{j},\boldsymbol{x})\). Proposition 3: _The lift move operator maps a feasible solution to a boundary solution: for \(\boldsymbol{x}\in P\cap\mathbb{Z}^{n}\), \(\forall j\in J\), \(\chi_{j}(\boldsymbol{x})\in\delta(P)\)._ Proof of Proposition 3. Let \(\boldsymbol{x}^{\prime}=\chi_{j}(\boldsymbol{x})\), \(\Delta=b_{i}-(A_{i})\cdot\boldsymbol{x}\). W.L.O.G assume \(c_{j}<0\) and \(A_{ij}>0\), from Definition 5, \(lm(x_{j},\boldsymbol{x})\) changes \(\boldsymbol{x}^{\prime}_{j}\) to the upper bound of \(ld(x_{j},\boldsymbol{x})\). Since \(ld(x_{j},\boldsymbol{x})=(\cap_{i}ldc(x_{j},con_{i},\boldsymbol{x}))\cap bd(x_{j})\), the upper bound of \(ld(x_{j},\boldsymbol{x})\) is either the upper bound of \(bd(x_{j})\) or of \(ldc(x_{j},con_{i},\boldsymbol{x})\) for some \(i\). 
In the former case, \(\mathbf{x}^{\prime}+\mathbf{e}_{j}\) exceeds the upper bound of \(bd(x_{j})\). In the latter case, from definition \(ldc(x_{j},con_{i},\mathbf{x})=\left(-\infty,x_{j}+\left\lfloor\frac{\Delta}{A_{ij}} \right\rfloor\right]\) thus \(\mathbf{x}^{\prime}=\mathbf{x}+\left\lfloor\frac{\Delta}{A_{ij}}\right\rfloor\mathbf{e}_ {j}\), then \[A_{i}(\mathbf{x}^{\prime}+\mathbf{e}_{j})=A_{i}(\mathbf{x}+\left\lfloor\frac{\Delta}{A_{ij} }\right\rfloor\mathbf{e}_{j}+\mathbf{e}_{j})>A_{i}(\mathbf{x}+\frac{\Delta}{A_{ij}}\mathbf{e}_ {j})=A_{i}\mathbf{x}+\Delta=b_{i}\] thus \(A_{i}(\mathbf{x}^{\prime}+\mathbf{e}_{j})>b_{i}\) which means \(\mathbf{x}^{\prime}+\mathbf{e}_{j}\notin P\). So in both case we have \(\mathbf{x}^{\prime}+\mathbf{e}_{j}\notin P\). Moreover, it's easy to check \(\mathbf{x}^{\prime}\in P\) by definition that \(ldc()\) is computed satisfying all constraints and \(bd()\) for the variable's bounds, and thus \(\mathbf{x}^{\prime}\in\delta(P)\). Q.E.D. #### 9.2.3 Our Algorithm As seen from Algorithms 3 - 5, our algorithm adopts a strategy to apply the lift move operator for feasible solutions and tight move for infeasible solutions. This coincides with the properties we showed in the preceding sections; additionally, we show here that the perturbations are also consistent with this strategy, and we can have the following argument for our algorithm: **Proposition 4**: _Any feasible solution obtained in our algorithm is a boundary solution._ Proof of Proposition 4. In Algorithm 3 - 5 we can see that our algorithm has 4 ways to generate a new assignment: (1) apply a tight move operator to an infeasible solution (2) apply a lift move operator to a feasible solution (3) (perturbation) apply a unit incremental move when no positive \(lm\) operation is found in Improve mode (4) (perturbation) move a variable's value to one side of its global bound From Proposition 2 and Proposition 3, we know that the feasible solutions generated by case (1) and (2) must be boundary solutions. The case (3) will generate an infeasible solution because when there is no positive \(lm\) operation, there is no operation to modify one variable's value to get a better feasible solution. In this case, a unit incremental move must generate an infeasible solution; otherwise it contradicts the condition that no positive \(lm\) operation is found. In case (4), it is trivial that the generated solution is a boundary solution if it is feasible as one variable is set to one side of its global bound. In total, all feasible solutions generated by our algorithm are boundary solutions. Q.E.D. Note that our algorithm may also visit infeasible solutions, thus we cannot say our algorithm search totally in boundary solutions. But this property shows our algorithm avoids visiting points in \(P\setminus\delta(P)\), which is very helpful for problems with large variable domains. ### Connectivity of Solutions We have seen that our algorithm has good properties to avoid visiting some regions that don't contain optimal solutions. Meanwhile, we will show that boundary solutions and optimal solutions have good connectivity by our operators. **Tight move operator** We show that every boundary solution \(\boldsymbol{x}\) can be reached by a tight move operator from some infeasible solution. **Proposition 5**: _For every \(\boldsymbol{x}\in\delta(P)\), there exists \(\boldsymbol{x}^{\prime}\notin P\), s.t. \(\exists j\in J,i\in I\), \(\phi_{ji}(\boldsymbol{x}^{\prime})=\boldsymbol{x}\)_ Proof of Proposition 5. 
Since \(\boldsymbol{x}\in\delta(P)\), \(\exists\boldsymbol{d}\in U\), \(\boldsymbol{x}+\boldsymbol{d}\notin P\), let \(\boldsymbol{x}^{\prime}=\boldsymbol{x}+\boldsymbol{d}\) we assume \(\boldsymbol{d}\) corresponds to the variable \(x_{j}\), i.e. \(\boldsymbol{d}=\boldsymbol{e}_{j}\) or \(-\boldsymbol{e}_{j},j\in J\), W.L.O.G. assume \(\boldsymbol{d}=-\boldsymbol{e}_{j}\) and \(\boldsymbol{x}=\boldsymbol{x}^{\prime}+\boldsymbol{e}_{j}\). Since \(\boldsymbol{x}^{\prime}\notin P\) and in our algorithm all global bounds will not be violated, there is a \(i\in I\), s.t. \(con_{i}\) is violated and \(A_{i}\boldsymbol{x}^{\prime}>b_{i}\), since \(\boldsymbol{x}\) is feasible \(A_{i}\boldsymbol{x}=A_{i}(\boldsymbol{x}+\boldsymbol{e}_{j})<=b_{i}\). By definition of tight move operator, we can compute \(tm(x_{j},con_{i})\) increases \(x_{j}\) by 1, which means \(\phi_{ji}(\boldsymbol{x}^{\prime})=\boldsymbol{x}+\boldsymbol{e}_{j}= \boldsymbol{x}\). Q.E.D. **Proposition 6**: _Every optimal solution \(\boldsymbol{x}^{*}\) could be reached from some infeasible solution by a tight move operator._ Proof of Proposition 6. Combine Proposition 1 and Proposition 5. Q.E.D. **Lift move operator** We show that for the optimal solution \(\boldsymbol{x}^{*}\), if it has a neighbor that is feasible and non-optimal, then this neighbor can reach \(\boldsymbol{x}^{*}\) by a lift move operator. **Proposition 7**: _Let \(\boldsymbol{x}=\boldsymbol{x}^{*}+\boldsymbol{d}\), \(\boldsymbol{d}\in U\), if \(\boldsymbol{x}\in P\) and \(F(\boldsymbol{x})>F(\boldsymbol{x}^{*})\) then \(\exists j\in J\), \(\chi_{j}(\boldsymbol{x})=\boldsymbol{x}^{*}\)._ Proof of Proposition 7. Since \(\boldsymbol{d}\in U\), we assume \(\boldsymbol{d}\) corresponds to the variable \(x_{j}\), i.e. \(\boldsymbol{d}=\boldsymbol{e}_{j}\) or \(-\boldsymbol{e}_{j}\), \(j\in J\), W.L.O.G. assume \(\boldsymbol{d}=-\boldsymbol{e}_{j}\) and \(\boldsymbol{x}^{*}=\boldsymbol{x}+\boldsymbol{e}_{j}\), then since \(F(\boldsymbol{x})>F(\boldsymbol{x}^{*})\) we know \(c_{j}<0\). To show the upper bound of \(ld(x_{j},\mathbf{x})\) is equal to \(x_{j}^{*}\), we consider \(\mathbf{x^{\prime}=x}+2\mathbf{e}_{j}\). There must be \(\mathbf{x^{\prime}\notin P}\) otherwise \(F(\mathbf{x^{\prime}})=F(\mathbf{x}^{*})+c_{j}<F(\mathbf{x}^{*})\) is a feasible objective and contradicts with the optimality of \(\mathbf{x}^{*}\). So the upper bound of \(ld(x_{j},\mathbf{x})\) must \(<x_{j}^{*}+1\), and since \(x_{j}^{*}\in P\), the upper bound of \(ld(x_{j},\mathbf{x})\) is \(>=x_{j}^{*}\), thus the upper bound of \(ld(x_{j},\mathbf{x})\) is equal to \(x_{j}^{*}\), so \(\chi_{j}(\mathbf{x})=\mathbf{x}^{*}\), that is, \(\exists j\), \(\mathbf{x}^{*}\) can be reached from \(\mathbf{x}\) by \(lm(x_{j},\mathbf{x})\). Q.E.D. **Combination of tight move and lift move operators** Note that our algorithm applies lift move only for feasible solutions and tight move only for infeasible solutions, we then show this design that combining the two operators results in better connectivity. 
We show that the optimal solution of the ILP problem could be reached by solutions that differ in one dimension of the optimal solution, either feasible or infeasible, and could be at a large distance: Proposition 8: _Let \(\mathbf{x}^{*}\) be an optimal solution of Formula (2), for any \(\mathbf{x}=\mathbf{x}^{*}+t\cdot\mathbf{d}\), \(t\in\mathbb{Z}\), \(\mathbf{d}\in U\) there is_ _(i) \(\mathbf{x}\) is an optimal solution._ _or (ii) for some \(j\in J\), \(\chi_{j}(\mathbf{x})=\mathbf{x}^{*}\)_ _or (iii) for some \(j\in J\), \(i\in I\), either \(\phi_{ji}(\mathbf{x})\) is an optimal solution, or \(\chi_{j}(\phi_{ji}(\mathbf{x}))\) is an optimal solution._ Proof of Proposition 8. Consider \(\mathbf{x}=\mathbf{x}^{*}+t\cdot\mathbf{d}\), \(\mathbf{d}\in U\): (i) if \(\mathbf{x}\in P\) and \(F(\mathbf{x})=F(\mathbf{x}^{*})\), then (i) holds (ii) if \(\mathbf{x}\in P\) and \(F(\mathbf{x})>F(\mathbf{x}^{*})\), since \(\mathbf{x}=\mathbf{x}^{*}+t\cdot\mathbf{d}\) we know \(\mathbf{c}\cdot\mathbf{d}>0\) by \(F(\mathbf{x})>F(\mathbf{x}^{*})\), since \(\mathbf{d}\in\{\mathbf{e}_{j},-\mathbf{e}_{j}\}\) for some \(j\in J\), following the same argument as in Proposition 7, the bound of \(ld(x_{j},\mathbf{x})\) associated with sign of \(c_{j}\) is exactly \(x_{j}^{*}\), so \(\chi_{j}(\mathbf{x})=\mathbf{x}^{*}\) and (ii) holds. (iii) if \(\mathbf{x}\notin P\), since \(\mathbf{x}^{*}\in P\), and \(P\) is convex, then there is a \(t^{\prime}\in\{1,...,t\}\) s.t. \(\forall t^{\prime\prime}\in\{t^{\prime},...,t\}\), \(\mathbf{x}-t^{\prime\prime}\mathbf{d}\in P\) and \(\forall t^{\prime\prime}\in\{0,...,t^{\prime}-1\}\), \(\mathbf{x}-t^{\prime\prime}\mathbf{d}\notin P\). Then there exists a constraint \(con_{i}\) satisfied with \(\mathbf{x}-t^{\prime}\mathbf{d}\) and violated with \(\mathbf{x}-(t^{\prime}-1)\mathbf{d}\). Assume \(\mathbf{d}\) correspond to variable \(x_{j}\), as constraints are linear we know \(\phi_{ji}(\mathbf{x})=\mathbf{x}-t^{\prime}\cdot\mathbf{d}\). Let \(\mathbf{x}^{\prime}=\mathbf{x}-t^{\prime}\mathbf{d}\). If \(F(\mathbf{x}^{\prime})=F(\mathbf{x}^{*})\), \(\mathbf{x}^{\prime}\) is also an optimal solution, then \(\phi_{ji}(\mathbf{x})\) is an optimal solution. If \(F(\mathbf{x}^{\prime})>F(\mathbf{x}^{*})\), since \(\mathbf{x}^{\prime}=\mathbf{x}-t^{\prime}\mathbf{d}\in P\) we just follow the proof of (ii), \(\chi_{j}(\mathbf{x}^{\prime})=\mathbf{x}^{*}\). Then \(\exists i,j\), \(\chi_{j}(\phi_{ji}(\mathbf{x}))=\mathbf{x}^{*}\). So either \(\phi_{ji}(\mathbf{x})\) is an optimal solution, or \(\chi_{j}(\phi_{ji}(\mathbf{x}))\) is an optimal solution and (iii) holds. Q.E.D. The above properties show that our algorithm combining tight move and lift move operators, could both avoid visiting unnecessary regions and maintain good connectivity between the expected solutions (boundary and optimal solutions). ### Different Behaviors in Three Modes The three modes of our algorithm have different functionalities, the search mode is used to find a feasible solution, or say, locate \(P\) in the full domain of variables; the improve mode tries to generate a better solution while keeping the feasibility, and could be regarded as walking inside \(P\) in the direction that improves the objective function; and the restore mode is used when an infeasible solution is reached from a perturbation in improve mode and want to reach another feasible solution in \(P\) again. Thus, different settings of operators provide different behaviors of three modes, adapted to their different purposes. 
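To illustrate these properties on a small example of our own (not taken from the benchmark), consider the instance \(\min\ -x_{1}-x_{2}\) subject to \(x_{1}+x_{2}\leq 3\) and \(0\leq x_{1},x_{2}\leq 2\). The feasible integer point \((1,1)\) is not a boundary point, since all four neighbors \((0,1)\), \((2,1)\), \((1,0)\), \((1,2)\) remain feasible, so the search may safely skip it; the optimal solutions \((1,2)\) and \((2,1)\) are boundary points, e.g., \((2,1)+\boldsymbol{e}_{2}=(2,2)\) violates \(x_{1}+x_{2}\leq 3\). Starting from the infeasible point \((2,2)\), the operation \(tm(x_{2},\,x_{1}+x_{2}\leq 3)\) has \(\Delta=-1\) and \(A_{ij}=1>0\), so it decreases \(x_{2}\) by 1 and lands exactly on the optimum \((2,1)\), as in Proposition 5. Starting from the feasible, non-optimal point \((2,0)\), we have \(ld(x_{2},\cdot)=[0,1]\) and \(c_{2}<0\), so \(lm(x_{2},\cdot)\) assigns \(x_{2}=1\) and again reaches \((2,1)\), as in Proposition 7.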
**Score functions.** For search and restore modes, they both aim at finding feasible solutions, and both use \(tm\) operator. The \(tm\) score function is then focused on minimizing the degree of violation of constraints, which could be regarded as a "distance" to boundary solutions of \(P\). For improve mode, the purpose is to improve the objective function. Since solutions are all feasible in improve mode, the only measure is the objective value, which is set as the \(lm\) score function. **Step size and perturbations.** As our algorithm does not have a fixed step size to generate new assignments, in each mode, the actual step size and degree of perturbation depend on different operators. We firstly enter the search mode, as its aim is to locate \(P\) in the full domain of variables. For this purpose, a larger discrepancy should be realized by the search strategy in order to get a rough location of \(P\) faster. This is handled in search mode, Algorithm 3 line 7 allows it to pick non-positive \(tm\) operations, this provides a larger degree of perturbation and results in a larger discrepancy in the full domain. Then we enter the improve mode, and try to get a good solution. Since we showed the \(lm\) operator always reaches boundary solutions, the step size is automatically adapted to the actual status. Once the improve mode encounters a local optimum, i.e., no \(lm\) operation to choose to stay in \(P\), then it goes out of the region of \(P\) by taking a unit incremental move in a variable with only step size 1, to jump out of the local optimum. The smallest step size is set here, due to that, since we have already reached \(P\), it is not desired to go out with a large distance from \(P\) as all feasible solutions are in \(P\). Therefore, the smallest step size help to enter \(P\) again. The Tabu strategy is also applied to reduce cycling. In the restore mode, the purpose is also to find a feasible solution, from outside of \(P\). As we know, the current solution is not far from \(P\), a smaller degree of perturbation is adopted compared to Search mode, and in Algorithm 5, positive \(tm\) operations are prioritized, giving a smaller chance to apply non-positive \(tm\) operations, thus a smaller discrepancy. ## 10 Conclusions This work proposed new characterizations of ILP with the concept of boundary solutions. Motivated by the new characterizations, we proposed our new efficient local search solver for integer linear programming, which is the first local search solver for general ILP validated on widely different types of problems. The main features of our solver include a new framework adopting three different modes, and two new operators designed for general ILPs, the _tight move_ and _lift move_ operators, with tailored scoring functions for each. Experiments show that, in solving large-scale hard integer linear programming problems within a reasonably short time, our solver is competitive and complementary to the state-of-the-art commercial solver Gurobi, and significantly outperforms the state-of-the-art non-commercial solver SCIP. More encouragingly, our solver established new records for 6 MIPLIB open instances. We also presented the theoretical analysis of our algorithm, which shows our algorithm could avoid visiting unnecessary regions and also maintain good connectivity of targeted solutions. ## Endnotes \({}^{1}\)This instance is modeled from the subspace code problem (Kohnert and Kurz 2008, Honold et al. 2015). 
A better solution to the original problem (corresponding to objective value -333 of the MPS instance) was found in (Heinlein et al. 2019), in which the authors prescribed a subgroup of the automorphism group to find the solution rather than solving this ILP instance. For this ILP instance, no better solution has been reported either on the MIPLIB website or in the literature.

## Acknowledgments

This work is supported by the Strategic Priority Research Program of the Chinese Academy of Sciences, Grant No. XDA0320000 and XDA0320300, and NSFC Grant 62122078.
2310.04158
Victima: Drastically Increasing Address Translation Reach by Leveraging Underutilized Cache Resources
Address translation is a performance bottleneck in data-intensive workloads due to large datasets and irregular access patterns that lead to frequent high-latency page table walks (PTWs). PTWs can be reduced by using (i) large hardware TLBs or (ii) large software-managed TLBs. Unfortunately, both solutions have significant drawbacks: increased access latency, power and area (for hardware TLBs), and costly memory accesses, the need for large contiguous memory blocks, and complex OS modifications (for software-managed TLBs). We present Victima, a new software-transparent mechanism that drastically increases the translation reach of the processor by leveraging the underutilized resources of the cache hierarchy. The key idea of Victima is to repurpose L2 cache blocks to store clusters of TLB entries, thereby providing an additional low-latency and high-capacity component that backs up the last-level TLB and thus reduces PTWs. Victima has two main components. First, a PTW cost predictor (PTW-CP) identifies costly-to-translate addresses based on the frequency and cost of the PTWs they lead to. Second, a TLB-aware cache replacement policy prioritizes keeping TLB entries in the cache hierarchy by considering (i) the translation pressure (e.g., last-level TLB miss rate) and (ii) the reuse characteristics of the TLB entries. Our evaluation results show that in native (virtualized) execution environments Victima improves average end-to-end application performance by 7.4% (28.7%) over the baseline four-level radix-tree-based page table design and by 6.2% (20.1%) over a state-of-the-art software-managed TLB, across 11 diverse data-intensive workloads. Victima (i) is effective in both native and virtualized environments, (ii) is completely transparent to application and system software, and (iii) incurs very small area and power overheads on a modern high-end CPU.
Konstantinos Kanellopoulos, Hong Chul Nam, F. Nisa Bostanci, Rahul Bera, Mohammad Sadrosadati, Rakesh Kumar, Davide-Basilio Bartolini, Onur Mutlu
2023-10-06T11:15:20Z
http://arxiv.org/abs/2310.04158v3
# Victima: Drastically Increasing Address Translation Reach ###### Abstract. Address translation is a performance bottleneck in data-intensive workloads due to large datasets and irregular access patterns that lead to frequent high-latency page table walks (PTWs). PTWs can be reduced by using (i) large hardware TLBs or (ii) large software-managed TLBs. Unfortunately, both solutions have significant drawbacks: increased access latency, power and area (for hardware TLBs), and costly memory accesses, the need for large contiguous memory blocks, and complex OS modifications (for software-managed TLBs). We present Victima, a new _software-transparent_ mechanism that drastically increases the translation reach of the processor by leveraging the underutilized resources of the cache hierarchy. The **key idea** of Victima is to repurpose 12 cache blocks to store clusters of TLB entries, thereby providing an additional low-latency and high-capacity component that backs up the last-level TLB and thus reduces PTWs. Victima has two main components. First, a PTW cost predictor (PTW-CP) identifies costly-to-translate addresses based on the frequency and cost of the PTWs they lead to. Leveraging the PTW-CP, Victima uses the valuable cache space only for TLB entries that correspond to costly-to-translate pages, reducing the impact on cached application data. Second, a TLB-aware cache replacement policy prioritizes keeping TLB entries in the cache hierarchy by considering (i) the translation pressure (e.g., last-level TLB miss rate) and (ii) the reuse characteristics of the TLB entries. Our evaluation results show that in native (virtualized) execution environments Victima improves average end-to-end application performance by 7.4% (28.7%) over the baseline four-level radix-tree-based page table design and by 6.2% (20.1%) over a state-of-the-art software-managed TLB, across 11 diverse data-intensive workloads. Victima delivers similar performance as a system that employs an optimistic 128K-entry L2 TLB, while avoiding the associated area and power overheads. Victima (i) is effective in both native and virtualized environments, (ii) is completely transparent to application and system software, (iii) unlike large software-managed TLBs, does not require contiguous physical allocations, (iv) is compatible with modern large page mechanisms and (iv) incurs very small area and power overheads of 0.04% and 0.08%, respectively, on a modern high-end CPU. The source code of Victima is freely available at [https://github.com/CMU-SAFARI/Victima](https://github.com/CMU-SAFARI/Victima). ## 1. Introduction Address translation is a significant performance bottleneck in modern data-intensive workloads (Bostanci et al., 2017; Bostanci et al., 2018; Bostanci et al., 2019; Bostanci et al., 2019; Bostanci et al. an STLB introduces complex hardware/software interactions (e.g., evicting data from a hardware TLB to an STLB) and requires modifications in OS software. Section 3.2 provides a detailed quantitative analysis of STLBs. _Opportunity:_**Leveraging the Cache Hierarchy.** Rather than expanding hardware TLBs or introducing large software-managed TLBs, a cost-effective method to drastically increase translation reach is to store the existing TLB entries within the existing cache hierarchy. For example, a 2MB L2 cache can fit 128\(\times\) the TLB entries a 2048-entry L2 TLB holds. 
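As a quick sanity check on the 128\(\times\) figure, the arithmetic below assumes 64-byte cache blocks, each repurposed to hold a cluster of eight 8-byte translation entries (the clustering Victima uses, described later in the paper):

```python
l2_cache_bytes    = 2 * 1024 * 1024       # 2 MB L2 cache
block_bytes       = 64                    # one cache block
entries_per_block = block_bytes // 8      # eight 8-byte PTE/TLB entries per block

l2_blocks   = l2_cache_bytes // block_bytes    # 32,768 blocks
tlb_entries = l2_blocks * entries_per_block    # 262,144 translation entries

print(tlb_entries, tlb_entries // 2048)        # 262144 entries -> 128x a 2048-entry L2 TLB
```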
When a TLB entry resides inside the L2 cache, only one low-latency (e.g., \(\approx\) 16 cycles) L2 access is needed to find the virtual-to-physical address translation instead of performing a high-latency (e.g., \(\approx\) 137 cycles as shown in SS3) PTW. One potential pitfall of this approach is the potential reduction of caching capacity for application data, which could ultimately harm end-to-end performance. However, as we show in SS3 and as shown in multiple prior works [27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37], modern data-intensive workloads, tend to (greatly) underutilize the cache hierarchy, especially the large L2/L3/L4 caches. This is because many modern working sets exceed the capacity of the cache hierarchy and many data accesses exhibit low spatial and temporal locality [38, 39, 40, 30, 31, 32, 33]. Therefore, the underutilized cache blocks can likely be repurposed to store TLB entries without replacing useful program data and harming end-to-end application performance. **Our goal** in this work is to increase the translation reach of the processor's TLB hierarchy by leveraging the underutilized resources in the cache hierarchy. We aim to design such a practical technique that: (i) is effective in both native and virtualized execution environments, (ii) does not require or rely on contiguous physical allocations, (iii) is transparent to both application and OS software and (iv) has low area, power, and energy costs. To this end, we present Victima, a new _software-transparent_ mechanism that drastically increases the translation reach of the TLB by leveraging the underutilized resources of the cache hierarchy. The **key idea** of Victima is to repurpose L2 cache blocks to store clusters of TLB entries. Doing so provides an additional low-latency and high-capacity component to back up the last-level TLB and thus reduces PTWs. Victima has two main components. First, a PTW cost predictor (PTW-CP) identifies costly-to-translate addresses based on the frequency and cost of the PTWs they lead to. Leveraging the PTW-CP, Victima uses the valuable cache space only for TLB entries that correspond to costly-to-translate pages, reducing the impact on cached application data. Second, a TLB-aware cache replacement policy prioritizes keeping TLB entries in the cache hierarchy by considering (i) the translation pressure (e.g., high last-level TLB miss rate) and (ii) the reuse of the TLB entries. **Key Mechanism.** Victima gets triggered on last-level TLB misses and evictions. On a last-level TLB miss, if PTW-CP predicts that the page will be costly-to-translate in the future, Victima transforms the data cache block that contains the last-level PT entries (PTEs) (fetched during the PTW) into a cluster of TLB entries to enable direct access to the corresponding PTEs using a virtual address without walking the PT. On a last-level TLB eviction, if PTW-CP makes a positive prediction, Victima issues a PTW in the background to bring the PTEs of the evicted address into the L2 cache, and Victima transforms the fetched PTE entries into a TLB entry. This way, if the evicted TLB entry is accessed again in the future, Victima can directly access the corresponding PTE without walking the PT. 
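A compact way to see this trigger logic end-to-end is the following sketch; it is a simplified software model, and the class, field, and helper names are our own illustrative assumptions rather than the hardware interface.

```python
class VictimaSketch:
    """Illustrative software model of Victima's trigger logic (not the actual hardware)."""

    def __init__(self, predicts_costly, walk_page_table):
        self.predicts_costly = predicts_costly    # PTW cost predictor: vpn -> bool (assumed callable)
        self.walk_page_table = walk_page_table    # vpn -> the 8 leaf PTEs of its cluster (assumed callable)
        self.l2_tlb_blocks = {}                   # cluster id -> cached cluster of TLB entries

    def on_l2_tlb_miss_or_eviction(self, vpn):
        cluster = vpn // 8                        # 8 consecutive virtual pages share one TLB block
        if not self.predicts_costly(vpn):         # consult PTW-CP first
            return                                # not costly-to-translate: leave the L2 cache to data
        if cluster in self.l2_tlb_blocks:         # TLB block already cached: nothing to do
            return
        # On a miss the walk happens anyway; on an eviction it is issued in the background.
        leaf_ptes = self.walk_page_table(vpn)
        # Repurpose the block holding these leaf PTEs as a TLB block
        # (retag with the virtual page region, set the TLB bit, record ASID/page size).
        self.l2_tlb_blocks[cluster] = leaf_ptes

# Minimal usage of the sketch with stand-in predictor and walker:
sketch = VictimaSketch(predicts_costly=lambda vpn: True,
                       walk_page_table=lambda vpn: ["pte%d" % i for i in range(8)])
sketch.on_l2_tlb_miss_or_eviction(vpn=0x12345)
```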
Victima (i) is effective in both native and virtualized environments, (ii) is completely transparent to application and system software, (iii) unlike large software-managed TLBs, does not require contiguous physical allocations, and (iv) is compatible with modern large page mechanisms (e.g., Transparent Huge Pages in Linux [41]). **Key Results.** We evaluate Victima with an extended version of the Sniper simulator [42] (which is open-source [43]) using 11 data-intensive applications from five diverse benchmark suites (GraphBIG [44], GUPS [45], XSBench [46], DLRM [47] and GenomicsBench [48]). Our evaluation yields four major results that show Victima's effectiveness. First, in native execution environments, Victima improves performance by 7.4% on average over the baseline system that uses a four-level radix-tree-based PT, yielding 3.3% and 6.2% higher performance compared to a system with an optimistic 64K-entry L2 TLB and a system with a state-of-the-art software-managed L3 TLB [17], respectively. At the same time, Victima delivers similar performance as a system that employs an optimistic 128K-entry L2 TLB, while avoiding the associated area and power overheads. Second, in virtualized environments, Victima improves performance by 28.7% over the baseline nested paging mechanism [12], and outperforms an ideal shadow paging mechanism [49] by 4.9% and a system that employs a state-of-the-art software-managed TLB [17] by 20.1%. Third, Victima achieves such performance benefits by reducing L2 TLB miss latency by 22% (60%) on average in native (virtualized) execution environments compared to the baseline system (nested paging [12]). Fourth, all of Victima's benefits come at a modest cost of 0.04% area overhead and 0.08% power overhead compared to a modern high-end CPU [50]. This paper makes the following major contributions: * We observe a new opportunity to reuse the existing underutilized cache resources in order to store TLB entries and increase the translation reach of the processor's TLB hierarchy at low cost and low overheads. * We propose Victima, a new _software-transparent_ mechanism that drastically increases the translation reach of the processor by carefully and practically leveraging the underutilized resources of the cache hierarchy. The **key idea** of Victima is to repurpose L2 cache blocks to store clusters of TLB entries for costly-to-translate pages, thereby providing an additional low-latency and high-capacity component to back up the last-level TLB and reducing the number of PTWs. * We evaluate Victima using a diverse set of data-intensive applications and demonstrate its effectiveness in both native and virtualized environments. Victima achieves high performance benefits by effectively reducing last-level TLB miss latency compared to both realistic and optimistic baseline systems, with very modest area and power overheads compared to a modern high-end CPU. * We open-source Victima and all necessary traces and scripts to completely reproduce results at [https://github.com/CMU-SAFARI/Victima](https://github.com/CMU-SAFARI/Victima). Background ### The Virtual Memory Abstraction Virtual memory is a cornerstone of most modern computing systems that eases the programming model by providing a convenient abstraction to manage the physical memory [22, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72]. The operating system (OS), transparently to application software, maps each virtual memory address to its corresponding physical memory address. 
Doing so provides a number of benefits, including: (i) application-transparent memory management, (ii) sharing data between applications, (iii) process isolation, and (iv) page-level memory protection. Conventional virtual memory designs allow any virtual page to map to any free physical page. Such a flexible address mapping enables two important key features of virtual memory: (i) efficient memory utilization, and (ii) sharing pages between applications. However, such a flexible address mapping mechanism has a critical downside: it creates the need to store a large number of virtual-to-physical mappings, as for every process, the OS needs to store the physical location of every virtual page. ### Page Table (PT) The PT is a per-process data structure that stores the mappings between virtual and physical pages. In modern x86-64 processors, the PT is organized as a four-level radix-tree [73]. Even though the radix-tree-based PT optimizes for storage efficiency, it requires multiple pointer-chasing operations to discover the virtual-to-physical mapping. To search for a virtual-to-physical address mapping, the system needs to _sequentially_ access each of the four levels of the page table. This process is called _page table walk (PTW)_. Figure 1 shows the PTW assuming (i) an x86-64 four-level radix-tree PT whose base address is stored in the CR3 register, and (ii) 4KB pages. As shown in Figure 1, a single PTW requires four sequential memory accesses 1 to discover the physical page number. The processor uses the first 9-bits of the virtual address as offset (Page Map Level4; PML4) to index the appropriate entry of the PT within the first level of the PT 1. The processor then reads the pointer stored in the first level of the PT to access the second-level of the PT 1. It uses the next 9-bit set (Page Directory Page table; PDF) from the virtual address to locate the appropriate entry within the second level. This process continues iteratively for each subsequent level of the PT (Page Directory; PD 1 and Page Table; PT 1). Eventually, the processor reaches the leaf level of the PT, where it finds the final entry containing the physical page number corresponding to the given virtual address 1. ARM processors use a similar approach, with the number of levels varying across different versions of the ISA [74]. ### Virtualized Environments In virtualized environments, each memory request requires a two-level address translation: (i) from guest-virtual to guest-physical, and (ii) from guest-physical to host-physical. The dominant technique to perform address translation in virtualized environments is Nested Paging (NP) [12, 13]. In NP, the system uses two page tables: the guest page table that stores guest-virtual to guest-physical address mappings and the host page table that stores guest-physical to host-physical address mappings. To search for the mapping between a guest-virtual page to a host-physical page, NP performs a two-dimensional walk, since a host page table walk is required for each level of the guest page table walk. Therefore, in a virtualized environment with a four-level radix-tree-based PT, NP-based address translation can cause up to 24 sequential memory accesses (a 6\(\times\) increase in memory accesses compared to the native execution environment). ### Memory Management Unit (MMU) When a user process generates a memory (i.e., instruction or data) request, the processor needs to translate the virtual address to its corresponding physical address. 
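Before turning to the MMU, the four-level indexing of Figure 1 can be made concrete with a small sketch (assuming 48-bit virtual addresses, 4KB pages, and the standard 9-bit index per level; the function name is ours):

```python
def radix_indices(virtual_address):
    """Split a 48-bit x86-64 virtual address (4KB pages) into the four
    9-bit page-table indices and the 12-bit page offset."""
    offset = virtual_address & 0xFFF            # bits 0-11: offset within the 4KB page
    pt     = (virtual_address >> 12) & 0x1FF    # bits 12-20: leaf page table (PT) index
    pd     = (virtual_address >> 21) & 0x1FF    # bits 21-29: page directory (PD) index
    pdpt   = (virtual_address >> 30) & 0x1FF    # bits 30-38: page directory pointer table (PDPT) index
    pml4   = (virtual_address >> 39) & 0x1FF    # bits 39-47: page map level 4 (PML4) index
    return pml4, pdpt, pd, pt, offset

# Each of the four sequential memory accesses of a page table walk uses
# one of these indices to select an entry at the corresponding level.
print(radix_indices(0x7F12_3456_789A))
```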
Address translation is a critical operation because it sits on the critical path of the memory access flow: no memory access is possible unless the requested virtual address is first translated into its corresponding physical address. Given that frequent PTWs lead to high address translation overheads, modern cores comprise of a specialized memory management unit (MMU) responsible for accelerating address translation. Figure 2 shows an example structure of the MMU of a modern processor [75], consisting of three key components: (i) a two-level hierarchy of translation lookaside buffers (TLBs), (ii) a hardware page table walker, and (iii) page walk caches (PWCs). L1 TLBs are highly- or fully-associative caches that directly provide the physical address for recently-accessed virtual pages at very low latency (i.e., typically within 1 cycle). There are two separate L1 TLBs, one for instructions (L1 I-TLB) and one for data (L1 D-TLB). Modern TLBs make use of multiple page sizes beyond 4KB in order to (i) cover large amounts memory with a single entry and (ii) maintain compatibility with modern Oes that transparently allocate large pages [76, 77, 78, 79]. For example, an Intel Cascade Lake core [75] employs 2 L1 D-TLBs, one for 2MB pages and one for 4KB pages. Translation requests that miss in the L1 TLBs 1 are forwarded to a unified L2 TLB, that stores translations for both instructions and data. In case of an L2 TLB miss, the MMU triggers a PTW 1. PTW is performed by a dedicated hardware page table walker capable of performing multiple concurrent PTWs. In order to reduce PTW latency, page table walkers are equipped with page Figure 1: Four-level radix-tree page table walk in x86-64 ISA. Figure 2: Structure of the Memory Management Unit (MMU) of a modern processor. walk caches (PWC), which are small dedicated caches for each level of the PT (for the first three levels in x86-64). In case of a PWC miss, the MMU issues the request(s) for the corresponding level of the PT to the conventional memory hierarchy. To accelerate address translation in virtualized execution environments that use Nested Paging (Han et al., 2015), as shown in Figure 3, the MMU is additionally equipped with (i) a nested TLB that stores guest-physical-to-host-physical mappings and (ii) an additional hardware page table walker that walks the host PT (while the other one walks the guest PT). Upon an L2 TLB miss, the MMU triggers a guest PTW to retrieve the guest-physical address. On a PWC miss, the guest Page Table Walker must retrieve the guest PT entries from the cache hierarchy. However, to access the cache hierarchy that operates on host-physical addresses, the guest PTW must first translate the host-virtual address to the host-physical address using a host PTW. To avoid the host PTW, the MMU probes the nested TLB to search for the host-virtual-to-host-physical translation. Only in case of a nested TLB miss the MMU triggers the host PTW. ## 3. Motivation As shown in multiple prior academic works and industrial studies (Han et al., 2015; Han et al., 2015; Han et al., 2015; Han et al., 2015; Han et al., 2015; Han et al., 2015), various modern data-intensive workloads experience severe performance bottlenecks due to address translation. For example, a system that (i) employs a 1.5K-entry L2 TLB and (ii) uses both 4KB and 2MB pages, experiences a high MPKI of 39, averaged across all evaluated workloads (see Fig. 
5).1 At the same time, as we show in Figure 4, the average latency of a PTW is 137 cycles.2 Based on our evaluation results, frequent L2 TLB misses in combination with high-latency PTWs lead to an average of 30% of total execution cycles spent on address translation. Footnote 1: 58 describes our evaluation methodology in detail. Footnote 2: The x-axis of Figure 4 is cut off (at 190 cycles) since only 0.2% of the PTWs take more than 190 cycles to complete. Maximum observed PTW latency is 608 cycles. Previous works propose various solutions to reduce the high cost of address translation and increase the translation reach of the TLBs such as employing (i) large hardware TLBs (Han et al., 2015; Han et al., 2015; Han et al., 2015) or (ii) backing up the last-level TLB with a large software-managed TLB (Han et al., 2015; Han et al. suggested by CACTI 7.0 (CACTI, 2017)), compared to the baseline system that employs a two-level TLB hierarchy (with a 1.5K-entry 12-cycle L2 TLB). We observe that a large 64K-entry L3 TLB with a very aggressive 15-cycle access latency leads to a 2.9% performance increase compared to the baseline system. The performance gains are lower compared to employing a 64K-entry L2 TLB (4.0%). This is because, for applications that experience low L2 TLB hit rates, employing an L3 TLB results in a higher L3 TLB hit latency (L2 TLB miss latency + L3 TLB hit latency) compared to using a large L2 TLB. We conclude that employing a large L3 TLB is not universally beneficial, and the performance gains heavily depend on the L2 TLB hit rates and L3 TLB access latencies. ### Large Software-Managed TLBs Previous works (Liu et al., 2017; Liu et al., 2018; Liu et al., 2019; Liu et al., 2019; Liu et al., 2019; Liu et al., 2019; Liu et al., 2019) propose using large software-managed TLBs to reduce PTWs. However, software-managed TLBs suffer from four key disadvantages. First, to look up a software-managed TLB (STLB), the processor fetches STLB entries from the main memory into the cache hierarchy. At the same time, the hit rate of STLBs likely does not justify the cost of fetching STLB entries from the main memory. Hence, the total latency of accessing STLB entries and performing PTWs is comparable to the latency of performing PTWs in the baseline system. To validate our claim, Figure 9 shows the average L2 TLB miss latency in (i) the baseline system in native execution, (ii) a system with a state-of-the-art L3 STLB (Liu et al., 2019) in native execution, (iii) the baseline system that employs nested paging (NP) (Liu et al., 2019) in virtualized execution and (iv) a system with a state-of-the-art L3 STLB (Liu et al., 2019) and NP (Liu et al., 2019) in virtualized execution. We observe that the average L2 TLB miss latency in a system with an STLB is 122 cycles, which is comparable to the baseline system (128 cycles). However, the average L2 TLB miss latency in the system with NP in virtualized execution is 275 cycles, which is higher than the average L2 TLB miss latency in a system with an L3 STLB (220 cycles) in virtualized execution, making the STLB a more attractive solution in virtualized execution environments. Second, allocating an STLB in software requires contiguous physical address space (on the order of 10's of MB), which is difficult to find in environments where memory is heavily fragmented, such as data centers (Liu et al., 2019; Liu et al., 2019; Liu et al., 2019) and in cases where memory capacity pressure is high (Liu et al., 2019; Liu et al., 2019; Liu et al., 2019). 
Third, resizing an STLB throughout the execution of the program to match the program's needs is challenging due to the large data movement cost of migrating the TLB entries betweeen different software data structures (Liu et al., 2019; Liu et al., 2019; Liu et al., 2019). Fourth, integrating a software-managed TLB in the address translation pipeline requires OS and hardware changes to support (i) flushing and updating software STLB entries during a TLB shootdown (Liu et al., 2019; Liu et al., 2019), (ii) handling evictions from the hardware TLB to the STLB (Liu et al., 2019; Liu et al., 2019). ### Opportunity: Storing TLB Entries Inside the Cache Hierarchy Instead of expanding hardware TLBs or introducing large software-managed TLBs, we posit that a cost-effective method to drastically increase the translation reach of the TLB hierarchy is to store the existing TLB entries within the existing cache hierarchy. For example, a 2MB L2 cache can fit 128x the TLB entries a 2048-entry L2 TLB holds. When a TLB entry resides inside the L2 cache, only one low-latency (i.e., \(\approx\) 16 cycles) L2 access is needed to find the virtual-to-physical address translation instead of performing a high-latency (i.e., \(\approx\) 137 cycles on average) PTW. To better understand the potential of caching TLB entries in the cache hierarchy, we conduct a study where for every L2 TLB miss, the translation request is _always_ served from the L1 cache (_TLB-hit-L1_), L2 cache (_TLB-hit-L2_) or the LLC (_TLB-hit-LLC_). Figure 10 shows the reduction in address translation latency provided by TLB-hit-[L1, L2, LLC] compared to the baseline system. We observe that, even when servicing every L2 TLB miss from the LLC (which takes \(\approx\)35 cycles to access), L2 TLB miss latency is reduced by 71.9% on average across 11 workloads. We conclude that caching TLB entries inside the cache hierarchy can potentially greatly reduce the address translation latency. ### Cache Underutilization One potential pitfall of storing TLB entries inside the cache hierarchy is the potential reduction of caching capacity for application data, which could ultimately harm end-to-end performance. However, as shown in prior works (Liu et al., 2019; Liu et al., 2019; Liu et al. workloads, tend to (greatly) underutilize the cache hierarchy, especially the large L2/L3/L4 caches. This is because modern working sets exceed the capacity of the cache hierarchy and data accesses exhibit low spatial and temporal locality [30; 31; 32; 33; 34; 35; 36; 37; 38; 39; 40]. Figure 11 shows the reuse-level distribution of blocks in the L2 cache across our evaluated data-intensive workloads (note that y-axis starts from 75%). We observe that on average 92% of the cache blocks experience no reuse (i.e., 0 reuse) after being brought to the L2 cache (i.e., these blocks are _not_ accessed while they reside inside the L2 cache). In contrast, only 8% of blocks experience reuse higher than 1 (i.e., they are accessed more than once while they reside inside the L2 cache). We conclude that a large fraction of the underutilized cache blocks can be repurposed to store TLB entries _without_ replacing useful program data and harming end-to-end application performance. ### Our Goal **Our goal** is to increase the translation reach of the processor's TLB hierarchy by leveraging the underutilized resources in the cache hierarchy. 
We aim to design such a practical technique that: (i) is effective in both native and virtualized execution environments, (ii) does not require or rely on contiguous physical allocations, (iii) is transparent to both application and OS software and (iv) has low area, power, and energy costs. To this end, our key idea is to store TLB entries in the cache hierarchy. ## 4. Victima: Design Overview We present Victima, a new _software-transparent_ mechanism that drastically increases the translation reach of the TLB by leveraging the underutilized resources of the cache hierarchy. The **key idea** of Victima is to repurpose L2 cache blocks to store clusters of TLB entries. Doing so provides an additional low-latency and high-capacity component to back up the last-level TLB and thus reduces PTWs. Victima has two main components. First, a PTW cost predictor (PTW-CP) identifies costly-to-translate addresses based on the frequency and cost of the PTWs they lead to. Leveraging the PTW-CP, Victima uses the valuable cache space only for TLB entries that correspond to costly-to-translate pages, reducing the impact on cached application data. Second, a TLB-aware cache replacement policy prioritizes keeping TLB entries in the cache hierarchy by taking into account (i) the translation pressure (e.g., high last-level TLB miss rate) and (ii) the reuse characteristics of the TLB entries. Figure 12 shows the translation flow in Victima compared to the one in a conventional baseline processor [50]. In the baseline system (Fig. 12 top), (i) whenever an entry is evicted from the L2 TLB, the evicted TLB entry is not cached anywhere. Hence, (i) the TLB entry is dropped and (ii) a high-latency PTW is required to fetch it when it is requested again. In contrast, Victima (Fig. 12 bottom) stores into the L2 cache (i) entries that get evicted from for applications that experience a high number of capacity misses in the TLB hierarchy. Victima's functionality seamlessly applies to virtualized environments as well. In virtualized execution, where Victima stores into the L2 cache both (i) conventional TLB entries that store direct guest-virtual-to-host-physical mappings as well as (ii) nested TLB entries that store guest-physical-to-host-physical mappings. ## 5. Victima: Detailed Design We describe in detail (i) how the L2 cache is modified to store TLB entries, (ii) how Victima inserts TLB entries into the L2 cache, (iii) how address translation flow changes in the presence of Victima, (iv) how Victima operates in virtualized environments and (v) how Victima maintains TLB entries coherent. We use as the reference design point a modern x86-64 system that employs 48-bit virtual addresses (VA) and 52-bit physical addresses (PA) [73]. ### Modifications to the L2 Cache We minimally modify the L2 cache to (i) support storing TLB entries and (ii) enable a TLB-aware replacement policy that favors keeping TLB entries inside the L2 cache taking into account address translation pressure (e.g., L2 TLB MPKI) and the reuse characteristics of TLB entries. **TLB Blocks.** We introduce a new cache block type to store TLB entries in the data store of the L2 cache, called the TLB block. Figure 13 shows how the same address maps to (i) a conventional Figure 11. Reuse-level distribution of L2 cache blocks. Figure 12. Address translation flow in a conventional baseline processor [50] and Victima. L2 data cache block and (ii) an L2 cache block that contains TLB entries for 4KB or 2MB pages. 
Each cache entry can potentially store a data block or a TLB block. A conventional data block is (typically) accessed using the PA while a TLB block is accessed using the VA. Victima modifies the cache block metadata layout to enable storing TLB entries. First, an additional bit is needed to distinguish between a data block versus a TLB block. Second, in a conventional data block, the size of the tag of a 1MB, 16-way associative L2 cache consists of \(52-log_{2}(1024)-log_{2}(64)=36\) bits. However, in a TLB block, the tag consists of only 23 bits and is computed as \(48-log_{2}(4KB)-log_{2}(1024)-log_{2}(8)=23\) bits which is smaller than the tag needed for a data block.3 We leverage the unused space in TLB blocks to (i) avoid aliasing and (ii) store page size information. Footnote 3: Each 6-byte TLB block can store up to 8-byte PTEs. Victima uses the 3 least significant bits of the virtual page number to identify and access a specific PTE. To prevent aliasing between the virtual addresses (VAs) of different processes, 11 unused bits of the tag are reserved for storing the address-space identifier (ASID) or the virtual-machine identifier (VMID) of each process. The rest of the bits are used to store page size information. Given a 48-bit VA and 52-bit PA, we can spare 11 bits for the ASID/VMID. As the VA size becomes larger, e.g., 57 bits, fewer bits can be spared for the ASID/VMID (4 bits in case of 57-bit VA and 52-bit PA). However, modern operating systems do not use more than 12 ASIDs/core (Sandel et al., 2017) in order to avoid expensive lookups in the ASID table. Hence, when using 57-bit VAs and 52-bit PAs, even with only 4 bits left for the ASID, there is no risk of aliasing. For a cache with 64-byte cache lines, it is possible to uniquely tag and avoid aliasing between TLB entries (without increasing the size of the cache's hardware tag entries) only if \((PA_{length}>VA_{length}-9)\).4 In cases where this condition is not met, an alternative approach is to reduce the number of TLB entries in the TLB block (e.g., by storing 7 PTEs instead of 8 PTEs) and use the remaining bits for the tag/ASID/VMID. Previous works (e.g., (Sandel et al., 2017) propose such solutions to enable efficient sub-block tagging in data caches. Footnote 4: If \(PA_{length}\leq(VA_{length}-9)\), a single VA can map to different TLB blocks. This is because the tag of the TLB block does not fit inside the hardware tag entry of the L2 cache. **TLB-aware Cache Replacement Policy**. We extend the conventional state-of-the-art SRIP cache replacement policy (Sandel et al., 2017) to prioritize storing TLB entries of an application for longer time periods if the application experiences high address translation overheads (i.e., L2 TLB MPKI greater than 5). Listing 5.1 shows the pseudocode of the block insertion function, replacement candidate function, and cache hit function for SRIP in the baseline system and Victima (changes compared to baseline SRIP are marked in blue). Upon insertion of a TLB entry inside the L2 cache (insertBlockInL2(block) Line 1), the re-reference interval (analogous to reuse distance) is set to 0 (Line 6), marking the TLB entry as a block with a small reuse distance. This way, TLB entries are unlikely to be evicted soon after their insertion. 
Upon selection of a replacement candidate (chooseReplacementCandidate(), Line 10), if the selected replacement candidate is a TLB block and translation pressure is high (Line 23), SRRIP makes one more attempt to find a replacement candidate that is _not_ a TLB block (Lines 23-25). If no such candidate is found, the TLB block is evicted from the L2 cache and is dropped (i.e., not written anywhere else). Upon a cache hit to a TLB entry (updateOnL2CacheHit(index), Line 31), the re-reference interval is reduced by three instead of one (Line 34) to provide higher priority to the TLB entry compared to other data blocks (Line 36).

```
 1 function insertBlockInL2(block):
 2   // If inserting a TLB block and TLB pressure is high,
 3   // set the re-reference interval to 0 to provide high priority,
 4   // assuming that reuse in the near future will be high
 5   if (block == TLB and TLB_MPKI > 5)
 6     rrip_counter[block] = 0
 7   else
 8     rrip_counter[block] = RRIP_MAX
 9
10 function chooseReplacementCandidate():
11   // Replace an invalid block if possible
12   for i from 0 to associativity-1:
13     if (block[i] == invalid) return block[i]
14   // Otherwise, search for a block whose re-reference counter
15   // equals RRIP_MAX, aging the set until such a block is found
16   retried = false
17   while (true):
18     for i from 0 to associativity-1:
19       if (rrip_counter[block[i]] == RRIP_MAX)
20         chosen_block = block[i]
21         // If the chosen block is a TLB block and TLB pressure is
22         // high, make one more attempt to find a non-TLB victim
23         if (chosen_block == TLB and TLB_MPKI > 5 and not retried)
24           retried = true
25           continue
26         return chosen_block
27     // No candidate with RRIP_MAX found: age every block in the set
28     for i from 0 to associativity-1:
29       rrip_counter[block[i]] += 1
30
31 function updateOnL2CacheHit(index):
32   // On a hit, promote TLB blocks more aggressively than data blocks
33   if (block[index] == TLB and TLB_MPKI > 5)
34     rrip_counter[block[index]] = max(rrip_counter[block[index]] - 3, 0)
35   else
36     rrip_counter[block[index]] = max(rrip_counter[block[index]] - 1, 0)
```

To identify which pages will be costly-to-translate, Victima employs a Page Table Walk cost predictor (PTW-CP). Figure 14 depicts Victima's operations on an L2 TLB miss or eviction.

**Inserting a TLB Block into the L2 Cache upon an L2 TLB Miss**. When an L2 TLB miss occurs, the MMU consults the PTW-CP to find out if the page is predicted to be costly-to-translate in the future (Fig. 14). If the prediction is positive, the MMU checks if the corresponding TLB block already resides inside the L2 cache. If it does, no further action is needed. If not, the MMU first waits until the PTW is completed. When the last level of the PT is fetched, the MMU transforms the cache block that contains the PTEs into a TLB block by updating the metadata of the block. The MMU (i) replaces the existing tag with the tag of the virtual page region, (ii) sets the TLB bit to mark the cache block as a TLB block, and (iii) updates the ASID and the page size information associated with the TLB block. This way, the TLB block containing the consecutive PTE entries is directly accessible using the corresponding virtual page numbers and the ASID of the application _without_ walking the PT. Storing several (e.g., 8 in our implementation) TLB entries for consecutive virtual pages inside the same L2 cache TLB block can be highly beneficial for applications whose memory accesses exhibit high spatial locality and frequently access neighboring pages.

**Inserting a TLB Block into the L2 Cache upon an L2 TLB Eviction**. When an L2 TLB eviction occurs, the MMU consults the PTW-CP to find out if the page is predicted to be costly-to-translate in the future (Fig. 14). If the outcome of the prediction is positive, the MMU checks if the corresponding TLB block already resides in the L2 cache. If it does, no further action is needed. If it does not, the MMU issues in the background a PTW for the corresponding TLB entry. When the last level of the page table is fetched, the MMU follows the same procedure as the L2 TLB miss-based insertion (i.e., transforms the cache block that contains the PTEs into a TLB block). This way, if the evicted TLB entry (or any other TLB entry in the block) is accessed again in the future, Victima can directly access the corresponding PTE without walking the PT.

**Page Table Walk Cost Predictor: Functionality**.
The PTW cost predictor (PTW-CP) is a small comparator-based circuit that estimates whether the page is among the top 30% most costly-to-translate pages. Using it, Victima predicts if a page will cause costly PTWs in the future and decides whether the MMU should store the corresponding TLB block inside the L2 cache. To make this decision, PTW-CP uses two metrics associated with a page: (i) PTW frequency and (ii) PTW cost, both of which are embedded inside the PTE of the corresponding page. Figure 15 shows the structure and the functionality of PTW-CP. PTW frequency is stored as a 3-bit counter in the unused bits of the PTE and is incremented after every PTW that fetches the corresponding PTE. PTW cost is also stored as a 4-bit counter in the unused bits of the PTE and is incremented every time the PTW leads to at least one DRAM access. Both counters are updated by the MMU after every PTW that fetches the corresponding PTE. If any of the two counters overflows, its value remains at the maximum value throughout the rest of the program's execution. On an L2 TLB miss or eviction, the PTW-CP waits until the corresponding PTE is fetched inside the L2 TLB. PTW-CP fetches the two counters from the TLB entry that contains the PTE, passes them through a tree of comparators, and calculates the result. If the L2 cache experiences high MPKI (i.e., data exhibits low locality, meaning that caching data is not that beneficial), the PTW-CP is bypassed and the TLB entry is inserted inside the L2 cache without consulting the PTW-CP. **Page Table Walk Cost Predictor: Feature Selection**. Our development of PTW-CP's architecture involves a systematic and empirical approach to (i) identify the most critical features for making high-accuracy predictions and (ii) create an effective predictor while minimizing hardware overhead and inference latency. Initially, we collect a set of 10 per-page features related to address translation, as shown in Table 1. From these 10 features, we methodically identify a small subset that would maximize accuracy while minimizing prediction time and storage overhead. Table 2 shows the architectural characteristics and the performance of three different multi-layer perceptron-based neural networks (NN) [92] and of our final comparator-based model. First, we evaluate three different NN architectures with different feature sets to gain insights about the most critical features (for accuracy). The first NN (NN-10) uses all 10 features, the second NN (NN-5) uses a set of 5 features (PTW cost, PTW frequency, PWC hits, L2 TLB evictions, and accesses to the page), and the third (NN-2) uses only 2 features, the PTW frequency and the PTW cost. We use four metrics to evaluate the performance of each model: accuracy, precision, recall, and F1-score. 
Accuracy is the fraction of correct predictions, \begin{table} \begin{tabular}{|l|l|} \hline **Feature (per PTE)** & **Biss Description** \\ \hline Page: State & 1 & The size of the page (iMS or 2MB) \\ **Page Table Walk Frequency** & **3** & **4** \\ **Page Table Walk Cost** & **4** & **6** \\ **Purge Table Walk Cost** & **4** & **6** \\ **Purge Talk Cost** & **5** & **6** \\ **Purge Talk Cost** & **5** & **6** \\ **L1 TLB Misses** & **5** & **6** \\ **L2 TLB Misses** & **5** & **6** \\ **L2 Cache Infs** & **5** & **6** \\ **L1 TLB Evictions** & **5** & **6** \\ **L2 TLB Evictions** & **6** & **6** \\ **L3 TLB Evictions** & **6** & **6** \\ \hline \end{tabular} \end{table} Table 1: Per-Page Feature Set Figure 14: Insertion of a TLB block into the L2 cache upon (i) an L2 TLB miss and (ii) an L2 TLB eviction. Figure 15: Page Table Walk Cost Predictor. precision is the fraction of correct positive (i.e., costly-to-translate) predictions, and recall is the fraction of correct negative predictions. F1-score is the harmonic mean of precision and recall. In the context of PTW-CP, making negative predictions when the page is actually costly-to-translate leads to performance degradation, while making positive predictions when the page is actually _not_ costly-to-translate leads to L2 cache pollution. From Table 2, we observe that NN-10 achieves the highest performance, with an F1-score of 90.42%. By reducing the number of features to 5, NN-5 still achieves high performance reaching 89.89% F1-score while NN-2 leads to an F1-score of 80.66%. At the same time, NN-2 is 7.75x smaller than NN-10 and 90.5x smaller than NN-5 which makes it an attractive solution for PTW-CP as it achieves reasonable accuracy with small hardware overhead. To gain a better understanding of the prediction pattern of NN-2, Fig. 16 shows the predictions of the network for all possible PTW frequency and PTW cost value pairs. We observe that the network exhibits a clear prediction pattern that separates costly-to-translate pages from non-costly-to-translate pages: PTW frequency-cost value pairs that fall inside the boundaries of the bounding box (rectangle spanning from the bottom-left corner (1,1) to the top-right corner (12,7) as drawn on Fig. 16) are classified as costly-to-translate by NN-2, while PTW frequency-cost value pairs that fall outside the bounding box are classified as non-costly-to-translate. Many of the PTW frequency-cost value pairs never occur during the execution of the applications we evaluate and are not classified by NN-2. Table 2 demonstrates that a simple comparator approach that mimics the functionality the bounding box shown in Fig. 16, achieves an F1-score of 80.66% without any performance loss compared to NN-2. The comparator-based model requires only 24 bytes of storage, 251x less than NN-10, 2923x less than NN-5 and 32x less than NN-2. The comparator-based model requires only (i) four comparators to compare the two counters with the edges of the bounding box, i.e., (1,1) and (12,7) and (ii) can make a prediction in a single cycle. The comparator-based model is the PTW-CP architecture that we use in Victim. ### Address Translation Flow with Victima Figure 17 demonstrates the address translation flow in a system that employs Victima. When an L2 TLB miss occurs, the MMU in parallel (i) initiates the PTW 1 and (ii) looks up the corresponding TLB block in the L2 cache 1. 
In contrast to regular L2 data block lookups, which are performed using the physical address, a TLB block lookup is performed using the virtual page number (VPN) and the address-space identifier (ASID) of the translation request. The size of the VPN is not known a priori, so Victima probes the L2 cache twice in parallel, once assuming a 4KB VPN and once assuming a 2MB VPN. If the tag (either the tag of the 4KB VPN or the tag of the 2MB VPN) and the ASID matches with a block that has the TLB-entry bit set, the translation request is served by the L2 cache 1, the PTW is aborted, and the TLB entry is inserted into the L2 TLB. If the TLB entry is not found in the L2 cache, the PT Walker runs to completion and resolves the translation 1. ### Victima in Virtualized Environments We demonstrate how Victima improves address translation in virtualized environments. The key idea is to insert both (i) TLB entries and (ii) _nested_ TLB entries into the L2 cache to increase the translation reach of the processor's TLB hierarchy for both guest-virtual-to-guest-physical and guest-physical-to-host-physical address translations and avoid both (i) guest-PTWs and (ii) host-PTWs. A nested TLB block is a block of 8 nested TLB entries that correspond to 8 contiguous host-virtual pages. To distinguish between conventional TLB blocks and nested TLB blocks, Victima extends the cache block metadata with an additional bit to mark a block as a nested TLB block. Figure 18 shows how Nested TLB blocks are inserted into the L2 cache in a system that employs Victima and nested paging [12] in virtualized execution. (conventional TLB blocks are allocated as described in SS5.2). Figure 16: Prediction pattern of NN-2. The bounding box separates the PTW cost-frequency pairs that lead to positive predictions (inside the box) from the ones that lead to negative predictions (outside the box). Figure 17: Address translation flow in a system with Victima. Figure 18: Insertion of a nested TLB block into the L2 cache upon (i) a nested TLB miss and (ii) a nested TLB eviction \begin{table} \begin{tabular}{|l|l l l l|} \hline **Model Parameters** & NN-10 & NN-5 & NN-2 & **Comparator** \\ \hline _a1_ features & 10 & 5 & 2 & **2** \\ Number of Layers & 4 & 4 & 6 & N/A \\ Size of Hidden Layers & 16 & 54 & 4 & N/A \\ Size (0) & 6024 & 70152 & 776 & **24** \\ Recall & 93.345 & 92.445 & 89.624 & **80.615** \\ Accuracy & 92.135 & 91.725 & 82.905 & **82.505** \\ Precision & 87.48\% & 87.47\% & 73.33\% & **73.34\%** \\ F1-score & 90.42\% & 89.89\% & 80.64\% & **80.64\%** \\ \hline \end{tabular} \end{table} Table 2: Comparison of Different Types of PTW-CP Figure 18: Address translation flow in a system with Victima. the PTW-CP to find out if the host-virtual page will be costly-to-translate in the future 1. If the prediction is positive, the MMU checks if the corresponding nested TLB block already resides inside the L2 cache 2. If it does, no further action is needed. If not, the MMU first waits until the host-PTW is completed. When the last level of the host-PT is fetched 3, the MMU transforms the cache block that contains the host-PTEs to a nested TLB block by updating the metadata of the block 4. The MMU (i) replaces the existing tag with the tag of the host-virtual page region, (ii) sets the nested TLB bit to mark the cache block as a nested TLB block, and (iii) updates the ASID (or VMID) and the page size information. 
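Before turning to the eviction case, the TLB-block lookup described in Section 5.3 can be sketched as follows; this is a simplified software model whose set/tag arithmetic assumes the 1024-set L2 organization used in the tag computation of Section 5.1, and all names are illustrative.

```python
NUM_SETS = 1024            # assumed L2 set count (see the tag arithmetic in Section 5.1)
PTES_PER_BLOCK = 8         # eight translation entries per 64-byte TLB block

def tlb_block_probe(vaddr, page_shift):
    """Compute (set index, tag, PTE slot) for a TLB-block lookup,
    given the page size being assumed (shift 12 for 4KB, 21 for 2MB pages)."""
    vpn     = vaddr >> page_shift         # virtual page number
    slot    = vpn % PTES_PER_BLOCK        # which of the 8 entries inside the block
    cluster = vpn // PTES_PER_BLOCK       # 8 consecutive pages form one cluster
    set_idx = cluster % NUM_SETS          # L2 set holding the TLB block
    tag     = cluster // NUM_SETS         # remaining bits form the (shorter) TLB-block tag
    return set_idx, tag, slot

# The page size is not known a priori, so both probes are issued in parallel:
probe_4kb = tlb_block_probe(0x7F12_3456_789A, 12)
probe_2mb = tlb_block_probe(0x7F12_3456_789A, 21)
print(probe_4kb, probe_2mb)
```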
**Inserting a Nested TLB Block into the L2 Cache upon a Nested TLB Eviction.** When a Nested TLB eviction occurs, the MMU consults the PTW-CP to find out if the host-virtual page will be costly-to-translate in the future 1. If the outcome of the prediction is positive, the MMU checks if the corresponding nested TLB block already resides in the L2 cache 2. If it does, no further action is needed. If it does not, the MMU issues in the background a host-PTW for the corresponding TLB entry 2. When the last level of the host-PT is fetched 3, the MMU transforms the cache block that contains the host-PTEs to a nested TLB block. 4. **Address Translation Flow.** Figure 18 shows the address translation flow of a system that employs Victima and nested paging (Victima and nested paging (Victima and nested paging (Victima and nested TLB) in virtualized execution). If a nested TLB miss occurs 1, the MMU probes the L2 cache to search for the nested TLB entry 2. If the nested TLB entry is found inside the L2 cache, the host-PTW gets skipped 3. If it is not found, the host-PTW is performs the guest-physical-to-host-physical address translation 4. ## 6. TLB Maintenance Operations Modern ISAs provide specific instructions used by the OS to invalidate TLB entries and maintain correctness in the presence of (i) context switches and (ii) modifications of virtual-to-physical address mappings (called TLB shootdowns) that occur due to physical page migration, memory de-allocation etc. Different ISAs provide different instructions for TLB invalidations. For example, the ARM v8 architecture (Victima and Victima, 2015, 2016) defines multiple special instructions to invalidate TLB entries with each instruction handling a distinct case (e.g., invalidating a single TLB entry vs invalidating all TLB entries with a specific ASID). x86-64 provides a single instruction, INWLPG, which corresponds to invalidating one single TLB entry (Victima and Victima, 2015). In Victima, whenever a TLB invalidation is required, the corresponding TLB entries in the L2 cache need to be invalidated. In this section, following the example of the ARM specification, which is a super-set of other specifications we know of, we discuss in detail how Victima supports TLB invalidations due to context-switches and TLB shootdowns. ### Context Switches TLB flushing occurs when the OS switches the hardware context and schedules another process (or thread) to the core. In this case, the OS makes a decision on whether or not the TLB entries across the TLB hierarchy should be invalidated, which depends on the ASIDs of the current and to-be-executed processes (in practice Linux uses only 12 different ASIDs per core even though the processor can support up to 4096 ASIDs). In Victima, if the OS flushes the entire TLB hierarchy, all the TLB blocks in L2 cache need to be invalidated as well. If the OS performs a partial flush based on the ASID, all the TLB blocks in L2 cache with the corresponding ASID need to be evicted. In the corner case that Victima uses fewer bits for the ASID, i.e., when L2 cache tag is not large enough to store enough ASID bits to cover the ASID of the process, all the TLB blocks inside the L2 cache get invalidated during a context switch (the L1 and L2 TLB entries can still be invalidated using the ASID). Based on our evaluation setup, for a 2MB L2 cache which is occupied by 50% by TLB blocks, the total time to complete the invalidation procedure is on the order of 100 ns. 
The invalidation procedure happens in parallel with the L2 TLB invalidation and is negligible compared to context switch completion times (order of \(\mu\)s (Victima and Victima, 2015; Victima and Victima, 2015)). **(i)** **Invalidating all TLB entries**. To invalidate all the TLB blocks inside the L2 cache, the L2 TLB first sends an invalidation command to the L2 cache controller. The cache controller probes in parallel all cache banks to invalidate all the TLB blocks of every L2 cache set. For each way, if the TLB entry bit is set, the TLB block is invalidated. **(ii)** **Invalidating all TLB entries with a specific ASID**. To invalidate all TLB blocks with a specific ASID, the L2 TLB first sends an invalidation command to the L2 cache controller with the corresponding ASID. For every cache block, if the TLB entry bit is set and the ASID matches the ASID of the invalidation request, the TLB block is invalidated. If the size of the ASID of the invalidation command is larger (e.g., 4 bits) than the supported ASID (e.g., 3 bits), then all the TLB blocks inside L2 cache are flushed. However, we believe this is an uncommon case, because, e.g., Linux uses only 12 ASIDs/core (Victima and Victima, 2015). ### TLB Shooddowns A TLB shootdown occurs when the CPU needs to invalidate stale TLB entries on local and remote cores. It is caused by various memory management operations that modify page table entries, such as de-allocating pages (unmap()), migrating pages, page permission changes, deduplication, and memory compaction. As shown in previous works (Victima and Victima, 2015), TLB shootdowns take order of \(\mu\)s time to complete due to expensive inter-processor interrupts (IPIs). In Victima, if the system performs a TLB shootdown, the corresponding TLB blocks need to be invalidated in the L2 cache. We explain how for two different TLB shootdown-based invalidations: **(i)** **Invalidating a single TLB entry given VA and ASID**. Invalidating a specific TLB entry by VA and ASID only requires sending an invalidation command with the VA and the ASID to the L2 cache controller. Since each TLB block contains eight contiguous Figure 19. Address translation flow in a system with Victima in a virtualized execution environment. TLB entries, invalidating one TLB entry of the TLB block leads to invalidating all eight corresponding TLB entries. **(ii) Invalidating all TLB entries given a range of VAs.** Invalidating a range of VAs requires sending multiple invalidation commands with different VAs to the L2 cache controller. The L2 cache controller accordingly invalidates all the corresponding TLB blocks. ## 7. Area & Power Overhead Victima requires three additions to an existing high-performance core design: (i) two new _TLB Entry_ bits in every L2 cache block (one of TLB entries and one for nested TLB entries) (SS5.1), (ii) the PTV cost estimator (SS5.2) and (iii) the necessary logic to perform tag matching and invalidation of TLB blocks using the _TLB Entry_ bit, the VPN, and the ASID (SS6). Extending each L2 cache block with two _TLB Entry_ bits results in a 0.4% storage overhead for caches with 64B blocks (e.g., in total 8KB for a 2MB L2 cache). PTW-CP requires only (i) 4 comparators to compare the PTE counters with the corresponding thresholds and (ii) 4 registers to store the thresholds. 
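The storage figures quoted above can be verified with straightforward arithmetic (assuming 64-byte blocks and the 2MB L2 cache of the evaluated configuration):

```python
block_bits = 64 * 8                      # 512 bits of data per 64-byte cache block
extra_bits = 2                           # one TLB-entry bit plus one nested-TLB-entry bit per block
l2_blocks  = (2 * 1024 * 1024) // 64     # 32,768 blocks in a 2MB L2 cache

overhead_pct   = 100 * extra_bits / block_bits   # ~0.39%, i.e., about 0.4% storage overhead
overhead_bytes = l2_blocks * extra_bits // 8     # 8,192 bytes = 8KB in total
print(round(overhead_pct, 2), overhead_bytes)
```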
To support tag matching/invalidation operations for TLB blocks, we extend the tag comparators of the L2 cache with a bitmask to distinguish between tag matching/invalidation for TLB blocks and tag matching/invalidation for conventional data blocks. Based on our evaluation with McPAT [97], all additional logic requires 0.04% area overhead and 0.08% power overhead on top of the high-end Intel Raptor Lake processor [50]. ## 8. Evaluation Methodology We evaluate Victima using an extended version of the Sniper Multicore Simulator [42]. This simulator and its documentation are freely available at [https://github.com/CMU-SAFARI/Victima](https://github.com/CMU-SAFARI/Victima). We extend Sniper to accurately model: (i) TLBs that support multiple page sizes, (ii) the conventional radix page table walk, (iii) page walk caches, (iv) nested TLBs and nested paging [12] and, (vi) the functionality and timing of all the evaluated systems. Table 3 shows the simulation configuration of (i) the baseline system and (ii) all evaluated systems. **Workloads.** Table 4 shows all the benchmarks we use to evaluate Victima and the systems we compare Victima to. We select applications with high L2 TLB MPKI (\(>5\)), which are also used in previous works [85; 87; 101; 102]. We evaluate our design using seven workloads from the GraphBig [44] suite, XSBench [46], the Random access workload from the GUPS suite [45], Sparse Length Sum from DLRM [47] and kmer-count from GenomicsBench [48]. We extract the page size information for each workload from a real system that uses Transparent Huge Pages [77; 100] with both 4KB and 2MB pages. Each benchmark is executed for 500M instructions. **Evaluated Systems in Native Execution.** We evaluate six different systems in native execution environments: (i) _Radix_: Baseline x86-64 system that uses the conventional (1) two-level TLB hierarchy and (2) four-level radix-based page table, (ii) _POM-TLB_: a system equipped with a large 64K-entry software-managed L3 TLB [17] and the TLB-aware SRRIP policy (SS5.1) at the L2 cache, (iii) _Opt. L3TLB-64K_: a system equipped with a 64K-entry L3 TLB with an optimistic 15-cycle access latency, (iv) _Opt. L2TLB-64K_: a system equipped with a 64K-entry L2 TLB with an optimistic 12-cycle access latency, (v) _Opt. L2TLB-128K_: a system equipped with a 128K-entry L2 TLB with an optimistic 12-cycle access latency, and (vi) _Victima_: a system that employs Victima and the TLB-aware SRRIP policy (SS5.1) at the L2 cache. **Evaluated Systems in Virtualized Execution.** We evaluate four different systems in virtualized execution environments: (i) _Nested Paging (NP)_: Baseline x86-64 system that uses (1) a two-level TLB hierarchy and (2) a 64-entry Nested TLB and employs Nested Paging [12], (ii) _POM-TLB_: a system equipped with a large 64K-entry software-managed L3 TLB [17] and the TLB-aware SRRIP policy (SS5.1) at the L2 cache, (iii) _I-SP_: a system that employs an ideal version of shadow paging [12; 49] where (1) only a four-level radix shadow page table walk is needed to discover the virtual-to-physical \begin{table} \begin{tabular}{l l l} \hline \hline **Suite** & **Workload** & **Dataset size** \\ \hline \multirow{4}{*}{GraphBNG [44]} & Between Correctly (BG). 
Read-first search & \multirow{4}{*}{8 GB} \\ & (GPS), Connected components (CC), Graph coloring (CC), PageRank (PB), Triangle counting (TC), & \\ & Short-path (SP) & \\ \hline XSBench [46] & Particle-Simulation (CS) & 9 GB \\ \hline GUPS [44] & Random-access (RNR) & 10 GB \\ \hline DLBM [47] & Sparse-length sum (TLBM) & 10.3 GB \\ \hline GromovicBench [48] & k-mer counting (GEN) & 33 GB \\ \hline \hline \end{tabular} \end{table} Table 4. Evaluated Workloads \begin{table} \begin{tabular}{l|l} \hline \hline **Decline System** & **Baseline System** \\ \hline **Core** & 4-way OGO 386-64.26GHz \\ \hline \multirow{4}{*}{**MMU**} & L1-TLB 128 entry, **i-way** since, 1-cycle latency \\ \cline{2-3} & L1-TLB (4KB) 64-entry, 4-way assoc, 1-cycle latency \\ \cline{2-3} & L1-TLB (2MB) 32-entry, 4-way assoc, 1-cycle latency \\ \cline{2-3} & L2-TLB 136-tree, 12-way assoc, 1-cycle latency \\ \cline{2-3} & 3-Split Page Walk Caches: 32 entry, 4-way assoc, 2-cycle latency \\ \hline \multirow{4}{*}{**L1 Cache**} & L1-Cache: 32 KB, 8-way assoc, 4-cycle access latency \\ \cline{2-3} & L1-Cache: 32 KB, 8-way assoc, 4-cycle access latency \\ \cline{2-3} & L10 replacement policy, TP-stride prefelcter [98] \\ \hline \multirow{2}{*}{**L2 Cache**} & 2 MB, 16-way assoc, 1-cycle latency \\ \cline{2-3} & SRRIP replacement policy [91]; Stream prefelcter [99] \\ \hline \multirow{2}{*}{**L3 Cache**} & 2 MB/Notec, 16-way assoc, 35-cycle latency \\ \cline{2-3} & DRB 4-9 41.26 10-node offset \\ \cline{2-3} & Memory per node: 256GB-1TB \\ \hline \multirow{4}{*}{**POM-TLB [17]**} & \multirow{4}{*}{64K-entry L3 software-managed TLB 16-way assoc} \\ \cline{2-3} & TLB-aware SRRIP replacement policy (§5.1) \\ \cline{2-3} & 1.58-entry L2 TLB, 12-cycle latency \\ \cline{2-3} & 64K-entry L3 TLB, 64-entry 13-cycle latency \\ \hline \multirow{4}{*}{**Opt. L2 TLB-64K**} & \multirow{4}{*}{64K-entry L3 TLB, 64-way assoc, 64-cycle latency} \\ \cline{2-3} & 64K-entry L2 TLB, 16-way assoc, 64-cycle latency \\ \cline{2-3} & 128-entry L2 TLB, 16-way assoc, 64-cycle latency \\ \cline{2-3} & 2DTW, Guet PTE-Four-level Radix, Host PTE: Four-level Radix \\ \cline{2-3} & 64-entry Nested T2L, 1-cycle latency \\ \hline \multirow{4}{*}{**Metal Shadow Paging**} & 1D Shadow PTW instead of 2D PTW \\ \cline{2-3} & Updates to shadow page table cause no performance overheads \\ \hline \multirow{2}{*}{**Victima**} & M00I consults PTW-CP only 41 L2 cache MPKI \(<\) 5(§3.2) \\ \cline{2-3} & TLB-aware SRRIP replacement policy (§5.1) \\ \hline \hline \end{tabular} \end{table} Table 3. Simulation Configuration and Simulated Systems translation and (2) the updates to the shadow page table are performed without incurring performance overhead, and (iv) _Victima_: a system that employs Victima and caches both TLB and nested TLB entries in the L2 cache which is equipped with the TLB-aware SRRIP policy at the L2 cache. ## 9 Evaluation Results ### Native Execution Environments Figure 20 shows the execution time speedup provided by POM-TLB, Opt. L3TLB-64K, Opt. L2TLB-64K, Opt. L2TLB-128K and Victima compared to Radix. We make two key observations: First, Victima on average respectively outperforms Radix, POM-TLB, Opt. L3TLB-64K, Opt. L2TLB-64K, by 7.4%, 6.2%, 4.4%, 3.3%. In RND, which follows highly irregular access patterns, Victima improves performance by 28% over Radix. Second, Victima achieves similar performance gains as Opt.L2-TLB 128K without the latency/area/power overheads associated with an 128K-entry TLB. 
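To put these capacity comparisons in perspective, a quick back-of-the-envelope reach calculation (expanded on quantitatively in §9.2.1 below) is useful. The sketch assumes 4KB pages, the 1.5K-entry baseline L2 TLB, 64B L2 cache blocks, and 8 PTEs per cached TLB block, all as described in the text; the 220 MB figure is the average reach reported later, and the fraction of L2 blocks it implies is back-solved here, not a measured quantity.

```python
# Back-of-the-envelope translation reach (4KB pages; 2MB pages scale all values up).
PAGE = 4 * 1024

l2_tlb_reach   = 1536   * PAGE            # baseline 1.5K-entry L2 TLB      -> ~6 MB
tlb_128k_reach = 131072 * PAGE            # Opt. L2TLB-128K                 -> 512 MB

cache_blocks   = (2 * 1024 * 1024) // 64  # 32768 blocks in the 2MB L2 cache
block_reach    = 8 * PAGE                 # 8 contiguous PTEs per TLB block -> 32KB
l2_upper_bound = cache_blocks * block_reach   # if *every* L2 block held PTEs -> 1 GB

victima_avg = 220 * 2**20                 # ~220 MB average reach reported in Sec. 9.2.1
print(victima_avg / l2_tlb_reach)                 # ~36x the baseline L2 TLB, as reported
print(victima_avg / block_reach / cache_blocks)   # ~0.2: only ~20% of L2 blocks suffice
```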
To better understand the performance benefits achieved by Victima, we examine the impact of Victima on (i) the number of PTWs and (ii) the L2 TLB miss latency. Figures 21 shows the reduction in PTWs achieved by POM-TLB, L2 TLB-64K, L2 TLB-128K and Victima over Radix, in a native execution environment, across 11 workloads. We observe that Victima reduces the number of PTWs by 50%, POM-TLB by 37%, L2 TLB-64K by 37% and L2 TLB-128K by 48% on average across all workloads. L2 TLB-128K and Victima lead to similar reductions in PTWs, which explains the similar performance gains of the two mechanisms. Figure 22 shows the reduction in L2 TLB miss latency for POM-TLB and Victima over Radix. Victima and POM-TLB respectively reduce L2 TLB miss latency by 22% and 3% over Radix. We observe that the latency of accessing POM-TLB nearly nullifies the potential performance gains of reducing PTWs. We conclude that Victima delivers significant performance gains compared to all evaluated systems due to the reduction in the number of PTWs which in turn leads to a reduction in the total L2 TLB miss latency. ### Diving Deeper into Victima #### 9.2.1. Translation Reach Figure 23 shows the translation reach of a processor that uses Victima averaged across 500K execution epochs.5 We observe that the average translation reach provided by Victima is 36x larger (220 MBs) than the maximum reach offered by the L2 TLB of the baseline system that uses a two-level TLB hierarchy. This is due to the fact that each cache block can cover 32KBs (16MB) of memory while each L2 TLB block covers 4KB (2MB) per entry and the L2 cache typically has significantly more blocks than the L2 TLB (e.g., a 2MB cache has 21\(\times\) the blocks of a 1.5K-entry L2 TLB). Footnote 5: Each epoch consists of 1K instructions and we assume 4KB pages for simplicity. #### 9.2.2. Reuse of TLB Blocks Figure 24 shows the reuse distribution of the TLB blocks in the L2 cache (we measure a block's reuse once the block gets evicted from the L2 cache). We observe that the majority of TLB blocks (65%) experience high reuse (i.e., accessed more than 20 times before getting evicted from the L2 cache) due to (i) the accuracy of the PTW-CP (82% average accuracy across all workloads) and (ii) the prioritization of the TLB blocks by the TLB-aware replacement policy used in the L2 cache. We conclude that Victima effectively utilizes underutilized L2 cache resources to store high-reuse TLB blocks. In a system without Victima, accessing these TLB blocks would lead to high-latency PTWs. #### 9.2.3. Sensitivity to L2 Cache Size Figure 25 shows the reduction in PTWs achieved by Victima for four different L2 cache sizes, ranging from 1MB up to 8MB. We observe that Victima increasingly reduces PTWs with increasing L2 cache sizes. For the 8MB cache configuration, Victima achieves the highest reduction in PTWs, 63% compared to Radix. This can be attributed to the fact that a larger L2 data cache allows for caching more TLB blocks, thereby increasing the translation reach of the processor. #### 9.2.4. Sensitivity to L2 Cache Replacement Policy Figure 26 shows the performance of Victima when employing the TLB-aware SRRIP Figure 24. Reuse-level distribution of TLB blocks in L2 cache. Figure 21. Reduction in PTWs provided by POM-TLB, L2 TLB-64K, L2 TLB-128K, and Victima over Radix. Figure 23. Translation reach provided by TLB blocks stored in L2 cache (assuming 4KB page size). Figure 22. L2 TLB miss latency in POM-TLB and Victima normalized to Radix. 
replacement policy at the L2 cache compared to employing a conventional TLB-agnostic SRRIP replacement policy. We observe that employing the TLB-aware SRRIP leads to 1.8% higher performance compared to the conventional SRRIP. We conclude that Victima can deliver high performance with both TLB-aware and TLB-agnostic replacement policies. ### Virtualized Environments Figure 27 shows the execution time speedup of POM-TLB, I-SP and Victima over Nested Paging, in a virtualized execution environment, across 11 workloads. We observe that Victima outperforms Nested Paging on average by 28.7%, I-SP by 4.9%, and POM-TLB by 20.1%, across all workloads. To better understand the performance speedup achieved by Victima, we examine the impact of Victima on (i) the number of guest and host PTWs and (ii) the L2 TLB miss latency and the nested TLB miss latency. Figure 28 shows the reduction in guest and host PTWs for all the configurations. We observe that Victima leads to significant reductions in both guest PTWs (50%) and host PTWs (99%). The host PTW is the major bottleneck in NP, and Victima almost eliminates it by caching nested TLB blocks inside the L2 cache. Figure 29 shows the L2 TLB miss latency for all the configurations normalized to NP. We observe that Victima minimizes host PTW latency to as low as 1% of the baseline while reducing the guest translation latency by 60%, 6% more than I-SP, which performs only four PT accesses to find out the guest-virtual to host-physical translation. We conclude that caching both nested and conventional TLB entries in the L2 cache allows Victima to achieve high performance in both native and virtualized environments. ## 10. Related Work To our knowledge, Victima is the first software-transparent mechanism that proposes caching TLB entries in the cache hierarchy to increase the translation reach of the processor. We have already comprehensively compared Victima to (i) systems that employ large hardware TLBs and large software-managed TLBs [17] in native execution environments and (ii) systems that employ nested paging [12], large software-managed TLBs [17] and ideal shadow paging [49] in virtualized environments in §9.1 and §9.3. In this section, we qualitatively compare Victima to other related prior works that propose solutions to reduce address translation overheads. **Efficient TLBs and Page Walk Caches (PWCs).** Many prior works focus on reducing address translation overheads through efficient TLB and PWC designs [103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116]. Such techniques involve: (i) prefetching TLB and page table entries [111, 112, 113, 114, 115, 116], (ii) TLB-specific replacement policies [117, 103, 118], (iii) employing software-managed TLBs [117, 118, 119, 110, 112, 113, 114, 115, 116], (iv) sharing TLBs across cores [114, 115, 116], (v) employing efficient PWCs [118, 116, 118], and (vi) PT-aware cache management [119, 87, 120] (e.g., pinning PTEs in the LLC [119]). Although such techniques may offer notable performance improvements, their effectiveness diminishes as the page table size increases. This is because they rely on (i) the existing TLB hierarchy, which is unable to accommodate the large number of TLB entries required by data-intensive applications, or (ii) new hardware/software translation structures that pose a significant trade-off between performance and area/energy efficiency (§3).
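As a concrete illustration of the kind of TLB-aware replacement evaluated in §9.2.4 above (and of the "TLB-specific replacement policies" cited in the previous paragraph), the sketch below shows one plausible SRRIP variant in which TLB blocks are inserted with a lower re-reference prediction value so that they tend to survive longer than ordinary data blocks. This is our own illustrative reading, not the exact insertion/promotion values used by the §5.1 policy.

```python
# Plausible sketch of a TLB-aware SRRIP policy (illustrative, not the exact Victima policy).
RRPV_BITS = 2
RRPV_MAX = (1 << RRPV_BITS) - 1              # 3 = "re-referenced in the distant future"

def insert_rrpv(is_tlb_block):
    # Data blocks: standard SRRIP insertion (RRPV_MAX - 1).
    # TLB blocks: assumed near-immediate re-reference (RRPV 0), i.e. prioritized.
    return 0 if is_tlb_block else RRPV_MAX - 1

def select_victim(cache_set):
    """cache_set: list of dicts with an 'rrpv' field; returns the victim way index."""
    while True:
        for way, block in enumerate(cache_set):
            if block["rrpv"] == RRPV_MAX:
                return way
        for block in cache_set:              # no candidate: age every block and retry
            block["rrpv"] += 1

def on_hit(block):
    block["rrpv"] = 0                        # promote on re-reference
```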
In contrast, Victima repurposes the _existing_ underutilized resources of the cache hierarchy to drastically increase address translation reach and thus does _not_ require additional structures to store translation metadata. For example, as we show in §3.2, employing a software-managed TLB to back up the L2 TLB is not effective in native environments as the latency of the PTW is similar to the latency of accessing the software-managed TLB. In §9.3 and §9.1, we compare Victima against the state-of-the-art software-managed TLB, POM-TLB [17], and show that Victima outperforms POM-TLB by 6.2% (20.1%) in native (virtualized) environments by storing TLB entries in the high-capacity and low-latency L2 cache. **Alternative Page Table Designs.** Various prior works focus on alternative page table designs [121, 122, 123, 124, 125, 126, 127, 101, 9, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116] to accelerate PTWs. For example, Skarlatos et al. [85] propose replacing the radix-tree-based page table with a Cuckoo hash table [128] to parallelize accesses to the page table and reduce PTW latency. Figure 28. Reduction in host and guest PTWs provided by POM-TLB and Victima in a virtualized system with NP. Figure 27. Speedup provided by POM-TLB, I-SP and Victima in a virtualized system with NP. Figure 26. Performance improvement provided by Victima with the TLB-aware SRRIP replacement policy over Victima with TLB-agnostic SRRIP. Figure 25. Victima's reduction in PTWs across different L2 cache sizes. Park et al. [87] propose a flat page table design in combination with a page-table-aware replacement policy to reduce PTW latency. Victima is complementary to these techniques as it reduces PTWs while these techniques reduce PTW latency. **Employing Large Pages.** Many works propose hardware and software mechanisms for efficient and transparent support for pages of varying sizes [129, 130, 140, 79, 141, 142]. For example, Ram et al. [76] propose harnessing memory resources to provide 1GB pages to applications in an application-transparent manner. Guvenilir et al. [130] propose modifications to the existing radix-based page table design to support a wide range of different page sizes. As we discuss in §5.1, Victima is able to cache TLB entries for any page size and thus is compatible with large pages. **Contiguity-Aware Address Translation.** Many prior works enable and exploit virtual-to-physical address contiguity to perform low-latency address translation [143, 144, 145, 80, 15, 81, 15, 80]. For example, in [1], the authors propose pre-allocating arbitrarily-large contiguous physical regions (10-100's of GBs) to drastically increase the translation reach for specific data structures of the application. Karakostas et al. [144] propose the use of multiple dynamically-allocated contiguous physical regions, called ranges, to provide efficient address translation for a small number of large memory objects used by the application. Alverti et al. [81] propose an OS mechanism that enables efficient allocation of large contiguous physical regions. These works can significantly increase translation reach but, in general, have two drawbacks: (i) they require system software modifications and (ii) their effectiveness heavily depends on the availability of free contiguous memory blocks. In contrast to these works, Victima increases the translation reach of the processor without requiring (i) contiguous physical memory allocations or (ii) modifications to the system software.
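For reference (standard x86-64 paging arithmetic, not a detail of any of the cited designs), the sketch below shows why a conventional four-level radix walk costs up to four dependent memory accesses, one 9-bit index per level; this serialization is what the hashed and flat page-table designs discussed above try to avoid, and what Victima sidesteps by avoiding the walk altogether.

```python
# Standard x86-64 4-level radix split of a 48-bit virtual address:
# 9 bits per level (PML4, PDPT, PD, PT) + a 12-bit offset for 4KB pages.
# Each level is a dependent memory access, so an uncached native walk costs
# up to 4 accesses (and up to 24 accesses under nested paging).
def radix_indices(va):
    return [(va >> shift) & 0x1FF for shift in (39, 30, 21, 12)]

va = 0x7F12_3456_7ABC
print(radix_indices(va), hex(va & 0xFFF))   # four table indices + page offset
```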
**Address Translation in Virtualized Environments.** Various works propose techniques to reduce address translation overheads in virtualized environments [151, 152, 153, 154, 155, 156, 157, 158, 159, 49, 160, 161, 49]. For example, Gandhi et al. [49] propose a hybrid address translation design for virtualized environments that combines shadow paging and nested paging. In §9.3, we show that Victima is effective in virtualized environments and outperforms an ideal shadow paging design by 4.9% by storing both TLB entries and nested TLB entries in the cache hierarchy. **Virtual Caching & Intermediate Address Spaces.** Another class of works focuses on delaying address translation by using techniques such as virtual caching [155, 156, 157, 158, 159, 160, 161] and intermediate address spaces [162, 163, 164, 102]. Virtually-indexed caches reduce address translation overheads by performing address translation only after a memory request misses in the LLC [165, 165, 157, 158]. Hajinazar et al. [163] propose the use of virtual blocks mapped to an intermediate address space to delay address translation until an LLC miss. Victima is orthogonal to these techniques and can operate with both (i) virtually-indexed caches6 and (ii) intermediate address spaces by storing TLB blocks with intermediate-to-physical address mappings in the cache hierarchy. Footnote 6: Victima distinguishes between data blocks and TLB entries by using a tag bit in the cache block, regardless of whether the cache is virtually- or physically-indexed. ## 11. Conclusion Data-intensive workloads experience frequent and long-latency page table walks. This paper introduces Victima, a software-transparent technique that stores TLB entries in the cache hierarchy to drastically increase the translation reach of the processor and thus reduces the occurrence of page table walks. Our evaluation shows that Victima provides significant performance improvements in both native and virtualized environments. Victima presents a practical opportunity to improve the performance of data-intensive workloads with small hardware changes, modest area and power overheads, and no modifications to software, by repurposing the underutilized resources of the cache hierarchy. ## Acknowledgments We thank the anonymous reviewers of MICRO 2023 for their encouraging feedback. We thank the SAFARI Research Group members for providing a stimulating intellectual environment. We acknowledge the generous gifts from our industrial partners: Google, Huawei, Intel, Microsoft, and VMware. This work is supported in part by the Semiconductor Research Corporation and the ETH Future Computing Laboratory.
2307.16193
Minimal numerical ingredients describe chemical microswimmers's 3D motion
The underlying mechanisms and physics of catalytic Janus microswimmers is highly complex, requiring details of the associated phoretic fields and the physiochemical properties of catalyst, particle, boundaries, and the fuel used. Therefore, developing a minimal (and more general) model capable of capturing the overall dynamics of these autonomous particles is highly desirable. In the presented work, we demonstrate that a coarse-grained dissipative particle-hydrodynamics model is capable of describing the behaviour of various chemical microswimmer systems. Specifically, we show how a competing balance between hydrodynamic interactions experienced by a squirmer in the presence of a substrate, gravity, and mass and shape asymmetries can reproduce a range of dynamics seen in different experimental systems. We hope that our general model will inspire further synthetic work where various modes of swimmer motion can be encoded via shape and mass during fabrication, helping to realise the still outstanding goal of microswimmers capable of complex 3-D behaviour
Maximilian R. Bailey, C. Miguel Barriuso Gutiérrez, José Martín-Roca, Vincent Niggel, Virginia Carrasco-Fadanelli, Ivo Buttinoni, Ignacio Pagonabarraga, Lucio Isa, Chantal Valeriani
2023-07-30T10:20:33Z
http://arxiv.org/abs/2307.16193v1
# Minimal numerical ingredients describe chemical microswimmers' 3-D motion ###### Abstract The underlying mechanisms and physics of catalytic Janus microswimmers is highly complex, requiring details of the associated phoretic fields and the physiochemical properties of catalyst, particle, boundaries, and the fuel used. Therefore, developing a minimal (and more general) model capable of capturing the overall dynamics of these autonomous particles is highly desirable. In the presented work, we demonstrate that a coarse-grained dissipative particle-hydrodynamics model is capable of describing the behaviour of various chemical microswimmer systems. Specifically, we show how a competing balance between hydrodynamic interactions experienced by a squirmer in the presence of a substrate, gravity, and mass and shape asymmetries can reproduce a range of dynamics seen in different experimental systems. We hope that our general model will inspire further synthetic work where various modes of swimmer motion can be encoded via shape and mass during fabrication, helping to realise the still outstanding goal of microswimmers capable of complex 3-D behaviour. ## 1 Introduction In the last two decades, potential applications for directed transport at length scales where thermal fluctuations are important have prompted the development of a range of synthetic microswimmers, each with its intricacies [1, 2, 3]. Amongst the various synthetic active materials, Janus catalytic microswimmers remain one of the most popular due to their straightforward fabrication protocols, simple experimental set-ups, and good reproducibility between experiments [4, 5]. Typically, these are spherical particles asymmetrically modified with a catalytic material leading to the production of asymmetric local chemical gradients in the presence of a "fuel", causing propulsion via self-phoresis. [6, 7, 8, 9]. Such microswimmers generally move in 2-D (xy) due to their density mismatch with the surrounding fluid and attractive interactions with the underlying substrate [10]. However, controlling the motion of microswimmers in 3-D is appealing from an applications perspective, and there is a growing body of work on active materials capable of motion in all dimensions [11, 12, 13, 14, 15, 16, 17, 18, 19]. The rational design of chemical microswimmers displaying tailored motion in 3-D would greatly profit from models which can capture empirical observations [16]. However, their experimental simplicity masks a complex system of chemical and mass transfer relationships, the underlying mechanisms and physics of which are still the subject of ongoing debate [20, 21]. It is generally accepted that a more complete description of such "chemically active colloids" requires the full solution of their phoretic fields, including details of the colloids, the substrate, and the solution composition [22, 23, 24]. Unfortunately, such detailed descriptions call for a high level of technical expertise, are system-specific due to phoretic mobility parameters, have only been solved for spherical and ellipsoidal structures, and do not easily allow the inclusion of thermal fluctuations, which are critical when describing the dynamics of micron-scale objects. To avoid the consideration of chemical phoretic fields and only account for hydrodynamic flows, the "squirmer" model is frequently invoked [25]. This model was first proposed for microorganisms such as _Paramecia_ and _Volvox_[26, 27], and is now used as a generic description for various active systems. 
Squirmers swim due to a self-generated, usually stationary [28] and axi-symmetric velocity field which is typically evaluated as tangential across its surface. Considered as a spherical rigid body, the squirmer can be defined using two modes describing its swimming velocity and force-dipole (B\({}_{1}\) and B\({}_{2}\) respectively [29]). Promisingly, it has been shown that a bottom-up model of Janus self-diffusophoretic microswimmers - accounting for the unique interaction potentials of the different chemical species with the separate hemispheres of a Janus particle - will produce flow fields characteristic of spherical squirmers [30]. Recent experimental studies investigating tracer flows around chemical microswimmers [31] and their directed motion under flow ("rheotaxis") [32, 33] have also indicated that the flow fields generated by such active agents are characteristic of squirmer-type systems. Therefore, "coarse-graining" Janus chemical microswimmers as squirmers provides a viable approach to model their behaviour despite neglecting the (albeit important) contributions arising from chemical fields [23, 24] and an asymmetric surface [30]. A number of numerical strategies have been implemented to simulate the dynamics of squirmers, amongst which a multi-particle-collision dynamics (MPCD) description of the solvent is perhaps the most frequently invoked due to its ability to include thermal fluctuations at a lower computational cost than e.g. the Lattice Boltzmann method [34, 35, 36, 37, 29, 30, 38, 39, 40, 41]. Like MPCD, dissipative particle dynamics (DPD) [42] coarse-grains the solvent as point-like fluid particles (packets of fluid molecules), and thus inherently includes the effects of thermal noise while solving the Navier-Stokes equations [43]. By utilising a particle-based approach to model the solvent, the computational cost is significantly reduced, allowing the simulation of hydrodynamic interactions in systems where advection dominates over diffusion (high Peclet number, Pe) [44]. In DPD, the solvent particles themselves act as local thermostats due to their competing stochastic and dissipative pair-wise terms, conserving momentum and thus accounting for hydrodynamics and thermal fluctuations [45]. Additionally, DPD makes use of softer inter-particle potentials, allowing greater timesteps compared to MPCD, thereby enabling simulations of larger systems at longer timescales [46, 47]. Therefore, DPD emerges as a suitable candidate to model microswimmers as squirmers, as it can handle hydrodynamics in high Pe systems (where directed motion dominates over diffusion) with (relatively) large time-steps, while properly dealing with thermal fluctuations. For these reasons, a DPD framework capturing the hydrodynamic interactions of microswimmers in the presence of confining boundaries - by applying tangential solvent forces around the swimmer - was recently developed by some of the authors [48]. As DPD is already implemented in the open-access molecular dynamics program Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS) [49], this provides the opportunity to exploit a range of in-built functions to extend previous studies, with the goal of mimicking the behaviour of chemical microswimmers above a substrate. Here, we further develop this model to consider the influence of mass and shape asymmetries on the dynamics of spherical microswimmers in the presence of a bounding substrate, mirroring common experimental conditions for chemical active colloids [50, 51, 52, 48]. 
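The two squirming modes just mentioned have a standard closed form that is worth keeping in mind when reading the numerical method below: the tangential surface slip of a spherical squirmer is usually written as \(u_{\theta}(\theta)=B_{1}\sin\theta+B_{2}\sin\theta\cos\theta\), with free-space swimming speed \(2B_{1}/3\) and \(\beta=B_{2}/B_{1}\) distinguishing pushers (\(\beta<0\)) from pullers. The short sketch below only evaluates this textbook profile; the DPD scheme of [48] does not impose a slip boundary condition but instead applies an equivalent body-fixed force field to solvent particles in a shell around the swimmer.

```python
import numpy as np

# Textbook two-mode squirmer slip profile (illustration only; the DPD model in
# this work applies body-fixed forces to solvent particles in a shell instead).
def surface_slip(theta, B1=1.0, beta=-2.5):
    B2 = beta * B1
    return B1 * np.sin(theta) + B2 * np.sin(theta) * np.cos(theta)

theta = np.linspace(0.0, np.pi, 181)
u = surface_slip(theta)                 # beta = -2.5: the pusher value adopted in this work
U_bulk = 2.0 / 3.0 * 1.0                # free-space swimming speed, 2*B1/3
print(U_bulk, float(u.max()), float(u.min()))
```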
The strength of our approach lies in its modularity, which allows the simple inclusion of these asymmetries that may have otherwise presented significant complications when considering other numerical schemes. We find that the interplay between hydrodynamic interactions, gravity, bottom-heaviness, and shape is sufficient to qualitatively capture the 2- and 3-D physics of a range of catalytic (and photo-catalytic) Janus microswimmer systems [18, 17, 19], i.e. while neglecting contributions from chemical [52] and light [53, 54] gradients. Specifically, there is a competition between the hydrodynamic attraction to the substrate that grows with the swimming speed and the active force required to overcome gravity, which determines the ability of the microswimmer to enter the bulk. This balance can be furthermore adjusted by introducing mass or shape asymmetry to the particles. Our coarse-grained approach thus opens the door to the informed design of chemical microswimmers whose dynamics can be encoded via shape or mass during fabrication, helping to realise the still outstanding goal of active colloids capable of truly 3-D dynamics. ## 2 Numerical method Following the approach outlined in [48], we use an in-house extension of the open source LAMMPS package [49] to simulate the motion of squirmer-like microswimmers, modelled as raspberry-type structures (see Figure 1). The thermal energy of the solvent particles \(k_{B}T\) is set to \(1\), the solvent density \(\rho=5.9\), and the dissipative force \(\gamma=1\). The simulation length-scales are as described in Figure 1 a., normalised with respect to the radius of 1 DPD "filler" particle, while the masses of DPD particles are set to \(1\) unless introducing mass asymmetry (see below). The simulation time-scale \(\tau\) is set according to these parameters, and numerically integrated at \(0.01\tau\) time-steps. When present, substrates are modelled as overlapping DPD particles, properly aligned along the x- and y-axes to ensure a flat repulsive potential from the DPD conservative force \(F_{c}=100\). The microswimmers consist of 1 central "thruster" particle generating the flow fields which are experienced by the solvent particles within a hydrodynamic radius \(R_{h}\) of the thruster, which in turn apply reaction forces to the "filler" particles (18-20 DPD particles, depending on whether or not shape asymmetry is introduced to the microswimmer, see Figure 1 a.), resulting in the net propulsion force of the microswimmer. The mass of specific filler particles can be adjusted, allowing us to introduce mass asymmetry, and thus model microswimmers equipped with heavy "caps" (e.g. Pt-coated catalytic microparticles). We define mass asymmetry as \(m_{asymm}=\frac{m_{hemisphere}+m_{cap}}{m_{hemisphere}}\) (where \(m_{hemisphere}\) and \(m_{cap}\) are the masses of a particle hemisphere and of the heavy cap, respectively), noting that the motion can thus be defined with the heavier cap at the back (CB) or at the front (CF) of the swimmer with respect to the swimming direction. In all cases, we use a squirmer active stress parameter \(\beta=B_{2}/B_{1}=-2.5\), i.e. a weak pusher, based on qualitative matching of simulated trajectories to the behaviour described in [19]. We note that this squirmer parameter corresponds to that experimentally determined by Campbell et al. [31] for a Pt catalytic system similar to that studied in [19], which provides further evidence for the suitability of our selection.
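As an illustration of the bookkeeping behind \(m_{asymm}\), the sketch below builds a toy raspberry and evaluates the definition above. This is a minimal sketch only: the golden-angle bead placement, the cap extent, and the extra cap mass are our illustrative choices, not the exact construction used in the simulations, and \(m_{cap}\) is read here as the additional "coating" mass carried by the cap beads.

```python
import numpy as np

# Illustrative raspberry set-up: filler beads on a sphere of radius R_in = 1,
# cap beads loaded with extra mass, and m_asymm computed as defined above.
def fibonacci_sphere(n, radius=1.0):
    i = np.arange(n) + 0.5
    polar = np.arccos(1.0 - 2.0 * i / n)
    azim = np.pi * (1.0 + 5**0.5) * i
    return radius * np.c_[np.sin(polar) * np.cos(azim),
                          np.sin(polar) * np.sin(azim),
                          np.cos(polar)]

filler = fibonacci_sphere(18)                  # 18 filler beads on the R_in = 1 shell
base_mass = 1.0                                # default DPD mass
extra_per_cap_bead = 0.5                       # illustrative "Pt coating" mass per bead

cap = filler[:, 2] > 0.5                       # beads forming the heavy cap
m_hemisphere = base_mass * np.count_nonzero(filler[:, 2] > 0.0)
m_cap = extra_per_cap_bead * np.count_nonzero(cap)   # added mass carried by the cap

m_asymm = (m_hemisphere + m_cap) / m_hemisphere
print(m_asymm)                                 # ~1.22 with these illustrative numbers
```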
Solvent conditions are selected to ensure that all microswimmers with radius \(R\) studied remain in the low Reynolds (Re) number regime (\(Re=\frac{v_{p}\cdot R}{\mu}<0.2\)[55], also see Supporting Information - Figure S1), where \(v_{p}\) is the particle speed, and specifically by controlling for the kinematic viscosity \(\nu\) via the solvent parameters [42]. To map different simulations to experiments, we modulate the ratio of swimming velocity \(V_{swim}\) to sedimentation velocity \(V_{gravity}\). A minimum of 2000 time steps are used for equilibrating solvent conditions in all simulations, as were periodic boundary conditions. The number of solvent DPD particles depends on the simulation dimensions, ranging from 47002 for a (20x20x20, xyz) box, to 57197 solvent DPD particles for a (16x16x40) box. Figure 1: a) Schematic representation of a 2D section of the colloid using a raspberry model with cap-front (CF) mass imbalance and shape asymmetry. The colloid is built from 18 DPD _filler_ particles (red and blue), distributed on the surface of a sphere radius \(R_{in}=1\), each with a DPD cut-off for interactions with other DPD particles (e.g. solvent particles), resulting in an overall effective microswimmer radius of \(R=2\). Mass imbalance is introduced by increasing the mass of the particles constituting the cap (blue). Shape asymmetry is implemented by introducing an additional particle \(P_{shape}\) (green) displaced a distance \(R_{shift}\) from an off-axis filler particle. Here, the equatorial 2D slice containing the shape asymmetry is shown, thus only depicting the 8 filler particles within this section. Solvent particles (purple) interact with the colloid particles with a DPD cutoff of \(R_{fs}=2\) while between them we set \(R_{ss}=0.58\) to achieve a lower Reynolds number [42]. The region in which the hydrodynamic propulsion takes place is the spherical shell between the outer surface of the microswimmer (dashed black circle) and \(R_{H}=4\) (dotted black circle), solvent particles within this region are subjected to a force field (light purple arrows), consistent with a pusher-type squirmer, emanating from the center of mass of the colloid and fixed with its internal frame of reference. Then, for each solvent particle in this region an equal and opposite reaction force is applied to its nearest colloid particle, resulting in a net self-propulsion force \(\vec{F}_{p}\). b-d) Snapshot of the full simulation box (b.), as well as the definition of the angles \(\theta_{x,y,z}\) (c.) and the azimuthal angle \(\Phi\) (d.) Graphics presented here and elsewhere were generated in part using the visualisation software Ovito [56]. ## 3 Results & Discussion ### Role of mass asymmetry and hydrodynamic interactions in microswimmer motion To begin, we consider colloids characterised by hydrodynamic interactions with a surface and bottom-heaviness, and study their dynamics with or without activity. Specifically, we reproduce the 'classic' Pt-SiO\({}_{2}\) Janus chemical microswimmers to determine whether our DPD raspberry particle model captures their dynamics. We investigate the chemical microswimmers studied by Niggel et al. [57], as the 3-D rotations of the Janus particles with fluorescent surface asperities can be tracked via correlation-based image analysis. To reproduce their experimental findings, we simulate the microswimmers with CB motion and \(m_{asymm}=1.081\). 
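Before turning to the comparison with [57], it may help to make the dimensionless-number bookkeeping from the Numerical method section explicit. The values below are illustrative DPD-unit numbers chosen only to demonstrate the checks, not the exact parameters of our runs.

```python
# Dimensionless-number checks used to map simulations onto experiments
# (illustrative DPD-unit values, not the exact run parameters).
def reynolds(v_p, R, nu):          # nu: kinematic viscosity; must stay below ~0.2
    return v_p * R / nu

def peclet(v_p, R, D_T):           # ~100 targeted in simulation, ~300 in experiment
    return v_p * R / D_T

v_p, R, nu, D_T = 0.05, 2.0, 1.0, 1.0e-3
print(reynolds(v_p, R, nu) < 0.2, peclet(v_p, R, D_T))

# The other control knob is the swimming-to-sedimentation velocity ratio:
V_swim, V_gravity = 0.05, 0.05     # V_swim ~ V_gravity, as in the comparison with [57]
print(V_swim / V_gravity)
```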
Fitting the short-time mean-square-displacement (MSD) of our experimental particles (\(MSD=4\cdot D_{T}\cdot\Delta t+v_{p}^{2}\cdot\Delta t^{2}\), while acknowledging its shortcomings [58]), we obtain a Peclet number \(Pe=\frac{v_{p}\cdot R}{D_{T}}\sim 300\) (where \(v_{p},D_{T}\) are the fitted swimming speed and translational diffusion coefficient from the MSD, respectively, and \(R\) is the particle radius), which we set to \(\sim 100\) in simulations of our active microswimmers to ensure that they remain in the low Reynolds regime as described above. Following our experimental findings, we also set the gravitational force that \(V_{gravity}\sim V_{swim}\), for \(V_{swim}=<v_{0}>\), where \(v_{0}\) is the microswimmer's instantaneous velocity over time. As we have access to the structural coordinates of our DPD raspberries, we are able to follow their rotational dynamics via singular value decomposition (SVD) and calculate the cumulative mean-squared-angular-displacement (MSAD) (see Figure 2, [59]). By doing so, we are able to reproduce the findings presented in [57]. Specifically, we find that for passive particles, the presence of a heavy cap and a substrate reduces the rotation "out-of-plane" (\(\theta_{x,y}\)) compared to in plane (\(\theta_{z}\)) rotations (see Figure 1 c.) due to bottom-heaviness (see Figure 2 a., [60]), while the introduction of activity saturates the out-of-plane rotation at much lower values (Figure 2 b.). We attribute the confinement of possible rotations to the hydrodynamic attraction between the microswimmer and the substrate due to its generated flow fields [61], mirrored by later theoretical investigations into the role of the produced phoretic fields [10]. Therefore, we see that the rotational dynamics of chemical microswimmers near confining boundaries are qualitatively well captured when only considering hydrodynamic interactions (and bottom-heaviness). We then map our squirmer-inspired DPD model to the experimental findings presented by Carrasco-Fadanelli and Buttinoni [19]. Specifically, we adjust \(V_{swim}:\)\(V_{gravity}\) to reproduce the different conditions studied in their work, while using the same value for mass asymmetry of their Janus polystyrene spheres coated with a Pt thin film (2.8 \(\mu\)m with a coating of 4 nm, \(m_{asymm}=1.22\)) with CB motion. When \(V_{swim}<V_{gravity}\), the gravitational torque from the heavier cap aligns the particle's internal orientation axis away from the substrate (see Figure 3 b.), however the particle is unable to leave the substrate due to the force of gravity (see Figure 3 a,c). As \(V_{swim}\) is increased and becomes larger than \(V_{gravity}\), the particle is able to leave the substrate when its internal orientation axis is directed upwards (see Figure 3 d-f, and Supporting Information Figure S2), assisted by its bottom-heaviness [52]. Notably, we find that we can reproduce the structure of the microswimmer dynamics previously se Figure 2: Rotational dynamics of CB microswimmers as a function of scaled time, simulated to reproduce the properties of the chemical active colloids studied in [57]. a) Mean-squared-angular-displacement (MSAD) of the passive colloid (no activity). b) Introducing activity to the microswimmer dampens its rotational diffusivity, particularly about the x- and y- axes, which saturate due to the hydrodynamic coupling to the substrate. We note the scaling of the MSAD with time here is with \(\sqrt{\Delta t}\), and is therefore quadratic. 
Error bars depict the standard error of the mean from 2401 frames (sub-sampled from 50000 simulation steps). (see Supporting Information Figure S3), without requiring the previously hypothesised self-shadowing effects [53, 54]. The two scenarios qualitatively capture the behaviour described in [19], where the gravitational torque due to the mass asymmetry introduced by the denser Pt cap favours an anti-parallel orientation of the microswimmer with gravity, competing with hydrodynamic interactions and thermal fluctuations. Interestingly, by further increasing \(V_{swim}\), the simulated microswimmer is once again confined to 2-D motion in the xy plane (see Figure 3 g-i). However, here the confinement arises due to the internal orientation of the microswimmer, rather than its inability to overcome the applied gravitational force. For fast swimmers, the flow fields created by the microswimmer lead to a strong hydrodynamic attraction to the substrate, effectively "trapping" the particle in 2-D (compare insets in Figures 3 a. and g.). This is similar to the "sliding-state" behaviour proposed by Uspal and coworkers as a result of hydrodynamic flows from self-generated phoretic gradients [10], and mirrors the experimental observations of many chemical microswimmer systems, including that discussed in Figure 2. To summarise, we find that hydrodynamic interactions and gravitational forces on a mass-asymmetric spherical microswimmer are sufficient to qualitatively capture the physics of chemical microswimmers above a substrate under different conditions. We also confirm the importance of gravity on both the translational and rotational dynamics of active colloids suspended just above a planar surface. ### Influence of shape asymmetry on microswimmer 3-D dynamics Motivated by the ability of the reported DPD squirmer-like microswimmers to qualitatively reproduce the behaviour of different chemical microswimmers, we then investigate strategies to promote the "lift-off" of particles from the substrate, and thus observe 3-D motion. Recently, photocatalytic microswimmers synthesised using functional nanoparticles via "Toposelective Nanoparticle Attachment" were shown to display quasi 3-D behaviour [18, 17], characterised by 2-D motion interdispersed with "rollercoaster"-like looping behaviour. This happens in spite of their high speeds with respect to their sedimentation speed, putting them in a regime closer to the third case highlighted in Figure 3 (g-i), as well as their CF motion which should in fact promote orientation into the wall due to top-heaviness. We note the use of surface-bound functional nanoparticles incorporates rough asperities to the microswimmers, which in turn introduce asymmetry in the drag acting on the surface of the microswimmers. In fact, explicitly considering non-axisymmetric flow fields in spherical squirmers was very recently shown to induce body rotations and thus complex patterns of motion [62]. To investigate the potential effect of such shape-asymmetries on the motion of microswimmers, we simulate the motion of our DPD raspberries with introduced shape asymmetry in the absence of gravity or bounding substrates, this time with CF swimming (see Figure 4). Specifically, we introduce weightless "shape" particles (\(P_{shape}\)) along the axis of the existing "filler" particles, shifted by different distances (normalised by the DPD particle radius - \(R_{shift}\)) to control the extent of its protrusion (see Figure 1 a.). 
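A minimal sketch of this construction (our own illustrative code, not the LAMMPS input used for the actual runs): \(P_{shape}\) is simply a massless bead displaced by \(R_{shift}\) along the radial axis of a chosen off-axis filler bead, so that changing which bead carries it, or how far it protrudes, tunes the asymmetry.

```python
import numpy as np

# Place the weightless shape particle P_shape a distance r_shift beyond a chosen
# filler bead, along the axis joining the colloid centre to that bead (Fig. 1a).
def add_shape_particle(filler_positions, bead_index, r_shift):
    bead = filler_positions[bead_index]
    axis = bead / np.linalg.norm(bead)     # radial axis of the carrying bead
    return bead + r_shift * axis           # P_shape carries no mass of its own

# Toy example: two beads on the R_in = 1 shell; the off-axis one carries P_shape.
filler = np.array([[0.0, 0.0, 1.0],                      # bead on the swimming axis
                   [np.sqrt(0.5), 0.0, np.sqrt(0.5)]])   # off-axis bead
print(add_shape_particle(filler, bead_index=1, r_shift=0.5))
```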
The filler particle axis along which the shape particle is introduced can be changed to modify the extent of asymmetry, i.e. with respect to the swimming direction (See Supporting Information SI S5). We focus on the case where \(P_{shape}\) is introduced so as to maximise the shape-asymmetry possible using this approach (see Figure 1 a., Supporting Information Figure S5, "1 Edge" case). To quantify the effect that increasing \(R_{shift}\) has on the dynamics of our bulk microswimmers, we calculate the angle, \(\theta\), between the vectors describing the particle's internal orientation axis, **A**, (governing the direction of the self-propulsion force, see Figure 1 a.), and the particles displacement vector, **B**, (\(\theta=arccos(\frac{\textbf{A}\cdot\textbf{B}}{|A||B|})\)), where **A** is defined by 3 particles along the microswimmers body axis at time \(t\) and \(\textbf{B}=[x_{t+\Delta t}-x_{t},y_{t+\Delta t}-y_{t},z_{t+\Delta t}-z_{t})\). We note that the value of \(\langle\theta\rangle\) is dependent on the \(\Delta t\) over which the displacement vector is evaluated. We find a positive correlation between \(R_{shift}\) and \(\langle\theta\rangle\) (coefficient of determination\(=0.953\), see Figure 4 a.), which we attribute to the increasing torque experienced by the microswimmer due to the solvent forces applied to \(P_{shape}\) at the distance \(R_{shift}\). We hypothesise that the non-zero value for \(\langle\theta\rangle\) at \(R_{shift}=0\) arises due to the presence of thermal fluctuations from the solvent and their effect on microswimmer motion (see Supporting Information Figure S4 for further discussion). The source of the growing divergence between the internal orientation of the particles and their swimming velocity can also be observed in the MSAD of the microswimmers as the trends become increasingly ballistic with larger \(R_{shift}\) (see Supporting Information S6). Notably, the fitted angular velocity \(\omega\) (ballistic component of the MSAD, \(MSAD=2\cdot D_{R}\Delta t+\omega^{2}\cdot\Delta t^{2}\), where \(D_{R}\) is the rotational diffusion coefficient) grows with shape asymmetry, and we determine a strong linear relationship between \(\omega\) and \(\langle\theta\rangle\) (see Figure 4 a., bottom inset - coefficient of determination\(=0.981\)). We furthermore note a linear growth of the "rotational" Peclet (\(Pe_{R}=\frac{\omega\cdot\Delta t}{\sqrt{D_{r}\cdot\Delta t}}\)) with \(R_{shift}\) (see Figure 4 b., coefficient of determination\(=0.982\)), demonstrating the increasingly deterministic orientational motion of the microswimmer as shape asymmetry becomes more pronounced. Finally, we note that introducing \(P_{shape}\) along axes resulting in a reduced shape asymmetry with respect to **A** have a negligible effect on \(\theta\), indicating the importance of appropriate shape selection (see Supporting Information Figures S5, S7, and S8). After establishing the effect of shape asymmetry on microswimmer motion in the absence of external fields and confining boundaries, we then introduce gravity and a substrate to simulate the experimental conditions in [18] using Figure 3: Dynamics of CB microswimmers, simulated to reproduce the properties of the chemical active colloids studied in [19]. \(z/R\) refers to the height that a microswimmer reaches above the substrate with respect to its radius \(R\) (a., d., and g.). 
\(\Phi\) is the azimuthal angle of the microswimmer (b, e, and h) as defined in Figure 1, and \(v_{0}\) is the time-series of the microswimmer’s instantaneous velocity (c., f., and i., with \(V_{swim}=<v_{0}>/R\)). The insets in a. and g. show the solvent velocity flow fields around the (fixed) microswimmer at the different self-propulsion forces. Here, we progressively increase the swimming to sedimentation velocity (\(V_{swim}:V_{gravity}\)) from top to bottom to qualitatively map onto the different systems studied by Carrasco-Fadanelli and Buttinoni. The first two cases (a-c, d-f) are in good agreement with the findings presented in [19], while the final case (g-i) recaptures the typical dynamics observed for chemical active colloids, where phoretic particles are essentially confined to 2-D (also see Figure 2 b. ). \(m_{asymm}=1.25\). We define \(V_{swim}:V_{sediment}\sim 1.5\) to prevent the microswimmers from escaping too far into the bulk. Under these conditions, microswimmers with no or low shape asymmetry are unable to leave the bottom substrate due to the joint forces of gravity and hydrodynamic attraction (see Figure 5 a.). However, beyond a threshold value of \(R_{shift}>0.25\), the microswimmers display 3-D motion and begin to loop into the bulk, demonstrated by the emerging tail in \(P(z/R)\). This behaviour is mirrored in the shoulder of the probability distribution of \(\Phi\), \(P(\Phi)\) (Figure 5 b.) - the angle that the internal orientation of the particle makes with the substrate (see Figure 1 d.) - as the positive values indicate that the particle points upwards and away into the bulk, thus overcoming gravity. We thus propose that the angular velocity \(\omega\), introduced by the shape asymmetry of the microswimmers, drives its internal swimming orientation away from the substrate, competing with the effects of gravity and hydrodynamic wall interactions (as well as thermal fluctuations). Above a certain threshold shape asymmetry - governed by the dynamics of the system - the microswimmer is able to escape the substrate and move out-of-plane. In summary, we demonstrated that shape asymmetry is an important design parameter to control the dynamics of chemical microswimmers, in particular to promote 3-D motion. Specifically, the presence of these introduced asperities explains the unconventional swimming patterns observed in some experimental systems [18, 17]. ## 4 Conclusions By extending the DPD numerical framework described in [48] to include the effects of mass and shape asymmetries, we have demonstrated that hydrodynamic interactions, gravity, and thermal fluctuations are sufficient to capture the dynamics of a range of experimental chemically active colloidal systems [57, 19, 18, 17]. Promisingly, the use of DPD particles to build the overall structure of the microswimmer enables us to access to a range of more complex shapes than those described here, whose formulation would otherwise be either intractable or highly challenging when using other numerical modelling frameworks, or explicitly including chemical fields. We believe that our numerical simulations could be used in the design of new chemical colloidal swimmers, e.g. via sequential-capillarity-particle-assembly (sCAPA) [63] or two-photon polymerization direct laser writing (2PP-DLW) [64], with the goal of realising chemical microswimmers capable of 3-D motion. 
Furthermore, this modular approach to generating complex structures enables Figure 4: Swimming dynamics of CF microswimmers with respect to their cap orientation, in the absence gravity and bounding substrates, as a function of introduced shape asymmetry \(R_{shift}\) (colour coded from blue to red with increasing \(R_{shift}\)). a) Increasing divergence \(\langle\theta\rangle\) in the internal body axis and the observed swimming direction is measured with increasing \(R_{shift}\). We note that the values of \(\langle\theta\rangle\) are a function of the time-step evaluated (here, \(\sqrt{D_{T}\Delta t}/R\sim 0.13\)). Inset: linear growth in \(\langle\theta\rangle\) with \(\omega\) values fitted from the microswimmers’ MSAD (see Supporting Information, Figure S6). Graphical inset: angle \(\theta\) between the swimming direction and internal body axis, as defined in the text. The particle depicted has \(R_{shift}=0.5\). b) Rotational Péclet number as a function of \({}_{Rshift}\) for \(\Delta t=1\). Error bars either indicate the standard error of the mean, or for the fits of \(\omega\) and \(D_{R}\) are obtained from the co-variance matrix, accounting for error propagation where relevant, from 1001 frames (sub-sampled from 50000 simulation steps). the study of microswimmer physics considering arbitrary geometries, e.g. swimming above rough surfaces [19], informing experiments into the interactions of active materials with confining boundaries commonplace in applied settings. To conclude, we foresee DPD based approaches to microswimmer modelling to provide opportunities in the development and application of chemically active colloids. _Acknowledgements -_ M.R.B. acknowledges financial support from the ETH Zurich Doc.Mobility Fellowship scheme. I.P. acknowledges support from Ministerio de Ciencia MCIN/AEI/FEDER for financial support under grant agreement PID2021-126570NB-100 AEI/FEDER-EU, and from Generalitat de Catalunya under Program Icrea Academia and project 2021SGR-673. C.V. acknowledges financial support from MINECO under grant agreements EUR2021-122001, PID2019-105343GB-I00, HRRC22/00002, and PID2022-140407NB-C21 _Author Contribution Statement -_ Author contributions are defined based on the CRediT (Contributor Roles Taxonomy). Conceptualization: M.R.B., C.M.B.G., L.I., C.V. Discussions: M.R.B., C.M.B.G., J.M.R., V.N., V.C., I.B., I.P., L.I., C.V. Formal Analysis: M.R.B., C.M.B.G. Funding acquisition: M.R.B., L.I. Investigation: M.R.B. Methodology: M.R.B., C.M.B.G., J.M.R. Software: M.R.B., C.M.B.G., J.M.R. Supervision: L.I., C.V. Validation: M.R.B. Visualization: M.R.B., C.M.B.G., L.I. Writing - original draft: M.R.B., C.M.B.G., J.M.R., V.N., I.B., I.P., L.I., C.V. Figure 5: Probability distributions for CF microswimmers describing their out-of-plane motion \(P(z/R)\) and orientation \(P(\Phi)\), as a function of their introduced shape asymmetry \(R_{shift}\), indicated by the colour code in the legend. a) Above a threshold (\(R_{shift}>0.25\)), the microswimmers are able to leave the substrate and enter the bulk. We note that all particles with \(R_{shift}\leq 0.15\) cannot leave the substrate and therefore have an overlayed point at unity. For visualisation, \(z\) where \(z/R<2\) is set to \(R\), and the presence of the wall at \(z/R=0\) is indicated. The particle depicted at the wall has an orientation \(\Phi\sim-\pi/8\). 
b) The ability to leave the substrate is reflected in the emergence of a shoulder in \(P(\Phi)\) for \(\Phi>0\), as \(V_{swim}>V_{gravity}\) and therefore the microswimmer can overcome its sedimentation velocity and enter the bulk.
2303.12602
Moduli of Higgs bundles over the five punctured sphere
We look at rank two parabolic Higgs bundles over the projective line minus five points which are semistable with respect to a weight vector $\mu\in[0,1]^5$. The moduli space corresponding to the central weight $\mu_c=(\frac{1}{2}, \dots, \frac{1}{2})$ is studied in details and all singular fibers of the Hitchin map are described, including the nilpotent cone. After giving a description of fixed points of the $\mathbb C^*$-action we obtain a proof of Simpson's foliation conjecture in this case. For each $n\ge 5$, we remark that there is a weight vector so that the foliation conjecture in the moduli space of rank two logarithmic connections over the projective line minus $n$ points is false.
Thiago Fassarella, Frank Loray
2023-03-22T14:38:23Z
http://arxiv.org/abs/2303.12602v1
# Moduli of Higgs bundles over the five punctured sphere ###### Abstract. We look at rank two parabolic Higgs bundles over the projective line minus five points which are semistable with respect to a weight vector \(\mu\in[0,1]^{5}\). The moduli space corresponding to the central weight \(\mu_{c}=(\frac{1}{2},\ldots,\frac{1}{2})\) is studied in details and all singular fibers of the Hitchin map are described, including the nilpotent cone. After giving a description of fixed points of the \(\mathbb{C}^{*}\)-action we obtain a proof of Simpson's foliation conjecture in this case. For each \(n\geq 5\), we remark that there is a weight vector so that the foliation conjecture in the moduli space of rank two logarithmic connections over the projective line minus \(n\) points is false. Key words and phrases:Higgs bundles, parabolic structure, elliptic curve, Hitchin fibration, spectral curve 2010 Mathematics Subject Classification: Primary 34M55; Secondary 14D20, 32G20, 32G34 The second author is supported by CNRS and Centre Henri Lebesgue, program ANR-11-LABX-0020-0. The authors also thank Brazilian-French Network in Mathematics and CAPES-COFECUB 932/19. singular fibers in the moduli space of twisted pairs were studied. In this paper we determine all singular fibers of the Hitchin map in the specific parabolic case \((\mathbb{P}^{1},\Lambda)\). We also describe the locus of fixed points with respect to the \(\mathbb{C}^{*}\)-action given by multiplication on the Higgs field. This motivates to investigate the foliation conjecture [19, Question 7.4] on the moduli space of rank two logarithmic connections with generic residues. The parabolic version of the foliation conjecture has been proved in [16, Corollaries 5.7 and 6.2] for the moduli space of connections over \(\mathbb{P}^{1}\) minus four points (when the weight vector is generic), and recently [12] deals with the case \(\mathbb{P}^{1}\) minus five points by assuming the weight vector \(\mu\) satisfies \(\sum\mu_{i}<1\). Since this last case lies in the unstable zone (any parabolic connection has \(\mu\)-unstable parabolic vector bundle) we turn our attention to the stable zone. After determining the locus of fixed points of the \(\mathbb{C}^{*}\)-action, we obtain a proof of the foliation conjecture in the case \(\mathbb{P}^{1}\) minus five points with the central weight vector \(\mu_{c}=\left(\frac{1}{2},\ldots,\frac{1}{2}\right)\). We also remark that for all \(n\geq 5\) there is a weight vector (in the stable zone), such that the foliation conjecture in the case \(\mathbb{P}^{1}\) minus \(n\) points is false. In our context, every Higgs field having nonvanishing determinant is irreducible, i.e. it does not have any invariant line subbundle, then it is stable for any choice of weight vector. The moduli space \(\mathcal{H}\) associated to the central weight \(\mu_{c}=\left(\frac{1}{2},\ldots,\frac{1}{2}\right)\) is particularly interesting, indeed it is a smooth quasi-projective variety of dimension four and its automorphism group admits a modular realization of \(\left(\mathbb{Z}/2\mathbb{Z}\right)^{4}\) as a subgroup. This subgroup, denoted here by \(\mathbf{El}\), consists of elementary transformations \(elem_{I}\), for each subset \(I\subset\{0,1,\lambda,t,\infty\}\) of even cardinality. We shall consider only Higgs fields which are nilpotent with respect to the parabolic direction. 
This implies that our moduli space \(\mathcal{H}\) contains an open dense subset \(\mathcal{U}\) isomorphic to the cotangent bundle \(T^{*}\mathcal{S}\), where \(\mathcal{S}\) denotes the moduli space of parabolic vector bundles. It is well known that \(\mathcal{S}\) is a del Pezzo surface of degree four (see [2, 15]); its automorphism group has order \(16\) and coincides with the group \(\mathbf{El}\) of elementary transformations [15, 1]. There are \(16\) rational curves \(\zeta_{i}\) with \((-1)\)-self-intersection on this surface, and we denote by \(\Sigma\) their union. The main goal of this paper is to determine all singular fibers of the Hitchin map. The most complicated one is the nilpotent cone \(\mathcal{N}\), consisting of Higgs fields having vanishing determinant. In order to describe it, let us consider the forgetful map \(\mathfrak{for}:\mathcal{H}\dashrightarrow\mathcal{S}\), which forgets the Higgs field. Note that \(\mathcal{S}\) admits an embedding in \(\mathcal{H}\), obtained by taking the Higgs field to be zero, and thus gives one component of \(\mathcal{N}\). Our first goal is the following result. **Theorem 1.1**.: _The nilpotent cone of \(\mathcal{H}\) has exactly \(17\) irreducible components_ \[\mathcal{N}=\mathcal{S}\cup_{i=1}^{16}\mathcal{N}_{i}\] _where \(\mathfrak{for}(\mathcal{N}_{i})=\zeta_{i}\). See Figure 2._ This is Theorem 4.5 in the main text. Before describing the remaining singular fibers, let us briefly introduce the spectral curve. The Hitchin basis, formed by quadratic differentials, is two dimensional. The locus of singular spectral curves is a union of five lines. Indeed, the general spectral curve \(X_{s}\) is a smooth curve of genus two, branched over six points \(0,1,\lambda,t,\infty,\rho\) of \(\mathbb{P}^{1}\), and the corresponding Hitchin fiber is isomorphic to the Picard variety \(\operatorname{Pic}^{3}(X_{s})\), which parametrizes degree \(3\) line bundles on \(X_{s}\). A singular spectral curve occurs when the sixth point \(\rho\) coincides with one of the five other points. This leads to a nodal curve \(X_{s}\) of genus \(2\), whose desingularization \(\tilde{X}_{s}\) is an elliptic curve branched over \[\{0,1,\lambda,t,\infty\}\setminus\{\rho\}\] and \(X_{s}\) can be obtained by identifying two points \(w_{\rho}^{+}\) and \(w_{\rho}^{-}\) of \(\tilde{X}_{s}\). Let us mention that the compactified Jacobian \(\overline{\operatorname{Pic}}^{0}(X_{s})\), which parametrizes isomorphism classes of torsion free sheaves of rank one and degree zero on \(X_{s}\), is obtained by identifying the \(0\)-section with the \(\infty\)-section of the \(\mathbb{P}^{1}\)-bundle \[\mathbf{F}=\mathbb{P}(\mathcal{O}_{\tilde{X}_{s}}(w_{\rho}^{+})\oplus \mathcal{O}_{\tilde{X}_{s}}(w_{\rho}^{-})) \tag{1.1}\] via the translation \(\mathcal{O}_{\tilde{X}_{s}}(w_{\rho}^{+}-w_{\rho}^{-})\) (cf. [18, p. 83]). See Figure 3. We now describe the remaining singular Hitchin fibers. For this, we consider the map \[f:\mathcal{H}\setminus\mathcal{N}\to\mathcal{H}^{pairs}\] to the moduli space of pairs \((E,\theta)\), which forgets the parabolic direction. This map consists of a blowing-up of the locus formed by Higgs fields which are holomorphic at some point \(\rho\in\{0,1,\lambda,\infty\}\) (Lemma 5.2). Now, let \(\det^{-1}(s)\) denote a singular Hitchin fiber, \(s\neq 0\), coming from a singular spectral curve \(X_{s}\) which has a node at \(\rho\).
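For concreteness (an illustration in standard hyperelliptic coordinates, not a formula quoted from the paper), such a spectral curve and its nodal degeneration can be written down explicitly:

```latex
% Illustrative affine model of the spectral curve (our normalization, for orientation only):
% a genus-two double cover of P^1 branched over {0, 1, \lambda, t, \infty, \rho}.
\[
  X_s:\qquad y^{2} \;=\; x\,(x-1)\,(x-\lambda)\,(x-t)\,(x-\rho).
\]
% Letting \rho collide with one of the five marked points, say \rho \to t, gives the nodal curve
\[
  y^{2} \;=\; x\,(x-1)\,(x-\lambda)\,(x-t)^{2},
\]
% whose normalization (set y = \tilde{y}\,(x-t)) is the elliptic curve
% \tilde{y}^{2} = x(x-1)(x-\lambda), branched over \{0,1,\lambda,\infty\}
% = \{0,1,\lambda,t,\infty\}\setminus\{t\}; the node of X_s comes from gluing
% the two points w_t^{\pm} = (t, \pm\tilde{y}(t)) of this curve.
```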
We find that \(\det^{-1}(s)\) has two irreducible components \(\mathbf{F}_{hol}\) and \(\mathbf{F}_{app}\), both isomorphic to \(\mathbf{F}\). The first one parametrizes Higgs fields which are holomorphic at \(\rho\) and is contracted by \(f\); the second is formed by Higgs fields which are apparent with respect to the parabolic direction over \(\rho\). In addition, the restriction of \(f\) to \(\mathbf{F}_{app}\) gives a desingularization of the compactified Jacobian \(\overline{\operatorname{Pic}}^{3}(X_{s})\). This leads to the following result, which corresponds to Theorem 5.4.

**Theorem 1.2**.: _Assume that the spectral curve \(X_{s}\) has a nodal singularity at \(\rho\in\{0,1,\lambda,t,\infty\}\). The corresponding singular fiber \(\det^{-1}(s)\) of the Hitchin map has two irreducible components_

\[\det^{-1}(s)=\mathbf{F}_{hol}\cup\mathbf{F}_{app}\]

_which are isomorphic via any elementary transformation_

\[(elem_{I})|_{\mathbf{F}_{hol}}:\mathbf{F}_{hol}\to\mathbf{F}_{app}\]

_where \(I\subset\{0,1,\lambda,t,\infty\}\) contains \(\rho\) and has even cardinality. Moreover:_

1. _Each component is a desingularization of_ \(\overline{\operatorname{Pic}}^{3}(X_{s})\)_, hence isomorphic to_ \(\mathbf{F}\)_, and the structure of_ \(\mathbb{P}^{1}\)_-bundle on_ \(\mathbf{F}_{hol}\) _is given by_ \[f|_{\mathbf{F}_{hol}}:\mathbf{F}_{hol}\to\tilde{X}_{s}\simeq\overline{ \operatorname{Pic}}^{3}(X_{s})\setminus\operatorname{Pic}^{3}(X_{s}).\]
2. _The map_ \(f|_{\mathbf{F}_{app}}:\mathbf{F}_{app}\to\overline{\operatorname{Pic}}^{3}(X_ {s})\) _is a desingularization map. See Figure_ 5_._
3. _The intersection_ \(\mathbf{F}_{hol}\cap\mathbf{F}_{app}\) _is the union of the_ \(0\)_-section and the_ \(\infty\)_-section of_ \(\mathbf{F}_{hol}\)_. See Figure_ 6_._

In particular, each component of the singular fiber \(\det^{-1}(s)\) is a decomposable \(\mathbb{P}^{1}\)-bundle over an elliptic curve, namely the desingularization of the compactified Jacobian of the corresponding nodal spectral curve. The whole fiber topologically looks like an elliptic curve times a degenerate elliptic curve (two copies of \(\mathbb{P}^{1}\) meeting in two points), but a suitable translation must be considered, see Remark 5.7. This confirms the guess of C.T. Simpson [20, Discussion].

In the last section of the paper we deal with the moduli space \(\mathcal{C}^{\nu}(\mathbb{P}^{1},\Lambda_{n})\), \(n\geq 5\), of \(SL_{2}\) logarithmic connections over \(\mathbb{P}^{1}\). Here, \(\Lambda_{n}=t_{1}+\cdots+t_{n}\) denotes the polar divisor, which is supported on \(n\) distinct points, and \(\nu=(\nu_{1},\ldots,\nu_{n})\in\mathbb{C}^{n}\) is a prescribed eigenvalue vector. For each weight vector \(\mu\) and for \((E,\nabla,l)\in\mathcal{C}^{\nu}(\mathbb{P}^{1},\Lambda_{n})\) there exists a unique limit \(\lim_{c\to 0}c\cdot(E,\nabla,l)\) in the moduli space of \(\mu\)-semistable parabolic Higgs bundles. This gives an equivalence relation on \(\mathcal{C}^{\nu}(\mathbb{P}^{1},\Lambda_{n})\) by declaring two points to be equivalent if their limits are the same. The foliation conjecture [19, Question 7.4], in this case, predicts that this decomposition is a Lagrangian (regular) foliation \(\mathcal{F}_{\mu}\). We obtain the following result, which corresponds to Proposition 6.3 and Theorem 6.4.

**Theorem 1.3**.: _For the moduli space \(\mathcal{C}^{\nu}(\mathbb{P}^{1},\Lambda_{n})\) we have:_

1. _For each_ \(n\geq 5\) _there is a weight vector_ \(\mu\) _such that the foliation conjecture is false._
2. _If_ \(n=5\) _then the foliation conjecture is true with weight vector_ \(\mu_{c}=\left(\frac{1}{2},\ldots,\frac{1}{2}\right)\)_._
We now proceed to describe briefly the contents of the paper. In Section 2 we introduce our moduli spaces of parabolic vector bundles and Higgs bundles over the five punctured projective line, and give some background on elementary transformations. In Section 3 we study the locus of Higgs fields which admit an unstable underlying parabolic vector bundle. Then, in Section 4, we give an explicit description of the nilpotent cone, as well as of the fixed points with respect to the \(\mathbb{C}^{*}\)-action. In Section 5, we describe the remaining singular fibers of the Hitchin map. Finally, in Section 6 we introduce moduli spaces of connections and investigate the foliation conjecture.

## 2. Basic definitions

Let \(\Lambda=0+1+\lambda+t+\infty\) be a divisor on the complex projective line \(\mathbb{P}^{1}\) supported on five distinct points.

### Moduli spaces

A rank two _quasiparabolic vector bundle_ \((E,l)\), \(l=\{l_{i}\}\), on \(\left(\mathbb{P}^{1},\Lambda\right)\) consists of a holomorphic vector bundle \(E\) of rank two on \(\mathbb{P}^{1}\) and, for each \(i\in\{0,1,\lambda,t,\infty\}\), a \(1\)-dimensional linear subspace \(l_{i}\subset E_{i}\). We call \(\Lambda\) the divisor of parabolic points, and the subspaces \(l_{i}\subset E_{i}\) are called parabolic directions.

Let us now introduce a notion of stability for quasiparabolic vector bundles. Fix a weight vector \(\mu=(\mu_{1},\ldots,\mu_{5})\) of real numbers \(0\leq\mu_{i}\leq 1\). A quasiparabolic vector bundle \((E,l)\) is \(\mu\)-_semistable_ (respectively \(\mu\)-_stable_) if for every line subbundle \(L\subset E\) we have

\[\operatorname{Stab}_{\mu}(L):=\deg E-2\deg L-\sum_{l_{i}=L|_{i}}\mu_{i}+\sum_ {l_{i}\neq L|_{i}}\mu_{i}\geq 0\]

(respectively the strict inequality holds). A _parabolic vector bundle_ is a quasiparabolic vector bundle together with a weight vector \(\mu\). We say that a parabolic vector bundle is _semistable_ if the corresponding quasiparabolic vector bundle is \(\mu\)-semistable.

For each \(d\in\mathbb{Z}\) and each weight vector \(\mu\), there is a _moduli space_ \(Bun_{\mu}(\mathbb{P}^{1},\Lambda,d)\), parametrizing rank two parabolic vector bundles on \(\left(\mathbb{P}^{1},\Lambda\right)\), with \(\deg E=d\), which are semistable. Let us fix \(d=0\). It follows from [2] that there is a polytope \(\Delta\subset[0,1]^{5}\) consisting of weight vectors \(\mu\) such that \(Bun_{\mu}(\mathbb{P}^{1},\Lambda,0)\) is nonempty. There are finitely many models \(Bun_{\mu}(\mathbb{P}^{1},\Lambda,0)\), corresponding to different chambers in the wall-and-chamber decomposition of \(\Delta\), coming from the variation of GIT. For example, the central weight \(\mu_{c}=(\frac{1}{2},\ldots,\frac{1}{2})\) lies in the interior of a chamber and the moduli space

\[\mathcal{S}=Bun_{\mu_{c}}(\mathbb{P}^{1},\Lambda,0) \tag{2.1}\]

is a del Pezzo surface of degree four, see also [15].

A _parabolic Higgs bundle_ is a triple \((E,l,\theta)\) where \((E,l)\) is a quasiparabolic vector bundle over \((\mathbb{P}^{1},\Lambda)\) and \(\theta:E\to E\otimes\omega_{\mathbb{P}^{1}}(\Lambda)\) is a traceless homomorphism, which is nilpotent with respect to the parabolic directions.
The condition of being nilpotent means that the residual part \(\operatorname{Res}(\theta,i)\) satisfies \(\operatorname{Res}(\theta,i)\cdot l_{i}=0\) and \(\operatorname{Res}(\theta,i)(E_{i})\subset l_{i}\), for each \(i\in\{0,1,\lambda,t,\infty\}\). We say that \(\theta\) is a _parabolic Higgs field._ A line subbundle \(L\subset E\) is called _invariant_ under \(\theta\) if \(\theta(L)\subset L\otimes\omega_{\mathbb{P}^{1}}(\Lambda)\). In addition, \(\theta\) is _irreducible_ if it does not admit an invariant line subbundle.

A parabolic Higgs bundle \((E,l,\theta)\) is called \(\mu\)-_semistable_ (respectively \(\mu\)-_stable_) if for every line subbundle \(L\subset E\) invariant under \(\theta\), we have \(\operatorname{Stab}_{\mu}(L)\geq 0\) (respectively \(\operatorname{Stab}_{\mu}(L)>0\)). We say that \((E,l,\theta)\) is \(\mu\)-_unstable_ if it is not \(\mu\)-semistable. It follows from [6, Propositions 3.1 and 3.2] that every parabolic Higgs field \(\theta\) on \((\mathbb{P}^{1},\Lambda)\) with \(\det\theta\neq 0\) is irreducible, and hence \(\mu\)-stable for any choice of weight vector. Note also that the condition of being nilpotent implies that the quadratic differential \(\det\theta\) lies in \(\operatorname{H}^{0}(\mathbb{P}^{1},\omega_{\mathbb{P}^{1}}^{\otimes 2}(\Lambda))\), which is a two dimensional vector space.

For each weight vector \(\mu\) there is a moduli space \(\mathcal{H}_{\mu}(\mathbb{P}^{1},\Lambda,0)\) parametrizing parabolic Higgs bundles on \((\mathbb{P}^{1},\Lambda)\), with \(\deg E=0\), which are \(\mu\)-semistable [23, 24]. We denote by

\[\mathcal{H}=\mathcal{H}_{\mu_{c}}(\mathbb{P}^{1},\Lambda,0) \tag{2.2}\]

the moduli space corresponding to the central weight \(\mu_{c}\). It is a smooth four dimensional quasiprojective variety.

### Elementary transformations

The automorphism group of \(\mathcal{S}\), cf. (2.1), has order 16, and admits a modular interpretation in terms of the group \(\mathbf{El}\) of elementary transformations [15, 1], which we now describe. Assume that \(I\subset\{0,1,\lambda,t,\infty\}\) has even cardinality and let

\[D_{I}=\sum_{i\in I}i\]

be the corresponding divisor. We consider the following exact sequence of sheaves

\[0\ \to\ E^{\prime}\ \stackrel{{\alpha}}{{\to}}\ E\to\ \bigoplus_{i\in I}E/l_{i}\ \to\ 0\]

where \(E/l_{i}\) denotes the skyscraper sheaf determined by \(E_{i}/l_{i}\). We view \(E^{\prime}\) as a quasi-parabolic vector bundle \((E^{\prime},l^{\prime})\) of rank two over \((\mathbb{P}^{1},\Lambda)\) by setting \(l^{\prime}_{i}:=\ker\alpha_{i}\). We call it the _elementary transformation_ of \((E,l)\) over \(D_{I}\):

\[elem_{D_{I}}(E,l):=(E^{\prime},l^{\prime}).\]

Under this correspondence the determinant line bundle changes as

\[\det E^{\prime}=\det E\otimes\mathcal{O}_{\mathbb{P}^{1}}(-D_{I}),\]

so we take a square root \(L_{I}\) of \(\mathcal{O}_{\mathbb{P}^{1}}(D_{I})\) in order to obtain

\[\det(E^{\prime}\otimes L_{I})=\mathcal{O}_{\mathbb{P}^{1}}.\]

The stability condition is preserved if the weight vector \(\mu=(\mu_{1},\ldots,\mu_{5})\) is modified as follows. If \((E,l)\) is \(\mu\)-semistable then \(elem_{D_{I}}(E,l)\) is \(\mu^{\prime}\)-semistable with \(\mu^{\prime}_{i}=\mu_{i}\) if \(i\notin I\) and \(\mu^{\prime}_{i}=1-\mu_{i}\) if \(i\in I\). In particular, when \(\mu\) is the central weight, we obtain an isomorphism

\[elem_{I}:\mathcal{S}\to\mathcal{S} \tag{2.3}\]

which sends \((E,l)\) to \(elem_{D_{I}}(E,l)\otimes L_{I}\).
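As a quick combinatorial aside (a small enumeration sketch, not used in the arguments below; the variable names are illustrative), one can list the index sets of these transformations directly:

```python
# Sketch: enumerate the subsets I of {0, 1, lambda, t, infty} of even
# cardinality.  There are C(5,0) + C(5,2) + C(5,4) = 1 + 10 + 5 = 16 of
# them, one for each elementary transformation elem_I, and the weight rule
# mu'_i = 1 - mu_i (i in I), mu'_i = mu_i (i not in I) clearly fixes the
# central weight mu_c = (1/2, ..., 1/2).
from itertools import combinations

points = ("0", "1", "lam", "t", "inf")
even_subsets = [frozenset(c) for r in (0, 2, 4) for c in combinations(points, r)]
print(len(even_subsets))  # 16
```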
It follows from basic properties of elementary transformations that

\[elem_{I}\circ elem_{J}=elem_{K}\]

where \(K=(I\cup J)\setminus(I\cap J)\), and the group \(\mathbf{El}\) of transformations of the form \(elem_{I}\), where \(I\) runs over all the subsets of \(\{0,1,\lambda,t,\infty\}\) of even cardinality, gives a modular realization of \(\left(\mathbb{Z}/2\mathbb{Z}\right)^{4}\). Besides this, \(\mathbf{El}\) coincides with the whole automorphism group of \(\mathcal{S}\). Note that, similarly, each correspondence \(elem_{I}\) also acts on Higgs bundles, giving a modular realization of \(\left(\mathbb{Z}/2\mathbb{Z}\right)^{4}\) as a subgroup of the automorphism group of \(\mathcal{H}\), which we still denote by \(\mathbf{El}\subset\operatorname{Aut}\mathcal{H}\). See also [6, Section 2.4] and [5, Section 4.2] for more details on elementary transformations.

## 3. Higgs fields having unstable parabolic bundles

Let \(\mathcal{S}\) and \(\mathcal{H}\) be as in the previous section. There is an embedding \(\mathcal{S}\to\mathcal{H}\) obtained by taking the Higgs field to be zero. Since the weight vector \(\mu_{c}=\left(\frac{1}{2},\ldots,\frac{1}{2}\right)\) lies in the interior of a chamber, any parabolic vector bundle in \(\mathcal{S}\) is \(\mu_{c}\)-stable. It might happen that \((E,l,\theta)\) is \(\mu\)-semistable with \((E,l)\) \(\mu\)-unstable; for instance, an unstable parabolic bundle may often be endowed with an irreducible Higgs field. In this section we shall study this phenomenon.

Let us consider the forgetful map

\[\mathfrak{for}:\mathcal{H}\dashrightarrow\mathcal{S}\]

which forgets the Higgs field. There is an open subset of \(\mathcal{H}\) where \(\mathfrak{for}\) is well defined, formed by Higgs bundles over \(\mathcal{S}\):

\[\mathcal{U}=\{(E,l,\theta)\in\mathcal{H}\ :\ \ (E,l)\in\mathcal{S}\}.\]

There is an identification between \(\mathcal{U}\) and the cotangent bundle \(T^{*}\mathcal{S}\), obtained by identifying \(T^{*}_{(E,l)}\mathcal{S}\) with \(\mathfrak{for}^{-1}(E,l)\), see [24, Theorem 2.4], so \(\mathcal{H}\) contains the cotangent bundle \(T^{*}\mathcal{S}\) as an open and dense subset. The next result describes which underlying parabolic bundles appear in \(\mathcal{H}\).

**Proposition 3.1**.: _Given \((E,l,\theta)\in\mathcal{H}\), then_

* \(E=\mathcal{O}_{\mathbb{P}^{1}}(-d)\oplus\mathcal{O}_{\mathbb{P}^{1}}(d)\)_, with_ \(d\in\{0,1\}\)_;_
* _if_ \(d=0\) _then at most_ \(3\) _parabolic directions lie in the same embedding of_ \(\mathcal{O}_{\mathbb{P}^{1}}\hookrightarrow E\)_;_
* _if_ \(d=1\) _then at most_ \(1\) _parabolic direction lies in_ \(\mathcal{O}_{\mathbb{P}^{1}}(1)\hookrightarrow E\)_._

Proof.: Since \(E\) has degree zero, we can assume \(E=\mathcal{O}_{\mathbb{P}^{1}}(-d)\oplus\mathcal{O}_{\mathbb{P}^{1}}(d)\), with \(d\geq 0\).
A Higgs field

\[\theta=\left(\begin{array}{cc}\alpha&\beta\\ \gamma&-\alpha\end{array}\right)\]

with logarithmic poles on \(\Lambda\) is given by homomorphisms

\[\left\{\begin{array}{l}\alpha:\mathcal{O}_{\mathbb{P}^{1}}\to\omega_{ \mathbb{P}^{1}}(\Lambda)\\ \beta:\mathcal{O}_{\mathbb{P}^{1}}(d)\to\mathcal{O}_{\mathbb{P}^{1}}(-d) \otimes\omega_{\mathbb{P}^{1}}(\Lambda)\\ \gamma:\mathcal{O}_{\mathbb{P}^{1}}(-d)\to\mathcal{O}_{\mathbb{P}^{1}}(d) \otimes\omega_{\mathbb{P}^{1}}(\Lambda)\end{array}\right.\]

which is equivalent to giving

\[\left\{\begin{array}{l}\alpha\in\Gamma(\mathcal{O}_{\mathbb{P}^{1}}(3))\\ \beta\in\Gamma(\mathcal{O}_{\mathbb{P}^{1}}(3-2d))\\ \gamma\in\Gamma(\mathcal{O}_{\mathbb{P}^{1}}(3+2d))\end{array}\right.\]

Now if \(d\geq 2\) then \(\beta=0\) and \(\mathcal{O}_{\mathbb{P}^{1}}(d)\) is a destabilizing subbundle. This proves the first assertion of the statement.

Let us assume \(d=0\). An embedding \(\mathcal{O}_{\mathbb{P}^{1}}\hookrightarrow\mathcal{O}_{\mathbb{P}^{1}} \oplus\mathcal{O}_{\mathbb{P}^{1}}\), \(e\mapsto(e,0)\), passing through a parabolic direction \(l_{i}\) over \(t_{i}\) yields \(\gamma\in\Gamma(\mathcal{O}_{\mathbb{P}^{1}}(3)\otimes\mathcal{O}_{\mathbb{P} ^{1}}(-t_{i}))\). Thus at most \(3\) parabolic directions lie in \(\mathcal{O}_{\mathbb{P}^{1}}\), since otherwise \(\gamma=0\) and \(\mathcal{O}_{\mathbb{P}^{1}}\) is a destabilizing subbundle. The case \(d=1\) is similar, and hence will be omitted.

**Corollary 3.2**.: _Let \((E,l,\theta)\in\mathcal{H}\). Assume that the underlying parabolic bundle \((E,l)\) is \(\mu_{c}\)-unstable. Then we are in one of the following possibilities:_

* \(E=L_{1}\oplus L_{2}\)_,_ \(L_{i}\simeq\mathcal{O}_{\mathbb{P}^{1}}\)_,_ \(L_{1}\) _contains_ \(3\) _parabolic directions and_ \(L_{2}\) _contains_ \(2\) _parabolic directions;_
* \(E=\mathcal{O}_{\mathbb{P}^{1}}(-1)\oplus\mathcal{O}_{\mathbb{P}^{1}}(1)\) _and_ \(\mathcal{O}_{\mathbb{P}^{1}}(-1)\) _contains every parabolic direction;_
* \(E=\mathcal{O}_{\mathbb{P}^{1}}(-1)\oplus\mathcal{O}_{\mathbb{P}^{1}}(1)\)_,_ \(\mathcal{O}_{\mathbb{P}^{1}}(1)\) _contains exactly_ \(1\) _parabolic direction and_ \(\mathcal{O}_{\mathbb{P}^{1}}(-1)\) _contains the remaining_ \(4\) _parabolic directions._

_In particular, \((E,l)\) is decomposable._

Proof.: We first reduce to the case where \(E\) is trivial, up to an elementary transformation. If \(E=\mathcal{O}_{\mathbb{P}^{1}}(-1)\oplus\mathcal{O}_{\mathbb{P}^{1}}(1)\), Proposition 3.1 ensures that at most \(1\) parabolic direction lies in \(\mathcal{O}_{\mathbb{P}^{1}}(1)\), and since the family of embeddings \(\mathcal{O}_{\mathbb{P}^{1}}(-1)\hookrightarrow E\) is three dimensional, we can take an \(\mathcal{O}_{\mathbb{P}^{1}}(-1)\) passing through \(3\) parabolic directions outside \(\mathcal{O}_{\mathbb{P}^{1}}(1)\). Now, a transformation \(elem_{I}\) over two of them transforms \(E\) into the trivial vector bundle.

Assume that \(E\) is trivial and \((E,l)\) is \(\mu_{c}\)-unstable. A destabilizing subbundle \(L\subset E\), \(\deg L\leq 0\), satisfies

\[-2\deg L-m/2+n/2<0\]

where \(m\) is the number of parabolic directions in \(L\) and \(n\) corresponds to the parabolic directions outside \(L\). Hence, \(\deg L\in\{0,-1\}\). In addition, by Proposition 3.1, if \(\deg L=0\) then there are exactly \(3\) parabolic directions in \(L\). If \(\deg L=-1\) then every parabolic direction lies in \(L\), and applying a transformation \(elem_{I}\) over two parabolic points, we reduce to the previous case.
Now we may assume that \(E\) is trivial, and there are exactly \(3\) parabolic directions in the same embedding \(\mathcal{O}_{\mathbb{P}^{1}}\hookrightarrow L_{1}\subset E\). We will show that \(\mu_{c}\)-semistability of \(\theta\) implies that there exists another embedding \(\mathcal{O}_{\mathbb{P}^{1}}\hookrightarrow L_{2}\subset E\) passing through the remaining two parabolic directions. For simplicity, let us assume that the parabolic directions \(l_{0},l_{1},l_{\lambda}\) over \(0,1,\lambda\) lie in \(L_{1}\), and let \(L_{2}\) be an embedding of \(\mathcal{O}_{\mathbb{P}^{1}}\) passing through the parabolic direction \(l_{t}\) such that \(E=L_{1}\oplus L_{2}\). As in the proof of Proposition 3.1, since \(\theta\) is nilpotent with respect to the parabolic directions, \(\gamma\) vanishes at \(\{0,1,\lambda\}\), \(\beta\) vanishes at \(t\), and \(\alpha\) vanishes at \(\{0,1,\lambda,t\}\). So, we conclude that \(\alpha=0\) and

\[\left\{\begin{array}{l}\beta:\mathcal{O}_{\mathbb{P}^{1}}\to\omega_{\mathbb{ P}^{1}}(0+1+\lambda+\infty)\\ \gamma:\mathcal{O}_{\mathbb{P}^{1}}\to\omega_{\mathbb{P}^{1}}(t+\infty).\end{array}\right.\]

If the remaining parabolic direction \(l_{\infty}\) over \(\infty\) lies outside \(L_{2}\) then the nilpotency condition implies that \(\beta\) and \(\gamma\) vanish at \(\infty\). In this case, \(\gamma\) must be zero, \(L_{1}\) is invariant under \(\theta\) and then \(\theta\) is \(\mu_{c}\)-unstable. When \(l_{\infty}\) lies in \(L_{2}\) then \(\beta\) vanishes also at \(\infty\), i.e., \(\beta:\mathcal{O}_{\mathbb{P}^{1}}\to\omega_{\mathbb{P}^{1}}(0+1+\lambda)\). We have shown that \(E=L_{1}\oplus L_{2}\), \(L_{i}=\mathcal{O}_{\mathbb{P}^{1}}\), the \(3\) parabolic directions \(l_{0},l_{1},l_{\lambda}\) lie in \(L_{1}\), and the remaining directions \(l_{t},l_{\infty}\) lie in \(L_{2}\). This concludes the proof of the corollary.

This corollary implies that there are exactly \(16\) \(\mu_{c}\)-unstable parabolic vector bundles which admit a \(\mu_{c}\)-semistable Higgs field \(\theta\), see Table 1. The group \(\mathbf{El}\) acts transitively on this set, and Figure 1 shows one of them.

\begin{table} \begin{tabular}{|c|c|c|} \hline & \(E\) & \(\{u,v,p,q,r\}=\{0,1,\lambda,t,\infty\}\) \\ \hline \hline 10 & \(\mathcal{O}_{\mathbb{P}^{1}}\oplus\mathcal{O}_{\mathbb{P}^{1}}\) & \(l_{u},l_{v},l_{p}\subset L_{1}\simeq\mathcal{O}_{\mathbb{P}^{1}}\) and \(l_{q},l_{r}\subset L_{2}\simeq\mathcal{O}_{\mathbb{P}^{1}}\) \\ \hline 5 & \(\mathcal{O}_{\mathbb{P}^{1}}(-1)\oplus\mathcal{O}_{\mathbb{P}^{1}}(1)\) & \(l_{u}\subset\mathcal{O}_{\mathbb{P}^{1}}(1)\) and \(l_{v},l_{p},l_{q},l_{r}\subset\mathcal{O}_{\mathbb{P}^{1}}(-1)\) \\ \hline 1 & \(\mathcal{O}_{\mathbb{P}^{1}}(-1)\oplus\mathcal{O}_{\mathbb{P}^{1}}(1)\) & \(l_{0},l_{1},l_{\lambda},l_{t},l_{\infty}\subset\mathcal{O}_{\mathbb{P}^{1}}(-1)\) \\ \hline \end{tabular} \end{table}

Table 1. 16 unstable parabolic bundles admitting stable Higgs fields.

Figure 1. An unstable parabolic bundle which admits a stable Higgs field.

**Remark 3.3**.: Let us assume we are in the first case of Corollary 3.2: \(E=L_{1}\oplus L_{2}\), \(L_{i}\simeq\mathcal{O}_{\mathbb{P}^{1}}\), \(L_{1}\) contains \(3\) parabolic directions, over \(0,1\) and \(\lambda\), and \(L_{2}\) contains \(2\) parabolic directions, over \(\infty\) and \(t\).
We have seen that any \(\mu_{c}\)-semistable Higgs field on it is of the form

\[\theta=\left(\begin{array}{cc}0&\beta\\ \gamma&0\end{array}\right)\]

with

\[\left\{\begin{array}{l}\beta:\mathcal{O}_{\mathbb{P}^{1}}\to\omega_{ \mathbb{P}^{1}}(0+1+\lambda)\\ \gamma:\mathcal{O}_{\mathbb{P}^{1}}\to\omega_{\mathbb{P}^{1}}(t+\infty)\;, \quad\gamma\neq 0\,.\end{array}\right.\]

Any other Higgs bundle admitting a \(\mu_{c}\)-unstable parabolic vector bundle can be obtained from this one by performing an elementary transformation.

In the next result we determine the complement \(\mathcal{H}\setminus\mathcal{U}\), formed by \(\mu_{c}\)-semistable Higgs bundles which have a \(\mu_{c}\)-unstable underlying parabolic bundle. Before that, let us introduce some notation: let \(\mathrm{Higgs}(E,l)\) be the quotient of the vector space \(\Gamma(\mathcal{SE}nd(E,l)\otimes\omega_{\mathbb{P}^{1}}(\Lambda))\) by the automorphism group of the parabolic bundle \((E,l)\). No stability condition is imposed here; a point of \(\mathrm{Higgs}(E,l)\) lies in \(\mathcal{H}\) only if it is \(\mu_{c}\)-semistable.

**Proposition 3.4**.: _The complement \(\mathcal{H}\setminus\mathcal{U}\) has exactly \(16\) irreducible components and the group \(\mathbf{El}\) acts transitively on them. Each component is a Zariski open subset of \(\mathrm{Higgs}(E,l)\), for each one of the \(16\) decomposable parabolic bundles shown in Table 1._

Proof.: An element \((E,l,\theta)\) of \(\mathcal{H}\setminus\mathcal{U}\) corresponds to a Higgs field which has a \(\mu_{c}\)-unstable underlying parabolic bundle. These parabolic bundles were classified in Corollary 3.2 and there are \(16\) of them. In addition, the group \(\mathbf{El}\) acts transitively on this set, so we fix one, say \(E=L_{1}\oplus L_{2}\), \(L_{i}=\mathcal{O}_{\mathbb{P}^{1}}\), with \(3\) parabolic directions over \(0,1,\lambda\) lying in \(L_{1}\), and with the remaining directions, over \(t,\infty\), lying in \(L_{2}\). The corresponding space of Higgs fields

\[\Gamma(\mathcal{SE}nd(E,l)\otimes\omega_{\mathbb{P}^{1}}(\Lambda))\]

is three dimensional and its quotient by the automorphism group of \((E,l)\) gives \(\mathrm{Higgs}(E,l)\). We want to determine the locus in \(\mathrm{Higgs}(E,l)\) formed by \(\mu_{c}\)-semistable Higgs fields. According to the proof of Corollary 3.2, any Higgs field in \(\mathrm{Higgs}(E,l)\) is given by

\[\theta=\left(\begin{array}{cc}0&\beta\\ \gamma&0\end{array}\right) \tag{3.1}\]

where

\[\left\{\begin{array}{l}\beta:\mathcal{O}_{\mathbb{P}^{1}}\to\omega_{ \mathbb{P}^{1}}(0+1+\lambda)\\ \gamma:\mathcal{O}_{\mathbb{P}^{1}}\to\omega_{\mathbb{P}^{1}}(t+\infty)\end{array}\right.\]

and so \((\beta,\gamma)\) lies in a three dimensional vector space. We see that \(\theta\) is \(\mu_{c}\)-semistable if and only if \(\gamma\neq 0\). On the other hand, automorphisms of \((E,l)\), i.e. automorphisms of the trivial bundle fixing the parabolic directions, are diagonal, and hence the quotient of

\[\Gamma(\mathcal{SE}nd(E,l)\otimes\omega_{\mathbb{P}^{1}}(\Lambda))\setminus \{\gamma=0\}\]

is a two dimensional subvariety of \(\mathcal{H}\).

## 4. Nilpotent cone

The nilpotent cone \(\mathcal{N}\) is formed by Higgs fields having vanishing determinant; we will show that it has \(17\) irreducible components. Of course it contains \(\mathcal{S}\), the locus obtained by taking \(\theta=0\), which is a del Pezzo surface of degree four.
We will show that outside \(\mathcal{S}\) there is exactly one component for each of the \(16\) special rational curves of \(\mathcal{S}\) (those which have \((-1)\)-self intersection). These curves are parametrized by the parabolic structures given in Table 2, see [15] for details.

We first determine the intersection between \(\mathcal{N}\) and \(\mathcal{H}\setminus\mathcal{U}\), i.e., \(\mu_{c}\)-semistable Higgs bundles having \(\mu_{c}\)-unstable parabolic vector bundle and with vanishing determinant. To give one example, let

\[\Theta_{1}=(L_{1}\oplus L_{2},l,\theta)\;,\quad L_{i}\simeq\mathcal{O}_{ \mathbb{P}^{1}}\]

where the parabolic structure is given by

\[l_{0},l_{1},l_{\lambda}\subset L_{1}\quad\text{and}\quad l_{t},l_{\infty} \subset L_{2}\]

and

\[\theta=\left(\begin{array}{cc}0&0\\ \frac{dx}{(x-t)}&0\end{array}\right). \tag{4.1}\]

Note that the destabilizing subbundle \(L_{1}\) for the underlying parabolic structure is not invariant under \(\theta\). By performing the transformations \(elem_{I}\in\mathbf{El}\) we get at least \(16\) \(\mu_{c}\)-semistable Higgs bundles, \(\Theta_{i}\), \(i=1,\ldots,16\), having \(\mu_{c}\)-unstable underlying parabolic vector bundles. The next result shows that these are all the cases.

**Proposition 4.1**.: _There are exactly \(16\) Higgs bundles in \(\mathcal{H}\setminus\mathcal{U}\) with vanishing determinant, namely the \(\Theta_{i}\), \(i=1,\ldots,16\), as above._

Proof.: By Proposition 3.4 we may assume, up to a transformation \(elem_{I}\), that a Higgs bundle \((E,l,\theta)\in\mathcal{H}\setminus\mathcal{U}\) is given by \(\theta\) as in (3.1) and that \((E,l)\) is the parabolic vector bundle of Figure 1. Now, if \(\theta\) has vanishing determinant then \(\beta\gamma=0\), and \(\gamma\) cannot be zero because otherwise \(\theta\) would be \(\mu_{c}\)-unstable. We conclude that \(\beta=0\), and up to an automorphism of \((E,l)\) we can assume that \(\gamma\) has residue \(1\) at \(t\). This gives the expression for \(\theta\) in (4.1).

Let us denote by \(\zeta_{i}\subset\mathcal{S}\), \(i=1,\ldots,16\), the \((-1)\)-self intersection rational curves in \(\mathcal{S}\), see Table 2, and let

\[\Sigma=\cup_{i=1}^{16}\zeta_{i}\]

be their union.

\begin{table} \begin{tabular}{|c|c|c|} \hline & \(E\) & \(\{u,v,p,q,r\}=\{0,1,\lambda,t,\infty\}\) \\ \hline \hline \(10\) & \(\mathcal{O}_{\mathbb{P}^{1}}\oplus\mathcal{O}_{\mathbb{P}^{1}}\) & \(l_{u},l_{v}\subset\mathcal{O}_{\mathbb{P}^{1}}\hookrightarrow E\) \\ \hline \(5\) & \(\mathcal{O}_{\mathbb{P}^{1}}\oplus\mathcal{O}_{\mathbb{P}^{1}}\) & \(l_{u},l_{v},l_{p},l_{q}\subset\mathcal{O}_{\mathbb{P}^{1}}(-1)\hookrightarrow E\) \\ \hline \(1\) & \(\mathcal{O}_{\mathbb{P}^{1}}(-1)\oplus\mathcal{O}_{\mathbb{P}^{1}}(1)\) & \(l_{0},l_{1},l_{\lambda},l_{t},l_{\infty}\nsubseteq\mathcal{O}_{\mathbb{P}^{1}} (1)\) \\ \hline \end{tabular} \end{table}

Table 2. 16 special lines in \(\mathcal{S}\).

There is a natural correspondence between the set of rational curves

\[\{\zeta_{i}\;:\;i=1,\ldots,16\}\]

and the set of Higgs bundles

\[\{\Theta_{i}\;:\;i=1,\ldots,16\}\]

in \(\mathcal{N}\cap(\mathcal{H}\setminus\mathcal{U})\). For instance, we first associate the rational curve \(\zeta_{1}\subset\mathcal{S}\), corresponding to parabolic vector bundles with two parabolic directions \(l_{t},l_{\infty}\) inside \(\mathcal{O}_{\mathbb{P}^{1}}\simeq L_{2}\subset\mathcal{O}_{\mathbb{P}^{1}} \oplus\mathcal{O}_{\mathbb{P}^{1}}\), to \(\Theta_{1}\).
The underlying parabolic vector bundle of \(\Theta_{1}\) is a degeneration of the parabolic structures varying in \(\zeta_{1}\) (see Proposition 4.3 below). Now, the correspondence

\[\zeta_{i}\longleftrightarrow\Theta_{i}\]

follows from the action of \(\mathbf{El}\) on both sets.

We will see that, besides \(\mathcal{S}\), the nilpotent cone has \(16\) components \(\mathcal{N}_{i}\) which can be obtained as the one-point compactification of \(\mathcal{N}_{i}\cap\mathcal{U}\), i.e.

\[\mathcal{N}_{i}=(\mathcal{N}_{i}\cap\mathcal{U})\cup\{\Theta_{i}\}.\]

So, first we study the restriction of the nilpotent cone to \(\mathcal{U}\).

**Proposition 4.2**.: _If \((E,l,\theta)\in\mathcal{U}\) has vanishing determinant, then \((E,l)\in\Sigma\), i.e. \(\mathfrak{for}(\mathcal{N})=\Sigma\)._

Proof.: To begin with, note that if \(E=\mathcal{O}_{\mathbb{P}^{1}}(-1)\oplus\mathcal{O}_{\mathbb{P}^{1}}(1)\), we can apply \(elem_{I}\) in order to transform \(E\) into the trivial vector bundle \(E=\mathcal{O}_{\mathbb{P}^{1}}\oplus\mathcal{O}_{\mathbb{P}^{1}}\). In addition, since \((E,l)\) is \(\mu_{c}\)-semistable, there is no embedding \(\mathcal{O}_{\mathbb{P}^{1}}\hookrightarrow E\) passing through \(3\) parabolic directions, and hence for the computation we can assume that the parabolic directions \(l=\{l_{i}\}\) are normalized as

\[l_{0}=\begin{pmatrix}1\\ 0\end{pmatrix},l_{1}=\begin{pmatrix}1\\ 1\end{pmatrix},l_{\lambda}=\begin{pmatrix}1\\ u\end{pmatrix},l_{t}=\begin{pmatrix}1\\ v\end{pmatrix},l_{\infty}=\begin{pmatrix}0\\ 1\end{pmatrix}.\]

Any Higgs field \(\theta\) on \((E,l)\) can be written as

\[\theta=c_{1}\theta_{1}+c_{2}\theta_{2}\;;\quad c_{1},c_{2}\in\mathbb{C}\]

where

\[\theta_{1}=\left(\begin{array}{ccc}\frac{u}{(x-\lambda)}-\frac{u}{(x-1)}& \frac{u}{(x-1)}-\frac{1}{(x-\lambda)}+\frac{1-u}{x}\\ \frac{u^{2}}{(x-\lambda)}-\frac{u}{(x-1)}&-\frac{u}{(x-\lambda)}+\frac{u}{(x- 1)}\end{array}\right)\cdot dx\]

and

\[\theta_{2}=\left(\begin{array}{ccc}\frac{v}{(x-t)}-\frac{v}{(x-1)}&\frac{v}{ (x-1)}-\frac{1}{(x-t)}+\frac{1-v}{x}\\ \frac{v^{2}}{(x-t)}-\frac{v}{(x-1)}&-\frac{v}{(x-t)}+\frac{v}{(x-1)}\end{array} \right)\cdot dx\]

and \(x\) denotes the affine coordinate on \(\mathbb{P}^{1}\). Then we get

\[\det\theta=(h_{1}+h_{2}\cdot x)\frac{dx^{\otimes 2}}{x(x-1)(x-\lambda)(x-t)}\]

where

\[h_{1} = (c_{1}(1-u)+c_{2}(1-v))(c_{1}tu(\lambda-u)+c_{2}\lambda v(t-v))\]
\[h_{2} = (c_{1}u(u-1)+c_{2}v(v-1))(c_{1}(\lambda-1)+c_{2}(t-v))\]

and let us write

\[\left\{\begin{array}{l}a_{1}=c_{1}(1-u)+c_{2}(1-v)\\ a_{2}=c_{1}tu(\lambda-u)+c_{2}\lambda v(t-v)\\ b_{1}=c_{1}u(u-1)+c_{2}v(v-1)\\ b_{2}=c_{1}(\lambda-1)+c_{2}(t-v)\end{array}\right.\]

We see that \(\theta\) has vanishing determinant if and only if

\[a_{i}=b_{j}=0 \tag{4.2}\]

for some \(i,j\in\{1,2\}\). We are looking for nontrivial solutions \(c_{1},c_{2}\) of each linear system (4.2), and we will show that such a solution exists if and only if the parabolic structure lies in \(\Sigma\), the locus of the 16 special rational curves of \(\mathcal{S}\).
To do so, we first note that the system \(a_{i}=b_{j}=0\), for some \(i,j\in\{1,2\}\), has a nontrivial solution \(c_{1},c_{2}\) if and only if at least one of the following equations holds

\[\left\{\begin{array}{l}(v-1)(u-1)(u-v)=0\\ (t-v)(-u+\lambda)(\lambda v-tu)=0\\ u(t-1)+v(1-\lambda)+\lambda-t=0\\ vu(ut(\lambda-1)+v\lambda(1-t)+uv(t-\lambda))=0\end{array}\right.\]

This means that either two parabolic directions lie in the same embedding \(\mathcal{O}_{\mathbb{P}^{1}}\hookrightarrow\mathcal{O}_{\mathbb{P}^{1}} \oplus\mathcal{O}_{\mathbb{P}^{1}}\) or there is an embedding \(\mathcal{O}_{\mathbb{P}^{1}}(-1)\hookrightarrow\mathcal{O}_{\mathbb{P}^{1}} \oplus\mathcal{O}_{\mathbb{P}^{1}}\) passing through 4 parabolic directions. More precisely, there is an embedding \(\mathcal{O}_{\mathbb{P}^{1}}(-1)\hookrightarrow\mathcal{O}_{\mathbb{P}^{1}} \oplus\mathcal{O}_{\mathbb{P}^{1}}\) passing through \(l_{0},l_{1},l_{\lambda},l_{\infty}\) when

\[\lambda-u=0,\]

through \(l_{0},l_{1},l_{t},l_{\infty}\) when

\[t-v=0,\]

through \(l_{0},l_{\lambda},l_{t},l_{\infty}\) when

\[\lambda v-tu=0,\]

through \(l_{1},l_{\lambda},l_{t},l_{\infty}\) when

\[u(t-1)+v(1-\lambda)+\lambda-t=0,\]

and through \(l_{0},l_{1},l_{\lambda},l_{t}\) when

\[ut(\lambda-1)+v\lambda(1-t)+uv(t-\lambda)=0.\]

The other cases are evident. This shows that \((E,l)\in\Sigma\), completing the proof of the proposition.

We now emphasize another consequence of this proposition. Let \(\mathcal{N}_{(E,l)}\) be the set of Higgs fields having \((E,l)\) as underlying parabolic bundle and having vanishing determinant. In the course of the proof of Proposition 4.2, we have seen that the intersection \(\mathcal{N}_{(E,l)}\cap\mathcal{U}\) corresponds to a union of lines in the vector space

\[\operatorname{Higgs}(E,l)\simeq\mathbb{C}^{2}.\]

More precisely, it is one single line when \((E,l)\in\zeta_{i}\) and \((E,l)\notin\zeta_{j}\) for \(j\neq i\), and exactly two lines when \((E,l)\in\zeta_{i}\cap\zeta_{j}\). For convenience we give an explicit example; all the other cases can be obtained from this one by a transformation \(elem_{I}\). If \((E,l)\in\zeta_{1}\), i.e., \(E=\mathcal{O}_{\mathbb{P}^{1}}\oplus\mathcal{O}_{\mathbb{P}^{1}}\) and

\[l_{0}=\begin{pmatrix}1\\ 0\end{pmatrix},l_{1}=\begin{pmatrix}1\\ 1\end{pmatrix},l_{\lambda}=\begin{pmatrix}1\\ u\end{pmatrix},l_{t}=l_{\infty}=\begin{pmatrix}0\\ 1\end{pmatrix}\]

then

\[\mathcal{N}_{(E,l)}\cap\mathcal{U}=\{(E,l,c\cdot\theta_{1})\;:\quad c\in \mathbb{C}\}\]

where

\[\theta_{1}=\left(\begin{array}{cc}0&0\\ \frac{dx}{(x-t)}&0\end{array}\right) \tag{4.3}\]

when \((E,l)\notin\zeta_{j}\) for \(j\neq 1\). But if \((E,l)\) lies in the intersection of two rational curves \(\zeta_{1}\cap\zeta_{j}\), for instance if \(u=0\), then

\[\mathcal{N}_{(E,l)}\cap\mathcal{U}=\{(E,l,c\cdot\theta_{1})\;:\quad c\in \mathbb{C}\}\cup\{(E,l,c\cdot\theta_{j})\;:\quad c\in\mathbb{C}\}\]

where

\[\theta_{j}=\left(\begin{array}{cc}0&\frac{dx}{x(x-\lambda)}\\ 0&0\end{array}\right). \tag{4.4}\]
It is interesting to note that for every \((E,l)\in\zeta_{1}\) the line

\[\{(E,l,c\cdot\theta_{1})\;:\quad c\in\mathbb{C}\}\subset\mathcal{U}\]

has the same limit point in \(\mathcal{H}\setminus\mathcal{U}\), that is,

\[\lim_{c\to\infty}(E,l,c\cdot\theta_{1})=\Theta_{1}\]

where

\[\Theta_{1}=(L_{1}\oplus L_{2},l,\theta_{1})\;,\quad L_{i}\simeq\mathcal{O}_{ \mathbb{P}^{1}}\]

has parabolic structure

\[l_{0},l_{1},l_{\lambda}\subset L_{1}\quad\text{and}\quad l_{t},l_{\infty} \subset L_{2}.\]

In fact, for any \(c\neq 0\), by performing the automorphism

\[\phi_{c}=\left(\begin{array}{cc}1&0\\ 0&c^{-1}\end{array}\right) \tag{4.5}\]

on \((E,l)\), one obtains

\[l_{0}=\begin{pmatrix}1\\ 0\end{pmatrix},l_{1}=\begin{pmatrix}1\\ c^{-1}\end{pmatrix},l_{\lambda}=\begin{pmatrix}1\\ c^{-1}u\end{pmatrix},l_{t}=l_{\infty}=\begin{pmatrix}0\\ 1\end{pmatrix}\]

as parabolic directions, and hence when \(c\to\infty\) the parabolic structure goes to the parabolic structure of \(\Theta_{1}\). On the other hand, we have

\[\phi_{c}\circ(c\cdot\theta_{1})\circ\phi_{c}^{-1}=\theta_{1}.\]

Let \(\theta_{j}\), \(j=1,\ldots,16\), denote the images of \(\theta_{1}\) under the action of \(\mathbf{El}\). We summarise the discussion above in the next result.

**Proposition 4.3**.: _If \((E,l)\in\Sigma\) belongs to the rational curve \(\zeta_{i}\), then_

\[\mathcal{N}_{(E,l)}\cap\mathcal{U}=\{(E,l,c\cdot\theta_{i})\;:\quad c\in\mathbb{ C}\}\]

_when \((E,l)\notin\zeta_{j}\), \(\forall j\neq i\), and_

\[\mathcal{N}_{(E,l)}\cap\mathcal{U}=\{(E,l,c\cdot\theta_{i})\;:\quad c\in \mathbb{C}\}\cup\{(E,l,c\cdot\theta_{j})\;:\quad c\in\mathbb{C}\}\]

_when \((E,l)\in\zeta_{i}\cap\zeta_{j}\). Moreover, we have_

\[\lim_{c\to\infty}(E,l,c\cdot\theta_{i})=\Theta_{i}\;.\]

**Definition 4.4**.: The \(\Theta_{i}\) are fixed points of the \(\mathbb{C}^{*}\)-action; following the terminology of [19], we call them the \(16\) _Hodge bundles_ of \(\mathcal{H}\). They are all the fixed points outside \(\mathcal{S}\).

Finally, we are ready to state the main result of this section:

**Theorem 4.5**.: _The nilpotent cone of \(\mathcal{H}\) has exactly \(17\) irreducible components_

\[\mathcal{N}=\mathcal{S}\cup_{i=1}^{16}\mathcal{N}_{i}\]

_where_

\[\mathcal{N}_{i}=\{(E,l,c\cdot\theta_{i})\;:\;(E,l)\in\zeta_{i},\;c\in\mathbb{ C}\}\cup\{\Theta_{i}\}.\]

_See Figure 2._

Proof.: The proof follows from Propositions 4.1, 4.2 and 4.3.

Figure 2. Nilpotent cone.

## 5. Other singular Hitchin fibers

In this section we study the singular fibers of the Hitchin map

\[\det:\mathcal{H}\to\Gamma(\omega_{\mathbb{P}^{1}}^{\otimes 2}(\Lambda))\simeq \mathbb{C}^{2}\]

over a point \(s\neq 0\). The general spectral curve \(X_{s}\) is a smooth curve of genus \(2\) branched over \(6\) distinct points

\[0,1,\lambda,t,\infty,\rho\]

of \(\mathbb{P}^{1}\) and the corresponding Hitchin fiber is \(\operatorname{Pic}^{3}(X_{s})\). A singular spectral curve occurs when the sixth point \(\rho\) coincides with one of the five other points. Hence, the _locus of singular spectral curves_ is a union of five lines

\[\cup_{\rho}\Gamma(\omega_{\mathbb{P}^{1}}^{\otimes 2}(\Lambda-\rho))\]

where \(\rho\) varies in \(\{0,1,\lambda,t,\infty\}\). If \(s\neq 0\) lies in one of these lines then \(X_{s}\) is a nodal curve of genus \(2\); its desingularization \(\tilde{X}_{s}\) is an elliptic curve branched over

\[\{0,1,\lambda,t,\infty\}\setminus\{\rho\}\]

and \(X_{s}\) can be obtained by identifying two points \(w_{\rho}^{+}\) and \(w_{\rho}^{-}\) of \(\tilde{X}_{s}\).
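These five lines can be made completely explicit in coordinates. The short symbolic computation below (a sketch only, not used in the proofs) writes a point of the Hitchin base as \(s=(h_{1}+h_{2}x)\,\frac{dx^{\otimes 2}}{x(x-1)(x-\lambda)(x-t)}\), in the coordinates \((h_{1},h_{2})\) already appearing in the proof of Proposition 4.2, and exhibits the locus of singular spectral curves.

```python
# Sketch (SymPy), not part of the original argument: write a point of the
# Hitchin base as  s = (h1 + h2*x) dx^2 / (x(x-1)(x-lam)(x-t)).  The
# spectral curve is branched over the five poles 0, 1, lam, t, oo and over
# the extra zero rho = -h1/h2 of the numerator; it becomes nodal exactly
# when rho collides with one of the poles.
from sympy import symbols, simplify

h1, h2, lam, t = symbols('h1 h2 lam t')
rho = -h1 / h2                        # sixth branch point (when h2 != 0)
finite_poles = [0, 1, lam, t]
# collision with a finite pole p happens on the line h1 + p*h2 = 0:
lines = [simplify((rho - p) * (-h2)) for p in finite_poles]
print(lines)                          # h1, h1 + h2, h1 + lam*h2, h1 + t*h2
# collision with the pole at infinity corresponds to h2 = 0, so the locus
# of singular spectral curves is the union of the five lines
# {h1 = 0}, {h1 + h2 = 0}, {h1 + lam*h2 = 0}, {h1 + t*h2 = 0}, {h2 = 0}.
```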
**Remark 5.1**.: When \(X_{s}\) is a nodal curve with a single node at \(w_{\rho}\), its compactified Jacobian \(\overline{\operatorname{Pic}}^{0}(X_{s})\) is obtained by identifying the \(0\)-section with the \(\infty\)-section, see Figure 3, of the \(\mathbb{P}^{1}\)-bundle

\[\mathbf{F}=\mathbb{P}(\mathcal{O}_{\tilde{X}_{s}}(w_{\rho}^{+})\oplus\mathcal{O}_{\tilde{X}_{s}}(w_{\rho}^{-})) \tag{5.1}\]

via the translation \(\mathcal{O}_{\tilde{X}_{s}}(w_{\rho}^{+}-w_{\rho}^{-})\) (cf. [18, p. 83]). In particular, we have

\[\tilde{X}_{s}\simeq\overline{\operatorname{Pic}}^{0}(X_{s})\setminus\operatorname{Pic}^{0}(X_{s}).\]

Figure 3. Resolution of the compactified Jacobian.

We will see that the singular Hitchin fiber \(\det^{-1}(s)\), \(s\neq 0\), is a union of two copies of \(\mathbf{F}\). Before doing this, we introduce some notation. Let \(\mathcal{H}\setminus\mathcal{N}\) denote the complement of the nilpotent cone and let \(\mathcal{H}^{pairs}\) be the moduli space of pairs \((E,\theta)\) with \((E,l,\theta)\) in \(\mathcal{H}\setminus\mathcal{N}\). Notice that every Higgs bundle \((E,l,\theta)\) in \(\mathcal{H}\setminus\mathcal{N}\) (and also in \(\mathcal{H}^{pairs}\)) is irreducible, see [6, Propositions 3.1 and 3.2]. We say that a pair \((E,\theta)\) is holomorphic at \(\rho\in\{0,1,\lambda,t,\infty\}\) if \(\operatorname{Res}(\theta,\rho)=0\). Each pair has at most one point with vanishing residual part, because the singular spectral curve has at most one singular (nodal) point.

**Lemma 5.2**.: _The forgetful map_

\[f:\mathcal{H}\setminus\mathcal{N}\to\mathcal{H}^{pairs}\]

_which forgets the parabolic structure, is the blowup at the locus \(\mathbf{H}\) formed by pairs \((E,\theta)\) such that \(\theta\) is holomorphic at some point \(\rho\in\{0,1,\lambda,t,\infty\}\). More precisely, \(f\) is one to one over the complement of \(\mathbf{H}\) and \(f^{-1}(E,\theta)\) is isomorphic to \(\mathbb{P}^{1}\) for every \((E,\theta)\in\mathbf{H}\)._

Proof.: If \(\theta\) is nowhere-holomorphic, i.e., \(\operatorname{Res}(\theta,\rho)\neq 0\) for every \(\rho\in\{0,1,\lambda,t,\infty\}\), then the parabolic structure is determined by the kernel of the residual part and the forgetful map is one to one. Now assume that \(\operatorname{Res}(\theta,\rho)=0\) for some \(\rho\in\{0,1,\lambda,t,\infty\}\); we will show that the fiber of the forgetful map is isomorphic to \(\mathbb{P}^{1}\). Let

\[l(\rho)=l\setminus\{l_{\rho}\}\]

be the parabolic structure obtained by forgetting the direction over \(\rho\) and let \((E,l(\rho),\theta)\) be the corresponding Higgs bundle over \(\mathbb{P}^{1}\) with four marked points

\[\Lambda_{\rho}=\{t_{1},t_{2},t_{3},t_{4}\}=\{0,1,\lambda,t,\infty\}\setminus \{\rho\}.\]

The moduli space \(Bun_{\mu}(0)\) parametrizing parabolic vector bundles \((E,l(\rho))\) on \((\mathbb{P}^{1},\Lambda_{\rho})\) of degree zero which are semistable with respect to the weight \(\mu=\left(\frac{1}{2},\frac{1}{2},\frac{1}{2},\frac{1}{2}\right)\) is isomorphic to \(\mathbb{P}^{1}\). A stable point of \(Bun_{\mu}(0)\) has no automorphisms besides the trivial ones, so the fiber of \(f\) is parametrized by the fifth parabolic direction \(l_{\rho}\in\mathbb{P}E_{\rho}\simeq\mathbb{P}^{1}\), as desired. It remains to consider the strictly semistable points in \(Bun_{\mu}(0)\): there are exactly four of them, and each one is represented by three distinct quasi-parabolic structures giving the same \(S\)-equivalence class in \(Bun_{\mu}(0)\), see Figure 4.
To see this, remember that either \(E=\mathcal{O}_{\mathbb{P}^{1}}\oplus\mathcal{O}_{\mathbb{P}^{1}}\) or \(E=\mathcal{O}_{\mathbb{P}^{1}}(1)\oplus\mathcal{O}_{\mathbb{P}^{1}}(-1)\). Since these four strictly semistable points are permuted by elementary transformations, we may assume that we are in one of the three cases shown in Figure 4. In the first two of them any Higgs field has vanishing determinant, so we arrive at the last case, where \(E=L_{1}\oplus L_{2}\), \(L_{i}\simeq\mathcal{O}_{\mathbb{P}^{1}}\),

\[l_{1},l_{2}\subset L_{1}\;\;\text{and}\;\;l_{3},l_{4}\subset L_{2}\]

and the Higgs field on \((E,l(\rho))\) writes as

\[\theta=\left(\begin{array}{cc}0&a\frac{dx}{(x-t_{1})(x-t_{2})}\\ b\frac{dx}{(x-t_{3})(x-t_{4})}&0\end{array}\right).\]

with \(a,b\in\mathbb{C}^{*}\). Adding the fifth parabolic direction \(l_{\rho}\), if it lies neither in \(L_{1}\) nor in \(L_{2}\) then we may assume \(l_{\rho}=\begin{pmatrix}1\\ 1\end{pmatrix}\), \((E,l)\) has no automorphisms and the fiber \(f^{-1}(E,\theta)\) contains a \(\mathbb{C}^{*}\) parametrized by

\[\theta_{c}=\left(\begin{array}{cc}0&ca\frac{dx}{(x-t_{1})(x-t_{2})}\\ c^{-1}b\frac{dx}{(x-t_{3})(x-t_{4})}&0\end{array}\right)\;,\;\;c\in\mathbb{C} ^{*}.\]

It is worth noting that all the \(\theta_{c}\) are equivalent in \(\mathcal{H}^{pairs}\) because of the presence of automorphisms of \((E,l(\rho))\), which are diagonal. To complete the fiber \(f^{-1}(E,\theta)\) we have to add two points corresponding to either \(l_{\rho}\in L_{1}\) or \(l_{\rho}\in L_{2}\). This finishes the proof of the lemma.

Figure 4. Three \(S\)-equivalent parabolic structures giving a point in \(Bun_{\mu}(0)\).

It follows from the BNR correspondence [3, Proposition 3.6] that the fiber of the Hitchin map in the moduli space \(\mathcal{H}^{pairs}\) of pairs corresponds to the compactified Jacobian variety \(\overline{\operatorname{Pic}}^{3}(X_{s})\), and the restriction of the forgetful map to \(\det^{-1}(s)\) gives a map, still denoted by

\[f:\det^{-1}(s)\to\overline{\operatorname{Pic}}^{3}(X_{s}). \tag{5.2}\]

To understand \(\det^{-1}(s)\) we need the following result.

**Lemma 5.3**.: _Assume that the spectral curve \(X_{s}\) has a nodal singularity at \(\rho\in\{0,1,\lambda,t,\infty\}\). There are bijective correspondences_

* \(\operatorname{Pic}^{3}(X_{s})\leftrightarrow\{(E,\theta)\in\mathcal{H}^{pairs} \;:\;\det\theta=s\;,\;\theta\text{ is not holomorphic at }\rho\}\)
* \(\overline{\operatorname{Pic}}^{3}(X_{s})\setminus\operatorname{Pic}^{3}(X_{s })\leftrightarrow\{(E,\theta)\in\mathcal{H}^{pairs}\;:\;\det\theta=s\;,\; \theta\text{ is holomorphic at }\rho\}\)

Proof.: The proof follows from [6, Proposition 3.5].

In the case (i) of Lemma 5.3, any Higgs field \(\theta\) is _apparent_ with respect to the parabolic direction over \(\rho\), meaning that the parabolic direction \(l_{\rho}\) is an eigendirection of the constant part of \(\theta\). For instance, assuming that \(l_{\rho}=\begin{pmatrix}1\\ 0\end{pmatrix}\) and \(\rho=0\), we can write

\[\theta=\left(\begin{array}{cc}ax&b\\ cx&-ax\end{array}\right)\cdot\frac{dx}{x} \tag{5.3}\]

for suitable regular functions \(a,b,c\) in a neighborhood of \(\rho\), with \(b(\rho)\neq 0\) because \(\theta\) is not holomorphic at \(\rho\). Since \(X_{s}\) is singular over \(\rho\), the quadratic differential \(\det\theta=-(a^{2}x^{2}+bcx)\frac{dx^{\otimes 2}}{x^{2}}\) has no pole at \(\rho\), that is, its numerator vanishes at order two; we conclude that \(c(\rho)=0\), showing that \(l_{\rho}\) is an eigendirection of the constant part of \(\theta\).
It is important to note that, after an elementary transformation centered at \(l_{\rho}\), the transformed Higgs field

\[\theta^{\prime}=\left(\begin{array}{cc}ax&bx\\ c&-ax\end{array}\right)\cdot\frac{dx}{x} \tag{5.4}\]

becomes holomorphic at \(\rho\). This discussion justifies the notation for \(\mathbf{F}_{hol}\) and \(\mathbf{F}_{app}\) in the next result.

**Theorem 5.4**.: _Assume that the spectral curve \(X_{s}\) has a nodal singularity at \(\rho\in\{0,1,\lambda,t,\infty\}\). The corresponding singular fiber \(\det^{-1}(s)\) of the Hitchin map has two irreducible components_

\[\det^{-1}(s)=\mathbf{F}_{hol}\cup\mathbf{F}_{app}\]

_which are isomorphic via any elementary transformation_

\[(elem_{I})|_{\mathbf{F}_{hol}}:\mathbf{F}_{hol}\to\mathbf{F}_{app}\]

_where \(I\subset\{0,1,\lambda,t,\infty\}\) contains \(\rho\) and has even cardinality. Moreover:_

1. _Each component is a desingularization of_ \(\overline{\operatorname{Pic}}^{3}(X_{s})\)_, hence isomorphic to_ \(\mathbf{F}\)_, cf. (_5.1_), and the structure of_ \(\mathbb{P}^{1}\)_-bundle on_ \(\mathbf{F}_{hol}\) _is given by_ \[f|_{\mathbf{F}_{hol}}:\mathbf{F}_{hol}\to\tilde{X}_{s}\simeq\overline{ \operatorname{Pic}}^{3}(X_{s})\setminus\operatorname{Pic}^{3}(X_{s}).\]
2. _The map_ \(f|_{\mathbf{F}_{app}}:\mathbf{F}_{app}\to\overline{\operatorname{Pic}}^{3}(X_ {s})\) _is a desingularization map. See Figure_ 5_._
3. _The intersection_ \(\mathbf{F}_{hol}\cap\mathbf{F}_{app}\) _is the union of the_ \(0\)_-section and the_ \(\infty\)_-section of_ \(\mathbf{F}_{hol}\)_. See Figure_ 6_._

Proof.: First, we identify \(\overline{\operatorname{Pic}}^{3}(X_{s})\) with the fiber of the Hitchin map

\[\det:\mathcal{H}^{pairs}\to\mathbb{C}^{2}\]

so that \(\det^{-1}(s)\) is \(f^{-1}(\overline{\operatorname{Pic}}^{3}(X_{s}))\), where \(f\) is the forgetful map (5.2). It follows from Lemmas 5.2 and 5.3 that \(\det^{-1}(s)\) has two irreducible components: the strict transform of \(\overline{\operatorname{Pic}}^{3}(X_{s})\), which we call \(\mathbf{F}_{app}\), and the \(\mathbb{P}^{1}\)-bundle \(\mathbf{F}_{hol}\), which is the blowup of the locus

\[\left\{(E,\theta)\in\mathcal{H}^{pairs}\;:\;\det\theta=s\;,\;\theta\text{ is holomorphic at }\rho\right\}.\]

This locus is a copy of the elliptic curve \(\tilde{X_{s}}\): forgetting the parabolic direction over \(\rho\), where \(\theta\) is holomorphic, it can be identified with a fiber of the Hitchin map for the moduli space of (irreducible) pairs \((E,\theta)\) over \(\mathbb{P}^{1}\) with four parabolic points

\[\{0,1,\lambda,t,\infty\}\setminus\{\rho\}.\]

We conclude that \(\mathbf{F}_{hol}\) is a \(\mathbb{P}^{1}\)-bundle over \(\tilde{X_{s}}\). The elementary transformation \(elem_{I}:\mathcal{H}\to\mathcal{H}\) is an isomorphism, cf. (2.3), and if \(I\) contains \(\rho\), then \(elem_{I}\) switches the components \(\mathbf{F}_{hol}\) and \(\mathbf{F}_{app}\), see the discussion involving (5.3) and (5.4). In addition, \(f|_{\mathbf{F}_{app}}:\mathbf{F}_{app}\to\overline{\operatorname{Pic}}^{3}(X_ {s})\) is a birational morphism which is an isomorphism outside \(\mathbf{F}_{hol}\cap\mathbf{F}_{app}\), and hence \(f|_{\mathbf{F}_{app}}\) is a desingularization map.

We now study the intersection \(\mathbf{F}_{hol}\cap\mathbf{F}_{app}\). Its restriction to each fiber

\[f^{-1}(E,\theta)\simeq\mathbb{P}^{1}\subset\mathbf{F}_{hol}\]

corresponds to parabolic Higgs bundles \((E,l,\theta)\) with \(\theta\) holomorphic at \(\rho\) and apparent with respect to the parabolic direction \(l_{\rho}\).
Since, in addition, \(X_{s}\) is nodal over \(\rho\), the constant part \(\theta_{\rho}\) of \(\theta\) has exactly two distinct eigendirections; it is an invertible matrix, because otherwise \(X_{s}\) would have a singularity of order bigger than two over \(\rho\). In order to simplify notation, let us assume \(\rho=t\); the other cases are similar. Any Higgs field in \(\mathbf{F}_{hol}\) has determinant

\[\det\theta=s\cdot\frac{dx^{\otimes 2}}{x(x-1)(x-\lambda)}\]

where \(s\in\mathbb{C}^{*}\) is fixed, and the constant part \(\theta_{t}\) has determinant

\[\det\theta_{t}=\frac{s}{t(t-1)(t-\lambda)}\]

which does not depend on \(\theta\). Therefore the intersection \(\mathbf{F}_{hol}\cap\mathbf{F}_{app}\) is a union of two sections

\[\sigma_{0},\sigma_{\infty}:\tilde{X}_{s}\to\mathbf{F}_{hol}\]

where \(\sigma_{0}\) is formed by the eigendirections corresponding to the eigenvalue \(\sqrt{\frac{-s}{t(t-1)(t-\lambda)}}\) and \(\sigma_{\infty}\) corresponds to \(-\sqrt{\frac{-s}{t(t-1)(t-\lambda)}}\).

**Remark 5.5**.: Via the BNR correspondence, elements of \(\mathbf{F}_{app}\setminus\mathbf{F}_{hol}\) correspond to line bundles on the nodal spectral curve \(X_{s}\), see Lemma 5.3 (i). We have seen that each irreducible component of \(\det^{-1}(s)\) is a resolution of \(\overline{\operatorname{Pic}}^{3}(X_{s})\). To recover \(\overline{\operatorname{Pic}}^{3}(X_{s})\) using \(\mathbf{F}_{app}\) we must identify the \(0\)-section and the \(\infty\)-section via the map

\[\tau:\sigma_{0}(\tilde{X}_{s})\to\sigma_{\infty}(\tilde{X}_{s})\]

which switches the two eigenvectors of the constant part of \(\theta\). See Figure 5.

Figure 5. Component \(\mathbf{F}_{app}\).

**Remark 5.6**.: Here we will see that \(\tau\) consists of the translation by \(\mathcal{O}_{\tilde{X_{s}}}(w_{\rho}^{+}-w_{\rho}^{-})\), recovering Remark 5.1 from the modular point of view, in terms of elementary transformations on Higgs fields. On the one hand, it is more convenient to work with \(\mathbf{F}_{hol}\) instead of \(\mathbf{F}_{app}\), because it carries a natural structure of \(\mathbb{P}^{1}\)-bundle given by the forgetful map \(f\), and the resolution map is given by \(f\circ elem_{I}:\mathbf{F}_{hol}\to\overline{\operatorname{Pic}}^{3}(X_{s})\). On the other hand, to recover the compactified Jacobian via \(\mathbf{F}_{app}\), we need to identify the sections \(\sigma_{0}\) and \(\sigma_{\infty}\) by gluing points in the same fiber of the forgetful map \(f\), meaning that each point

\[(E,\theta)\in\overline{\operatorname{Pic}}^{3}(X_{s})\setminus\operatorname{ Pic}^{3}(X_{s})\simeq\tilde{X_{s}}\]

has exactly two representatives \(\sigma_{0}(E,\theta)=(E,\theta,l_{\sigma_{0}})\) and \(\sigma_{\infty}(E,\theta)=(E,\theta,l_{\sigma_{\infty}})\) in \(\mathbf{F}_{app}\), corresponding to the choices of eigendirections of the constant part of \(\theta\). Coming back to \(\mathbf{F}_{hol}\) using the involution \(elem_{I}:\mathbf{F}_{hol}\to\mathbf{F}_{app}\), in order to obtain \(\overline{\operatorname{Pic}}^{3}(X_{s})\) via \(\mathbf{F}_{hol}\) the \(0\)-section and the \(\infty\)-section must be identified via the map

\[\iota:=elem_{I}\circ\tau\circ elem_{I}:\sigma_{0}(\tilde{X_{s}})\to\sigma_{ \infty}(\tilde{X_{s}}) \tag{5.5}\]

where \(I\) has even cardinality and contains \(\rho\). We will show that this map corresponds to multiplication by \(\mathcal{O}_{\tilde{X_{s}}}(w_{\rho}^{+}-w_{\rho}^{-})\).
To do this, let us first identify the elliptic curve \(\tilde{X_{s}}\) with a fiber of the Hitchin map in the moduli space of pairs \((E,\theta)\) over \(\mathbb{P}^{1}\) with four parabolic points \(\{0,1,\lambda,t,\infty\}\setminus\{\rho\}\), and also with its Jacobian via the BNR correspondence

\[\tilde{X_{s}}\ni(E,\theta)\longleftrightarrow M_{\theta}\in\operatorname{Pic }(\tilde{X_{s}})\simeq\tilde{X_{s}}.\]

There is a third identification for \(\tilde{X_{s}}\): for each \(\theta\) with \(\det\theta=s\), we identify \(\tilde{X_{s}}\) with the curve of eigenvectors of \(\theta\). Since \(\theta\) is parabolic with respect to each one of the eigenvectors \(w_{\rho}^{+}\) and \(w_{\rho}^{-}\) of its constant part at \(\rho\), the variation of \(M_{\theta}\) under an elementary transformation over \(I\) centered at \(w_{\rho}^{\pm}\) is given by [6, Proposition 2.3]. Using this proposition, we see that the following diagram is commutative \(\lrcorner\)

**Remark 5.7**.: The structure of \(\mathbb{P}^{1}\)-bundle on \(\mathbf{F}_{app}\) is obtained from \(\mathbf{F}_{hol}\) via the isomorphism \(elem_{I}:\mathbf{F}_{hol}\to\mathbf{F}_{app}\). Figure 6 shows a ruling of \(\mathbf{F}_{app}\) intersecting \(\mathbf{F}_{hol}\). The whole Hitchin fiber \(\mathbf{F}_{app}\cup\mathbf{F}_{hol}\) is a "twisted product" of an elliptic curve \(\tilde{X_{s}}\) by a degenerate elliptic curve, meaning that a \(\mathbb{P}^{1}\) of the ruling of \(\mathbf{F}_{app}\) intersects two distinct \(\mathbb{P}^{1}\)'s of the ruling of \(\mathbf{F}_{hol}\), and the intersection agrees with the multiplication by \(\mathcal{O}_{\tilde{X_{s}}}(w_{\rho}^{+}-w_{\rho}^{-})\). The structure of this Hitchin fiber has been recently addressed by C.T. Simpson in [20, Discussion], from the topological point of view.

## 6. Connections

Let \(\mathcal{C}_{n}^{\nu}=\mathcal{C}^{\nu}(\mathbb{P}^{1},\Lambda_{n})\), \(n\geq 5\), denote the moduli space of logarithmic connections over \(\mathbb{P}^{1}\) of degree zero with polar divisor \(\Lambda_{n}=t_{1}+\cdots+t_{n}\) supported on \(n\) distinct points, and with prescribed eigenvalue vector \(\nu=(\nu_{1},\ldots,\nu_{n})\in\mathbb{C}^{n}\). An element of it is an isomorphism class \((E,\nabla)\), where \(E\) is a rank two degree zero vector bundle over \(\mathbb{P}^{1}\) endowed with a logarithmic connection, i.e. a \(\mathbb{C}\)-linear map

\[\nabla\colon E\longrightarrow E\otimes\omega_{\mathbb{P}^{1}}(\Lambda_{n})\]

satisfying the Leibniz rule

\[\nabla(as)=s\otimes da+a\nabla(s)\]

for (local) sections \(s\) of \(E\) and \(a\) of \(\mathcal{O}_{\mathbb{P}^{1}}\). In addition, \(\nabla\) is assumed to have vanishing trace and its residue endomorphism \(\operatorname{Res}_{t_{i}}(\nabla)\) over a given parabolic point \(t_{i}\) has \(\pm\nu_{i}\) as eigenvalues. We suppose that the eigenvalue vector \(\nu\) is generic, meaning that \(\nu_{i}\neq 0\), \(\forall i\), and

\[\sum\epsilon_{i}\nu_{i}\notin\mathbb{Z}\]

for any choice of \(\epsilon_{i}\in\{\pm 1\}\). Under this assumption, any such connection is irreducible and the construction of the moduli space does not depend on a weight vector giving a stability notion. The moduli space \(\mathcal{C}_{n}^{\nu}\) is a smooth irreducible quasiprojective variety of dimension \(2(n-3)\), see [13, 14].
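For concreteness, the genericity condition on \(\nu\) can be tested directly; the following small sketch (illustrative only, with rational eigenvalues, and not used elsewhere in the paper) does so.

```python
# Sketch: test the genericity condition on nu = (nu_1, ..., nu_n):
# every nu_i is nonzero and no signed sum sum(eps_i * nu_i), with
# eps_i in {+1, -1}, is an integer.
from fractions import Fraction
from itertools import product

def is_generic(nu):
    if any(v == 0 for v in nu):
        return False
    for eps in product((1, -1), repeat=len(nu)):
        if sum(e * v for e, v in zip(eps, nu)).denominator == 1:
            return False          # some signed sum is an integer
    return True

print(is_generic([Fraction(1, 4)] * 5))   # True
print(is_generic([Fraction(1, 3)] * 5))   # False: 1/3+1/3+1/3+1/3-1/3 = 1
```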
Note that each connection \(\nabla\) on \(E\) defines a unique parabolic structure, by selecting the eigenspace \(l_{i}\subset E|_{t_{i}}\) associated to \(\nu_{i}\); therefore, \(\mathcal{C}_{n}^{\nu}\) can equivalently be viewed as a moduli space of parabolic connections \((E,\nabla,l)\). If a parabolic vector bundle \((E,l)\) admits a connection as above, we say that it is \(\nu\)_-flat_.

### Foliation conjecture

It follows from the work of Simpson [19] that there is a decomposition of \(\mathcal{C}_{n}^{\nu}\) obtained by looking at the limit of \(c\cdot(E,\nabla,l)\) as \(c\to 0\). It turns out that for each weight vector \(\mu\) and for \((E,\nabla,l)\in\mathcal{C}_{n}^{\nu}\) there exists a unique limit

\[(E,\theta,\mathbf{q})=\lim_{c\to 0}c\cdot(E,\nabla,l)\quad\in\mathcal{H}_{\mu}( \mathbb{P}^{1},\Lambda_{n},0)\]

in the moduli space of \(\mu\)-semistable parabolic Higgs bundles, see also [16, Proposition 4.1]. This leads to an equivalence relation (depending on \(\mu\)) by declaring two points of \(\mathcal{C}_{n}^{\nu}\) to be equivalent if their limits are the same. We may equivalently consider the function

\[\pi_{\mu}:\mathcal{C}_{n}^{\nu} \rightarrow \mathcal{H}_{\mu}(\mathbb{P}^{1},\Lambda_{n},0)\]
\[(E,\nabla,l) \mapsto \lim_{c\to 0}c\cdot(E,\nabla,l)\]

and the decomposition of \(\mathcal{C}_{n}^{\nu}\) given by the fibers of \(\pi_{\mu}\). The _foliation conjecture_ [19, Question 7.4], in this case, predicts that there is a Lagrangian (regular) foliation \(\mathcal{F}_{\mu}\) whose leaves are closed and coincide with the fibers of \(\pi_{\mu}\). The Lagrangian property has already been proved by Simpson in [19]. The whole conjecture has been proved in [16, Corollaries 5.7 and 6.2] for the moduli space of connections over the four punctured projective line when the weight vector is generic, and recently [12] deals with the five punctured projective line under the assumption that the weight vector \(\mu\) satisfies \(\sum\mu_{i}<1\), which lies in the unstable zone. By the unstable zone we mean the locus of weight vectors \(\mu\) such that any parabolic vector bundle is \(\mu\)-unstable. It is known that there is a polytope \(\Delta\subset[0,1]^{n}\) consisting of weight vectors \(\mu\) such that \(Bun_{\mu}(\mathbb{P}^{1},\Lambda_{n},0)\) is nonempty [2], so the unstable zone is the complement of \(\Delta\).

We will prove below that in the interior of \(\Delta\) the foliation conjecture is sensitive to a change of weights: it is true for the central weight \(\mu_{c}=\left(\frac{1}{2},\frac{1}{2},\frac{1}{2},\frac{1}{2},\frac{1}{2}\right)\) but it turns out to be false if \(\mu=\left(\frac{3}{4},\frac{1}{4},\frac{1}{4},\frac{1}{4},\frac{1}{4}\right)\), even though the corresponding decompositions given by the fibers of \(\pi_{\mu_{c}}\) and \(\pi_{\mu}\) share a Zariski open subset.

### Foliation \(\mathcal{F}_{Bun}\)

We shall consider the non-separated scheme \(\mathcal{P}\) of rank two undecomposable parabolic vector bundles over \((\mathbb{P}^{1},\Lambda_{n})\) and the corresponding forgetful map \(\mathcal{C}_{n}^{\nu}\rightarrow\mathcal{P}\), sending \((E,\nabla,l)\) to \((E,l)\).
**Proposition 6.1**.: _Each fiber of \(\mathcal{C}_{n}^{\nu}\rightarrow\mathcal{P}\) is isomorphic to the affine space \(\mathbb{C}^{n-3}\) and these fibers fit together into a regular foliation \(\mathcal{F}_{Bun}\) on \(\mathcal{C}_{n}^{\nu}\)._

Proof.: It follows from [15, Proposition 3.1] that the following notions are equivalent

\[\nu\text{-flat}\Leftrightarrow\text{undecomposable}\Leftrightarrow\text{ simple}\]

where simple means that any automorphism of \(E\) preserving the parabolic directions is scalar. This implies that each fiber of \(\mathcal{C}_{n}^{\nu}\rightarrow\mathcal{P}\) is isomorphic to an affine space \(\mathbb{C}^{n-3}\). Now, given \((E,\nabla,l)\) in \(\mathcal{C}_{n}^{\nu}\), by [15, Proposition 3.4] the underlying parabolic vector bundle is \(\mu\)-stable for a suitable choice of weight vector \(\mu\). The local chart \(Bun_{\mu}(\mathbb{P}^{1},\Lambda_{n},0)\) of \(\mathcal{P}\) is a smooth irreducible projective variety and the restriction of \(\mathcal{C}_{n}^{\nu}\rightarrow\mathcal{P}\) to this chart gives a foliated neighborhood of \((E,\nabla,l)\) whose leaves coincide with fibers of \(\mathcal{C}_{n}^{\nu}\rightarrow\mathcal{P}\). Varying the weight vector \(\mu\) in all possible chambers, these foliated neighborhoods fit together into a regular foliation \(\mathcal{F}_{Bun}\) on \(\mathcal{C}_{n}^{\nu}\).

The foliation \(\mathcal{F}_{Bun}\) of Proposition 6.1 plays an important role when the weight vector is in the interior of the polytope \(\Delta\). In fact, when \((E,l)\) is \(\mu\)-stable, the limit \(\lim_{c\to 0}c\cdot(E,\nabla,l)\) is \((E,0,l)\); hence, if \(\mathcal{U}_{\mu}\) denotes the Zariski open subset formed by connections \((E,\nabla,l)\) with \((E,l)\in Bun_{\mu}(\mathbb{P}^{1},\Lambda_{n},0)\), the decomposition given by the fibers of \(\pi_{\mu}\) coincides with \(\mathcal{F}_{Bun}\) when restricted to \(\mathcal{U}_{\mu}\). In particular, we have the following result.

**Proposition 6.2**.: _Assume that \(\mu\) lies in the stable zone, i.e. it is in the interior of \(\Delta\). If the foliation conjecture is true, that is, if the fibers of \(\pi_{\mu}\) fit into a regular foliation \(\mathcal{F}_{\mu}\), then \(\mathcal{F}_{\mu}=\mathcal{F}_{Bun}\)._

Proof.: By the discussion above, both foliations coincide on a nonempty Zariski open subset \(\mathcal{U}_{\mu}\), hence they must coincide everywhere.

### Variation with weights

In the next result we prove that, given \(n\geq 5\), there is a weight vector \(\mu\) such that the foliation conjecture [19] is false in the case \(\mathbb{P}^{1}\) minus \(n\) points.

**Proposition 6.3**.: _Let \(n\geq 5\). The foliation conjecture in the moduli space \(\mathcal{C}_{n}^{\nu}\) of logarithmic connections over the \(n\) punctured projective line is false when \(\mu=(\mu_{1},\ldots,\mu_{n})\), \(\mu_{n-2}=\frac{n-2}{n-1}\) and \(\mu_{i}=\frac{1}{n-1}\), \(\forall i\neq n-2\)._

Proof.: Up to an automorphism of \(\mathbb{P}^{1}\), we may assume \(t_{n-2}=0\), \(t_{n-1}=1\) and \(t_{n}=\infty\). By performing one elementary transformation over the parabolic point \(t_{n-2}\), we go to the democratic weight \(\mu^{\prime}=\left(\frac{1}{n-1},\ldots,\frac{1}{n-1}\right)\) and the determinant line bundle becomes odd. By [15, Proposition 3.7], the moduli space \(Bun_{\mu^{\prime}}(\mathbb{P}^{1},\Lambda_{n},-1)\) is isomorphic to \(\mathbb{P}^{n-3}\), which gives the same conclusion for \(Bun_{\mu}(\mathbb{P}^{1},\Lambda_{n},0)\).
Fibers of \(\pi_{\mu}\) over a point \((E,0,l)\) with \((E,l)\) in \(Bun_{\mu}(\mathbb{P}^{1},\Lambda_{n},0)\) agree with leaves of \(\mathcal{F}_{Bun}\). We now consider a \(\nu\)-flat parabolic vector bundle \((E,l)\) which does not belong to \(Bun_{\mu}(\mathbb{P}^{1},\Lambda_{n},0)\) and investigate the fiber of \(\pi_{\mu}\) over this point. Let us assume that \(E=\mathcal{O}_{\mathbb{P}^{1}}\oplus\mathcal{O}_{\mathbb{P}^{1}}\), with parabolic directions given by \[l_{t_{1}}=\begin{pmatrix}u\\ 1\end{pmatrix},l_{t_{2}}=\cdots=l_{t_{n-3}}=l_{0}=\begin{pmatrix}0\\ 1\end{pmatrix},l_{1}=\begin{pmatrix}1\\ 1\end{pmatrix},l_{\infty}=\begin{pmatrix}1\\ 0\end{pmatrix}.\] The parabolic structure is actually determined by \(u\in\mathbb{C}\), so we denote by \((E,l_{u})\) the corresponding parabolic vector bundle. Note that the embedding \(\mathcal{O}_{\mathbb{P}^{1}}\to E\) corresponding to the second factor is a destabilizing subbundle, which makes \((E,l_{u})\)\(\mu\)-unstable. Let us denote by \(\mathbb{C}_{u}^{n-3}\) the space of connections over \((E,l_{u})\). By [15, Section 5.1], this space is formed by connections \(\nabla=\nabla_{0}+a_{1}\theta_{1}+\cdots+a_{n-3}\theta_{n-3}\), \((a_{1},\ldots,a_{n-3})\in\mathbb{C}_{u}^{n-3}\), where \[\nabla_{0}= d +\left(\begin{array}{cc}-\nu_{0}&0\\ \rho&\nu_{0}\end{array}\right)\frac{dx}{x}+\left(\begin{array}{cc}-\nu_{1}- \rho&2\nu_{1}+\rho\\ -\rho&\nu_{1}+\rho\end{array}\right)\frac{dx}{x-1}+\left(\begin{array}{cc}- \nu_{t_{1}}&2\nu_{t_{1}}u\\ 0&\nu_{t_{1}}\end{array}\right)\frac{dx}{x-t_{1}}\] \[+ \sum_{i=2}^{n-3}\left(\begin{array}{cc}-\nu_{t_{i}}&0\\ 0&\nu_{t_{i}}\end{array}\right)\frac{dx}{x-t_{i}}\quad,\text{with}\quad\rho=- \sum_{i=1}^{n-3}\nu_{t_{i}}-\nu_{0}-\nu_{1}-\nu_{\infty}\] the Higgs fields are \[\theta_{1}=\left(\begin{array}{cc}0&0\\ 1-u&0\end{array}\right)\frac{dx}{x}+\left(\begin{array}{cc}u&-u\\ u&-u\end{array}\right)\frac{dx}{x-1}+\left(\begin{array}{cc}-u&u^{2}\\ -1&u\end{array}\right)\frac{dx}{x-t_{1}}\] and \[\theta_{i}=\left(\begin{array}{cc}0&0\\ 1&0\end{array}\right)\frac{dx}{x}+\left(\begin{array}{cc}0&0\\ -1&0\end{array}\right)\frac{dx}{x-t_{i}}\quad,i=2,\ldots,n-3.\] Then we can take the gauge transformation rescaling by \(c\) in the second component \[g_{c}=\left(\begin{array}{cc}1&0\\ 0&c\end{array}\right)\] to get \[\lim_{c\to 0}g_{c}(c\nabla)g_{c}^{-1}=\theta(a_{1})=\left(\begin{array}{cc}0& \beta\\ 0&0\end{array}\right)\] where \[\beta=(2\nu_{1}+\rho-a_{1}u)\frac{dx}{x-1}+(2\nu_{t_{1}}+a_{1}u^{2})\frac{dx} {x-t_{1}}.\] When \(c\) goes to \(0\), the parabolic structure projects to \((E,\mathbf{q})\) where \[q_{t_{1}}=q_{1}=q_{\infty}=\begin{pmatrix}1\\ 0\end{pmatrix},q_{t_{2}}=\cdots=q_{t_{n-3}}=q_{0}=\begin{pmatrix}0\\ 1\end{pmatrix}\] and the limit Higgs bundle \[(E,\theta(a_{1}),\mathbf{q})=\lim_{c\to 0}c\cdot(E,\nabla,l_{u})\] is stable with respect to the weight \(\mu\); indeed, the destabilizing subbundle \(\mathcal{O}_{\mathbb{P}^{1}}\to E\) given by the second factor is not invariant under \(\theta(a_{1})\). We are not able to eliminate the parameter \(a_{1}\) from \(\theta(a_{1})\) using automorphisms of \((E,\mathbf{q})\), so this computation shows that the leaf \(\mathbb{C}_{u}^{n-3}\) of \(\mathcal{F}_{Bun}\) is not contracted by \(\pi_{\mu}\). This implies that fibers of \(\pi_{\mu}\) and leaves of \(\mathcal{F}_{Bun}\) do not agree everywhere. In view of Proposition 6.2, we conclude that fibers of \(\pi_{\mu}\) do not fit into a regular foliation on \(\mathcal{C}_{n}^{\nu}\). 
Our next result shows that, in the case \(n=5\), \(\mathcal{F}_{Bun}\) can be realized as fibers of \(\pi_{\mu_{c}}\), for the central weight \(\mu_{c}=\left(\frac{1}{2},\frac{1}{2},\frac{1}{2},\frac{1}{2},\frac{1}{2}\right)\). **Theorem 6.4**.: _The foliation conjecture in the moduli space \(\mathcal{C}_{5}^{\nu}\) of logarithmic connections over the five punctured sphere is true when \(\mu_{c}=\left(\frac{1}{2},\frac{1}{2},\frac{1}{2},\frac{1}{2},\frac{1}{2}\right)\)._ Proof.: Let \((E,\nabla,l)\) be an element of \(\mathcal{C}_{5}^{\nu}\). It follows from [15, Corollary 3.3] that either \(E=\mathcal{O}_{\mathbb{P}^{1}}(1)\oplus\mathcal{O}_{\mathbb{P}^{1}}(-1)\) or \(E=\mathcal{O}_{\mathbb{P}^{1}}\oplus\mathcal{O}_{\mathbb{P}^{1}}\). The locus of fixed points of the \(\mathbb{C}^{*}\)-action on the moduli space \(\mathcal{H}\) of Higgs bundles is the union of \(\mathcal{S}\), corresponding to \((E,0,l)\) with \((E,l)\)\(\mu\)-stable, and the 16 Hodge bundles \(\Theta_{i}\), see Definition 4.4 and Theorem 4.5. A fiber of \(\pi_{\mu_{c}}\) over a point \((E,0,l)\) consists of a leaf of \(\mathcal{F}_{Bun}\), so it remains to consider the other 16 points. We will show that there are exactly 16 \(\mu_{c}\)-unstable \(\nu\)-flat parabolic vector bundles. Assuming that \((E,l)\) is \(\mu_{c}\)-unstable, there exists a destabilizing subbundle \(L\subset E\) satisfying \[-2\deg L-\frac{m}{2}+\frac{5-m}{2}<0\] where \(m\) is the number of parabolic directions lying in \(L\). This gives \(\deg L\in\{-1,0,1\}\). If \(\deg L=1\) then there is at least one parabolic direction in \(L\) and at least two parabolic directions outside \(L\), otherwise \((E,l)\) would be decomposable. Up to performing an elementary transformation over these two parabolic directions outside \(L\), we may assume that \(L\) has degree zero and \(E=\mathcal{O}_{\mathbb{P}^{1}}\oplus\mathcal{O}_{\mathbb{P}^{1}}\). The same reasoning applies to the case where \(L\) has degree \(-1\): here all parabolic directions must lie in \(L\), and we then apply an elementary transformation over two of them. Therefore we may assume that \(L\) has degree zero, \(E=\mathcal{O}_{\mathbb{P}^{1}}\oplus\mathcal{O}_{\mathbb{P}^{1}}\) and exactly three parabolic directions lie in \(L\) (more than three would make \((E,l)\) decomposable). We then arrive, up to elementary transformations, at the following case: \(E=\mathcal{O}_{\mathbb{P}^{1}}\oplus\mathcal{O}_{\mathbb{P}^{1}}\) and \[l_{0}=l_{\lambda}=l_{t}=\begin{pmatrix}0\\ 1\end{pmatrix},l_{1}=\begin{pmatrix}1\\ 1\end{pmatrix},l_{\infty}=\begin{pmatrix}1\\ 0\end{pmatrix}. \tag{6.1}\] This implies that there are exactly 16 \(\mu_{c}\)-unstable \(\nu\)-flat parabolic vector bundles; they all lie in the same orbit of the group \(\mathbf{El}\) of elementary transformations. 
The space of connections over the parabolic bundle (6.1) is formed by \(\nabla=\nabla_{0}+a_{1}\theta_{1}+a_{2}\theta_{2}\), \(a_{1},a_{2}\in\mathbb{C}\), where \[\nabla_{0}= d +\left(\begin{array}{cc}-\nu_{0}&0\\ \rho&\nu_{0}\end{array}\right)\frac{dx}{x}+\left(\begin{array}{cc}-\nu_{1}- \rho&2\nu_{1}+\rho\\ -\rho&\nu_{1}+\rho\end{array}\right)\frac{dx}{x-1}\] \[+ \left(\begin{array}{cc}-\nu_{\lambda}&0\\ 0&\nu_{\lambda}\end{array}\right)\frac{dx}{x-\lambda}+\left(\begin{array}{cc }-\nu_{t}&0\\ 0&\nu_{t}\end{array}\right)\frac{dx}{x-t}\quad,\text{with}\quad\rho=-\sum\nu_{i}\] the Higgs fields are \[\theta_{1}=\left(\begin{array}{cc}0&0\\ 1&0\end{array}\right)\frac{dx}{x}+\left(\begin{array}{cc}0&0\\ -1&0\end{array}\right)\frac{dx}{x-\lambda}\] and \[\theta_{2}=\left(\begin{array}{cc}0&0\\ 1&0\end{array}\right)\frac{dx}{x}+\left(\begin{array}{cc}0&0\\ -1&0\end{array}\right)\frac{dx}{x-t}.\] Then the gauge transformation \[g_{c}=\left(\begin{array}{cc}1&0\\ 0&c\end{array}\right)\] gives \[\lim_{c\to 0}g_{c}(c\nabla)g_{c}^{-1}=\theta=\left(\begin{array}{cc}0& \beta\\ 0&0\end{array}\right)\] where \[\beta=(2\nu_{1}+\rho)\frac{dx}{x-1}.\] Note that using an automorphism of the projected parabolic vector bundle, we can eliminate the constant \(2\nu_{1}+\rho\) from \(\beta\). Indeed, when \(c\) goes to \(0\), the parabolic structure projects to \((E,\mathbf{q})\) where \[q_{0}=q_{\lambda}=q_{t}=\begin{pmatrix}0\\ 1\end{pmatrix},q_{1}=q_{\infty}=\begin{pmatrix}1\\ 0\end{pmatrix}.\] The conclusion is that the limit Higgs bundle \(\lim_{c\to 0}c\cdot(E,\nabla,l)\) is one of the 16 Hodge bundles. Therefore, any fiber of \(\pi_{\mu_{c}}\) coincides with a leaf of \(\mathcal{F}_{Bun}\).
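The scaling limit used above is simple enough to be verified by computer algebra. The following SymPy sketch (a minimal illustration of ours, not part of the original argument) takes the connection matrix of the normal form just described, with the accessory parameters \(a_{1},a_{2}\) kept symbolic, conjugates \(c\nabla\) by \(g_{c}=\mathrm{diag}(1,c)\) and lets \(c\to 0\): only the upper-right entry survives, reproducing \(\beta=(2\nu_{1}+\rho)\frac{dx}{x-1}\) independently of \(a_{1},a_{2}\).

```python
import sympy as sp

# Symbols: coordinate, scaling parameter, pole positions and local exponents
x, c, lam, t = sp.symbols('x c lambda t')
nu0, nu1, nulam, nut, nuinf, a1, a2 = sp.symbols('nu0 nu1 nu_lambda nu_t nu_inf a1 a2')
rho = -(nu0 + nu1 + nulam + nut + nuinf)

# dx-coefficient of nabla = nabla_0 + a1*theta1 + a2*theta2 over the bundle (6.1)
nabla0 = (sp.Matrix([[-nu0, 0], [rho, nu0]]) / x
          + sp.Matrix([[-nu1 - rho, 2*nu1 + rho], [-rho, nu1 + rho]]) / (x - 1)
          + sp.Matrix([[-nulam, 0], [0, nulam]]) / (x - lam)
          + sp.Matrix([[-nut, 0], [0, nut]]) / (x - t))
theta1 = sp.Matrix([[0, 0], [1, 0]]) / x + sp.Matrix([[0, 0], [-1, 0]]) / (x - lam)
theta2 = sp.Matrix([[0, 0], [1, 0]]) / x + sp.Matrix([[0, 0], [-1, 0]]) / (x - t)
M = nabla0 + a1 * theta1 + a2 * theta2

# g_c = diag(1, c) is constant, so it acts on c*M by plain conjugation
g = sp.diag(1, c)
limit_matrix = (g * (c * M) * g.inv()).applyfunc(lambda e: sp.limit(sp.simplify(e), c, 0))

print(limit_matrix)
# The (1,2) entry equals (2*nu1 + rho)/(x - 1) with rho = -(nu0 + nu1 + nu_lambda + nu_t + nu_inf);
# all other entries vanish, and the accessory parameters a1, a2 drop out of the limit.
```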
2307.11824
Randomized semi-quantum matrix processing
We present a hybrid quantum-classical framework for simulating generic matrix functions more amenable to early fault-tolerant quantum hardware than standard quantum singular-value transformations. The method is based on randomization over the Chebyshev approximation of the target function while keeping the matrix oracle quantum, and is assisted by a variant of the Hadamard test that removes the need for post-selection. The resulting statistical overhead is similar to the fully quantum case and does not incur any circuit depth degradation. On the contrary, the average circuit depth is shown to get smaller, yielding equivalent reductions in noise sensitivity, as explicitly shown for depolarizing noise and coherent errors. We apply our technique to partition-function estimation, linear system solvers, and ground-state energy estimation. For these cases, we prove advantages on average depths, including quadratic speed-ups on costly parameters and even the removal of the approximation-error dependence.
Allan Tosta, Thais de Lima Silva, Giancarlo Camilo, Leandro Aolita
2023-07-21T18:00:28Z
http://arxiv.org/abs/2307.11824v3
# Randomized semi-quantum matrix processing ###### Abstract Quantum computers have the potential to speed up important matrix-arithmetic tasks. A prominent framework for that is the quantum singular-value transformation (QSVT) formalism, which uses Chebyshev approximations and coherent access to the input matrix via a unitary block encoding to design a target matrix function. Nonetheless, physical implementations for useful end-user applications require large-scale fault-tolerant quantum computers. Here, we present a hybrid quantum-classical framework for Monte-Carlo simulation of generic matrix functions more amenable to early fault-tolerant quantum hardware. Borrowing from the ideas of QSVT, we randomize over the Chebyshev polynomials while keeping the matrix oracle quantum. The method is assisted by a variant of the Hadamard test that removes the need for post-selection. As a result, it features a statistical overhead similar to the fully quantum case of standard QSVT and does not incur any circuit depth degradation. On the contrary, the average circuit depth is shown to get smaller, yielding equivalent reductions in noise sensitivity, as we explicitly show for depolarizing noise and coherent errors. We apply our technique to four specific use cases: partition-function estimation via quantum Markov-chain Monte Carlo and via imaginary-time evolution; end-to-end linear system solvers; and ground-state energy estimation. For these cases, we prove advantages in average depth, including quadratic speed-ups on costly parameters and even the removal of the approximation-error dependence. All in all, our framework provides a pathway towards early fault-tolerant quantum linear algebra applications. ## I Introduction Faster algorithms for linear algebra are a major promise of quantum computation, holding the potential for precious runtime speed-ups over classical methods. A modern, unified framework for such algorithms is given by the quantum signal processing (QSP) [1; 2] and, more generally, quantum singular-value transformation (QSVT) [3] formalisms. These are powerful techniques to manipulate a matrix, coherently given by a quantum oracle, via polynomial transformations on its eigenvalues and singular values, respectively. The class of matrix arithmetic attained is remarkably broad, encompassing primitives as diverse as Hamiltonian simulation, matrix inversion, ground-state energy estimation, and Gibbs-state sampling [4]. Moreover, the framework often offers the state-of-the-art in asymptotic query complexities (i.e. number of oracle calls), in some cases matching known complexity lower bounds. Nevertheless, the experimental requirements for full implementations are prohibitive for current devices, and it is not clear if the framework will be useful in practice before large-scale fault-tolerant quantum computers appear. This has triggered a quest for _early fault-tolerant algorithms_ for matrix processing that allow one to trade performance for nearer-term feasibility in a controlled way, i.e. with provable runtime guarantees [5; 6; 7; 8; 9; 10; 11; 12; 13; 14]. Particularly promising are randomized hybrid quantum-classical schemes to statistically simulate a matrix function via quantum implementations of more elementary ones [7; 8; 9; 10; 11; 12]. For instance, this has been applied to the Heaviside step function \(\theta(H)\) of a Hamiltonian \(H\), which allows for eigenvalue thresholding, a practical technique for Heisenberg-limited spectral analysis [7]. 
Two input access models have been considered there: quantum oracles as a controlled unitary evolution of \(H\)[7; 8; 13] and classical ones given by a decomposition of \(H\) as a linear combination of Pauli operators [10; 11; 12; 9]. In the former, one Monte-Carlo simulates the Fourier series of \(\theta(H)\) by randomly sampling its harmonics. In the latter - in an additional level of randomization - one also probabilistically samples the Pauli terms from the linear combination. Curiously, however, randomized quantum algorithms for matrix processing have been little explored beyond the specific case of the Heaviside function. Ref. [11] put forward a randomized, qubit-efficient technique for Fourier-based QSP [6; 13] for generic functions. However, the additional level of randomization can detrimentally affect the circuit depth per run, as compared to the case with coherent oracles. On the other hand, in the quantum-oracle setting, the randomized algorithms above have focused mainly on controlled unitary evolution as the input access model. This is convenient in specific cases where \(H\) can be analogically implemented. However, it leaves aside the powerful class of _block-encoding oracles_, i.e. unitary matrices with the input matrix as one of its blocks [2]. Besides having a broader scope of applicability (including non-Hermitean matrices), such oracle types are also a more natural choice for digital setups. Moreover, randomized quantum algorithms have so far not addressed Chebyshev polynomials, the quintessential basis functions for approximation theory [15], which often attain better accuracy than Fourier series [16]. Chebyshev polynomials, together with block-encoding oracles, provide the most sophisticated and general arena for quantum matrix arithmetic [1; 2; 3; 4]. Here, we fill in this gap. We derive a semi-quantum algorithm for Monte-Carlo simulations of QSVT with provably better circuit complexities than fully-quantum schemes as well as notable advantages in terms of experimental feasibility. Our results are summarized next. ## II Summary of our contributions Our method estimates state amplitudes and expectation values involving a generic matrix function \(f(A)\) leveraging three main ingredients: \(i)\) it samples each component of a Chebyshev series for \(f\) with a probability proportional to its coefficient in the series; \(ii)\) it assumes coherent access to \(A\) via a block-encoding oracle; and \(iii)\)\(f(A)\) is automatically extracted from its block-encoding without post-selection, using a Hadamard test. The combination of \(i)\) and \(ii)\) leaves untouched the maximal query complexity \(k\) per run native from the Chebyshev expansion. In addition, the statistical overhead we pay for end-user estimations scales only with the \(l_{1}\)-norm of the Chebyshev coefficients. For the use cases we consider, this turns out to be similar (at worst up to logarithmic factors) to the operator norm of \(f(A)\), which would govern the statistical overhead if we used fully-quantum (i.e. standard) QSVT. That is, our scheme does not incur any significant degradation with respect to the fully-quantum case either in runtime or circuit depth. On the contrary, the average query complexity can be significantly smaller than \(k\). We prove interesting speed-ups of the former over the latter for practical use cases. 
These speed-ups translate directly into equivalent reductions in noise sensitivity: for simple models such as depolarization or coherent errors in the quantum oracle, we show that the estimation inaccuracy caused by noise scales with the average query depth. In comparison, it scales with the maximal depth in standard QSVT implementations. Importantly, we implement each sampled Chebyshev polynomial with a simple sequence of queries to the oracle using qubitization; no QSP pulses are required throughout. Finally, \(iii)\) circumvents the need for expensive repeat-until-success schemes or quantum amplitude amplification. That is, no statistical run is wasted, and no overhead in circuit depth is incurred. The only price paid is the need for the control qubit in the Hadamard test, but fully quantum implementations would require yet another ancilla controlling everything else (due to the QSP pulses). All this renders our hybrid approach more experimentally friendly than coherent QSVT. As use cases, we benchmark our framework on four end-user applications: partition-function estimation of classical Hamiltonians via quantum Markov-chain Monte Carlo (MCMC); partition-function estimation of quantum Hamiltonians via quantum imaginary-time evolution (QITE); linear system solvers (LSSs); and ground-state energy estimation (GSEE). The maximal and expected query depths per run as well as the total expected runtime (taking into account sample complexity) are displayed in Table **I**, in Sec. IV.4. In all cases, we systematically obtain the following advantages (both per run and in total) of expected versus maximal query complexities. For MCMC, we prove a quadratic speed-up on a factor \(\mathcal{O}(\log(Z_{\beta}\,e^{\beta}/\epsilon_{\mathrm{r}}))\), where \(Z_{\beta}\) is the partition function to estimate, at inverse temperature \(\beta\), and \(\epsilon_{\mathrm{r}}\) is the tolerated relative error. For QITE, we remove a factor \(\mathcal{O}(\log(D\,e^{\beta}/Z_{\beta}\,\epsilon_{\mathrm{r}}))\) from the scaling, where \(D\) is the system dimension. For LSSs we consider two sub-cases: estimation of an entry of the (normalized) solution vector and of the expectation value of an observable \(O\) on it. We prove quadratic speed-ups on factors \(\mathcal{O}\big{(}\log(\kappa/\epsilon)\big{)}\) and \(\mathcal{O}\big{(}\log(\kappa^{2}\,\|O\|/\epsilon)\big{)}\) for the first and second sub-cases, respectively, where \(\|O\|\) is the operator norm of \(O\), \(\kappa\) is the condition number of the matrix, and \(\epsilon\) the tolerated additive error. Remarkably, this places our query depth at an intermediate position between that of the best known Chebyshev-based method [17] and the optimal one in general [18]. In turn, compared to the results obtained in [11] via full randomization, our scaling is one power of \(\kappa\) superior. Finally, for GSEE, we prove a speed-up on a factor that depends on the overlap \(\eta\) between the probe state and the ground state: the average query depth is \(\mathcal{O}\big{(}\frac{1}{\xi}\sqrt{\log(1/\eta)}/\log(1/\xi)\big{)}\), whereas the maximal query depth is \(\mathcal{O}\big{(}\frac{1}{\xi}\log(1/\eta)\big{)}\), with \(\xi\) the additive error in the energy estimate. Our method reduces the experimental requirements for early fault-tolerant quantum linear algebra applications. ## III Preliminaries We consider the basic setup of Quantum Singular Value Transformation (QSVT) [3; 4]. 
This is a powerful technique for synthesizing polynomial functions of a linear operator embedded in a block of a unitary matrix, via polynomial transformations on its singular values. Combined with approximation theory [19], this leads to state-of-the-art query complexities and an elegant unifying structure for a variety of quantum algorithms of interest. For simplicity of the presentation, in the main text we focus explicitly on the case of Hermitian matrices. There, QSVT reduces to the simpler setup of Quantum Signal Processing (QSP) [1; 2], describing eigenvalue transformations. The extension of our algorithms to QSVT for generic matrices is straightforward and is left for App. **G**. Throughout the paper, we adopt the short-hand notation \([l]:=\{0,\ldots,l-1\}\) for any \(l\in\mathbb{N}\). The basic input taken by QSP is a block-encoding \(U_{A}\) of the Hermitian operator \(A\) of interest (the _signal_). A block-encoding is a unitary acting on \(\mathcal{H}_{sa}:=\mathcal{H}_{s}\otimes\mathcal{H}_{a}\), where \(\mathcal{H}_{s}\) is the system Hilbert space where \(A\) acts and \(\mathcal{H}_{a}\) is an ancillary Hilbert space (with dimensions \(D\) and \(D_{a}\), respectively), satisfying \[\big{(}\left.\left\langle 0\right|_{a}\otimes\mathds{1}_{s}\right)U_{A}\left( \left.\left|0\right\rangle_{a}\otimes\mathds{1}_{s}\right)=A \tag{1}\] for some suitable state \(\left|0\right\rangle_{a}\in\mathcal{H}_{a}\) (here \(\mathds{1}_{s}\) is the identity operator in \(\mathcal{H}_{s}\)). Designing such an oracle for arbitrary \(A\) is a non-trivial task [20], but efficient block-encoding schemes are known in cases where some special structure is present, e.g., when \(A\) is sparse or expressible as a linear combination of unitaries [2; 3; 21]. In particular, we will need the following particular form of \(U_{A}\) that makes it amenable for dealing with Chebyshev polynomials. **Definition 1** (Qubitized block-encoding oracle).: _Let \(A\) be a Hermitian matrix on \(\mathcal{H}_{s}\) with spectral norm \(\left\|A\right\|\leq 1\), eigenvalues \(\{\lambda_{\gamma}\}_{\gamma\in[D]}\), and eigenstates \(\{\left|\lambda\right\rangle_{s}\}\). A unitary \(U_{A}\) acting on \(\mathcal{H}_{sa}\) is called a (exact) qubitized block-encoding of \(A\) if it has the form_ \[U_{A}=\bigoplus_{\gamma\in[D]}e^{-i\,\vartheta_{\gamma}\,Y_{\gamma}}\,, \tag{2}\] _where \(\vartheta_{\gamma}:=\arccos(\lambda_{\gamma})\) and \(Y_{\gamma}\) is the second Pauli matrix acting on the two-dimensional subspace spanned by \(\left\{\,\left|0\right\rangle_{a}\otimes\left|\lambda_{\gamma}\right\rangle_{s },\left|\,\left|\perp_{\lambda_{\gamma}}\right\rangle_{sa}\right\}\) with \({}_{sa}\!\left\langle\perp_{\lambda_{\gamma}}\right|\left(\,\left|0\right\rangle _{a}\otimes\left|\lambda_{\gamma}\right\rangle_{s}\right)=0\). A qubitized oracle of the form (2) can be constructed from any other block-encoding \(U_{A}^{\prime}\) of \(A\) using at most one query to \(U_{A}^{\prime}\) and \({U_{A}^{\prime}}^{-1}\), at most one additional ancillary qubit, and \(\mathcal{O}(\log(D_{a}))\) quantum gates [2]._ Standard QSP takes as input the qubitized oracle \(U_{A}\) and transforms it into (a block-encoding of) a polynomial function \(\tilde{f}(A)\). With the help of function approximation theory [15], this allows the approximate implementation of generic non-polynomial functions \(f(A)\). 
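As a concrete sanity check (ours, not taken from [2]), the following NumPy sketch builds one explicit single-ancilla-qubit realization of a qubitized block-encoding of a random Hermitian \(A\) with \(\|A\|\leq 1\) and verifies that the \(\left|0\right\rangle_{a}\)-block of \(U_{A}^{j}\) is exactly the Chebyshev polynomial \(\mathcal{T}_{j}(A)\), the property formalized as Lemma 2 below.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 8
# Random Hermitian A with spectral norm <= 1
B = rng.standard_normal((D, D)) + 1j * rng.standard_normal((D, D))
A = (B + B.conj().T) / 2
A /= 1.1 * np.linalg.norm(A, 2)

# One explicit qubitized block-encoding with a single ancilla qubit (cf. Definition 1):
# U_A = sum_gamma exp(-i * arccos(lambda_gamma) * Y)  (x)  |lambda_gamma><lambda_gamma|
lam, V = np.linalg.eigh(A)
theta = np.arccos(lam)
U = np.zeros((2 * D, 2 * D), dtype=complex)
for th, v in zip(theta, V.T):
    proj = np.outer(v, v.conj())                     # projector onto the eigenvector
    rot = np.array([[np.cos(th), -np.sin(th)],
                    [np.sin(th),  np.cos(th)]])      # exp(-i*th*Y) acting on the ancilla
    U += np.kron(rot, proj)

# Check Eq. (3): the |0><0| ancilla block of U_A^j equals T_j(A)
for j in range(6):
    block = np.linalg.matrix_power(U, j)[:D, :D]     # (<0|_a (x) 1_s) U_A^j (|0>_a (x) 1_s)
    Tj = V @ np.diag(np.cos(j * theta)) @ V.conj().T
    print(j, np.allclose(block, Tj))                 # prints True for every j
```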
The algorithm complexity is measured by the number of queries to \(U_{A}\), which allows for rigorous quantitative statements agnostic to details of \(A\) or to hardware-specific circuit compilations. For our purposes, only a simple QSP result will be needed, namely the observation [2] that repeated applications of \(U_{A}\) give rise to Chebyshev polynomials of \(A\) (see App. **A** for a proof). **Lemma 2** (Block encoding of Chebyshev polynomials).: _Let \(U_{A}\) be a qubitized block-encoding of \(A\). Then_ \[\left(\,\left\langle 0\right|_{a}\otimes\mathds{1}_{s}\right)U_{A}^{j}\left(\, \left|0\right\rangle_{a}\otimes\mathds{1}_{s}\right)=\mathcal{T}_{j}(A)\,, \tag{3}\] _for \(j\in\mathbb{N}\), where \(\mathcal{T}_{j}(\cdot)\) is the \(j\)-th order Chebyshev polynomial of the first kind._ We are interested in a truncated Chebyshev series \[\tilde{f}(x)=\sum_{j=0}^{k}a_{j}\mathcal{T}_{j}(x) \tag{4}\] providing a \(\nu\)-approximation to the target real-valued function \(f:[-1,1]\to\mathbb{R}\), that is, \(\max_{x\in[-1,1]}\left|f(x)-\tilde{f}(x)\right|\leq\nu\). The Chebyshev polynomials \(\mathcal{T}_{j}\) form a key basis for function approximation, often leading to near-optimal approximation errors [15]. In particular, unless the target function is periodic and smooth, they tend to outperform Fourier approximations [16]. The case of complex-valued functions can be treated similarly by splitting it into its real and imaginary parts. The truncation order \(k\) is controlled by the desired accuracy \(\nu\) in a problem-specific way (see Sec. IV.4 for explicit examples). We denote by \(\boldsymbol{a}:=\left\{a_{0},\ldots,a_{k}\right\}\) the vector of Chebyshev coefficients of \(\tilde{f}\) and by \(\left\|\boldsymbol{a}\right\|_{1}:=\sum_{j=0}^{k}\left|a_{j}\right|\) its \(\ell_{1}\)-norm. ## IV Results We are now in a position to state our main results. First, we set up explicitly the two problems of interest and then proceed to describe our randomized semi-quantum algorithm to solve each one of them, proving correctness, runtime, and performing an error-robustness analysis. We conclude by applying our general framework to a number of exemplary use cases of interest. ### Problem statement We consider the following two concrete problems (throughout the paper we will use superscripts \({}^{(1)}\) or \({}^{(2)}\) on quantities referring to Problems 1 or 2, respectively): **Problem 1** (Transformed vector amplitudes).: _Given access to state preparation unitaries \(U_{\phi}\) and \(U_{\psi}\) such that \(U_{\psi}\left|0\right\rangle=\left|\psi\right\rangle\), \(U_{\phi}\left|0\right\rangle=\left|\phi\right\rangle\), a Hermitean matrix \(A\), and a real-valued function \(f\), obtain an estimate of_ \[z^{(1)}=\,\left\langle\phi\middle|f(A)\middle|\psi\right\rangle \tag{5}\] _to additive precision \(\epsilon\) with failure probability at most \(\delta\)._ This class of problems is relevant for estimating the overlap between a linearly transformed state and another state of interest. This is the case, e.g., in linear system solving, where one is interested in the \(i\)-th computational basis component of a quantum state of the form \(A^{-1}\left|\boldsymbol{b}\right\rangle\) encoding the solution to the linear system (see Sec. IV.4.2 for details). The unitary \(U_{\phi}\) preparing the computational-basis state \(\left|i\right\rangle\), in that case, is remarkably simple, given by a sequence of bit flips. 
**Problem 2** (Transformed observable expectation values).: _Given access to a state preparation \(\varrho\), a Hermitian matrix \(A\), an observable \(O\), and a real-valued function \(f\), obtain an estimate of_ \[z^{(2)}=\operatorname{Tr}\!\left[O\,f(A)\,\varrho\,f(A)^{\dagger}\right] \tag{6}\] _to additive precision \(\epsilon\) with failure probability at most \(\delta\)._ This is of relevance, e.g., when \(A=H\) is a Hamiltonian, to estimate the partition function corresponding to \(H\), as discussed below in Sec. IV.4.1. We present randomized hybrid classical-quantum algorithms for these problems using Chebyshev-polynomial approximations of \(f\) and coherent access to a block-encoding of \(A\). Similar problems have been addressed in [11] but using Fourier approximations and randomizing also over a classical description of \(A\) in the Pauli basis. ### Randomized semi-quantum matrix processing Our framework is based on the Chebyshev approximation \(\tilde{f}\) of the function \(f\) and a modified Hadamard test involving the qubitized block-encoding oracle \(U_{A}\). The idea is to statistically simulate the coherent QSP algorithm using a hybrid classical/quantum procedure based on randomly choosing \(j\in[k+1]\) according to its importance for Eq. (4) and then running a Hadamard test involving the block encoding \(U_{A}^{j}\) of \(\mathcal{T}_{j}(A)\). Pseudo-codes for the algorithms are presented in Fig. **1. a)** and **1. b)** for Problems 1 and 2, respectively. In both cases, the Hadamard test is the only quantum sub-routine.

Figure 1: Alg. 1 in panel **a)** solves Problem 1, whereas Alg. 2 in panel **b)** solves Problem 2. **a-b)** The algorithms receive as inputs: \(i)\) a qubitized block-encoding \(U_{A}\) of \(A\); \(ii)\) the vector \(\mathbf{a}\) of Chebyshev coefficients defining the polynomial approximation \(\tilde{f}\) to the target function \(f\); \(iii)\) state preparation unitaries \(U_{\phi}\) and \(U_{\psi}\) (for Alg. 1), or the state \(\varrho\) and the observable \(O\) (for Alg. 2); \(iv)\) the tolerated error \(\epsilon\) and failure probability \(\delta\) for the statistical estimation. The algorithm repeats a number \(\frac{2}{P}S^{(P)}\) of times two basic sampling steps. The first step is to classically sample a Chebyshev polynomial degree \(j_{\alpha}\) with probability \(p(j_{\alpha})=|a_{j_{\alpha}}|/\|\mathbf{a}\|_{1}\). The second step – the only quantum subroutine – is a Hadamard test (including a measurement of \(O\), for Alg. 2) containing \(j_{\alpha}\) successive queries to the controlled version of \(U_{A}\) (plus another sequence of \(l_{\alpha}\) queries but with the control negated and a different oracle-ancilla register for Alg. 2). Finally, the average over all the measurement outcomes gives the statistical estimate of the quantity of interest \(z^{(P)}\), for \(P=1\) or \(2\). Interestingly, the Hadamard test automatically extracts the correct block of \(U_{A}\), which relaxes the need for post-selection on the oracle ancillae. Therefore, every experimental run contributes to the statistics (i.e., no measurement shot is wasted). **c)** Histograms of number of times (shots) a Chebyshev polynomial degree \(j\) is drawn out of 1000 samples, for the four use cases described in Sec. **IV.4**. The vertical lines show the maximal Chebyshev degree \(k\) (purple) and the average degree \(\mathbb{E}[j]\) (red). Importantly, for this figure, we do not estimate \(k\) analytically using approximation theory. The values of \(k\) plotted are numerically obtained as the minimum degree of \(\tilde{f}\) such that the target error \(\nu\) is attained. The parameters used are: \(\nu=10^{-2}\) (all examples), \(\beta=100\) (exponential function), \(t=200\) (monomial), \(\kappa=8\) (inverse function), \(\mu=20\) (step function). In all cases, we observe a significant reduction in query complexity. This translates in practice into shallower circuits and hence less accumulated noise (see Sec. **IV.3**).

The total number of statistical runs will be \(\frac{2}{P}\,S^{(P)}\), with \(P=1\) or \(2\), where \(S^{(P)}\) will be given in Eqs. (11) below. The factor \(\frac{2}{P}\) is a subtle difference between Algorithms 1 and 2 coming from the fact that the target quantity is a complex-valued amplitude in the former case, while in the latter it is a real number. This implies that two different types of Hadamard tests (each with \(S^{(1)}\) shots) are needed to estimate the real and imaginary parts of \(z^{(1)}\), while \(z^{(2)}\) requires a single one. More technically, the procedure goes as follows. First, for every \(\alpha\in[\frac{2}{P}\,S^{(P)}]\) run the following two steps: \(i)\) Classical subroutine: sample a Chebyshev polynomial degree \(j_{\alpha}\in[k+1]\) (and also \(l_{\alpha}\) for \(P=2\)) from a probability distribution weighted by the coefficients \(\mathbf{a}\) of \(\tilde{f}\), defined by \[p(j)=\frac{|a_{j}|}{\left\|\boldsymbol{a}\right\|_{1}},\quad\text{ for all }j\in[k+1]\,. \tag{7}\] This has classical runtime \(\tilde{\mathcal{O}}(k)\). \(ii)\) Quantum subroutine: if \(P=1\), run the Hadamard test in Fig. **1 a)** with \(B_{\alpha}=\mathds{1}\) for \(\alpha<S^{(1)}\) or \(B_{\alpha}=S^{\dagger}:=|0\rangle\!\langle 0|-i\,|1\rangle\!\langle 1|\) for \(\alpha\geq S^{(1)}\) and use the resulting random bit \(b_{\alpha}^{(1)}\in\{-1,1\}\) to record a sample of the variable \[\tilde{z}_{\alpha}^{(1)}:=\left\|\boldsymbol{a}\right\|_{1}\,\text{sgn}(a_{j_ {\alpha}})\,b_{\alpha}^{(1)}\,. \tag{8}\] If \(P=2\), in turn, run the test in Fig. **1 b)** to get as outcomes a random bit \(b_{\alpha}^{(2)}\in\{-1,1\}\) and a random number \(\omega_{\alpha}\in\{o_{m}\}_{m\in[D]}\) where \(o_{m}\) is the \(m\)-th eigenvalue of \(O\), and use this to record a sample of \[\tilde{z}_{\alpha}^{(2)}:=\left\|\boldsymbol{a}\right\|_{1}^{2}\,\text{sgn}(a _{j_{\alpha}})\,\text{sgn}(a_{l_{\alpha}})\,b_{\alpha}^{(2)}\,\omega_{\alpha} \enspace. \tag{9}\] Then, in a final classical step, obtain the desired estimate \(\tilde{z}^{(P)}\) by computing the empirical mean over all the recorded samples as follows \[\tilde{z}^{(1)} =\frac{1}{S^{(1)}}\sum_{\alpha=0}^{S^{(1)}-1}\left(\tilde{z}_{ \alpha}^{(1)}+i\,\tilde{z}_{\alpha+S^{(1)}}^{(1)}\right), \tag{10a}\] \[\tilde{z}^{(2)} =\frac{1}{S^{(2)}}\sum_{\alpha=0}^{S^{(2)}-1}\tilde{z}_{\alpha}^ {(2)}\,. \tag{10b}\] The following two theorems respectively prove the correctness of the estimator and establish the complexity of the algorithm. A simple but crucial auxiliary result for the correctness is the observation that the Hadamard test statistics (i.e. the expectation value of \(b_{\alpha}^{(P)}\)) depends only on the correct block of \(U_{A}^{j}\), removing the need of post-selection. With this, in App. **C**, we prove the following. 
**Theorem 3** (Correctness of the estimator).: _The empirical means \(\tilde{z}^{(1)}\) and \(\tilde{z}^{(2)}\) are unbiased estimators of \(\langle\phi|\tilde{f}(A)|\psi\rangle\) and \(\operatorname{Tr}\!\left[O\,\tilde{f}(A)\,\varrho\,\tilde{f}(A)^{\dagger}\right]\), respectively._ Importantly, since \(\tilde{f}\) is a \(\nu\)-approximation to \(f\), the obtained \(\tilde{z}^{(P)}\) are actually biased estimators of the ultimate quantities of interest \(z^{(P)}\) in Eqs. (5) and (6). Such biases are always present in quantum algorithms based on approximate matrix functions, including the fully-coherent schemes for QSP [1; 2] and QSVT [3; 4]. Nevertheless, they can be made arbitrarily small in a tunable manner by increasing the truncation order \(k\) in Eq. (4). Here, it is convenient to set \(k\) so that \(\nu^{(P)}\leq\epsilon/2\), where \(\nu^{(1)}:=\nu\) and \(\nu^{(2)}:=\nu\left(2\left\|f(A)\right\|\,\|O\|+\nu\right)\). This limits the approximation error in Eqs. (5) or (6) to at most \(\epsilon/2\). In addition, demanding the statistical error to be also \(\epsilon/2\), leads to (see App. **D**) the following end-to-end sample and oracle-query complexities for the algorithm. **Theorem 4** (Complexity of the estimation).: _Let \(\epsilon>0\) and \(\delta>0\) be respectively the tolerated additive error and failure probability; let \(\boldsymbol{a}\) be the vector of coefficients in Eq. (4) and \(\nu^{(P)}\leq\epsilon/2\) the error in \(z^{(P)}\) from approximating \(f\) with \(\tilde{f}\). Then, if the number of samples is at least_ \[S^{(1)}=\frac{16\left\|\boldsymbol{a}\right\|_{1}^{2}}{\epsilon^{2}}\log\frac{4}{\delta}\,, \tag{11a}\] \[S^{(2)}=\frac{8\left\|O\right\|^{2}\left\|\boldsymbol{a}\right\|_{1}^{4}}{\epsilon^{2}}\log\frac{2}{\delta}\,, \tag{11b}\] _Eqs. (10) give an \(\epsilon\)-precise estimate of \(z^{(P)}\) with confidence \(1-\delta\). Moreover, the total expected runtime is \(Q^{(P)}:=2\,\mathbb{E}[j]\,S^{(P)}\), where \(\mathbb{E}[j]:=\sum_{j=0}^{k}j\,p(j)\)._ A remarkable consequence of this theorem is that the expected number of queries per statistical run is \(P\times\mathbb{E}[j]\). Instead, if we used standard QSVT (together with a similar Hadamard test to avoid post-selection), each statistical run would take \(P\times k\) queries (and an extra ancillary qubit coherently controlling everything else would be required). As shown in Fig. **1 c)**, \(\mathbb{E}[j]\) can be significantly smaller than \(k\) in practice. In fact, in Sec. **IV D**, we prove scaling advantages of \(\mathbb{E}[j]\) over \(k\). These query-complexity advantages translate directly into reductions in circuit depth and, hence, also in noise sensitivity (see next sub-section). As for sample complexity, the statistical overhead of our semi-quantum algorithms scales with \(\left\|\boldsymbol{a}\right\|_{1}\), while that of fully-quantum ones would have a similar scaling with \(\left\|f(A)\right\|\), due to the required normalization for block encoding. Interestingly, in all the use cases analyzed, \(\left\|\boldsymbol{a}\right\|_{1}\) and \(\left\|f(A)\right\|\) differ at most by a logarithmic factor. 
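The estimator of Alg. 1 is easy to emulate classically on small instances. The following NumPy sketch (ours; the instance, the target function \(f(x)=e^{-\beta x}\) and all parameter values are ad hoc choices) samples degrees \(j\sim p(j)\) and replaces each Hadamard-test bit by a \(\pm 1\) variable with the exact mean \(\langle\phi|\mathcal{T}_{j}(A)|\psi\rangle\); in agreement with Theorem 3, the empirical mean reproduces \(\langle\phi|\tilde{f}(A)|\psi\rangle\) up to a statistical error controlled by \(\left\|\boldsymbol{a}\right\|_{1}\), as quantified in Theorem 4.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

rng = np.random.default_rng(1)
D, beta, k, S = 8, 2.0, 30, 200_000

# Toy instance: random real symmetric A with ||A|| <= 1 and random unit vectors phi, psi
B = rng.standard_normal((D, D))
A = (B + B.T) / 2
A /= 1.1 * np.linalg.norm(A, 2)
psi = rng.standard_normal(D); psi /= np.linalg.norm(psi)
phi = rng.standard_normal(D); phi /= np.linalg.norm(phi)

# Chebyshev coefficients a_j of the degree-k approximation (Eq. (4)) and weights p(j) (Eq. (7))
a = C.chebinterpolate(lambda x: np.exp(-beta * x), k)
l1 = np.abs(a).sum()
p = np.abs(a) / l1

# Exact <phi|T_j(A)|psi>, used below as the mean of the simulated Hadamard-test bit
lam, V = np.linalg.eigh(A)
Tj = np.array([phi @ V @ np.diag(np.cos(j * np.arccos(lam))) @ V.T @ psi for j in range(k + 1)])
target = phi @ V @ np.diag(C.chebval(lam, a)) @ V.T @ psi    # exact <phi| f~(A) |psi>

# Algorithm 1 (real part): sample j ~ p(j), draw b = +-1 with mean <phi|T_j(A)|psi>, average Eq. (8)
js = rng.choice(k + 1, size=S, p=p)
b = np.where(rng.random(S) < (1 + Tj[js]) / 2, 1.0, -1.0)
estimate = np.mean(l1 * np.sign(a[js]) * b)

print(f"exact <phi|f~(A)|psi> = {target:+.4f}")
print(f"Alg. 1 estimate       = {estimate:+.4f}   ({S} shots)")
```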
Finally, another appealing feature is that our approach relaxes the need to compute the QSP/QSVT angles, which is currently tackled with an extra classical pre-processing stage of runtime \(\mathcal{O}(\text{poly}(k))\)[1; 2; 3; 4]. We emphasize that here we have assumed Hermitian \(A\) for the sake of clarity, but a straightforward extension of our randomized scheme from QSP to QSVT (see App. **G**) gives the generalization to generic \(A\). Moreover, in Lemma 8 in App. B, we also extend the construction to Chebyshev polynomials of the second kind. This is useful for ground-state energy estimation, in Sec. **IV D.3**. ### Intrinsic noise-sensitivity reduction Here we study how the reduction in query complexity per run from \(k\) to the average value \(\mathbb{E}[j]\) translates into sensitivity to experimental noise. The aim is to make a quantitative but general comparison between our randomized semi-quantum approach and fully-quantum schemes, remaining agnostic to the specific choice of operator function, circuit compilation, or physical platform. To this end, we consider two toy error models that allow one to allocate one unit of noise per oracle query. Our first error model consists of a faulty quantum oracle given by the ideal oracle followed by a globally depolarizing channel \(\Lambda\) of noise strength \(p\), defined by [22] \[\Lambda[\varrho]:=(1-p)\,\varrho+p\,\frac{\mathds{1}}{D_{\text{tot}}}. \tag{12}\] Here, \(\varrho\) is the joint state of the total Hilbert space in Fig. **1a** (system register, oracle ancilla, and Hadamard test ancilla) and \(D_{\text{tot}}\) its dimension. In App. **E** we prove: **Theorem 5** (Average noise sensitivity).: _Let \(\tilde{z}^{(P)}\) be the ideal estimators (10) and \(\tilde{z}^{(P),\Lambda}\) their noisy version with \(\Lambda\) acting after each oracle query in Fig. **1**. Then_ \[\left|\mathbb{E}\big{[}\tilde{z}^{(1)}\big{]}-\mathbb{E}\big{[} \tilde{z}^{(1),\Lambda}\big{]}\right| \leq p\,E^{(1)}_{sq}\leq\,p\left\|\mathbf{a}\right\|_{1}\mathbb{E}[j]\,, \tag{13a}\] \[\left|\mathbb{E}\big{[}\tilde{z}^{(2)}\big{]}-\mathbb{E}\big{[} \tilde{z}^{(2),\Lambda}\big{]}\right| \leq p\,E^{(2)}_{sq}\leq\,2\,p\left\|\mathbf{a}\right\|_{1}^{2} \mathbb{E}[j], \tag{13b}\] _where \(E^{(1)}_{sq}:=\left|\sum_{j=0}^{k}j\,a_{j}\,\langle\phi|\mathcal{T}_{j}(A)| \psi\rangle\right|\) and \(E^{(2)}_{sq}:=\left|\sum_{j,l=0}^{k}(j+l)\,a_{j}\,a_{l}\,\text{Tr}\{O\,\mathcal{ T}_{j}(A)\,\varrho\,\mathcal{T}_{l}(A)\}\right|\)._ Our second model is coherent errors that make the quantum oracle no longer the exact block encoding \(U_{A}\) of \(A\) but only an \(\varepsilon\)-approximate block encoding (a unitary with operator-norm distance \(\varepsilon\) from \(U_{A}\)). In App. **E**, we show that Eq. (13) holds also there with \(p\) replaced by \(2\varepsilon\). It is instructive to compare Eq. (13) with the inaccuracy for the corresponding fully-quantum scheme. A fair scenario for that comparison (in the case of Problem 1) is to equip the standard QSVT with a Hadamard test similar to the ones in Fig. **1** so as to also circumvent the need for post-selection. Notice that, while in our randomized method only the Hadamard ancilla controls the calls to the oracle, the standard QSVT circuit involves two-qubit control to also implement the pulses that determine the Chebyshev coefficients. 
As a consequence, the underlying gate complexity per oracle query would be considerably higher than for our schemes (with single-qubit gates becoming two-qubit gates, two-qubit gates becoming Toffoli gates, etc). For this reason, the resulting noise strength \(p_{\text{fq}}\) is expected to be larger than \(p\). The left-hand side of Eq. (13a) would then (see App. **E**) be upper-bounded by \(p_{\text{fq}}\,E_{\text{fq}}\), with \(E_{\text{fq}}=k\,|\,\langle\phi|\tilde{f}(A)|\psi\rangle\,|\), where \(p_{\text{fq}}>p\) and \(k>\mathbb{E}[j]\). Another natural scenario for comparison is that where the fully-quantum algorithm does not leverage a Hadamard test but implements post-selection measurements on the oracle ancilla, in a repeat-until-success strategy. This comparison applies only to Problem 2, since one cannot directly measure the complex amplitudes for Problem 1. The advantage though is that the circuits are now directly comparable because the gate complexities per oracle query are essentially the same (the fully-quantum scheme has extra QSP pulses, but these are single-qubit gates whose error contribution is low). Hence, similar error rates to \(p\) are expected here, so that one would have the equivalent of Eq. (13b) being \(\mathcal{O}(k\,p)\). This is already worse than Eq. (13b) because \(k>\mathbb{E}[j]\), as already discussed. However, crucially, the biggest disadvantage of the fully-quantum scheme manifests itself in the sample complexity (and consequently the total runtime), which here gains a (potentially exponentially) large factor inversely proportional to post-selection probability. Moreover, with post-selection, one additionally needs to estimate normalizing constants with an independent set of experimental runs. In contrast, our method does not suffer from this issue, as it directly gives the estimates in Eqs. (5) or (6) regardless of state normalization (see Sec. IV.4). Finally, a third possibility could be to combine the fully-quantum scheme with quantum amplitude amplification to manage the post-selection. This would quadratically improve the dependence on the post-selection probability. However, it would then be the circuit depth that would gain a factor inversely proportional to the square root of the post-selection probability. Unfortunately, this is far out of reach of early-fault tolerant hardware. ### End-user applications Here we illustrate the usefulness of our framework with four use cases of practical relevance: partition function estimation (both for classical or general Hamiltonians), linear system solving, and ground-state energy estimation. These correspond to \(f(x)=x^{t}\), \(e^{-\beta x}\), \(x^{-1}\), and \(\theta(x)\), respectively. The end-to-end complexities for each case are summarized in Table **I**. #### ii.4.1 Relative-error partition function estimation Partition function estimation is a quintessential hard computational problem, with applications ranging from statistical physics to generative machine learning, as in Markov random fields [23], Boltzmann machines [24], and even the celebrated transformer architecture [25] from large language models. Partition functions also appear naturally in other problems of practical relevance, such as constraint satisfaction problems [26]. The partition function of a Hamiltonian \(H\) at inverse temperature \(\beta\) is defined as \[Z_{\beta}=\text{Tr}\left[e^{-\beta H}\right]\,. 
\tag{14}\] One is typically interested in the problem of estimating \(Z_{\beta}\) to relative error \(\epsilon_{\text{r}}\), that is, finding \(\tilde{Z}_{\beta}\) such that \[\left|\tilde{Z}_{\beta}-Z_{\beta}\right|\leq\epsilon_{\text{r}}\,Z_{\beta}\,. \tag{15}\] This allows for the estimation of relevant thermodynamic functions, such as the Helmholtz free energy \(F=-\frac{1}{\beta}\log Z_{\beta}\), to additive precision. The naive classical algorithm based on direct diagonalization runs in time \(\mathcal{O}(D^{3})\), where \(D=\text{dim}(\mathcal{H}_{s})\) is the Hilbert space dimension. Although it can be improved to \(\mathcal{O}(D)\) using the kernel polynomial method [27] if \(H\) is sparse, one expects no general-case efficient algorithm to be possible due to complexity theory arguments [28]. In turn, if the Hamiltonian is classical (diagonal), \(Z_{\beta}\) can be obtained exactly in classical runtime \(\mathcal{O}(D)\). General-purpose quantum algorithms (that work for any inverse temperature and any Hamiltonian) have been proposed [29, 30, 31]. The list includes another algorithm [30] that, like ours, utilizes the Hadamard test and a block-encoding of the Hamiltonian. In the following, we present two different quantum algorithms for partition function estimation: one for classical Ising models, based on the Markov-Chain Monte-Carlo (MCMC) method, and another for generic non-commuting Hamiltonians, based on quantum imaginary-time evolution (QITE) simulation [5, 32]. Partition function estimation via MCMC: Here, we take \(H\) as the Hamiltonian of a classical Ising model. As such, spin configurations, denoted by \(\left|\mathbf{y}\right\rangle\), are eigenstates of \(H\) with corresponding energies \(E_{\mathbf{y}}\). Let us define the coherent version of the Gibbs state \(\left|\sqrt{\mathbf{\pi}}\right\rangle:=Z_{\beta}^{-1/2}\sum_{\mathbf{y}}e^{-\beta E _{\mathbf{y}}/2}\left|\mathbf{y}\right\rangle\). Then, for any \(\left|\mathbf{y}\right\rangle\), the partition function satisfies the identity \[Z_{\beta}=\frac{e^{-\beta E_{\mathbf{y}}}}{\left\langle\mathbf{y}|\Pi_{\mathbf{\pi}}|\mathbf{ y}\right\rangle} \tag{16}\] with \(\Pi_{\mathbf{\pi}}:=\left|\sqrt{\mathbf{\pi}}\right\rangle\!\!\left\langle\sqrt{\mathbf{ \pi}}\right|\). Below we discuss how to use our framework to obtain an estimation of \(\left\langle\mathbf{y}|\Pi_{\mathbf{\pi}}|\mathbf{y}\right\rangle\) for a randomly sampled \(\left|\mathbf{y}\right\rangle\) and, therefore, approximate the partition function. Let \(A\) be the discriminant matrix [33] of a Markov chain having the Gibbs state of \(H\) at inverse temperature \(\beta\) as its unique stationary state. The Szegedy quantum walk unitary [33] provides a qubitized block-encoding \(U_{A}\) of \(A\) that can be efficiently implemented [34]. A useful property of \(A\) is that the monomial \(A^{t}\) approaches \(\Pi_{\mathbf{\pi}}\) for sufficiently large integer \(t\)[35] (the precise statement is given by Lemma 18 in App. **F** 1). This implies that \(\left\langle\mathbf{y}|\Pi_{\mathbf{\pi}}|\mathbf{y}\right\rangle\) can be estimated using Alg. 1 with \(f(A)=A^{t}\) and \(\left|\psi\right\rangle=\left|\phi\right\rangle=\left|\mathbf{y}\right\rangle\). In this case, the state preparation unitaries \(U_{\psi}=U_{\phi}\) will be simple bit flips. A \(\nu\)-approximation \(\tilde{f}(A)\) can be constructed by truncating the Chebyshev representation of \(A^{t}\) to order \(k=\sqrt{2\,t\log(2/\nu)}\)[19]. 
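The identity (16) and the convergence \(A^{t}\to\Pi_{\boldsymbol{\pi}}\) can be previewed on a classical toy example. The sketch below (ours; the 6-spin Ising ring, the Metropolis proposal and the value of \(t\) are ad hoc choices) builds the discriminant matrix of a single-spin-flip Metropolis chain and recovers \(Z_{\beta}\) from \(\langle\boldsymbol{y}|A^{t}|\boldsymbol{y}\rangle\); the matrix power is computed exactly here, standing in for the quantum estimate delivered by Alg. 1.

```python
import numpy as np
from itertools import product

n, beta, t = 6, 1.0, 3000          # tiny classical Ising ring; t must exceed ~1/Delta
states = np.array(list(product([-1, 1], repeat=n)))
E = -np.sum(states * np.roll(states, 1, axis=1), axis=1)     # H = -sum_i s_i s_{i+1}
Z_exact = np.sum(np.exp(-beta * E))

# Single-spin-flip Metropolis chain with the Gibbs distribution as stationary state
N = len(states)
index = {tuple(s): i for i, s in enumerate(states)}
P = np.zeros((N, N))
for i, s in enumerate(states):
    for site in range(n):
        s2 = s.copy()
        s2[site] *= -1
        j = index[tuple(s2)]
        P[i, j] = min(1.0, np.exp(-beta * (E[j] - E[i]))) / n
    P[i, i] = 1.0 - P[i].sum()

# Discriminant matrix A(x,y) = sqrt(P(x,y) P(y,x)); A^t -> |sqrt(pi)><sqrt(pi)| for large t
A = np.sqrt(P * P.T)
y = int(np.argmin(E))                                        # low-energy reference configuration
Pi_yy = np.linalg.matrix_power(A, t)[y, y]                   # ~ <y|Pi_pi|y> = exp(-beta*E_y)/Z
print(f"Z via Eq. (16): {np.exp(-beta * E[y]) / Pi_yy:.2f}   exact Z: {Z_exact:.2f}")
```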
The \(l_{1}\)-norm of the coefficient vector of this truncated series is \(\left\|\mathbf{a}\right\|_{1}=1-\nu\). For this Chebyshev series, the ratio \(\mathbb{E}[j]/k\) between the average and the maximum query complexities can be shown (see Lemma 17 in App. **F** 1) to be at most \((1-\nu)^{-1}/\sqrt{\pi\,\log(2/\nu)}\) for large \(t\). This implies that the more precise the estimation, the larger the advantage of the randomized algorithm in terms of total expected runtime. For instance, for \(\nu=10^{-2}\), the ratio is roughly equal to \(0.25\). To estimate the partition function up to relative error \(\epsilon_{\text{r}}\), Alg. 1 needs to estimate \(\left\langle\mathbf{y}|\Pi_{\mathbf{\pi}}|\mathbf{y}\right\rangle\) with additive error \(\epsilon=\frac{e^{-\beta E_{\mathbf{y}}}}{2Z_{\beta}}\epsilon_{\text{r}}\) (see Lemma 19 in App. **F** 1). In Lemma 20, in App. **F** 1, we show that the necessary \(t\) and \(\nu\) required for that yield a maximum query complexity per run of \(k=\sqrt{\frac{2}{\Delta}}\log\!\left(\frac{12\,Z_{\beta}\,e^{\beta E_{\mathbf{y}}}}{\epsilon_{\text{r}}}\right)\) and an average query complexity of \(\mathbb{E}[j]=\sqrt{\frac{2}{\pi\,\Delta}\log\!\left(\frac{12\,Z_{\beta}\,e^{ \beta E_{\mathbf{y}}}}{\epsilon_{\text{r}}}\right)}\), where \(\Delta\) is the spectral gap of \(A\). \begin{table} \begin{tabular}{||c||c|c|c|c|} \hline **Problem** & **App.** & **Maximal query depth** & **Expected query depth** & **Total expected runtime** \\ \hline \hline Part. funct. (MCMC) & \(\mathbf{F}\,\mathbf{1}\) & \(\sqrt{\frac{2}{\Delta}}\log\left(\frac{12\,Z_{\beta}\,e^{\beta E_{\mathbf{y}}}}{ \epsilon_{\text{r}}}\right)\) & \(\sqrt{\frac{2}{\pi\Delta}}\log\left(\frac{12\,Z_{\beta}\,e^{\beta E_{\mathbf{y}}}}{ \epsilon_{\text{r}}}\right)\) & \(\mathcal{O}\left(\frac{e^{2\beta E_{\mathbf{y}}}\sqrt{2}}{\sqrt{\Delta}}\sqrt{\log \left(\frac{Z_{\beta}\,e^{\beta E_{\mathbf{y}}}}{\epsilon_{\text{r}}}\right)} \frac{\log(1/\delta)}{\epsilon_{\text{r}}^{2}}\right)\) \\ \hline Part. funct. 
(QITE) & \(\mathbf{F}\,\mathbf{2}\) & \(\mathcal{O}\!\left(\sqrt{\beta}\log\left(\frac{D\,e^{\beta}}{Z_{\beta}\, \epsilon_{\text{r}}}\right)\right)\) & \(\mathcal{O}\left(\sqrt{\beta}\right)\) & \(\mathcal{O}\left(\frac{D^{2}\sqrt{\Delta}e^{2\beta}}{Z_{\beta}^{2}}\frac{\log(1/ \delta)}{\epsilon_{\text{r}}^{2}}\right)\) \\ \hline QLSS: \(\left\langle i\right|A^{-1}\left|\mathbf{b}\right\rangle\) & \(\mathbf{F}\,\mathbf{3}\) & \(\mathcal{O}\left(\kappa\log\left(\frac{\kappa}{\epsilon}\right)\right)\) & \(\mathcal{O}\left(\kappa\sqrt{\log\left(\frac{\kappa}{\epsilon}\right)}\right)\) & \(\mathcal{O}\left(\kappa^{3}\log^{5/2}\!\left(\frac{\kappa}{\epsilon}\right)\frac{ \log(1/\delta)}{\epsilon^{2}}\right)\) \\ \hline QLSS: \(\left\langle\mathbf{b}\right|A^{-1}OA^{-1}\left|\mathbf{b}\right\rangle\) & \(\mathbf{F}\,\mathbf{3}\) & \(\mathcal{O}\left(\kappa\log\left(\frac{\kappa^{2}\left|O\right|}{\epsilon}\right)\right)\) & \(\mathcal{O}\left(\kappa\sqrt{\log\left(\frac{\kappa^{2}\left|O\right|}{\epsilon} \right)}\right)\) & \(\mathcal{O}\left(\kappa^{5}\|O\|^{2}\log^{9/2}\!\left(\frac{\kappa^{2}\left|O \right|}{\epsilon}\right)\frac{\log(1/\delta)}{\epsilon^{2}}\right)\) \\ \hline Ground-state energy & \(\mathbf{F}\,\mathbf{4}\) & \(\mathcal{O}\left(\frac{1}{\epsilon}\log\left(\frac{1}{\eta}\right)\right)\) & \(\mathcal{O}\left(\frac{1}{\epsilon}\frac{\sqrt{\log(1/\eta)}}{\log\left(1/\xi \right)\log\left(1/\eta\right)}\right)\) & \(\mathcal{O}\left(\frac{1}{\eta^{2}\xi}\sqrt{\log\left(\frac{1}{\eta}\right)} \log\left(\frac{1}{\xi}\right)\log\left(\frac{1}{\xi}\right)\right)\) \\ \hline \end{tabular} \end{table} Table 1: **Complexities of our algorithms for end-user applications**. The first column indicates the specific use case (see Sec. IV.4). The second one indicates the appendix with the corresponding derivations. The third column shows the maximal query complexity per run \(k\). Chebyshev-based fully-quantum matrix processing (using the same Hadamard tests as us) would require the same query depth but in _every_ run. The fourth column displays the average query complexity per run \(P\,\mathbb{E}[j]\), with \(P=1\) for Alg. 1 and \(P=2\) for Alg. 2. We notice that in the last row we use \(\xi\) for the additive error in the ground state energy (coming from the \(\mathcal{O}\!\left(\log\left(\frac{1}{\xi}\right)\right)\) steps in the binary search) to distinguish from the \(\epsilon\) (which here is \(\mathcal{O}(\eta)\)) reserved for the estimation error in the quantities \(z^{(P)}\). As can be seen, in all use cases, the average query depth features a better scaling than \(k\) on certain parameters. This is an interesting speed-up specific to the randomization over the Chebyshev expansion. Finally, the fourth column shows the expected runtime, given by \(\dot{Q}^{(P)}\) in Theorem 4, namely the average query depth times the sample complexity \(S^{(P)}\). Here, \(S^{(P)}\) scales with \(\left\|\mathbf{a}\right\|_{1}\) exactly as it would with \(\left\|f(A)\right\|\) had we used the fully-quantum algorithm. Interestingly, \(\left\|\mathbf{a}\right\|_{1}\) and \(\left\|f(A)\right\|\) happen to be of the same order for the use cases studied, except for small logarithmic corrections for QLSSs and ground-state energy estimation (see Table **II** in App. **F** for details). All in all, the total expected runtimes are either similar or slightly superior to the corresponding runtimes of Chebyshev-based fully-quantum approaches. 
Remarkably, this is achieved in tandem with important advantages in terms of quantum hardware (see, e.g., Sec. IV.3). Moreover, from Theorem 4, the necessary sample complexity is \(S^{(1)}=64\,e^{2\beta E_{\mathbf{y}}}\,Z_{\beta}^{2}\,\frac{\log(2/\delta)}{\epsilon_{\text{r}}^{2}}\). This leads to the total expected runtime in Table 1. Three important observations about the algorithm's complexities are in order. First, the total expected runtime has no explicit dependence on the Hilbert space dimension \(D\) and maintains the square-root dependence on \(\Delta\) (a Szegedy-like quadratic quantum speed-up [33]). Second, all three complexities in the first row of the table depend on the product \(Z_{\beta}\,e^{\beta E_{\mathbf{y}}}=\mathcal{O}\!\left(e^{\beta\left(E_{ \mathbf{y}}-E_{\min}\right)}\right)\), with \(E_{\min}\) the minimum eigenvalue of \(H\), where the scaling holds for large \(\beta\). This scaling plays more in our favor the lower the energy \(E_{\mathbf{y}}\) of the initial state \(\mathbf{y}\) is. Hence, by uniformly sampling a constant number of different bit-strings \(\mathbf{y}\) and picking the lowest energy one, one can ensure a convenient initial state. Third, the quadratic advantage featured by \(\mathbb{E}[j]\) over \(k\) on the logarithmic term is an interesting type of speed-up entirely due to the randomization over the components of the Chebyshev series. All in all, the total expected runtime obtained can potentially provide a quantum advantage over classical estimations in regimes where \(\frac{e^{2\beta\left(E_{\mathbf{y}}-E_{\min}\right)}}{\sqrt{\Delta}\,\epsilon_{\text{r}}^{2}}<D\). Partition function estimation via QITE: Alternatively, the partition function associated with a Hamiltonian \(H\) can be estimated by quantum simulation of imaginary time evolution (QITE). This method applies to any Hamiltonian (not just classical ones), assuming a block-encoding of \(H\). \(Z_{\beta}\) can be written in terms of the expectation value of the QITE propagator \(e^{-\beta H}\) over the maximally mixed state \(\varrho_{0}:=\frac{1}{D}\), that is, \[Z_{\beta}=D\,\operatorname{Tr}\left[e^{-\beta H}\varrho_{0}\right]. \tag{17}\] Therefore, we can apply our Alg. 2 with \(A=H\), \(O=D\,\mathds{1}\), \(\varrho=\varrho_{0}\), and \(f(H)=e^{-\beta H/2}\) to estimate \(Z_{\beta}\) with relative precision \(\epsilon_{\mathrm{r}}\) and confidence \(1-\delta\). The sample complexity is obtained from Eq. (11b) as \(S^{(2)}=\frac{8\,D^{2}e^{2\beta}}{\epsilon_{\mathrm{r}}^{2}Z_{\beta}^{2}}\log \frac{2}{\delta}\), by setting the additive error equal to \(Z_{\beta}\,\epsilon_{\mathrm{r}}\). We use the Chebyshev approximation of the exponential function introduced in Ref. [19], which has a quadratically better asymptotic dependence on \(\beta\) than other well-known expansions such as the Jacobi-Anger decomposition [5]. This expansion was used before to implement the QITE propagator using QSVT coherently [3]. The resulting truncated Chebyshev series has order \(k=\sqrt{2\,\max\left\{\frac{e^{2}\beta}{2},\log\left(\frac{8D}{Z_{\beta}} \frac{e^{\beta}}{\epsilon_{\mathrm{r}}}\right)\right\}\,\log\left(\frac{16D}{ Z_{\beta}}\frac{e^{\beta}}{\epsilon_{\mathrm{r}}}\right)}\) and coefficient \(l_{1}\)-norm \(\left\|\boldsymbol{a}\right\|_{1}\leq e^{\beta/2}+\nu\) (see Lemmas 21 and 22 in App. **F 2**). 
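The size of \(\left\|\boldsymbol{a}\right\|_{1}\) and of the average degree can be previewed with the textbook Bessel-function expansion \(e^{-zx}=I_{0}(z)+2\sum_{j\geq 1}(-1)^{j}I_{j}(z)\,\mathcal{T}_{j}(x)\), which differs from the tighter construction of Ref. [19] only in how the series is truncated. The short script below (ours) evaluates both quantities for \(f(x)=e^{-\beta x/2}\): the \(\ell_{1}\)-norm saturates \(e^{\beta/2}\) while the average sampled degree grows like \(\sqrt{\beta/\pi}\), with no dependence on the target precision.

```python
import numpy as np
from scipy.special import ive     # exponentially scaled Bessel: ive(j, z) = I_j(z) * exp(-z)

for beta in (10, 100, 1000):
    z = beta / 2
    j = np.arange(4000)
    absa = ive(j, z) * np.where(j == 0, 1, 2)      # |a_j| * exp(-beta/2), avoids overflow
    l1_ratio = absa.sum()                          # ||a||_1 / e^{beta/2}  (tends to 1)
    Ej = (j * absa).sum() / absa.sum()             # average sampled Chebyshev degree E[j]
    print(f"beta={beta:5d}:  ||a||_1/e^(beta/2) = {l1_ratio:.4f}   "
          f"E[j] = {Ej:6.2f}   sqrt(beta/pi) = {np.sqrt(beta / np.pi):6.2f}")
```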
Interestingly, the average query depth does not depend on the precision of the estimation but scales as \(\mathcal{O}(\sqrt{\beta})\) with a modest constant factor for any \(\epsilon_{\mathrm{r}}\) (see Lemma 23 in App. **F 2**). This implies an advantage of \(\mathcal{O}\left(\log\left(\frac{D}{Z_{\beta}\epsilon_{\mathrm{r}}}\right)\right)\) in terms of overall runtime as compared to coherent QSVT, which is again entirely due to our randomization scheme. Overall, this gives our algorithm a total expected runtime of \(\mathcal{O}\left(\frac{D^{2}\sqrt{\beta}\,e^{2\beta}}{Z_{\beta}^{2}}\frac{ \log(2/\delta)}{\epsilon_{\mathrm{r}}^{2}}\right)\). The previous state-of-the-art algorithm from Ref. [30] has runtime \(\tilde{\mathcal{O}}\left(\frac{D^{2}D_{\mathrm{r}}^{2}e^{2\beta}\beta^{2}}{ \epsilon_{\mathrm{r}}^{2}Z_{\beta}^{2}}\log\frac{1}{\delta}\right)\). Compared with that, we get an impressive quartic speed-up in \(\beta\) together with the entire removal of the dependence on \(D_{a}^{2}\). The improvement comes from not estimating each Chebyshev term individually and allowing the ancillas to be pure while only the system is initialized in the maximally mixed state. Finally, compared to the \(\mathcal{O}\!\left(D^{3}\right)\) scaling of the classical algorithm based on exact diagonalization, our expected runtime has a better dependence on \(D\). Moreover, in the regime of small \(\beta\) such that \(Z_{\beta}^{2}>\mathcal{O}\!\left(\sqrt{\beta}\,e^{2\beta}\log(1/\delta)/ \epsilon_{\mathrm{r}}^{2}\right)\), the expected runtime can be even better than that of the kernel method, which scales as \(\mathcal{O}(D)\). #### iii.2.2 Quantum linear-system solvers Given a matrix \(A\in\mathbb{C}^{D}\times\mathbb{C}^{D}\) and a vector \(\boldsymbol{b}\in\mathbb{C}^{D}\), the task is to find a vector \(\boldsymbol{x}\in\mathbb{C}^{D}\) such that \[A\,\boldsymbol{x}=\boldsymbol{b}\,. \tag{18}\] The best classical algorithm for a generic \(A\) is based on Gaussian elimination, with a runtime \(\mathcal{O}(D^{3})\)[36]. For \(A\) positive semi-definite and sparse, with sparsity (i.e. maximal number of non-zero elements per row or column) \(s\), the conjugate gradient algorithm [37] can reduce this to \(\mathcal{O}(Ds\kappa)\), where \(\kappa:=\left\|A\right\|\left|A^{-1}\right\|\) is the condition number of \(A\). In turn, the randomized Kaczmarz algorithm [38] can yield an \(\epsilon\)-precise approximation of a single component of \(\boldsymbol{x}\) in \(\mathcal{O}\!\left(s\,\kappa_{F}^{2}\log(1/\epsilon)\right)\), with \(\kappa_{F}:=\left\|A\right\|_{F}\!\left\|A^{-1}\right\|\) and \(\left\|A\right\|_{F}\) the Frobenius norm of \(A\). In contrast, quantum linear-system solvers (QLSSs) [39, 4, 43, 17, 3, 44, 3, 45, 46, 47, 18] prepare a quantum state that encodes the normalized version of the solution vector \(\boldsymbol{x}\) in its amplitudes. More precisely, given quantum oracles for \(A\) and \(\left|\boldsymbol{b}\right\rangle:=\frac{1}{\left\|\boldsymbol{b}\right\|_{2}} \sum_{i}b_{i}\left|i\right\rangle\) as inputs, they output the state \(\left|\boldsymbol{x}\right\rangle:=\frac{1}{\left\|\boldsymbol{x}\right\|_{2}} \sum_{i}x_{i}\left|i\right\rangle\), where \(\left\|\cdot\right\|_{2}\) is the \(l_{2}\)-norm and we assume \(\left\|A\right\|\leq 1\) for simplicity of presentation (see App. **G** for the case of unnormalized \(A\)). 
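Before turning to the detailed complexity statements, the degree distribution relevant for matrix inversion can be previewed with a simple stand-in for the polynomial approximations of \(1/x\) used in Chebyshev-based QLSSs [17]: the odd polynomial \(g_{b}(x)=(1-(1-x^{2})^{b})/x\), which matches \(1/x\) on \(|x|\geq 1/\kappa\) up to roughly \(\kappa\,e^{-b/\kappa^{2}}\). The sketch below (ours; it omits the additional tail truncation that brings the maximal degree down to \(\mathcal{O}(\kappa\log(\kappa/\epsilon))\)) extracts the Chebyshev coefficients of \(g_{b}\) numerically and shows that \(\left\|\boldsymbol{a}\right\|_{1}\) and the average sampled degree are already of order \(\kappa\sqrt{\log(\kappa/\epsilon)}\), far below the maximal degree.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

kappa, eps = 8.0, 1e-2
b = int(np.ceil(kappa**2 * np.log(kappa / eps)))   # (1 - x^2)^b suppresses the region |x| < 1/kappa

def g(x):
    # Odd polynomial of degree 2b-1 matching 1/x up to ~kappa*exp(-b/kappa^2) on |x| >= 1/kappa
    return (1.0 - (1.0 - x**2)**b) / x

a = C.chebinterpolate(g, 2 * b - 1)                # exact Chebyshev coefficients (g is a polynomial)
absa = np.abs(a)
l1 = absa.sum()
Ej = (np.arange(len(a)) * absa).sum() / l1

xs = np.linspace(1.0 / kappa, 1.0, 2000)           # by oddness, the branch [-1, -1/kappa] is identical
err = np.max(np.abs(C.chebval(xs, a) - 1.0 / xs))
print(f"max degree = {2 * b - 1},  E[j] = {Ej:.1f},  ||a||_1 = {l1:.1f},  "
      f"max error on [1/kappa, 1] = {err:.1e}")
```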
Interestingly, circuit compilations of block encoding oracles for \(A\) with gate complexity \(\mathcal{O}\left(\log(D/\epsilon)\right)\) have been explicitly worked out assuming a QRAM access model to the classical entries of \(A\) [44]. This can be used for extracting relevant features - such as an amplitude \(\left\langle\phi|\boldsymbol{x}\right\rangle\) or an expectation value \(\left\langle\boldsymbol{x}|O|\boldsymbol{x}\right\rangle\) - from the solution state, with potential exponential speed-ups over known classical algorithms, assuming that the oracles are efficiently implementable and \(\kappa=\mathcal{O}\!\left(\mathrm{polylog}(D)\right)\). Ref. [18] proposed an asymptotically optimal QLSS based on a discrete version of the adiabatic theorem with query complexity \(\mathcal{O}\left(\kappa\log(1/\epsilon)\right)\). Within the Chebyshev-based QSP framework, the best known QLSS uses \(\mathcal{O}\left(\kappa\log(\kappa/\epsilon)\right)\) oracle queries [17]. If the final goal is, for instance, to reconstruct a computational-basis component \(\langle i|\mathbf{x}\rangle\) of the solution vector, the resulting runtime becomes \(\mathcal{O}\left(\left(\kappa^{3}/\epsilon^{2}\right)\log(\kappa/\epsilon)\right)\), since this requires \(\mathcal{O}\left(\kappa^{2}/\epsilon^{2}\right)\) measurements on \(|\mathbf{x}\rangle\). Importantly, however, in order to relate the above-mentioned features of \(|\mathbf{x}\rangle\) to the corresponding ones from the (unnormalized) classical solution vector \(\mathbf{x}\), one must also independently estimate \(\|\mathbf{x}\|_{2}\). This can still be done with QLSSs (e.g., with quantum amplitude estimation techniques), but requires extra runs. Our algorithms do not suffer from this issue, providing direct estimates from the unnormalized vector \(A^{-1}\left|\mathbf{b}\right\rangle\). More precisely, with \(f\) being the inverse function on the cut-off interval \(\mathcal{I}_{\kappa}:=[-1,-1/\kappa]\cup[1/\kappa,1]\), our Algs. 1 and 2 readily estimate amplitudes \(\langle\phi|\,A^{-1}\left|\mathbf{b}\right\rangle\) and expectation values \(\langle\mathbf{b}|\,A^{-1}OA^{-1}\left|\mathbf{b}\right\rangle\), respectively. The technical details of the polynomial approximation \(\tilde{f}\) and complexity analysis are deferred to App. **F 3**. In particular, there we show that, to approximate \(f\) to error \(\nu\), one needs a polynomial of degree \(k=\mathcal{O}\left(\kappa\,\log(\kappa/\nu)\right)\) and \(\left\|\mathbf{a}\right\|_{1}=\mathcal{O}\left(\kappa\sqrt{\log(\kappa^{2}/\nu)}\right)\). For our purposes, as discussed before Theorem 4, to ensure a target estimation error \(\epsilon\) on the quantity of interest one must have \(\nu=\mathcal{O}(\epsilon)\) for Alg. 1 and \(\nu=\mathcal{O}((\kappa\|O\|)^{-1}\epsilon)\) for Alg. 2. This leads to the sample complexities \(S^{(1)}=\mathcal{O}\left((\kappa^{2}/\epsilon^{2})\log^{2}(\kappa^{2}/\epsilon)\log(4/\delta)\right)\) and \(S^{(2)}=\mathcal{O}\left((\kappa^{4}\|O\|^{2}/\epsilon^{2})\log^{4}(\kappa^{3}\,\|O\|/\epsilon)\log(4/\delta)\right)\), respectively. The expected query depth and total expected runtimes are shown in Table **I**. In particular, the former exhibits a quadratic improvement in the error dependence with respect to the maximal query depth \(k\).
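The polynomial ingredients quoted above, a degree \(k=\mathcal{O}(\kappa\log(\kappa/\nu))\) and coefficient norm \(\left\|\mathbf{a}\right\|_{1}=\mathcal{O}(\kappa\sqrt{\log(\kappa^{2}/\nu)})\), can be probed numerically. The snippet below is a rough illustration (ours, a simple least-squares fit rather than the construction of App. **F 3**; \(\kappa\) and the degree are arbitrary choices): it fits a truncated Chebyshev series to \(1/x\) on the cut-off interval \(\mathcal{I}_{\kappa}\) and reports the achieved error and the coefficient \(\ell_{1}\)-norm.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Illustrative values: condition number kappa and truncation degree deg.
kappa, deg = 10.0, 101

# Sample points restricted to the cut-off interval I_kappa = [-1,-1/kappa] U [1/kappa,1].
xs = np.concatenate([np.linspace(-1.0, -1.0 / kappa, 2000),
                     np.linspace(1.0 / kappa, 1.0, 2000)])
ys = 1.0 / xs

# Least-squares fit in the Chebyshev basis T_0, ..., T_deg.
coeffs = C.chebfit(xs, ys, deg)

max_err = np.max(np.abs(C.chebval(xs, coeffs) - ys))   # plays the role of nu on I_kappa
l1_norm = np.sum(np.abs(coeffs))                       # plays the role of ||a||_1

print(f"max error on I_kappa: {max_err:.2e}, coefficient l1-norm: {l1_norm:.1f}")
```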
This places our algorithm in between the \(\mathcal{O}(\kappa\log(\kappa/\epsilon))\)[17] scaling of the fully quantum algorithm and the asymptotically optimal \(\mathcal{O}(\kappa\log(1/\epsilon))\) scaling of [18], therefore making it more suitable for the early fault-tolerance era. In fact, our expected query depth can even beat this optimal scaling for \(\kappa\lesssim(1/\epsilon)^{\log(1/\epsilon)-1}\). Note also that our total expected runtimes are only logarithmically worse in \(\kappa\) than the ones in the fully-quantum case. For the case of Alg. 1, an interesting sub-case is that of \(\langle\phi|=\langle i|\), as this directly gives the \(i\)-th component of the solution vector \(\mathbf{x}\). The quantum oracle \(U_{\phi}\) is remarkably simple there, corresponding to the preparation of a computational-basis state. As for the runtime, we recall that \(\left\|A\right\|\leq\left\|A\right\|_{F}\) in general and \(\left\|A\right\|_{F}=\mathcal{O}(\sqrt{D}\,\|A\|)\) for high-rank matrices. Hence, Alg. 1 has potential for significant speed-ups over the randomized Kaczmarz algorithm mentioned above. In turn, for the case of Alg. 2, we stress that the estimates obtained refer directly to the target expectation values for a generic observable \(O\), with no need to estimate the normalizing factor \(\|\mathbf{x}\|_{2}\) separately (although, if desired, the latter can be obtained by taking \(O=\openone\)). It is also interesting to compare our results with those of the fully randomized scheme of [11]. There, for \(A\) given in terms of a Pauli decomposition with total Pauli weight \(\lambda\), they also offer direct estimates, with no need of \(\|\mathbf{x}\|_{2}\). However, their total runtime of \(\tilde{O}\big{(}\big{\|}A^{-1}\big{\|}^{4}\lambda^{2}/\epsilon^{2}\big{)}\) is worse than the scaling presented here by a factor \(\tilde{O}\big{(}\big{\|}A^{-1}\big{\|}\,\lambda^{2}\big{)}\) (recall that here \(\kappa=\big{\|}A^{-1}\big{\|}\) since we are assuming \(\|A\|=1\)). In turn, compared to the solver in Ref. [11], the scaling of our query depth per run is one power of \(\kappa\) superior. In their case, the scaling refers readily to circuit depth, instead of query depth, but this is approximately compensated by the extra dependence on \(\lambda^{2}\) in their circuit depth. #### iii.1.3 Ground-state energy estimation The task of estimating the ground-state energy of a quantum Hamiltonian holds paramount importance in condensed matter physics, quantum chemistry, material science, and optimization. In fact, it is considered one of the most promising use cases for quantum computing in the near term [45]. However, the problem in its most general form is known to be QMA-hard [46]. A typical assumption - one we will also use here - is that one is given a Hamiltonian \(H\) with \(\|H\|\leq 1\) and a promise state \(\varrho\) having non-vanishing overlap \(\eta\) with the ground state subspace. The _ground state energy estimation_ (GSEE) problem [7] then consists in finding an estimate of the ground state energy \(E_{0}\) to additive precision \(\epsilon\). If the overlap \(\eta\) is reasonably large (which is often the case in practice, e.g., for small molecular systems using the Hartree-Fock state [47]), the problem is known to be efficiently solvable, but without any guarantee on \(\eta\) the problem is challenging. 
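The reduction used in the following paragraphs turns GSEE into an eigenvalue-thresholding problem: one estimates a step-function expectation value and runs a binary search over the threshold. As a purely classical preview of that logic (ours; the filter values are computed exactly by diagonalization instead of being estimated, and all sizes are toy choices), consider:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy Hamiltonian with ||H|| <= 1 and a promise state psi with ground-state overlap eta.
D = 32
M = rng.normal(size=(D, D))
H = (M + M.T) / 2
H /= np.linalg.norm(H, 2)
evals, evecs = np.linalg.eigh(H)
E0 = evals[0]

eta = 0.25
psi = np.sqrt(eta) * evecs[:, 0] + np.sqrt(1 - eta) * evecs[:, -1]

def filter_value(y):
    """F(y) = <psi| theta(y*1 - H) |psi>: spectral weight of psi at energies <= y."""
    overlaps = (evecs.T @ psi) ** 2
    return np.sum(overlaps[evals <= y])

# Thresholding: F(y) >= eta/2 exactly when y >= E0, so a binary search locates E0.
lo, hi, xi = -1.0, 1.0, 1e-3
while hi - lo > xi:
    mid = (lo + hi) / 2
    if filter_value(mid) >= eta / 2:
        hi = mid
    else:
        lo = mid

print(E0, hi)   # hi is within xi of the true ground energy
```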
A variety of quantum algorithms for GSEE have been proposed (see, e.g., [48, 49, 50, 40]), but the substantial resources required are prohibitive for practical implementation before full-fledged fault tolerant devices become available. Recent works have tried to simplify the complexity of quantum algorithms for GSEE with a view towards early fault-tolerant quantum devices. Notably, a semi-randomized quantum scheme was proposed in [7] with query complexity \(\mathcal{O}\big{(}\frac{1}{\epsilon}\log\big{(}\frac{1}{\epsilon\eta}\big{)} \big{)}\) achieving Heisenberg-limited scaling in \(\epsilon\)[51]. Importantly, their algorithm assumes access to the Hamiltonian \(H\) through a time evolution oracle \(e^{-iH\tau}\) (for some fixed time \(\tau\)), which makes it more appropriate for implementation in analog devices. The similar fully-randomized approach of [10] gives rise to an expected circuit (not query) complexity of \(\mathcal{O}\big{(}\frac{1}{\epsilon^{2}}\log\big{(}\frac{1}{\eta}\big{)}\big{)}\). Here we approach the GSEE problem within our Chebyshev-based randomized semi-quantum framework. We follow the same strategy used in [7, 10, 14, 52] of reducing GSEE to the so-called _eigenvalue thresholding problem_. The problem reduces to the estimation up to additive precision \(\frac{\eta}{2}\) of the filter function \(F_{\varrho}(y):=\mathrm{Tr}[\varrho\,\theta(y\openone-H)]\) for a set of \(\log\big{(}\frac{1}{\epsilon}\big{)}\) different values of \(y\) chosen from a uniform grid of cell size \(\xi\) (times the length \(E_{\max}-E_{0}\) of the interval of energies of \(H\)). This allows one to find \(E_{0}\) up to additive error \(\xi\) with \(\log\big{(}\frac{1}{\xi}\big{)}\) steps of a binary-like search over \(y\)[7]. At each step, we apply our Alg. **1** with \(f(x)=\theta(y-x)\), \(A=H\), and \(\left|\phi\right\rangle=\left|\psi\right\rangle\) to estimate \(F_{\varrho}(y)\), with \(\varrho=\left|\psi\right\rangle\!\!\left\langle\psi\right|\). Here, \(\left|\psi\right\rangle\) is any state with promised overlap \(\eta>0\) with the ground state subspace. The requirement of additive precision \(\frac{\eta}{2}\) for \(F_{\varrho}(x)\) requires an approximation error \(\nu\leq\frac{\eta}{4}\) for \(f\) and a statistical error \(\epsilon\leq\frac{\eta}{4}\) for the estimation. Interestingly, our approach does not need to estimate \(F_{\varrho}(y)\) at different \(y\)'s for the search. In Lemma 32 in App. **F 4**, we show that estimating \(F_{\varrho}\) at a special point \(y_{*}=1/\sqrt{2}\) and increasing the number of samples suffices to obtain \(F_{\varrho}(y)\) at any other \(y\). As a core auxiliary ingredient for that, we develop a new \(\nu\)-approximation \(\tilde{f}\) to the step function with a shifted argument, \(\theta(y-x)\), given in Lemma 27 in App. **F 4**. It has the appealing property that the \(x\) and \(y\) dependence are separated, namely \(\tilde{f}(y-x)=\sum_{j\in[k]}\big{[}a_{j}(y)\,\mathcal{T}_{j}(x)\big{]}+\sum_{ j\in[k]}\big{[}b_{j}(y)\sqrt{1-x^{2}}\mathcal{U}_{j}(x)\big{]}\), where \(\mathcal{U}_{j}\) is the \(j\)-th Chebyshev polynomial of the second kind. To the best of our knowledge, this is a novel Chebyshev-polynomial expansion of the step function that may be of independent interest. The first contribution to \(\tilde{f}\) takes the usual form (4) and can be directly implemented by our Alg. 
**1**; the second contribution containing the \(\mathcal{U}_{j}\)'s can also be implemented in a similar way, with the caveat that the required Hadamard test needs a minor modification described in Lemma 8, App. **B**. The maximal degree \(k=\mathcal{O}(\frac{1}{\xi}\log\big{(}\frac{1}{\eta}\big{)})\) is the same for both contributions and the coefficient 1-norms are \(\left\|\mathbf{a}\right\|_{1}=\left\|\mathbf{b}\right\|_{1}=\mathcal{O}\left(\log \big{(}\frac{1}{\xi}\log\big{(}\frac{1}{\eta}\big{)}\big{)}\right)\). Putting all together and taking into account also the \(\mathcal{O}\big{(}\log\big{(}\frac{1}{\xi}\big{)}\big{)}\) steps of the binary search, one obtains a total sample complexity \(S^{(1)}=\mathcal{O}\left(\frac{1}{\eta^{2}}\log^{2}\big{(}\frac{1}{\xi}\log \big{(}\frac{1}{\eta}\big{)}\big{)}\log\Big{(}\frac{4}{\delta}\log\Big{(}\frac {1}{\xi}\Big{)}\Big{)}\right)\). The corresponding expected query depth and total runtime are shown in Table **I**. Remarkably, the query depth exhibits a speed-up with respect to the maximal value \(k\), namely a square root improvement in the \(\eta\) dependence and a logarithmic improvement in the \(\frac{1}{\xi}\) dependence (see Lemma 29 in App. **F 4** for details). In addition, as can be seen in the table, our expected runtime displays the same Heisenberg-scaling of [10]. This is interesting given that our algorithm is based on block-encoded oracles rather than the time-evolution oracles used in [10], which may be better suited for digital platforms as discussed previously. Finally, it is interesting to note that there have been recent improvements in the precision dependence, e.g. based on a derivative Gaussian filter [52]. Those matrix functions are also within the scope of applicability of our approach. ## V Final discussion We presented a randomized hybrid quantum-classical framework to efficiently estimate state amplitudes and expectation values involving a generic matrix function \(f(A)\). More precisely, our algorithms perform a Monte-Carlo simulation of the powerful quantum signal processing (QSP) and singular-value transformation (QSVT) techniques [1; 2; 3; 4]. Our toolbox is based on three main ingredients: \(i)\) it samples each component of a Chebyshev series for \(f\) weighed by its coefficient in the series; \(ii)\) it assumes coherent access to \(A\) via a block-encoding oracle; and \(iii)\)\(f(A)\) is automatically extracted from its block-encoding without post-selection, using a Hadamard test. This combination allows us to deliver provably better circuit complexities than, similar total runtimes to, and advantages in terms of experimental feasibility over the standard QSP and QSVT algorithms. We illustrated our algorithms on four specific end-user applications: partition-function estimation via quantum Markov-chain Monte Carlo and via imaginary-time evolution; linear system solvers; and ground-state energy estimation. A non-technical summary of the main features (functioning, performance guarantees, and noise-sensitivity) of the framework as well as the highlights for each use case is presented in Sec. **II**. In turn, the full end-to-end complexity scalings are detailed in Table **I**. An interesting future direction is to explore other matrix functions with our framework. This includes recent developments such as Gaussian and derivative-Gaussian filters for precision improvements in ground-state energy estimation [52] or Green function estimation [11], and a diversity of other more-established use cases [4]. 
Another possibility is to explore the applicability of our methods in the context of hybrid quantum-classical rejection sampling [14]. Moreover, further studies on the interplay between our framework and Fourier-based matrix processing [6; 13] may be in place too. Fourier-based approaches have so far focused mainly on the eigenvalue thresholding for ground-state energy estimation [7; 8; 10; 11]. Our findings open a promising arena to build and optimize early fault-tolerant quantum algorithms towards practical linear-algebra applications in a nearer term. ###### Acknowledgements. AT acknowledges financial support from the Serrapilheira Institute (grant number Serra-1709-17173). We thank Lucas Borges, Samson Wang, Sam McArdle, Mario Berta, Daniel Stilck-Franca, and Juan Miguel Arrazola for helpful discussions.
2310.05246
Revisiting Remote State Preparation with Verifiability: A New Set of Notions with Well-behaved Properties
In remote state preparation with verifiability (RSPV), a client would like to prepare a quantum state (sampled from a state family) on the server side, such that ideally the client knows its full description, while the server holds and only holds the state itself. A closely related notion called self-testing, which is recently generalized to the single-server computationally-secure setting [MV21], aims at certifying the server's operation. These notions have been widely studied in various different settings and have become fundamental building blocks in many quantum protocols. However, there are many variants of definitions in existing works, and many of these variants do not have some desirable properties like sequential composability. In this background, a new framework that could potentially support more general solutions is desirable. In this paper, we choose notions or basic ideas from existing works [BDSSTW01,GV19,Zha22,RY21] and introduce new notions, with the goal of developing a more general, well-behaved framework for these problems. We choose RSPV with simulation-based soundness [BDSSTW01,GV19,Zha22], and study its basic properties like composability. Furthermore, for controlling the server's operation in a verifiable way, we introduce a new notion named remote operator application with verifiability (ROAV) as a replacement of self-testing. In this notion the server is provided with an unknown input state, and is supposed to perform a specific operator (sampled from an operator family) to the state; the client knows the operator description, but what server knows in the end is limited to the output state of the operation applied on the input state. Finally, we show several basic constructions of protocols under our set of notions, and discuss why these notions could potentially lead to quantum cryptographic protocols with new functionalities.
Jiayu Zhang
2023-10-08T17:38:43Z
http://arxiv.org/abs/2310.05246v1
Revisiting Remote State Preparation with Verifiability: A New Set of Notions with Well-behaved Properties ###### Abstract In remote state preparation with verifiability (RSPV), a client would like to prepare a quantum state (sampled from a state family) on the server side, such that ideally the client knows its full description, while the server holds and only holds the state itself. A closely related notion called self-testing, which is recently generalized to the single-server computationally-secure setting [21], aims at certifying the server's operation. These notions have been widely studied in various different settings and have become fundamental building blocks in many quantum protocols [10, 1, 30, 12]. However, there are many variants of definitions in existing works, and many of these variants do not have some desirable properties like sequential composability. What's more, existing works mainly focus on simple state families like simple product states, and treatments for these types of states are already technically complicated; in this background, a new framework that could potentially support more general solutions is desirable. In this paper, we choose notions or basic ideas from existing works [3, 10, 30, 28] and introduce new notions, with the goal of developing a more general, well-behaved framework for these problems. We choose RSPV with simulation-based soundness [3, 10, 30] (instead of rigidity-based soundness [1]), and study its basic properties like composability. Furthermore, for controlling the server's operation in a verifiable way, we introduce a new notion named _remote operator application with verifiability_ (ROAV) as a replacement of self-testing. In this notion the server is provided with an unknown input state, and is supposed to perform a specific operator (sampled from an operator family) to the state; the client knows the operator description, but what server knows in the end is limited to the output state of the operation applied on the input state. Finally, we show several basic constructions of protocols under our set of notions, and discuss why these notions could potentially lead to quantum cryptographic protocols with new functionalities. ## 1 Introduction ### Background Development of quantum computers leads to demands of various quantum cryptographic protocols, for example, quantum computation verification [19, 30], multiparty quantum computations [2], etc. In its typical setting, there is a client and a remote quantum server (or servers), and the client would like to achieve some quantum cryptographic tasks, but it does not trust the server; thus a cryptographic protocol between the client and the server is needed. Among these problems, two examples that are basic and very important are _remote state preparation_ (RSP) [3] and _self-testing_[29], which we introduce below. #### 1.1.1 Remote state preparation In the RSP problem, ideally, the client would like to prepare a quantum state (sampled from a state family) on the server side; thus in the end the client knows the description of the state, while the server simply holds the state. The trivial solution is to simply send the quantum state through a quantum channel. RSP asks: how could we simulate this quantum communication by other means (like classical communication or other types of quantum communication), possibly under computational assumptions? Studies of RSP have a long history [25, 3]. One setting [3] of RSP is the fully honest setting: all the parties execute the protocols honestly. 
In this work, we are interested in the setting where the server could be malicious, and RSP protocols in this setting should satisfy a correctness requirement and a security requirement. The natural correctness requirement for RSP says that when the server is honest, the client accepts and the server gets the state while the client gets the state description. For security, there are different security notions, including blindness (secrecy) and verifiability (soundness) [7, 10, 31]. In this paper we focus on RSP with verifiability (RSPV). In RSPV, intuitively, the client is able to verify that in case of acceptance the server really gets the state, as if it is sent through a quantum channel. A malicious server who attempts to get other states by deviating from the protocol would be caught cheating by the client. As a natural quantum task, the RSPV problem is interesting on its own. What's more, it has become an important building block in many other quantum cryptographic protocols. As examples, [10, 7] first construct classical channel cryptography-based RSPV and use it to achieve classical verification of quantum computations; [1] explores more applications of RSPV; [30] takes the RSPV approach to achieve classical verification of quantum computations with linear total time complexity. Many quantum cryptographic protocols rely on quantum channel and quantum communication, and an RSPV protocol could often allow us to replace these quantum communication steps by other cheaper resources, like classical communication. Preparing states on the server side is quite useful. But in many scenarios what the client needs is to have control on server's _operations_, as introduced below. #### 1.1.2 How to control server's operations How could the client verify that the server has really applied an operation on its state? In existing works, people raised the notion of self-testing to address the problem. The concept of self-testing also has a long history [29, 26, 20] in quantum information. One famous application of self-testing is in the study of non-local games [14, 27]. In this scenario, the client (or called verifier) sends questions to two spatially-separated but entangled quantum servers, and quantum servers are supposed to perform specific measurements and send back the results, then the client decides whether to accept or reject. The natural correctness requirement says that when all the parties follow the protocol, the client accepts with some specific probability, say, OPT. Furthermore, specific games have the property that, any servers that want to pass the protocol with probability bigger than \(\mathrm{OPT}-\epsilon\) have to use a strategy (measurement operators) that is close to the honest behavior. This provides a way to constrain servers' operations through only classical interactions and spatial separation, which is a fundamental technique in the study of non-local games. Recently a series of works [21, 12, 4, 23] study the single-server analog of two server self-testing as discussed above. The goal is typically to design cryptographic protocols between a client and a single quantum server so that it is certified that the server has prepared the entangled state between two registers as the two server setting, and has performed the measurements on it. 
[21] studies the basic analog of CHSH game on the single server computationally secure setting and construct a protocol that only uses classical channel; [22, 12] further extend it to the three-qubit and \(N\)-qubit setting; [16, 23] makes use of QFHE [18] to address the problem; [15, 16, 4] study the proof of quantumness problem and the construction is later proved to have a self-testing property. Typically these self-testing protocols have also achieved a sense of RSPV since the protocols also certify the underlying entangled states; however, these self-testing protocols do not aim to reserve the states in the end. #### 1.1.3 Subtleties and limitations of existing works There are several subtleties or limitations in existing works for RSPV or self-testing. First, existing works for RSPV do not have a consistent choices of definitions. There are roughly two types of security notions, the _rigidity-based_ (or isometry-based) soundness [7, 1] and _simulation-based_ soundness [3, 10, 30]. Roughly speaking, these two definitions go as follows: * (Rigidity-based soundness) The output state, going through an isometry, is close to the target state. * (Simulation-based soundness) The target state, going through a simulator, is indistinguishable to the output state. Existing works do not seem to care about the differences. Another subtlety in RSPV and self-testing problems is its composability. For example, one basic desirable property of cryptographic primitives is sequential composability between independent instances. This means, if the client and the server execute an RSPV (or self-testing) protocol for a state family \(\mathcal{F}_{1}\), and then execute another protocol for another state family \(\mathcal{F}_{2}\), we would like the overall protocol to be automatically an RSPV for \(\mathcal{F}_{1}\otimes\mathcal{F}_{2}\) (defined to be tensor products between each pair of elements). Existing works [1, 12] deal with this type of states or operators by designing new protocols and giving highly technical proofs; if such sequential composability property holds for RSPV or self-testing, protocols for tensor products of simple states could be reduced to protocols for simple states, which will potentially significantly simplify the constructions and proofs. One more limitation in current RSPV and self-testing protocols is that they could only handle simple tensor product states and operators. Remote preparation of large entangled states is also quite useful in quantum cryptography [8, 5], and a more general solution for RSPV for these types of states is highly desirable. We note that the composability subtlety discussed above also makes the problem harder: considering the fact that preparing simple product states is already highly technically complicated, preparing large entangled states might be too complicated to work on. In this background, we ask the following question: _Could RSPV and single-server self-testing be more well-behaved and useful?_ ### Our Contributions We argue that the current complicated situation of RSPV and single-server self-testing is largely from the choices of definitions. In this work we choose or introduce a new set of notions for these problems and study their properties and applications, which we summarize below. #### 1.2.1 Choosing or introducing definitions RspvWe first develop a new set of notions. For RSPV, we choose and study RSPV with simulation-based soundness (see Section 1.1.3 and 3.1.2). 
We show that the definitions that we choose have several desirable properties, which could hopefully make RSPV much easier to work on: * We show our choice of notions has a well-behaved sequential composability property. * In usual applications of RSPV, simulation-based definition is as powerful as rigidity-based definition. Then we introduce a new notion called remote operator application with verifiability (ROAV), as our analog of self-testing in the single-party cryptographic setting. Remote operator application with verifiability (ROAV)Recall in the two-server protocol design scenario, one typical techniques is to design two subprotocols, one of them has a self-testing property, while the other is to execute the computation. One server, without communicating with the other, could not decide which one the client is currently executing; to pass the overall protocol it has to pass the self-testing subprotocol so that its operations has to be close to the honest behavior; and this implies the computation subprotocol is also executed almost honestly. The driven question behind our definition is: could we formulate a notion in the single-server cryptographic setting that is analogous to what a specific server sees in the two-server setting? We raise the notion of ROAV for formulating this intuition. An ROAV for a target operation \(\mathcal{E}\) is defined as a tuple \((\rho_{test},\pi_{test},\pi_{comp})\) where: * \(\rho_{test}\) is a specific state used as the input state of \(\pi_{test}\). * \(\pi_{comp}\) is a protocol with an undetermined input state whose dimension is the same as the server-side of \(\rho_{test}\). Here \((\rho_{test},\pi_{test})\) is the test mode, which means, running \(\pi_{test}\) on input state \(\rho_{test}\) is used to test the adversary's behavior; \(\pi_{comp}\) is the computation mode, which means, in this mode the operator \(\mathcal{E}\) is finally applied on the input state. More formally, the soundness is defined roughly as follows: For any adversary \(\mathsf{Adv}\), denoting the final output of running protocol \(\pi_{\dots}\) against adversary \(\mathsf{Adv}\) on input \(\rho_{\dots}\) as \(\pi_{\dots}^{\mathsf{Adv}}(\rho_{\dots})\), the ROAV satisfies: * either the cheating behavior gets caught in \(\pi_{test}^{\mathsf{Adv}}(\rho_{test})\) with high probability, * or \(\pi_{comp}^{\mathsf{Adv}}(\chi)\) is close (in a sense) to \(\mathcal{E}(\chi)\) where \(\chi\) denotes an arbitrary input state. Finally we note that in the formal notion that we propose we consider a large entangled state which can be collapsed to any \(\chi\) by measuring part of its systems. We argue that our new notion has relatively well-behaved properties, is consistent with the intuition of self-testing in the multi-party setting, and is potentially useful. #### 1.2.2 Applications We show several potential applications of our notions as follows. First in Section 4.2 we show that ROAV is potentially a useful tool for constructing RSPV protocols for more general state families. The outcome of an ROAV protocol is a remote preparation of joint state \(\mathcal{E}(\chi)\) where \(\chi\) is the input state; such a state might be hard to prepare directly, but could be made possible once we have an ROAV for \(\mathcal{E}\) and have the RSPV for the corresponding \(\rho_{test}\) and \(\chi\). Then we construct a Hamiltonian ground energy testing protocol based on specific RSPV and ROAV. 
This shows the potential of our set of notions in other quantum cryptographic problems like QMA verification. Our construction shares similarities with Grilo's Hamiltonian verification protocol in the 2-party setting [11].

### More Related Works

One work that shares similarities with our work is [28]. This work studies the complexity of interactive synthesis of states and unitaries. In a sense, the relation of states and unitaries in their work is similar to the relation of RSPV and ROAV in our work; but the state complexity problem and the RSPV/ROAV problem seem quite different and have different applications.

### Open Questions and Summary

The obvious open question coming out of this work is to give a construction for ROAV. Our work focuses on the definitions and applications in an abstract sense; an explicit construction of ROAV would allow us to instantiate these applications. We hope our work clarifies the subtleties in RSPV and related problems and could serve as a foundation for further studies.

## Acknowledgements

This work is supported by different fundings at different times during its preparation:

* Partially supported by the IQIM, an NSF Physics Frontiers Center (NSF Grant PHY-1125565) with support of the Gordon and Betty Moore Foundation (GBMF-12500028).
* This work is partially done when the author was visiting Simons Institute for Theory of Computing.
* This work is partially done in Zhongguancun Laboratory.

## 2 Preliminaries

We refer to [24] for basics of quantum computing, and refer to [17] for basics of cryptography. In this section we clarify some notations and notions. **Notation 2.1**.: We use \([m]\) to denote \(\{1,2,\cdots,m\}\). **Notation 2.2**.: We use \(D(\mathcal{H})\) to denote the set of density operators over some Hilbert space \(\mathcal{H}\). **Notation 2.3**.: For a pure state \(\left|\Phi\right\rangle\), \(\Phi\) is an abbreviation of \(\left|\Phi\right\rangle\left\langle\Phi\right|\). **Notation 2.4**.: We use \(\mathcal{E}(\rho)\) to denote the operation of an operator (either unitary or superoperator) on density operator \(\rho\). We also use this notation when \(\mathcal{E}\) is an isometry (say, \(V\)): it is the same as \(V\rho V^{\dagger}\). When the system that \(\mathcal{E}\) acts on is contained in the system of \(\rho\), the operation on the remaining system is the identity. **Notation 2.5**.: We use \(\rho\approx_{\epsilon}\sigma\) to denote \(\left|\rho-\sigma\right|_{\mathrm{tr}}\leq\epsilon\), where \(\left|\cdot\right|_{\mathrm{tr}}\) is the trace distance. **Definition 2.1** (Bell basis).: In a two qubit system, define the following four states as the Bell basis: \[\frac{1}{\sqrt{2}}(\left|00\right\rangle+\left|11\right\rangle),\frac{1}{ \sqrt{2}}(\left|00\right\rangle-\left|11\right\rangle),\] \[\frac{1}{\sqrt{2}}(\left|01\right\rangle+\left|10\right\rangle),\frac{1}{ \sqrt{2}}(\left|01\right\rangle-\left|10\right\rangle).\] Defining \(\left|\Phi\right\rangle=\frac{1}{\sqrt{2}}(\left|00\right\rangle+\left|11\right\rangle)\), these states can be denoted as \(\mathsf{X}^{a}\mathsf{Z}^{b}\left|\Phi\right\rangle\), where \(\mathsf{X}^{a}\) means apply \(\mathsf{X}\) if \(a=1\) and apply identity if \(a=0\). \(\mathsf{Z}^{b}\) is defined similarly. Now define the Bell-basis measurement as follows: the projection onto the Bell basis state \(\mathsf{X}^{a}\mathsf{Z}^{b}\left|\Phi\right\rangle\) has output \((a,b)\). **Definition 2.2**.: In cryptographic protocols there is usually a completeness requirement and a soundness requirement.
When both requirements are probabilistic, the statements are stated in the following type: * (Completeness) In the yes-instance the honest server makes the client accepts with probability \(c\). * (Soundness) In the no-instance the malicious server could at most make the client accepts with probability \(s\). There should be \(0<s<c<1\). \(1-c\) is called the completeness error. \(s\) is called soundness or soundness error. **Notation 2.6**.: In this paper we use \(\pi\) to denote cryptographic protocols. Cryptographic protocols typically will take a security parameter as part of inputs; in this paper we denote it as \(\kappa\). When we analyze security of protocols, operators and states are typically families of operators or states parameterized by \(\kappa\); in this paper we make it implicit. In the end the protocol will also output a \(flag\in\{\mathsf{pass},\mathsf{fail}\}\); in this work this decision will be made solely on the client side and we denote the projection onto the passing space as \(\Pi_{\mathsf{pass}}\). We use \(\pi^{\mathsf{Adv}}(\rho_{in})\) to denote the output joint state of \(\pi\) run on initial state \(\rho_{in}\) against adversary \(\mathsf{Adv}\). **Notation 2.7**.: We write \(\rho\approx_{\epsilon}^{ind:\mathcal{F}}\sigma\) when \(\forall\mathsf{Adv}\in\mathcal{F},\Pr[\mathsf{Adv}(\rho)\to 1]\approx_{ \epsilon+\mathsf{negl}(\kappa)}\Pr[\mathsf{Adv}(\sigma)\to 1]\). We write \(\rho\approx_{\epsilon}^{ind}\sigma\) when \(\mathcal{F}\) is taken to be all the polynomial time algorithms. **Fact 1**.: _If \(\rho\approx_{\epsilon}^{ind}\sigma\), \(\mathcal{E}\) is an efficient operator, then \(\Pi_{\mathsf{pass}}(\mathcal{E}(\rho))\approx_{\epsilon}^{ind}\Pi_{\mathsf{ pass}}(\mathcal{E}(\sigma))\)._ **Fact 2** (Chernoff bounds).: _Suppose for all \(i\in[K]\), \(s_{i}\) is a random variable independently sampled from \(\{0,1\}\) with probability \(1-p,p\) corresponding to values \(0,1\). Then_ \[\Pr[\sum_{i\in[K]}s_{i}\geq(1+\delta)pK]\leq e^{-\delta^{2}pK/3}\] Finally we review the local Hamiltonian problem. **Definition 2.3** ([11]).: The following problem is called the XZ k-local Hamiltonian problem: Given input \((H,a,b)\) where \(H\) is a Hamiltonian on \(n\)-qubit registers, \(a,b\) are real value function of \(n\), and they satisfy: \[H=\sum_{j\in[m]}\gamma_{j}H_{j},\quad\forall j,|\gamma_{j}|\leq 1 \tag{1}\] \[\forall j,H_{j}\in\{\sigma_{X},\sigma_{Z},I\}^{\otimes n}\text{ with at most $k$ appearances of non-identity terms} \tag{2}\] Decide which is the case: * Yes-instance: The ground energy of \(H\) is \(\leq a\) * No-instance: The ground energy of \(H\) is \(\geq b\). **Theorem 2.1** ([13]).: _There exist \(a(n),b(n)\in[0,1],b-a\geq 1/\mathsf{poly}(n)\) such that the XZ 5-local Hamiltonian problem is QMA-complete._ ## 3 Remote State Preparation with Verifiability In this section we study definitions and basic properties of RSPV. ### Definitions Recall that in RSPV, the client aims at creating a state sampled from a state ensemble on the server-side. The client should know the description of the state, while the server holds the state itself. Similar to many cryptographic problems, an RSPV protocol needs to have completeness (correctness) and soundness (verifiability). Furthermore, there are two definitions for the soundness of RSPV: simulation-based soundness [3, 10, 30] and rigidity-based soundness [7, 10, 1], which are both used in existing works. In this subsection we choose formal definitions for both variants and study their differences and relations. 
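As a concrete reference point for Definition 2.3 (which reappears in the Hamiltonian application later on), the sketch below (ours; the coefficients are arbitrary and the instance is only 2-local for readability, whereas Theorem 2.1 concerns \(k=5\)) assembles a small XZ local Hamiltonian as in Equations (1)-(2) and computes its ground energy by exact diagonalization.

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.array([[1.0, 0.0], [0.0, -1.0]])

def pauli_string(ops):
    """Tensor product of single-qubit operators, e.g. ['X','I','Z'] -> X (x) I (x) Z."""
    mats = {"I": I2, "X": X, "Z": Z}
    out = np.array([[1.0]])
    for o in ops:
        out = np.kron(out, mats[o])
    return out

# H = sum_j gamma_j H_j with |gamma_j| <= 1 and each H_j an XZ string with at most
# 2 non-identity factors (the coefficients below are arbitrary toy choices).
terms = [(-0.8, ["Z", "Z", "I", "I"]),
         (-0.8, ["I", "Z", "Z", "I"]),
         (-0.8, ["I", "I", "Z", "Z"]),
         (0.5, ["X", "I", "I", "I"]),
         (0.5, ["I", "X", "I", "I"]),
         (0.5, ["I", "I", "X", "I"]),
         (0.5, ["I", "I", "I", "X"])]

H = sum(g * pauli_string(p) for g, p in terms)
print(np.linalg.eigvalsh(H)[0])   # ground energy; for k = 5 the decision version is QMA-complete
```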
#### 3.1.1 Basic settings and completeness To formalize the completeness and soundness of this notion, let's first formalize some basic settings of RSPV. In more detail, we define the target state of RSPV, as follows: **Definition 3.1**.: An RSPV protocol is defined with respect to an ensemble of normalized states and the corresponding probabilities \[((p_{1},|\varphi_{1}\rangle),(p_{2},|\varphi_{2}\rangle),\cdots(p_{D},|\varphi _{D}\rangle)),\sum_{i\in[D]}p_{i}=1.\] The target state of an RSPV protocol is denoted by the following joint state of the client and the server (described in terms of density operators): \[\rho_{tar}=\sum_{i\in[D]}p_{i}\underbrace{\ket{i}\bra{i}}_{\text{client}}\otimes \underbrace{\ket{\varphi_{i}}\bra{\varphi_{i}}}_{\text{server}} \tag{3}\] And we simply call it RSPV for \((\ket{\varphi_{1}}\cdots\ket{\varphi_{D}})\) when \((p_{i})_{i\in[D]}\) is a uniform distribution. Note that (3) should be intuitively understood as a cq-state; the fact that the client-side register is classical is equivalent to say any operator (for example, distinguishers that will be used later) that operates on the client-side register in (3) only has classical access to it. Then the completeness of an RSPV protocol is defined as follows. **Definition 3.2** (Completeness of RSPV).: We say an RSPV protocol for target state \(\rho_{tar}\) has completeness error \(\gamma\) if in the honest setting, in the end of the protocol the joint state of the client and the server is \(\gamma\)-close to \(\rho_{tar}\) (together with the passing flag). And we simply say the protocol is complete if it has completeness error \(\mathsf{negl}(\kappa)\). #### 3.1.2 Rigidity-based soundness and simulation-based soundness The soundness of RSPV is more subtle; below we formalize and study the two types of soundness definitions. Rigidity-based soundnessRoughly speaking, the rigidity-based soundness says the output state, after going through an isometry on the server side, is close to the target state. An interpretation is "the passing flag certifies that the server has really got the state". **Definition 3.3** (Rigidity-based soundness for RSPV).: We say a protocol \(\pi\) is an RSPV for target state \(\rho_{tar}\) with soundness error \(\delta\) and approximation error \(\epsilon\) under rigidity-based definition if: For any BQP adversary \(\mathsf{Adv}\), any input state \(\rho_{in}\in D(\mathcal{S}\otimes\mathcal{T})\) prepared by the adversary where \(\mathcal{S}\) is the server-side system and \(\mathcal{T}\) is a system that will not be touched by any party in the protocol, there exists a server-side efficiently-computable isometry \(V^{\mathsf{Adv}}\) and an efficiently-computable operation \(\mathsf{Sim}^{\mathsf{Adv}}\) operated on \(\mathcal{S}\) such that: * (Small passing probability) Either: \[\operatorname{tr}(\Pi_{\mathsf{pass}}(\pi^{\mathsf{Adv}}(\rho_{in})))\leq\delta,\] * or: \[\Pi_{\mathsf{pass}}(V^{\mathsf{Adv}}(\pi^{\mathsf{Adv}}(\rho_{in})))\approx_ {\epsilon}^{ind}\Pi_{\mathsf{pass}}(\rho_{tar}\otimes\mathsf{Sim}^{\mathsf{ Adv}}(\rho_{in}))\] (4) where the distinguisher has classical access to the client side of \(\rho_{tar}\) and quantum access to all the other registers (including \(\mathcal{S}\) and \(\mathcal{T}\)). We note that this definition is slightly different from the (rigidity-based) definitions in existing works [10, 1]. 
In [10, 1] the left hand side of (4) is statistically close to a state in the form of \(\sum_{i}\ket{\varphi_{i}}\bra{\varphi_{i}}\otimes\sigma_{i}\), and then a computational indistinguishability requirement is put on \(\sigma_{i}\) for different \(i\). We argue that our global indistinguishability captures the same intuition and is more general; what's more, a suitable formulation of variants of definitions in [10, 1] should imply this definition. **Simulation-based soundness.** Different from the rigidity-based soundness, the simulation-based soundness does not certify that the server really holds the state; an interpretation is "the passing flag certifies that what the adversarial server gets is no more than holding the state". It's not as strong as the rigidity-based definition on its own, but arguably it's sufficiently strong for many applications and it turns out to have good properties. **Definition 3.4** (Simulation-based soundness for RSPV).: We say a protocol \(\pi\) is an RSPV for target state \(\rho_{tar}\) with soundness error \(\delta\) and approximation error \(\epsilon\) under simulation-based definition if: For any BQP adversary \(\mathsf{Adv}\), any input state \(\rho_{in}\in D(\mathcal{S}\otimes\mathcal{T})\) prepared by the adversary where \(\mathcal{S}\) is the server-side system and \(\mathcal{T}\) is a system that will not be touched by any party in the protocol, there exists an efficiently-computable operation \(\mathsf{Sim}^{\mathsf{Adv}}\) operated on \(\mathcal{S}\) such that: * (Small passing probability) Either: \[\operatorname{tr}(\Pi_{\mathsf{pass}}(\pi^{\mathsf{Adv}}(\rho_{in})))\leq\delta,\] * or: \[\Pi_{\mathsf{pass}}(\pi^{\mathsf{Adv}}(\rho_{in}))\approx_{\epsilon}^{ind}\Pi_{\mathsf{pass}}(\mathsf{Sim}^{\mathsf{Adv}}(\rho_{tar}\otimes\rho_{in}))\] (5) where the distinguisher has classical access to the client side of \(\rho_{tar}\) and quantum access to all the other registers (including \(\mathcal{S}\) and \(\mathcal{T}\)). We note that in both notions, we consider initial states that are possibly correlated or entangled between the server's system \(\mathcal{S}\) and the running environment \(\mathcal{T}\) of cryptographic protocols. This part could be used to model everything else that happens outside this protocol and helps to give RSPV the sequential composability property (and hopefully other types of composability). Finally, we can prove that the simulation-based soundness as defined above is no stronger than the rigidity-based soundness defined above: **Theorem 3.1**.: _Suppose \(\pi\) is an RSPV for target state \(\rho_{tar}\) with soundness error \(\delta\) and approximation error \(\epsilon\) under rigidity-based soundness, it's also an RSPV with the same configurations under simulation-based soundness._ Proof.: By the rigidity-based soundness we get \(V\), \(\mathsf{Sim}\) that satisfy (4). Then taking \[\mathsf{Sim}^{\prime}(\underbrace{\cdot}_{\rho_{tar}}\otimes\underbrace{ \cdot}_{\rho_{in}})=V^{\dagger}(\underbrace{\cdot}_{\rho_{tar}}\otimes \mathsf{Sim}(\underbrace{\cdot}_{\rho_{in}}))\] as the simulator in (5) completes the proof. But the converse is not necessarily true.
Actually, the rigidity-based soundness of RSPV is not even resilient to an additional empty timestep (that is, no party does anything) at the end of the protocol: the adversary could destroy everything in the end to violate the rigidity requirement. For comparison, the simulation-based notion has such resilience: the state destroying operation could be absorbed into the simulator in (5). However, arguably this also means the simulation-based notion has more well-behaved properties. What's more, intuitively the simulation-based version is as useful as the rigidity-based version in common applications of RSPV. When we construct cryptographic protocols, what we are doing is usually to enforce that the malicious parties could not do something. In this sense, in the simulation-based soundness it is certified that what the adversary gets is no more than the target state, which should be at least as secure as really getting the target state. ### Basic Properties of RSPV with Simulation-based Soundness Below we prove several useful properties of simulation-based RSPV. #### 3.2.1 Composition property First we could prove the simulation-based RSPV has a natural sequential composition property. As far as we know, rigidity-based RSPV does not seem to behave well under this property. **Theorem 3.2** (Sequential composition of RSPV).: _Under simulation-based notion, if \(\pi_{1}\) is an RSPV protocol for \(\rho_{tar}\) with soundness \(s\) and approximation error \(\epsilon_{1}\), \(\pi_{2}\) is an RSPV protocol for \(\sigma_{tar}\) with soundness \(s\) and approximation error \(\epsilon_{2}\), the honest behavior of \(\pi_{1}\) and \(\pi_{2}\) are completely independent, then \(\pi_{2}\circ\pi_{1}\) is an RSPV protocol for \(\rho_{tar}\otimes\sigma_{tar}\) with soundness \(s\) and approximation error \(\epsilon_{1}+\epsilon_{2}\)._ Proof.: For an adversary, suppose the initial joint state is \(\rho_{0}\in D(\mathcal{S}\otimes\mathcal{T})\), the output state of \(\pi_{1}\) with \(\rho_{0}\) being the initial state is \(\rho_{1}\in D(\mathcal{S}\otimes\mathcal{T})\), and the final output state of \(\pi_{2}\) with \(\rho_{1}\) being the initial state is \(\rho_{2}\in D(\mathcal{S}\otimes\mathcal{T})\). Then by the simulation-based soundness of \(\pi_{2}\) there exists an efficiently computable simulator \(\mathsf{Sim}_{2}\) working on \(\mathcal{S}\) such that: \[\Pi_{\mathsf{pass}}(\rho_{2})\approx_{\epsilon_{2}}^{ind}\Pi_{\mathsf{pass}}( \mathsf{Sim}_{2}(\sigma_{tar}\otimes\rho_{1})) \tag{6}\] By the simulation-based soundness of \(\pi_{1}\) there exists an efficiently computable simulator \(\mathsf{Sim}_{1}\) such that: \[\Pi_{\mathsf{pass}}(\rho_{1})\approx_{\epsilon_{1}}^{ind}\Pi_{\mathsf{pass}}( \mathsf{Sim}_{1}(\rho_{tar}\otimes\rho_{0})) \tag{7}\] which by Fact 1 implies \[\Pi_{\mathsf{pass}}(\mathsf{Sim}_{2}(\sigma_{tar}\otimes\rho_{1}))\approx_{ \epsilon_{1}}^{ind}\Pi_{\mathsf{pass}}(\mathsf{Sim}_{2}(\sigma_{tar}\otimes \mathsf{Sim}_{1}(\rho_{tar}\otimes\rho_{0}))) \tag{8}\] Combining (6)(8) and choosing \[\mathsf{Sim}(\sigma_{tar}\otimes\rho_{tar}\otimes\cdot):=\mathsf{Sim}_{2}( \sigma_{tar}\otimes\mathsf{Sim}_{1}(\rho_{tar}\otimes\cdot))\] as the final simulator completes the proof. #### 3.2.2 Cut-and-choose soundness amplification procedure Consider an RSPV protocol with soundness \(s\). We want \(s\) to be small. However, very frequently, in some initial construction of RSPV, \(s\) might be not good enough (for example, \(s\) might be very close to 1). 
In this case, a soundness amplification procedure is needed. One commonly used technique for soundness amplification is the _cut-and-choose_. In this technique, to amplify an RSPV protocol \(\pi\) with soundness \(s\), both parties run many repetitions of \(\pi\), and it's required that the server should pass in all the subprotocols. Intuitively if the server wants to pass the overall protocol with high probability, the number of iterations that it could cheat will be relatively small (recall that a cheating server in a single execution of \(\pi\) is caught with probability \(1-s\)). Then a state (and its corresponding classical description) is randomly chosen from these output states. **Protocol 1** (Cut-and-choose for RSPV).: _Given an RSPV protocol \(\pi\) for target state \(\rho_{tar}\) and a repetition number \(L\). The cut-and-choose amplification procedure is defined as below._ 1. _For each_ \(i\in[L]\)_:_ 1. _Run_ \(\pi\)_. Both parties keep the state. The client rejects if_ \(\pi\) _rejects._ 2. _The client randomly chooses_ \(i\in[L]\) _and sends it to the server. Both parties use the output from the_ \(i\)_-th repetition as the output state._ We have the following theorem on this cut-and-choose process. Note this process does not reduce the approximation error but make the soundness better. **Theorem 3.3**.: _If \(\pi\) is an RSPV with soundness \(s\) and approximation error \(\epsilon\), for any \(s^{\prime}<s\), Protocol 1 has soundness \(s^{\prime}\) and approximation error \(\epsilon+\frac{2}{L}\log_{s}(s^{\prime})\)._ Especially, by taking \(L=O(\frac{1}{\epsilon(1-s)})\), we are able to amplify the original protocol to a new protocol with a much smaller soundness value, and approximation error \(O(\epsilon)\). Proof.: Consider an adversary \(\mathsf{Adv}\). Define event \(E_{i}=\)"the adversary passes by the \(i\)-th iteration". Then by the simulation-based soundness property we get, for any \(i\), there exists an efficiently computable simulator \(\mathsf{Sim}_{i}\) such that either \(\Pr[E_{i}|E_{i-1}]<s\), or (5) is satisfied by the end of the \(i\)-th iteration. Suppose this adversary could pass the protocol with overall probability \(\geq s^{\prime}\). Define \(S_{\text{low pass}}\) as the set of \(i\) that satisfies \(\Pr[E_{i}|E_{i-1}]<s\). To pass the overall protocol the adversary needs to pass in each iteration, thus to pass the overall protocol with probability \(\geq s^{\prime}\), there has to be \(|S_{\text{low pass}}|\leq\log_{s}(s^{\prime})\). Denote the initial state as \(\rho_{0}\), and denote the output state by the end of the \(i\)-th round as \(\rho_{i}\). Then for each \(i\in[L]-S_{\text{low pass}}\), \[\Pi_{\mathsf{pass}}(\rho_{i})\approx_{\epsilon}^{ind}\Pi_{\mathsf{pass}}( \mathsf{Sim}_{i}(\rho_{tar}\otimes\rho_{i-1}))\] which implies \[\Pi_{\mathsf{pass}}(\pi_{>i}(\rho_{i}))\approx_{\epsilon}^{ind}\Pi_{\mathsf{ pass}}(\pi_{>i}(\mathsf{Sim}_{i}(\rho_{tar}\otimes\pi_{<i}(\rho_{0})))) \tag{9}\] where \(\pi_{>i}\) is the protocol after round \(i\), and \(\pi_{<i}\) is the protocol before round \(i\). In the second round the client makes a random choice of \(i\in[L]\). We will construct a simulator that simulates the overall state. The simulator \(\mathsf{Sim}\) applied on \((\rho_{tar}\otimes\rho_{0})\) is defined as follows: 1. Sample a random coin \(i\leftarrow[L]\). 2. Run \(\tilde{\pi}_{<i}\) on \(\rho_{0}\) and get \(\tilde{\rho}_{i-1}\). 3. Run \(\mathsf{Sim}_{i}\) on \(\rho_{tar}\otimes\tilde{\rho}_{i-1}\). 4. 
Run \(\tilde{\pi}_{>i}\) on \(\mathsf{Sim}_{i}(\rho_{tar}\otimes\tilde{\rho}_{i-1})\). where \(\tilde{\pi}\) denotes the simulated protocol execution of \(\pi\): instead of interacting with the client, the simulator does all the client-side operations on its own registers and disgards these registers in the end. We prove this simulator achieves what we want. use \(\mathsf{Disgard}[\cdots]\) to denote the operation of disgarding the client-side registers with specific indices, which is in the second step of Protocol 1. Then by (9) we have \[\Pi_{i\in[L]-S_{\text{low pass}}}(\sum_{i\in[L]}\frac{1}{L}\left| i\right>\left<i\right|\otimes\mathsf{Disgard}[[L]-i](\Pi_{\mathsf{pass}}( \pi_{>i}(\rho_{i})))) \tag{10}\] \[\approx_{\epsilon}^{ind}\Pi_{i\in[L]-S_{\text{low pass}}}(\sum_{i \in[L]}\frac{1}{L}\left|i\right>\left<i\right|\otimes\mathsf{Disgard}[[L]-i]( \Pi_{\mathsf{pass}}(\pi_{>i}(\mathsf{Sim}_{i}(\rho_{tar}\otimes\pi_{<i}(\rho_{0 })))))) \tag{11}\] By \(\left|S_{\text{low pass}}\right|\leq\log_{s}(s^{\prime})\) there is \[\Pi_{i\in[L]-S_{\text{low pass}}}(\sum_{i\in[L]}\frac{1}{L}\left| i\right>\left<i\right|\otimes\mathsf{Disgard}[[L]-i](\Pi_{\mathsf{pass}}( \pi_{>i}(\rho_{i})))) \tag{12}\] \[\approx_{\frac{1}{L}\log_{s}(s^{\prime})}\sum_{i\in[L]}\frac{1}{L }\left|i\right>\left<i\right|\otimes\mathsf{Disgard}[[L]-i](\Pi_{\mathsf{pass }}(\pi_{>i}(\rho_{i}))) \tag{13}\] \[\Pi_{i\in[L]-S_{\text{low pass}}}(\sum_{i\in[L]}\frac{1}{L}\left| i\right>\left<i\right|\otimes\mathsf{Disgard}[[L]-i](\Pi_{\mathsf{pass}}( \pi_{>i}(\mathsf{Sim}_{i}(\rho_{tar}\otimes\pi_{<i}(\rho_{0})))))) \tag{14}\] \[\approx_{\frac{1}{L}\log_{s}(s^{\prime})}\sum_{i\in[L]}\frac{1}{L }\left|i\right>\left<i\right|\otimes\mathsf{Disgard}[[L]-i](\Pi_{\mathsf{pass }}(\pi_{>i}(\mathsf{Sim}_{i}(\rho_{tar}\otimes\pi_{<i}(\rho_{0}))))) \tag{15}\] Combining them completes the proof. ## 4 Remote Operator Application with Verifiability In this section we introduce a new notion named _remote operator application with verifiability_ (ROAV), for certifying server's operations. We will give the definition, and show how to use this notion to construct other RSPV protocols and the energy test protocol. ### Definitions of ROAV **Definition 4.1**.: An ROAV for a tuple of operators \((E_{1},E_{2}\cdots E_{D})\) is in the form of \((\rho_{test},\pi_{test},\pi_{comp})\) where: * \(\rho_{test}\) is in the form of (3) Definition 3.1; \(\pi_{test},\pi_{comp}\) are protocols as defined in Notation 2.6; there is a specific register on the server-side, and the honest behavior of both \(\pi_{test}\) and \(\pi_{comp}\) take this register as part of their inputs, and: * the server-side of \(\rho_{test}\) is expected to be in this register in the execution of \(\pi_{test}\); * the input of \(\pi_{comp}\) on this register is not expected to be a specific state; when we describe it (together with the corresponding client-side information) below, we typically use symbol \(\chi\). * \((E_{1},E_{2}\cdots E_{D})\) is a tuple of operators operating on a server-side register and they satisfy \(\sum_{i\in[D]}E_{i}^{\dagger}E_{i}=I\). Here \((E_{1},E_{2}\cdots E_{D})\) are the operators to be verified. 
Similar to Definition 3.1, we define the target operator as the following superoperator on both the client side and the server side, working on the server-side register and producing outputs on both the server-side register and a client-side register:2 Footnote 2: Note that we are not using Notation 2.4 for \(E_{i}(\cdot)E_{i}^{\dagger}\) to be consistent with the usual notations. \[\mathcal{E}(\underbrace{\cdot}_{\text{server}})=\sum_{i\in[D]}\underbrace{ \left|i\right\rangle\left\langle i\right|}_{\text{client}}\otimes\underbrace {E_{i}(\cdot)E_{i}^{\dagger}}_{\text{server}}\] An informal description of our ROAV notion is as follows. In Definition 4.1 (\(\rho_{test},\pi_{test}\)) is used to certify the server's operation. Explicitly, suppose the adversary is \(\mathsf{Adv}\), then protocol \(\pi_{test}^{\mathsf{Adv}}(\rho_{test})\) is used to certify the server's operation. Our goal is to certify that the server has applied the operator \(\mathcal{E}\) on the server-side input state, which means, \(\pi_{comp}^{\mathsf{Adv}}(\cdot)\) is close to \(\mathcal{E}(\cdot)\) where we use \(\cdot\) to denote an arbitrary input state. We further note that \(\mathsf{Adv}\) is the same adversary in both possible running above, and whether the client is running \(\pi_{test}\) or \(\pi_{comp}\) is not revealed in advanced. The completeness and soundness are defined as follows. **Definition 4.2** (Completeness of ROAV).: \((\rho_{test},\pi_{test},\pi_{comp})\) is an ROAV for target operator \(\mathcal{E}\) with completeness error \(\gamma\) if in the honest setting, for any input state \(\chi\), the joint output state of the client and the server is \(\gamma\)-close to \(\mathcal{E}(\chi)\). We simply say the protocol is complete if \(\gamma=\mathsf{negl}(\kappa)\). The soundness is formulated by a simulation-based definition. One additional subtlety is whether the server-side registers of \(\mathcal{E}\) is contained in or could be bigger than the server-side of \(\chi\). We will first formulate the simpler case where the server-side of \(\mathcal{E}\) is contained in \(\chi\); then we formulate the more general case. 
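As a small worked example of the target operator (ours, purely illustrative), one can take the \(E_{i}\) to be the four Bell-basis projectors of Definition 2.1, so that \(\mathcal{E}\) implements a Bell-basis measurement whose outcome \(i\) is recorded on the client-side register while the server keeps the post-measurement state.

```python
import numpy as np

# The Kraus operators E_i: here the four Bell-basis projectors of Definition 2.1.
bell = [np.array([1, 0, 0, 1]) / np.sqrt(2),     # (|00> + |11>)/sqrt(2)
        np.array([1, 0, 0, -1]) / np.sqrt(2),    # (|00> - |11>)/sqrt(2)
        np.array([0, 1, 1, 0]) / np.sqrt(2),     # (|01> + |10>)/sqrt(2)
        np.array([0, 1, -1, 0]) / np.sqrt(2)]    # (|01> - |10>)/sqrt(2)
E = [np.outer(v, v) for v in bell]

# Normalization required by Definition 4.1: sum_i E_i^dagger E_i = I.
assert np.allclose(sum(Ei.T @ Ei for Ei in E), np.eye(4))

def target_operator(rho):
    """E(rho) = sum_i |i><i|_client (x) E_i rho E_i^dagger_server (here a 4x4 -> 16x16 map)."""
    D, dim = len(E), rho.shape[0]
    out = np.zeros((D * dim, D * dim))
    for i, Ei in enumerate(E):
        ket_i = np.zeros((D, 1))
        ket_i[i, 0] = 1.0
        out += np.kron(ket_i @ ket_i.T, Ei @ rho @ Ei.T)
    return out

rho_in = np.diag([1.0, 0.0, 0.0, 0.0])   # two-qubit input state |00><00|
sigma = target_operator(rho_in)
print(np.trace(sigma))                   # = 1: the client-side register i records the Kraus branch
```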
#### 4.1.1 Simpler case: the server-side of \(\mathcal{E}\) is contained in the server-side register of \(\chi\)

**Definition 4.3** (Soundness of ROAV).: \((\rho_{test},\pi_{test},\pi_{comp})\) is an ROAV for target operator \(\mathcal{E}\) with soundness error \(\delta\) and approximation error \(\epsilon\) if: For any BQP adversary \(\mathsf{Adv}\), there exists an efficiently computable simulator \(\mathsf{Sim}^{\mathsf{Adv}}\) such that for any state \(\rho_{in}\in D(\mathcal{S}\otimes\mathcal{T})\) prepared by the adversary where \(\mathcal{S}\) is the server-side system and \(\mathcal{T}\) is a system that will not be touched by any party in the protocol, one of the following two is true: * (Small passing probability) when \(\rho\) is taken to be \(\rho_{test}\): \[\operatorname{tr}(\Pi_{\mathsf{pass}}(\pi_{test}^{\mathsf{Adv}}(\rho_{test}\otimes\rho_{in})))\leq\delta\] * Define \[\left|\Phi\right\rangle=\frac{1}{\sqrt{D}}\sum_{i\in[D]}\underbrace{\left|i\right\rangle}_{\text{client}}\otimes\underbrace{\left|i\right\rangle}_{\text{server}}\] (16) then there is \[\Pi_{\mathsf{pass}}(\pi_{comp}^{\mathsf{Adv}}(\Phi\otimes\rho_{in}))\approx_{\epsilon}^{ind}\Pi_{\mathsf{pass}}(\mathsf{Sim}^{\mathsf{Adv}}(\mathcal{E}(\Phi\otimes\rho_{in})))\] (17) where the distinguisher has classical access to the client side output of \(\mathcal{E}\) and quantum access to all the registers (including the client-side of \(\Phi\), system \(\mathcal{S}\), and \(\mathcal{T}\)).

#### 4.1.2 General definition of ROAV soundness

Here we further generalize the notion to the setting where \(\mathcal{E}\) might operate on a server-side register that is possibly bigger than the server-side of \(\rho_{test}\). The soundness definition is mostly the same as Definition 4.3, with differences on (17) and an additional simulator. **Definition 4.4** (Soundness of ROAV).: \((\rho_{test},\pi_{test},\pi_{comp})\) is an ROAV for target operator \(\mathcal{E}\) with soundness error \(\delta\) and approximation error \(\epsilon\) if: For any BQP adversary \(\mathsf{Adv}\), there exist efficiently computable simulators \(\mathsf{Sim}^{\mathsf{Adv}}\) and \(\mathsf{Sim}_{in}^{\mathsf{Adv}}\) such that for any state \(\rho_{in}\in D(\mathcal{S}\otimes\mathcal{T})\) prepared by the adversary where \(\mathcal{S}\) is the server-side system and \(\mathcal{T}\) is a system that will not be touched by any party in the protocol, one of the following two is true: * (Small passing probability) when \(\rho\) is taken to be \(\rho_{test}\): \[\operatorname{tr}(\Pi_{\mathsf{pass}}(\pi_{test}^{\mathsf{Adv}}(\rho_{test}\otimes\rho_{in})))\leq\delta\] * Define \[|\Phi\rangle=\frac{1}{\sqrt{D}}\sum_{i\in[D]}\underbrace{|i\rangle}_{\text{client}}\otimes\underbrace{|i\rangle}_{\text{server}}\] (18) then there is \[\Pi_{\mathsf{pass}}(\pi_{comp}^{\mathsf{Adv}}(\Phi\otimes\rho_{in}))\approx_{\epsilon}^{ind}\Pi_{\mathsf{pass}}(\mathsf{Sim}^{\mathsf{Adv}}(\mathcal{E}(\Phi\otimes\mathsf{Sim}_{in}^{\mathsf{Adv}}(\rho_{in}))))\] (19) where the distinguisher has classical access to the client-side outputs of \(\mathcal{E}\) and quantum access to all the registers. With this definition, we could handle the case where some server-side states are not known by the client. For example, if the client wants to force the server to apply an operation on a QMA witness state, this definition will be needed.
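Both soundness notions evaluate the computation protocol on one half of the maximally entangled state \(\left|\Phi\right\rangle\) of (16) and (18) rather than on every possible input \(\chi\). This reflects the standard channel-state (Choi) correspondence: the action of an operation on half of \(\left|\Phi\right\rangle\) determines its action on any input, which is why such a state can be collapsed to any \(\chi\) as noted earlier. A minimal numerical illustration (ours, for two single-qubit channels) follows.

```python
import numpy as np

def choi(kraus, d=2):
    """Apply the channel (with the given Kraus operators) to one half of |Phi><Phi|."""
    phi = np.zeros((d * d, 1))
    for i in range(d):
        phi[i * d + i] = 1.0 / np.sqrt(d)
    Phi = phi @ phi.T
    out = np.zeros((d * d, d * d), dtype=complex)
    for K in kraus:
        KK = np.kron(np.eye(d), K)   # the channel acts only on the second half
        out += KK @ Phi @ KK.conj().T
    return out

Z = np.array([[1.0, 0.0], [0.0, -1.0]], dtype=complex)
identity_channel = [np.eye(2, dtype=complex)]
dephasing_channel = [np.sqrt(0.5) * np.eye(2, dtype=complex), np.sqrt(0.5) * Z]

# Distinct channels yield distinct Choi states, so certifying the output on |Phi>
# also pins down the action of the operator on any other input state.
print(np.linalg.norm(choi(identity_channel) - choi(dephasing_channel), "nuc"))
```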
#### 4.1.3 Basic properties We show that, under our definition, ROAV has a relatively well-behaved property which allows us to derive ROAV for larger operators in the form of tensor products from ROAV for simpler operators. **Theorem 4.1**.: _Suppose \((\rho_{test,1},\pi_{test,1},\pi_{comp,1})\) is an ROAV under simpler definition (Definition 4.3) for target operator \(\mathcal{E}_{1}\) soundness error \(\delta\) and approximation error \(\epsilon_{1}\), \((\rho_{test,2},\pi_{test,2},\pi_{comp,2})\) is an ROAV (also under Definition 4.3) for target operator \(\mathcal{E}_{2}\) soundness error \(\delta\) and approximation error \(\epsilon_{2}\), \(\mathcal{E}_{1}\) and \(\mathcal{E}_{2}\) operate on different registers, the server-side dimension of \(\rho_{test,1}\) is \(D_{1}\) and the server-side dimension of \(\rho_{test,2}\) is \(D_{2}\)._ _Consider the protocol \((\rho_{test},\pi_{test},\pi_{comp})\) where:_ * \[\rho_{test}=\frac{1}{2}\underbrace{\left|1\right\rangle\left\langle 1\right|}_{ client\text{ side register of roundtype}}\otimes\rho_{test,1}\otimes\frac{1}{D_{2}} \mathbb{I}+\frac{1}{2}\left|2\right\rangle\left\langle 2\right|\otimes\frac{1}{D_{1}} \mathbb{I}\otimes\rho_{test,2}\] * \(\pi_{test}\) _is defined as follows. The client chooses to execute one of the following depending on the value of roundtype register, without telling the server the value of roundtype:_ * _If roundtype_ \(=1\)_, execute_ \(\pi_{test,1}\)_._ * _If roundtype_ \(=2\)_, execute_ \(\pi_{test,2}\circ\pi_{comp,1}\)__ * \[\pi_{comp}:=\pi_{comp,2}\circ\pi_{comp,1}\] _Then \((\rho_{test},\pi_{test},\pi_{comp})\) is an ROAV for target operator \(\mathcal{E}_{1}\otimes\mathcal{E}_{2}\) with soundness error \(\delta^{\prime}=1-\frac{1}{2}(1-\delta)+\frac{1}{2}\epsilon_{1}\) and approximation error \(\epsilon^{\prime}=\epsilon_{1}+\epsilon_{2}\)._ We note that our composition protocol could only handle the case where the ROAVs are under the simpler definition (Definition 4.3). Proof.: Suppose an adversary \(\mathsf{Adv}\) satisfies \[\operatorname{tr}(\Pi_{\mathsf{pass}}(\pi_{test}^{\mathsf{Adv}}(\rho_{test} \otimes\rho_{in})))>\delta^{\prime} \tag{20}\] Then by the construction of \(\pi_{test}\) considering roundtype \(=1\) there is \[\operatorname{tr}(\Pi_{\mathsf{pass}}(\pi_{test,1}^{\mathsf{Adv}}(\rho_{test, 1}\otimes\frac{1}{D_{2}}\mathbb{I}\otimes\rho_{in})))>\delta\] By the soundness of \(\pi_{test,1}\) there exists efficiently-computable simulator \(\mathsf{Sim}_{1}^{\mathsf{Adv}}\) such that \[\Pi_{\mathsf{pass}}(\pi_{comp,1}^{\mathsf{Adv}}(\Phi_{1}\otimes\frac{1}{D_{2 }}\mathbb{I}\otimes\rho_{in}))\approx_{\epsilon_{1}}^{ind}\Pi_{\mathsf{pass}} (\mathsf{Sim}_{1}^{\mathsf{Adv}}(\mathcal{E}_{1}(\Phi_{1})\otimes\frac{1}{D_{ 2}}\mathbb{I}\otimes\rho_{in}))) \tag{21}\] Now we move to analyze the second ROAV. 
First from (20) considering roundtype \(=2\) we get \[\operatorname{tr}(\Pi_{\mathsf{pass}}(\pi_{test,2}^{\mathsf{Adv}}(\pi_{comp,1}^{\mathsf{Adv}}(\frac{1}{D_{1}}\mathbb{I}\otimes\rho_{test,2}\otimes\rho_ {in}))))>1-2(1-\delta^{\prime}) \tag{22}\] Notice the server-side of \(\Phi_{1}\) in (21) is \(\frac{1}{D_{1}}\mathbb{I}\), we can re-write (22) as \[\operatorname{tr}(\Pi_{\mathsf{pass}}(\pi_{test,2}^{\mathsf{Adv}}(\pi_{comp,1}^{\mathsf{Adv}}(\Phi_{1}\otimes\rho_{test,2}\otimes\rho_{in}))))>1-2(1- \delta^{\prime}) \tag{23}\] Combining it with (21) we get \[\operatorname{tr}(\Pi_{\mathsf{pass}}(\pi_{test,2}^{\mathsf{Adv}}(\mathsf{Sim }_{1}^{\mathsf{Adv}}(\mathcal{E}_{1}(\Phi_{1})\otimes\rho_{test,2}\otimes \rho_{in}))))>1-2(1-\delta^{\prime})-\epsilon_{1}>\delta \tag{24}\] Applying the soundness property of \((\rho_{test,2},\pi_{test,2})\) we know there exists an efficiently computable server-side simulator \(\mathsf{Sim}^{\mathsf{Adv}}\) such that \[\Pi_{\mathsf{pass}}(\pi_{\mathsf{comp},2}^{\mathsf{Adv}}(\mathsf{Sim}_{1}^{ \mathsf{Adv}}(\mathcal{E}_{1}(\Phi_{1})\otimes\Phi_{2}\otimes\rho_{in}))) \approx_{\epsilon_{2}}^{ind}\Pi_{\mathsf{pass}}(\mathsf{Sim}^{\mathsf{Adv}}( \mathcal{E}_{1}(\Phi_{1})\otimes\mathcal{E}_{2}(\Phi_{2})\otimes\rho_{in})) \tag{25}\] Combining (21) and (25) completes the proof. #### 4.1.4 Comparison to the non-local games setting Recall that in the usual setting of self-testing protocols, the client sends questions to two spatially-separated quantum servers. How is this related to our notions? In our notion there is no explicit appearance of two different servers; however, we could think about what the server is able to do if we focus on a specific server in the multi-server setting: the state that it holds is determined by the questions to and answers from the other server, which is unknown to it; to pass the client's checking, it has to apply the specific operation, regardless what the underlying state is. Indeed, one reason that self-testing in the multi-server setting is powerful is it allows us to design protocols that behave as follows: 1. The client chooses to play either Game 1 or Game 2 with the two servers; the distribution of questions seems the same in the view of a specific server. Game 1 is to control the server's operation and Game 2 is to perform some nontrivial cryptographic tasks. Then the soundness proof could go as follows: 1. The ability of passing Game 1 implies the servers' operations are close to some target operations. 2. Each of the servers is not aware of which game they are playing, and each of their operations only depends on the question it receives. Thus the operation closeness properties derived from Game 1 could be used to argue about behaviors of servers in Game 2. 3. Prove that servers with these behaviors could achieve the goal in Game 2. Our notion shares the same intuition with the self-testing in the multi-server setting as described above. In our definition of ROAV, the \((\rho_{test},\pi_{test})\) is used to test the server's behavior, and the soundness allows us to argue about the behavior of the server in \(\pi_{comp}\). ### Building RSPV from ROAV In this subsection we argue that ROAV is potentially useful for building RSPV for state families that are not easy to construct directly. We give a protocol for building RSPV protocols from ROAV and more basic RSPV. As a preparation we formulate a condition on the target state (Equation (3)). 
**Definition 4.5**.: Consider a target state (formulated in (3)): \[\rho_{tar}=\sum_{i\in[D]}p_{i}\underbrace{\ket{i}\bra{i}}_{\text{client}} \otimes\underbrace{\ket{\varphi_{i}}\bra{\varphi_{i}}}_{\text{server}} \tag{26}\] If \(\{\ket{\varphi_{i}}\}_{i\in[D]}\) is an orthogonal normal basis, we say (26) is a target state with respect to an orthogonal normal basis. **Protocol 2**.: _Suppose \((\rho_{test},\pi_{test},\pi_{comp})\) is an ROAV for target operator \(\mathcal{E}\). \(p\) is a constant in \((0,1)\). \(\rho_{0}\) is a target state as formulated in (26). Suppose \(\pi_{0}\) is an RSPV for the target state_ \[p\underbrace{\ket{\mathsf{test}}}_{\text{roundtype}}\bra{\mathsf{test}}\otimes \rho_{test}+(1-p)\ket{\mathsf{comp}}\bra{\mathsf{comp}}\otimes\rho_{0} \tag{27}\] _where the roundtype register is on the client side, and the server-side of \(\rho_{test}\) and \(\rho_{0}\) are of the same dimension._ 1. _Run protocol_ \(\pi_{0}\)_._ 2. _Depending on the value of roundtype:_ * _If roundtype_ \(=\mathsf{test}\)_, run_ \(\pi_{test}\)_._ * _If roundtype_ \(=\mathsf{comp}\)_, run_ \(\pi_{comp}\) _and keeps the output._ **Theorem 4.2**.: _Suppose \((\rho_{test},\pi_{test},\pi_{comp})\) is an ROAV for \(\mathcal{E}\) with soundness error \(\delta\) and approximation error \(\epsilon\). \(\pi_{0}\) is an RSPV for (27) with soundness error \(\delta\) and approximation error \(\epsilon_{0}\). Then Protocol 2 is an RSPV for target state \(\mathcal{E}(\rho_{0})\) with soundness error \(\delta^{\prime}=1-p(1-\delta)+\epsilon_{0}\) and approximation error \(\epsilon^{\prime}=4p+\epsilon+\epsilon_{0}\)._ Thus to use this protocol we need to make \(p\) small to keep the approximation error small, which leads to an RSPV protocol with large soundness error. But this could be solved by taking Protocol 2 to the cut-and-choose amplification protocol in Section 3.2.2. The following fact is useful for proving Theorem 4.2. **Fact 3**.: _Suppose \(\ket{\Phi}=\sum_{i\in[D]}\frac{1}{\sqrt{D}}\ket{i}\otimes\ket{i}\), \(\left(\ket{\varphi_{1}},\ket{\varphi_{2}},\cdots\ket{\varphi_{D}}\right)\) is an orthogonal normal basis, \(U\in\mathbb{C}^{D\times D}\) is defined as \(U\ket{i}=\ket{\varphi_{i}}\), then_ \[(U^{\dagger}\otimes I)\ket{\Phi}=(I\otimes U)\ket{\Phi}\] A corollary is a state in the form of (26) could be prepared by operating on the client side of \(\ket{\Phi}\). Proof for Theorem 4.2.: Suppose the adversary is \(\mathsf{Adv}\), the input state is \(\rho_{in}\) and Protocol 2 passes with probability \(>\delta^{\prime}\). More formally, denoting the step 2 in Protocol 2 as \(\pi_{\mathrm{step2}}\), there is \[\mathrm{tr}(\Pi_{\mathsf{pass}}((\pi_{\mathrm{step2}}\circ\pi_{0})^{\mathsf{ Adv}}(\rho_{in})))>\delta^{\prime}. \tag{28}\] First this implies the \(\pi_{0}\) step passes with probability \(>\delta^{\prime}>\delta\). By the soundness of \(\pi_{0}\) there exists an efficiently computable server-side simulator \(\mathsf{Sim}_{0}^{\mathsf{Adv}}\) such that \[\Pi_{\mathsf{pass}}(\pi_{0}^{\mathsf{Adv}}(\rho_{in}))\approx_{\epsilon_{0} }^{ind}\Pi_{\mathsf{pass}}(\mathsf{Sim}_{0}^{\mathsf{Adv}}((27)\otimes\rho_{ in})) \tag{29}\] This together with (28) implies \[\mathrm{tr}(\Pi_{\mathsf{pass}}(\pi_{\mathrm{step2}}^{\mathsf{Adv}}(\mathsf{ Sim}_{0}^{\mathsf{Adv}}((27)\otimes\rho_{in}))))>\delta^{\prime}-\epsilon_{0}. 
\tag{30}\] which further implies \[\mathrm{tr}(\Pi_{\mathsf{pass}}(\ket{\mathsf{test}}\bra{\mathsf{ test}}\otimes(\pi_{test}^{\mathsf{Adv}}(\mathsf{Sim}_{0}^{\mathsf{Adv}}(\rho_{ test}\otimes\rho_{in})))))>\delta \tag{31}\] From (31), by the ROAV soundness there exists an efficiently computable server-side simulator \(\mathsf{Sim}^{\mathsf{Adv}}\) such that \[\Pi_{\mathsf{pass}}(\pi_{\mathsf{comp}}^{\mathsf{Adv}}(\mathsf{ Sim}_{0}^{\mathsf{Adv}}(\Phi\otimes\rho_{in})))\approx_{\epsilon}^{ind}\Pi_{ \mathsf{pass}}(\mathsf{Sim}^{\mathsf{Adv}}(\mathcal{E}(\Phi)\otimes\rho_{in}))\] Applying Fact 3 we can measure the client-side of \(\Phi\) to collapse it to \(\rho_{0}\): \[\Pi_{\mathsf{pass}}(\pi_{\mathsf{comp}}^{\mathsf{Adv}}(\mathsf{ Sim}_{0}^{\mathsf{Adv}}(\rho_{0}\otimes\rho_{in})))\approx_{\epsilon}^{ind}\Pi_{ \mathsf{pass}}(\mathsf{Sim}^{\mathsf{Adv}}(\mathcal{E}(\rho_{0})\otimes\rho_{ in})) \tag{32}\] Notice that \(\mathsf{Sim}_{0}^{\mathsf{Adv}}(\rho_{0}\otimes\rho_{in})\approx_{2p}\mathsf{ Sim}_{0}^{\mathsf{Adv}}((27)\otimes\rho_{in})\), this together with (29)(32) implies \[\Pi_{\mathsf{pass}}((\pi_{\mathsf{comp}}\circ\pi_{0})^{\mathsf{Adv}}(\rho_{in})) \approx_{2p+\epsilon_{0}+\epsilon}^{ind}\Pi_{\mathsf{pass}}(\mathsf{Sim}^{ \mathsf{Adv}}(\mathcal{E}(\rho_{0})\otimes\rho_{in}))\] Noticing that \(\pi_{\mathsf{comp}}(\cdot)\approx_{2p}\pi_{\mathrm{step2}}(\cdot)\) completes the proof. ### Testing Ground State Energy by ROAV In this subsection we give a Hamiltonian ground energy testing protocol based on RSPV and ROAV. #### 4.3.1 Overview of the protocol As a review, in existing Hamiltonian ground energy testing protocols like [9, 11], the high level structure of protocols is typically as follows: Input: a Hamiltonian \(H=\sum_{i}\gamma_{i}H_{i}\) where \(H_{i}\) is simple. The honest server gets a witness state \(\rho\). 1. Repeat (sequentially or in parallel) the following for polynomial number of times: The client samples a random \(H_{i}\) and uses some protocols to get the measurement results of operator \(H_{i}\) on the server-side state. The server is not able to know which operator the client is measuring. 2. The client calculates the weighted average of the measurement results in the first step and decides whether it's a yes-instance or no-instance. The first step seems to have a form of ROAV, in the sense that the protocols aim at certifying that the server has measured an operator obliviously. Although the witness state is not held by the client, this is still within reach of our definition in Section 4.1.2. A more subtle difference here is that an ROAV protocol is defined for a fixed family of operators, while in the protocol described above the target operators depends on the input Hamiltonian \(H\). We expect that it's typically harder to construct ROAV compared to other primitives like RSPV, thus we want to find a way to reduce this task to simpler primitives. In this subsection we show a protocol that reduces this problem to the following two protocols: 1. An ROAV for a simple, fixed operator family: tensor products of Bell basis measurements. 2. An RSPV for state families that depend on the input Hamiltonian (but still as simple as products of simple states). The idea is to make use of teleportation-based computation [6].3 As a simple example, we consider the single-qubit witness case below. Footnote 3: Several other existing works [11, 5] also use it for different purposes or in different settings. 
As the setup, assume the server holds a single-qubit witness state \(\rho\), and in addition holds one of the four Bell states (see Definition 2.1). Index the qubit register for the witness with wire number \(w=1\), and index the qubit registers for the Bell state with wire numbers \(w=2,3\). Then quantum teleportation says that a Bell-basis measurement on qubits \(1,2\) results in a state of the following form on qubit \(3\): \[\mathsf{X}^{a^{\prime}}\mathsf{Z}^{b^{\prime}}(\rho),\quad a^{\prime},b^{\prime}\text{ depend on the Bell state choice and the measurement outcomes}\] An explicit expression for this process is as follows. Use \(\mathcal{E}\) to denote the Bell-basis measurement and use \(\mathsf{X}^{a}\mathsf{Z}^{b}\left|\varphi\right\rangle\) to denote the different Bell basis states; then \[(\mathcal{E}\otimes\mathsf{I})(\rho\otimes\mathsf{X}^{a}\mathsf{Z}^{b}(\varphi))=\sum_{c,d\in\{0,1\}}\frac{1}{4}\underbrace{\left|c,d\right\rangle}_{\text{measurement outcome}}\left\langle c,d\right|\otimes\mathsf{X}^{a+c}\mathsf{Z}^{b+d}(\rho)\] Then the standard basis measurement outcomes on these three qubits encode the standard basis measurement outcome on \(\rho\). Furthermore, to control the measurement operator applied to \(\rho\), the client only needs control over the state on qubit \(3\), as follows: \[\text{For any gate }g\text{ on the 3rd qubit, }(\mathcal{E}\otimes\mathsf{I})(\rho\otimes g(\mathsf{X}^{a}\mathsf{Z}^{b}(\varphi)))=\sum_{c,d\in\{0,1\}}\frac{1}{4}\left|c,d\right\rangle\left\langle c,d\right|\otimes g(\mathsf{X}^{a+c}\mathsf{Z}^{b+d}(\rho)) \tag{33}\] In particular, if \(g=\mathsf{H}\), then \(g(\mathsf{X}^{a+c}\mathsf{Z}^{b+d}(\rho))=\mathsf{X}^{b+d}\mathsf{Z}^{a+c}(g(\rho))\). This relation allows us to reduce the task of measuring operator \(H_{i}\) on \(\rho\) to standard basis measurements on the right-hand side of (33). (A numerical sanity check of (33) is given at the end of this section.) What we need for translating (33) into a protocol is an ROAV for \(\mathcal{E}\) and an RSPV for the states that will be used (including the test state of the ROAV, and \(g(\mathsf{X}^{a}\mathsf{Z}^{b}(\varphi))\) for \(g\in\{\mathsf{I},\mathsf{H}\}\)). Below we formulate the protocol. #### 4.3.2 Protocol formulation To formulate the protocol, we define several notations as preparation. **Notation 4.1** (Notation preparation for Protocol 3).: Consider qubit registers indexed by \((1,i),(2,i),(3,i)\), \(i\in[n]\). Define \[\left|\varphi\right\rangle=\bigotimes_{i\in[n]}\frac{1}{\sqrt{2}}(\underbrace{\left|0\right\rangle}_{(2,i)}\underbrace{\left|0\right\rangle}_{(3,i)}+\underbrace{\left|1\right\rangle}_{(2,i)}\underbrace{\left|1\right\rangle}_{(3,i)}) \tag{34}\] Define the index set \(I_{2}=\{(2,i)|i\in[n]\}\), and define \(I_{3}\) similarly. We say \(\vec{a}\in\{0,1\}^{n}\) is indexed by \(I_{2}\) when its coordinates are denoted as \(a_{(2,i)}\) for \((2,i)\in I_{2}\). Define \(\mathsf{X}^{\vec{a}}\) as the operation that applies \(\mathsf{X}\) on qubit \((2,i)\) if \(a_{(2,i)}=1\). Define the notations \(\mathsf{Z}^{\vec{b}}\) and \(\mathsf{H}^{\vec{v}}\) similarly.
Define the following state in the form of (3), which is the family of all the possible four Bell states on wire \(2,3\) for each \(i\): \[\left|\phi\right\rangle\left\langle\phi\right|=\frac{1}{2^{2n}}\sum_{\vec{a}, \vec{b}\in\{0,1\}^{n}\text{ indexed by }I_{2}}\underbrace{\left|\vec{a},\vec{b}\right\rangle \left\langle\vec{a},\vec{b}\right|}_{\text{client-side}}\otimes\mathsf{X}^{ \vec{a}}\mathsf{Z}^{\vec{b}}(\varphi)\] Define operation \[\mathcal{E}=\text{``For each $i$, measure }(1,i),(2,i)\text{ on the Bell basis and measure }(3,i)\text{ on the standard basis}\] \[\text{and report the result to the client.''}\] Below we introduce more notations that deal with the Hamiltonian and its repetition. **Notation 4.2** (More notation preparation for Protocol 3).: For an XZ-local-Hamiltonian as defined in (4)(2) \[H=\sum_{j\in[m]}\gamma_{j}H_{j}, \tag{35}\] denote \(\mathsf{vecx}(H_{j})\) as an \(n\)-dimension vector indexed by \(I_{3}\) that: * If the observable on the \(i\)-th qubit in \(H_{j}\) is \(\sigma_{X}\), the \(i\)-th coordinate of \(\mathsf{vecx}(H_{j})\) is \(1\); * If the observable on the \(i\)-th qubit in \(H_{j}\) is \(\sigma_{Z}\) or \(\mathsf{l}\), the \(i\)-th coordinate of \(\mathsf{vecx}(H_{j})\) is \(0\). That is, \(\mathsf{vecx}(H_{j})\) indicates whether the corresponding observable in \(H_{j}\) is the \(\sigma_{\mathsf{X}}\) observable. Similarly define \(\mathsf{vecz}(H_{j})\) as the indicator for whether the corresponding observable in \(H_{j}\) is the \(\sigma_{\mathsf{Z}}\) observable. An example is as follows: \(\mathsf{H}^{\mathsf{vecx}(H_{i})}\) flips all the \(\sigma_{X}\) operations in \(H_{i}\) to \(\sigma_{Z}\) operations and keeps the others unchanged. Then define state \[\rho_{comp}=\frac{1}{m}\sum_{j\in[m]}\underbrace{|j\rangle\, \langle j|}_{\text{client-side}}\otimes\mathsf{H}^{\mathsf{vecx}(H_{j})}(\phi) \tag{36}\] That is, the client randomly samples an \(H_{j}\) from (35) and flips the \(\sigma_{\mathsf{X}}\) operators to \(\sigma_{\mathsf{Z}}\) operators. Now consider the \(K\)-fold tensor product for the notations above and in Notation 4.1. The qubit registers are indexed by \((w,i,k)\), \(w\in\{1,2,3\}\), \(i\in[n]\), \(k\in[K]\). Then \(\ket{\varphi}^{\otimes K}\) is defined as the \(k\)-fold tensor product of \(\ket{\varphi}\) arranged on registers \((2,i,k),(3,i,k)\). Similarly \(\ket{\phi}^{\otimes K}\) is defined as the \(k\)-fold tensor product of \(\ket{\phi}\), where the client-side registers are denoted by \(\vec{a}_{k},\vec{b}_{k}\), \(k\in[K]\). \(\mathcal{E}^{\otimes K}\) is similarly defined as applying \(\mathcal{E}\) for each \(k\in[K]\). Then similarly \(\rho_{comp}^{\otimes K}\) is defined as \[\sum_{\vec{j}=(j_{1},j_{2}\cdots j_{K})\in[m]^{K}}\ket{\vec{j}} \bra{\vec{j}}\otimes\mathsf{H}^{\mathsf{vecx}(H_{j_{1}})}(\phi)\otimes \mathsf{H}^{\mathsf{vecx}(H_{j_{2}})}(\phi)\otimes\cdots\otimes\mathsf{H}^{ \mathsf{vecx}(H_{j_{K}})}(\phi)\] Then the honest behavior that we want to design a protocol for could be described as \[\mathcal{E}^{\otimes K}(\rho^{\otimes K}\otimes\rho_{comp}^{\otimes K}) \tag{37}\] where \(\rho\) is the ground state of the Hamiltonian. Then we define a series of notations for arguing about the energy corresponding to (37). For each \(k\in[K]\), introduce variable \(\vec{c}_{k}\in\{0,1\}^{n},\vec{d}_{k}\in\{0,1\}^{n},\vec{e}_{k}\in\{0,1\}^{n}\) (which are \(3K\)\(n\)-dimensional vectors), and they correspond to the measurement outcome of (37) on qubits in the \(k\)-th fold indexed by \(I_{1},I_{2},I_{3}\). 
Then define \[\text{valtemp}^{H}(\vec{a}_{k},\vec{b}_{k},\vec{c}_{k},\vec{d}_{k},\vec{e}_{k},j_{k})=((\vec{a}_{k}+\vec{c}_{k}+\vec{e}_{k})\cdot\mathsf{vecz}(H_{j_{k}})+(\vec{b}_{k}+\vec{d}_{k}+\vec{e}_{k})\cdot\mathsf{vecx}(H_{j_{k}}))\mod 2\] \[\text{val}^{H}(\vec{a}_{k},\vec{b}_{k},\vec{c}_{k},\vec{d}_{k},\vec{e}_{k},j_{k})=m\cdot\gamma_{j_{k}}(-1)^{\text{valtemp}^{H}(\vec{a}_{k},\vec{b}_{k},\vec{c}_{k},\vec{d}_{k},\vec{e}_{k},j_{k})}\] Use \(T_{k}\) to denote the tuple \((\vec{a}_{k},\vec{b}_{k},\vec{c}_{k},\vec{d}_{k},\vec{e}_{k},j_{k})\) and use \(T\) to denote the tuple \((T_{k})_{k\in[K]}\). Define \[\text{val}^{H}(T)=\frac{1}{K}\sum_{k\in[K]}\text{val}^{H}(T_{k}) \tag{38}\] **Protocol 3**.: _Input: an XZ 5-local Hamiltonian \(H=\sum_{j\in[m]}\gamma_{j}H_{j}\), \(a,b\), \(b-a\geq 1/\mathsf{poly}(n)\), as in Definition 2.3._ _Take \(K=100\kappa^{2}\frac{1}{(b-a)^{2}}\). Consider qubit registers indexed by \((w,i,k)\), \(w\in\{1,2,3\},i\in[n],k\in[K]\). Use the notations in Notations 4.1 and 4.2. Suppose \((\rho_{test},\pi_{test},\pi_{comp})\) is an ROAV for \(\mathcal{E}^{\otimes K}\). Suppose \(\pi_{0}\) is an RSPV for the following target state:_ \[\frac{1}{2}\underbrace{|\mathsf{operatortest}\rangle}_{\text{roundtype}}\langle\mathsf{operatortest}|\otimes\rho_{test}+\frac{1}{2}\,|\mathsf{energytest}\rangle\,\langle\mathsf{energytest}|\otimes(\rho_{comp})^{\otimes K} \tag{39}\] _Note that the roundtype information is kept on the client side and hidden from the server._ 1. _Execute protocol_ \(\pi_{0}\)_._ 2. _Depending on the value of roundtype:_ * _If_ \(\text{roundtype}=\mathsf{operatortest}\)_, the client executes_ \(\pi_{test}\) _with the server._ * _If_ \(\text{roundtype}=\mathsf{energytest}\)_, the client executes_ \(\pi_{comp}\) _with the server. Denote the collected client-side information by_ \(T\) _as in Notation_ 4.2_. Accept if_ \(\text{val}^{H}(T)\leq\frac{a+b}{2}\) _and reject otherwise._ **Theorem 4.3**.: _Suppose \((\rho_{test},\pi_{test},\pi_{comp})\) and \(\pi_{0}\) are complete; then Protocol 3 is complete._ Proof.: By completeness, the first step of Protocol 3 succeeds with probability \(1-\mathsf{negl}(\kappa)\), the \(\mathsf{operatortest}\) round succeeds with probability \(1-\mathsf{negl}(\kappa)\), and the energy test implements (37) up to only a negligible error. For a yes-instance, for each \(k\in[K]\), by the promise there is \(\mathbb{E}[\text{val}^{H}(T_{k})]\leq a\). Thus for \(K=100\kappa^{2}\cdot\frac{1}{(b-a)^{2}}\), by Chernoff's bound there is \(\Pr[\text{val}^{H}(T)>\frac{a+b}{2}]\leq 2^{-\kappa}\), so the energy test accepts with overwhelming probability. **Theorem 4.4**.: _Suppose \((\rho_{\mathsf{test}},\pi_{\mathsf{test}},\pi_{\mathsf{comp}})\) is an ROAV with soundness error \(\delta\) and approximation error \(\epsilon\), and \(\pi_{0}\) is an RSPV with soundness error \(\delta\) and approximation error \(\epsilon_{0}\). Then Protocol 3 has soundness error \(\delta^{\prime}=\min\{1-\frac{1}{2}(1-\delta)+2\epsilon_{0},\epsilon+\epsilon_{0}+\frac{1}{2}+\mathsf{negl}(\kappa)\}\)._ By substituting suitable parameters it is possible to make the soundness error smaller than the passing probability in the honest (complete) case. Proof.: Suppose \(H\) is a no-instance, the adversary is \(\mathsf{Adv}\), and the protocol passes with probability \(>\delta^{\prime}\). That is,4 Footnote 4: We omit the initial state since it is not important here and could be \(|0\rangle\).
\[\operatorname{tr}(\Pi_{\mathsf{pass}}((\pi_{\text{step2}}\circ\pi_{0})^{\mathsf{Adv}}))>\delta^{\prime} \tag{40}\] This implies \(\operatorname{tr}(\Pi_{\mathsf{pass}}(\pi_{0}^{\mathsf{Adv}}))>\delta\). By the soundness property of \(\pi_{0}\) there exists an efficiently computable server-side simulator \(\mathsf{Sim}_{0}^{\mathsf{Adv}}\) such that \[\Pi_{\mathsf{pass}}(\pi_{0}^{\mathsf{Adv}})\approx_{\epsilon_{0}}^{ind}\Pi_{\mathsf{pass}}(\mathsf{Sim}_{0}^{\mathsf{Adv}}(\text{equation }(39))) \tag{41}\] Projecting (41) onto \(\mathrm{roundtype}=\mathsf{operatortest}\) and combining it with (40), the operator test passes with probability greater than \(\delta\); applying the ROAV soundness of \((\rho_{test},\pi_{test},\pi_{comp})\) to the adversary \(\mathsf{Adv}\circ\mathsf{Sim}_{0}^{\mathsf{Adv}}\) and arguing as in the proof of Theorem 4.2, we obtain an efficiently computable simulator \(\mathsf{Sim}^{\mathsf{Adv}}\) and a server-side state \(\rho\) produced by the input simulator of Definition 4.4. This implies \[\Pi_{\mathsf{pass}}(\pi_{\mathsf{comp}}^{\mathsf{Adv}\circ\mathsf{Sim}_{0}^{\mathsf{Adv}}}(\rho_{\mathsf{comp}}^{\otimes K}))\approx_{\epsilon}^{ind}\Pi_{\mathsf{pass}}(\underbrace{|\mathsf{energytest}\rangle\langle\mathsf{energytest}|}_{\mathrm{roundtype}}\otimes(\mathsf{Sim}^{\mathsf{Adv}}(\mathcal{E}(\rho\otimes\rho_{\mathsf{comp}}^{\otimes K})))) \tag{44}\] Projecting (41) onto \(\mathrm{roundtype}=\mathsf{energytest}\) we get \[\operatorname{tr}(\Pi_{\mathsf{pass}}((\pi_{\mathrm{step2}}\circ\pi_{0})^{\mathsf{Adv}}))\leq 1-\frac{1}{2}\Big{(}1-\operatorname{tr}(\Pi_{\mathsf{pass}}(|\mathsf{energytest}\rangle\langle\mathsf{energytest}|\otimes\pi_{\mathsf{comp}}^{\mathsf{Adv}}(\mathsf{Sim}_{0}^{\mathsf{Adv}}(\rho_{\mathsf{comp}}^{\otimes K}))))\Big{)}+\epsilon_{0}\] Combining it with the left hand side of (44) we get \[\delta^{\prime}\leq\frac{1}{2}\operatorname{tr}(\Pi_{\mathsf{pass}}(|\mathsf{energytest}\rangle\langle\mathsf{energytest}|\otimes(\mathsf{Sim}^{\mathsf{Adv}}(\mathcal{E}(\rho\otimes\rho_{\mathsf{comp}}^{\otimes K})))))+\epsilon_{0}+\epsilon+\frac{1}{2} \tag{45}\] Now we analyze the energy test passing probability in (45). By the definition of \(\mathcal{E}\) and \(\rho_{\mathsf{comp}}^{\otimes K}\), \(\mathcal{E}(\rho\otimes\rho_{\mathsf{comp}}^{\otimes K})\) corresponds to applying the energy test described at the beginning of Section 4.3.1 and checking whether \(\operatorname{val}^{H}(T)<\frac{a+b}{2}\) holds (see (38) for the definition of \(\operatorname{val}^{H}(T)\)). By the fact that the ground energy of \(H\) is \(\geq b\), we know that for each \(k\in[K]\), conditioned on any possible outcome of \(T_{1},\cdots,T_{k-1}\), there is \(\mathbb{E}[\operatorname{val}^{H}(T_{k})]\geq b\); then we can apply Chernoff's bound and get \[\operatorname{tr}(\Pi_{\operatorname{val}^{H}(T)<\frac{a+b}{2}}(\mathcal{E}(\rho\otimes\rho_{\mathsf{comp}}^{\otimes K})))<2^{-\kappa}\] Substituting this into (45) completes the proof.
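To accompany the teleportation identity (33) used in the energy test above, here is a small numerical sanity check; the single-qubit witness state and the Bell-basis convention are illustrative assumptions. It verifies that each Bell outcome on qubits 1,2 occurs with probability \(1/4\) and that qubit 3 then carries \(\rho\) up to a Pauli correction determined by the outcome.

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.diag([1.0, -1.0]).astype(complex)

def dm(v):
    """Density matrix |v><v| of a pure state vector."""
    v = np.asarray(v, dtype=complex)
    return np.outer(v, v.conj())

def pauli(x, z):
    return np.linalg.matrix_power(X, x) @ np.linalg.matrix_power(Z, z)

rho = dm([0.6, 0.8j])                      # arbitrary single-qubit witness state on wire 1

# Bell pair on wires 2,3 (the a = b = 0 case; X^a Z^b on wire 2 only shifts the correction).
phi_plus = (np.kron([1, 0], [1, 0]) + np.kron([0, 1], [0, 1])) / np.sqrt(2)
state = np.kron(rho, dm(phi_plus))         # wires ordered (1, 2, 3)

for c in (0, 1):
    for d in (0, 1):
        beta = np.kron(pauli(c, d), I2) @ phi_plus      # one convention for |beta_{cd}> on wires 1,2
        proj = np.kron(dm(beta), I2)                    # project wires 1,2 onto |beta_{cd}>
        post = proj @ state @ proj.conj().T
        p = np.trace(post).real
        assert np.isclose(p, 0.25)                      # every Bell outcome has probability 1/4
        out3 = np.trace(post.reshape(4, 2, 4, 2), axis1=0, axis2=2) / p   # wire-3 state
        # Exactly one Pauli correction X^x Z^z maps the wire-3 state back to rho:
        matches = [(x, z) for x in (0, 1) for z in (0, 1)
                   if np.allclose(pauli(x, z) @ out3 @ pauli(x, z).conj().T, rho)]
        print(f"outcome (c, d) = ({c}, {d}): correction (x, z) = {matches}")
```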
2310.02529
MIDDAG: Where Does Our News Go? Investigating Information Diffusion via Community-Level Information Pathways
We present MIDDAG, an intuitive, interactive system that visualizes the information propagation paths on social media triggered by COVID-19-related news articles accompanied by comprehensive insights, including user/community susceptibility level, as well as events and popular opinions raised by the crowd while propagating the information. Besides discovering information flow patterns among users, we construct communities among users and develop the propagation forecasting capability, enabling tracing and understanding of how information is disseminated at a higher level.
Mingyu Derek Ma, Alexander K. Taylor, Nuan Wen, Yanchen Liu, Po-Nien Kung, Wenna Qin, Shicheng Wen, Azure Zhou, Diyi Yang, Xuezhe Ma, Nanyun Peng, Wei Wang
2023-10-04T02:08:11Z
http://arxiv.org/abs/2310.02529v2
MIDDAG: Where Does Our News Go? Investigating Information Diffusion via Community-Level Information Pathways ###### Abstract We present MIDDAG, an intuitive, interactive system that visualizes the information propagation paths on social media triggered by COVID-19-related news articles accompanied by comprehensive insights including user/community susceptibility level, as well as events and popular opinions raised by the crowd while propagating the information. Besides discovering information flow patterns among users, we construct communities among users and develop the propagation forecasting capability, enabling tracing and understanding of how information is disseminated at a higher level. 1University of California, Los Angeles 2University of Southern California 3Stanford University 4Harvard University {ma, ataylor2, ponienkung, wioletpeng, weiwang}@cs.ucla.edu {nuanwen, wenshich, xuezhema}@usc.edu [email protected] {wennaqin, amysz, diyiy}@stanford.edu ## 1 Introduction The current information propagation ecosystem consisting of traditional news outlets and social media platforms allows for near real-time transmission of information concerning current events. The complexity of the multi-modal data contained in individual information pathway (IP) necessitates the development of novel approaches to analyze how information originally reported by traditional news outlets will spread across social media platforms, as well as the receptivity of the users engaging with this information. While there are many existing methods performing link prediction on social media networks [1, 1, 1, 2, 13], recent works [14, 15] have investigated the prediction of links in information pathways. These works mainly focus on the pathway prediction task itself, it is hard to find works that visualize the information pathways and their forecasting results intuitively with comprehensive insights on motivating forces of information propagation such as user/community characteristics and discussion content. We present a comprehensive visualization system incorporating state-of-the-art information analysis components to provide a clear demonstration of information propagation details and patterns concerning COVID-19-related news within and across the audiences of prominent news organizations from countries highly impacted by COVID-19 across social media platforms. The first dataset we consider consists of all COVID-19-related tweets extracted from the Twitter API. Using tweets containing links to news articles authored by selected news organizations, we construct communities of users based on their engagement patterns with the selected news organizations, which enables us to construct and visualize the information pathways at the community level. We use these pathways to train our novel information pathway prediction model and present the predicted information propagation patterns for a held-out evaluation set. We also present a novel machine-learning-based approach trained on the collected tweets at the user level to predict and display the susceptibility level of users and communities, which provides further insight into each pathway. In addition to the susceptibility score, we perform event extraction on each tweet in an information pathway, which shows how and if the core ideas of the original article spread. 
To diversify the source of our data, we also use the Reddit data available for each news organization and apply existing techniques to extract popular opinions from the available posts in order to provide additional characterization to the information pathways once visualized. Finally, we pack the IP analysis capabilities in a system for the user to select, visualize, interact and discover information pathways at both user and community levels.1 Footnote 1: A demo video is available at [https://info-pathways.github.io](https://info-pathways.github.io) ## 2 Data and System Design We use COVID-19-related social media data on Twitter and Reddit to enable a broad information pathway investigation. We utilize an updated version of the Twitter news dataset presented in prior work [14], including all tweets from May 2020 to April 2021 containing COVID-19 keywords. Due to the large dataset size, we focus on the subset containing tweets from May 15 to May 30, 2020, which contained the largest number of tweets among all periods, including 640 million tweets from 5.3 million distinct users. We consider news articles from the selected organizations as the start of pathways. To select the news organizations, we first identified the top 15 countries by COVID-19 cases according to the WHO for the time period starting a month before the span of our data to its endpoint. We then used the Digital News Report from the Reuters Institute to determine the most prominent online news organizations for each country, and supplemented with additional sources when the report did not cover a given country or required further justification. We retrieve all user-level information pathways to include source tweets mentioning a news URL and its sub sequent retweets and replies. Besides Twitter, we explored discussions on Reddit with hyperlinked COVID-19 news articles and within the same period as the Twitter subset. In total, we collected 5,410 posts on 4,578 unique articles across 649 subreddit communities. We design a system to demonstrate information pathways and their related properties. The user first types keywords to search and then picks one or more news articles that serve as the starting points of IPs. The user-level visualization includes reply/repost propagation among users, susceptibility score and community assignment. Community-level visualization includes a directed graph indicating IPs among communities, community aggregated susceptibility level and community key opinions. The event panel shows the event trigger, type and associated arguments in users' discussions. ## 3 Components We first construct communities among social media users as the foundation for aggregating the user-level IP to the community granularity (SS3.1). We develop ML models to forecast the potential information flow (SS3.2), and predict the susceptibility levels of users and communities (SS3.3). We further demonstrate events and leading opinions in users' discussions while propagating information (SS3.4). ### Community Construction We assign social media users to communities centered around specific news organizations. The community aggregation method follows prior work that measures the influence of nodes in social networks Romero et al. (2010). This community aggregation method measures the influence that a given node exerts on its neighborhood, as well as the likelihood of how passive or receptive to propagation from its neighborhood a given node is. 
For each user that interacts with the given community, we retrieve the number of URLs it has posted and compile its one-hop neighborhood based on the source tweets from the community in question it has both posted and interacted with; these individual user graphs are then united to construct a directed community graph. We then assign weights to each user where \(Q_{i}\) is the number of URLs that \(i\) mentioned and \(S_{ij}\) is the number of URLs mentioned by \(i\) and retweeted by \(j\) as illustrated in the following equations. Finally, we perform the Influence-Passivity algorithm Romero et al. (2010) until the Influence (\(I_{i}\)) and Passivity (\(P_{i}\)) Scores have converged. \[w_{e}=\frac{S_{ij}}{Q_{ij}},u_{ij}=\frac{w_{i,j}}{\sum\limits_{k:(k,j)\in E}w_ {kj}},v_{ji}=\frac{1-w_{ji}}{\sum\limits_{k:(j,k)\in E}(1-w_{jk})}\] \[I_{i}\leftarrow\sum\limits_{j:(i,j)\in E}u_{ij}P_{j},P_{i}\leftarrow\sum \limits_{j:(j,i)\in E}v_{ji}I_{j}\] ### Pathway Prediction We use an improved version of the state-of-the-art IP prediction model presented in prior work Taylor et al. (2023). The community-level pathways of time periods other than the 15-day selected duration are separated into distinct time windows for training and evaluation, and a graph neural network model is trained to perform link prediction. The evaluation result shows the IP prediction model yields 86.83% AUC for the link prediction task. We then apply the trained model to the unseen IP graphs to conduct autoregressive prediction to predict the full information propagation traces. ### Susceptibility Prediction A user's susceptibility reflects their reaction to a piece of misinformation. Since collecting users' susceptibility directly is hard, we develop an ML model to predict susceptibility by analyzing its influence on users' repost behavior (\(P_{repost}\)). When a user (\(u\)) perceives a piece of content (\(c\)), we assume that the more susceptible the user, the more likely the user would repost the misinformation content. The probability of reposting is calculated by \[P_{repost}=Sigmoid(E(u)*E(c)*Sus(E(u),E(c)))\] where \(u\) is represented by user history posts, \(E\) is the text embedding model based on RoBERTa-large Reimers and Gurevych (2019); Liu et al. (2019), and the susceptibility score is produced by the \(Sus\) - an MLP with the user and content embeddings as input. We perform contrastive learning to tune the model to distinguish reposted (\(u,t\)) pairs from non-repost ones. The susceptibility score ranges from -100% to 100% where 100% means the user is most susceptible to misinformation. We train the ML model with misinformation tweets in the ANTi-Vax Hayawi et al. (2022) and CoAID Cui and Lee (2020) datasets and we retrieve corresponding user profiles through the Twitter API. The model produces 86.28% F1 score for retweet behaviour prediction, indirectly indicating its reliable performance for susceptibility modeling. To obtain an aggregated susceptibility score for a community, we calculate the mean of individual susceptibility scores for all users in the community. ### Event Extraction and Community Opinion Event extraction (EE) aims to identify triggers and arguments for events mentioned in the text, which enables us to understand users' discussion at a scale Ma et al. (2021); Ma et al. (2022); Ma et al. (2023). 
We first define a new COVID-19 related event ontology including 9 event types (\(i.e.\) end organization, social distancing, lock down, quarantine, vaccinate, die, fine, transport and extradite) and their corresponding arguments. Since there are no existing EE annotations on COVID-related events, we perform instruction tuning to enable a T5-large model Raffel et al. (2023) to generalize to newly defined event types following UIE Lu et al. (2022) by pre-training it with text-structure pairs and further fine-tuning it on 13 datasets of entity/relation/event/sentiment extraction tasks encoded with a unified language. To provide detailed information about discussion among users in a community when it propagates certain information, We identify the most liked post among downstream posts in an information flow as the representative opinion of the community.
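To make the Influence-Passivity iteration of Section 3.1 concrete, here is a small illustrative sketch on a toy graph. The edge weight \(w_{ij}=S_{ij}/Q_{i}\) and the per-iteration normalization are our reading of the update rules (the system follows Romero et al. 2010); the graph and its acceptance rates are hypothetical.

```python
import numpy as np

def influence_passivity(w, iters=100):
    """HITS-style Influence-Passivity iteration on a weighted digraph.

    w[i, j] is the acceptance rate of edge (i, j) -- the fraction of i's URLs
    that user j retweeted (assumed here to be S_ij / Q_i); w[i, j] = 0 means no edge.
    Per-iteration normalization is an assumption borrowed from HITS-style updates.
    """
    n = w.shape[0]
    mask = (w > 0).astype(float)                                   # edge indicator
    col = w.sum(axis=0)                                            # in-weight of each node j
    u = np.divide(w, col, out=np.zeros_like(w), where=col > 0)     # u_ij
    rej = (1.0 - w) * mask                                         # rejection rates on edges
    row = rej.sum(axis=1, keepdims=True)
    v = np.divide(rej, row, out=np.zeros_like(w), where=row > 0)   # v_ji stored at [j, i]
    I, P = np.ones(n), np.ones(n)
    for _ in range(iters):
        I = u @ P                                                  # I_i = sum_j u_ij P_j
        P = v.T @ I                                                # P_i = sum_j v_ji I_j
        I, P = I / I.sum(), P / P.sum()
    return I, P

# Toy 3-user example with hypothetical acceptance rates.
w = np.array([[0.0, 0.8, 0.5],
              [0.1, 0.0, 0.0],
              [0.0, 0.3, 0.0]])
I, P = influence_passivity(w)
print("influence:", np.round(I, 3), "passivity:", np.round(P, 3))
```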
2306.08084
Sensitivity analysis for studies transporting prediction models
We consider the estimation of measures of model performance in a target population when covariate and outcome data are available on a sample from some source population and covariate data, but not outcome data, are available on a simple random sample from the target population. When outcome data are not available from the target population, identification of measures of model performance is possible under an untestable assumption that the outcome and population (source or target population) are independent conditional on covariates. In practice, this assumption is uncertain and, in some cases, controversial. Therefore, sensitivity analysis may be useful for examining the impact of assumption violations on inferences about model performance. Here, we propose an exponential tilt sensitivity analysis model and develop statistical methods to determine how sensitive measures of model performance are to violations of the assumption of conditional independence between outcome and population. We provide identification results and estimators for the risk in the target population, examine the large-sample properties of the estimators, and apply the estimators to data on individuals with stable ischemic heart disease.
Jon A. Steingrimsson, Sarah E. Robertson, Issa J. Dahabreh
2023-06-13T18:58:15Z
http://arxiv.org/abs/2306.08084v1
# Sensitivity analysis for studies transporting prediction models ###### Abstract We consider the estimation of measures of model performance in a target population when covariate and outcome data are available on a sample from some source population and covariate data, but not outcome data, are available on a simple random sample from the target population. When outcome data are not available from the target population, identification of measures of model performance is possible under an untestable assumption that the outcome and population (source or target population) are independent conditional on covariates. In practice, this assumption is uncertain and, in some cases, controversial. Therefore, sensitivity analysis may be useful for examining the impact of assumption violations on inferences about model performance. Here, we propose an exponential tilt sensitivity analysis model and develop statistical methods to determine how sensitive measures of model performance are to violations of the assumption of conditional independence between outcome and population. We provide identification results and estimators for the risk in the target population, examine the large-sample properties of the estimators, and apply the estimators to data on individuals with stable ischemic heart disease. ## 1 Introduction Users of prediction models are typically interested in obtaining model-derived predictions in a target population of substantive interest. However, the data used for model building and evaluation of model performance (i.e., the source data) are often not a random sample from the target population (e.g., due to convenience sampling or the two data sources coming from different geographic regions or healthcare systems). When prediction error modifiers [1], that is, variables that affect model performance, have a different distribution between the source population and the target population, measures of model performance calculated using data from the source population are not representative of model performance in the target population. It is possible to estimate model performance in the target population under the untestable assumption that the outcome is independent of the population (source or target) given the observed covariates [1, 2, 3, 4] using the source data and covariate data from the target population, even when outcome information is unavailable in the target population. The assumption that the outcome is independent of the population, however, is untestable using the observed data and will often be uncertain, or even controversial, in practical applications. Therefore, it is useful to conduct sensitivity analyses to determine how sensitive conclusions are to violations of the conditional transportability condition. There is a large literature on sensitivity analysis for missing data [5, 6, 7, 8, 9, 10] and unmeasured confounding in observational studies [11, 12, 13, 14, 15]. In addition, a smaller but growing literature considers methods for sensitivity analyses when extending (i.e., generalizing or transporting) inferences about treatment effects from a randomized trial to a target population [16, 17, 18, 19, 20]. To our knowledge there is no prior work developing sensitivity analysis methods for evaluating the performance of prediction models in a target population.
This task involves different target parameters and requires different identifiability results and estimation procedures than transportability of measures of model performance. Here, we develop global sensitivity analysis methods for loss-based measures of model performance in the target population using an exponential tilt model [7, 8, 9, 15]. Global sensitivity analysis allows for evaluation of how big the violation of a core assumption needs to be in order for con clusions to change [21]. We provide identification results and derive estimators and large sample properties of the estimators for both "nested" and "non-nested" sampling designs [22]. We show how the range of the sensitivity parameter can be selected by hypothesizing about a reasonable range of prevalence rate of the outcome in the target population. We illustrate the methods using data on individuals with stable ischemic heart disease. ## 2 Goals of the analysis, study design, and data structures Let \(Y\) be a univariate outcome assessed at the end of the study (e.g., binary, count, or continuous) and \(X\in\mathcal{X}\) a baseline covariate vector. Under a non-nested sampling design [23], we assume that we have access to a random sample of outcome and covariate information from the source population \(\{(X_{i},Y_{i}),i=1,\ldots,n_{1}\}\) and a separately obtained random sample of covariates, but no outcome information, from the target population \(\{X_{i},i=1,\ldots,n_{0}\}\). This setup does not restrict the data from the source population to be obtained from a formal sampling process and can be thought of as if sampled from some underlying hypothetical super-population that is potentially not well characterized (e.g., as is the cases when using convenience sampling) [24, 25]. However, we assume that the target population data is representative of a target population of substantive interest. Let \(S\) be an indicator whether the data is from the source population (\(S=1\) if from the source population and \(S=0\) if from the target population). Under this setup, the combined data from the source and target population is \[\mathcal{O}=\{X_{i},S_{i},S_{i}\times Y_{i},i=1,\ldots,n=n_{1}+n_{0}\}.\] Let \(X^{*}\) be a subset of \(X\) that is used for constructing a prediction model and let \(h(X^{*},\beta)\) be a prediction model for the conditional expectation \(\mathrm{E}[Y|X^{*},S=1]\) indexed through the unknown parameter \(\beta\in\mathcal{B}\). We use \(\widehat{\beta}\) to denote an estimator for \(\beta\) and \(h(X^{*},\widehat{\beta})\) as the fitted model. Throughout this paper, we do _not_ assume that the model \(h(X^{*},\beta)\) is correctly specified; thus, our results also hold for misspecified models (under the assumptions listed in Sections 4 and 5). We assume that the model is built (i.e., \(\beta\) is estimated) on a dataset that is independent of the data used to evaluate the model and we use \(f(\cdot)\) to generically denote densities. We focus on estimation of loss-based measures of model performance in the target population. A loss function \(L(Y,h(X^{*},\widehat{\beta}))\) quantifies the discrepancy between the observed outcome \(Y\) and model-derived predictions \(h(X^{*},\widehat{\beta})\). Common examples include the mean squared error, absolute deviation, and Brier loss functions [26]. Our target parameter is the expected loss (risk) in the target population; in the non-nested design that parameter is \(\mathrm{E}[L(Y,h(X^{*},\widehat{\beta}))|S=0]\). 
An alternative approach is to use a nested design [1, 23] where the source population is nested within a cohort that is a sample from the target population (e.g., using record linkage of the source data with data from the target population). For nested designs, we assume that covariate information is available from the entire cohort but outcome information is only available from the source population data. For nested designs [22], the target parameter is \(\mathrm{E}[L(Y,h(X^{*},\widehat{\beta}))]\). Sampling designs for both nested and non-nested designs have been discussed elsewhere [1, 23] and all expectations and probabilities are under the distribution induced by the sampling design. ## 3 Identification ### Identifiability conditions The following two conditions are sufficient for identifiability of loss-based measures of model performance using the observable data \(\mathcal{O}\) for non-nested designs [1]. 1. Positivity: \(\Pr[S=1|X=x]>0\) for all \(x\in\mathcal{X}\) that have a positive density in the target population \(f_{X,S}(x,S=0)>0\). Condition A1 is in principle testable using the observed data, but performing tests for the validity of that assumption can be challenging with high-dimensional covariates [27]. 2. Conditional transportability: \(Y\perp\!\!\!\perp S|X\). This key condition is untestable using the observed data because outcome information from the target population is unavailable; therefore, in many applications, conditional transportability is an uncertain, and even controversial, assumption. For nested designs we need a slightly modified version of the positivity condition: A1\({}^{*}\). Positivity: \(\Pr[S=1|X=x]>0\) for all \(x\in\mathcal{X}\) such that \(f_{X}(x)>0\). ### Identification of measures of model performance Under conditions A1 and A2, the risk in the target population for a non-nested design can be identified [1] using a nested expectation ("g-formula"-like [28]) expression, \[\phi\equiv\mathrm{E}[\mathrm{E}[L(Y,h(X^{*},\widehat{\beta}))|X,S=1]|S=0];\] or, equivalently, using an inverse odds weighting expression, \[\phi=\frac{1}{\Pr[S=0]}\,\mathrm{E}\left[\frac{I(S=1)\Pr[S=0|X]}{\Pr[S=1|X]}L( Y,h(X^{*},\widehat{\beta}))\right].\] If conditions A1\({}^{*}\) and A2 hold, then the risk in the target population for a nested design can be identified [1] using a different nested expectation expression, \[\psi\equiv\mathrm{E}[\mathrm{E}[L(Y,h(X^{*},\widehat{\beta}))|X,S=1]];\] or, equivalently, using an inverse probability weighting expression, \[\psi=\mathrm{E}\left[\frac{I(S=1)}{\Pr[S=1|X]}L(Y,h(X^{*},\widehat{\beta})) \right].\] These identification results critically depend on assuming that the conditional transportability condition (A2) holds; in the rest of this manuscript, we consider methods for examining how sensitive results are to violations of this assumption. Sensitivity analysis when transporting measures of model performance ### Sensitivity analysis model Assume that condition A2 does not hold, so that \(Y\not{\perp}S|X\), and \[f_{Y|X,S}(y|x,s=0)\neq f_{Y|X,S}(y|x,s=1).\] We use an exponential tilt model [7, 8, 9] to parameterize violations of the conditional transportability assumption. That is, we assume \[f_{Y|X,S}(y|x,s=0)\propto e^{\eta q(y)}f_{Y|X,S}(y|x,s=1),\eta\in\mathbb{R}, \tag{1}\] where \(q\) is a fixed increasing function and \(\eta\) is the sensitivity analysis parameter (which is not identifiable because outcome information is unavailable from the target population). 
Setting \(\eta=0\) corresponds to the case where conditional transportability holds; \(\eta\) values further from zero represent greater violations of the conditional transportability assumption. Because the left-hand-side of equation (1) is a density, we have that \[f_{Y|X,S}(y|x,s=0)=\frac{e^{\eta q(y)}f_{Y|X,S}(y|x,s=1)}{\mathrm{E}[e^{\eta q (Y)}|X=x,S=1]}. \tag{2}\] For a binary outcome with \(q\) as the identity function, the exponential tilt model implies that \[\Pr[Y=1|X,S=0]\propto e^{\eta}\Pr[Y=1|X,S=1],\eta\in\mathbb{R}; \tag{3}\] it follows that \(\eta>0\) (\(\eta<0\)) implies that the conditional probability of the outcome in the target population is higher (lower) than in the source population. ### Relationship with selection models Using Bayes theorem, we can re-write the exponential tilt model in equation (2) as \[\frac{\Pr[S=0|X,Y=y]}{\Pr[S=1|X,Y=y]}=\frac{\Pr[S=0|X]}{\Pr[S=1|X]}\times\frac{e^ {\eta q(y)}}{\mathrm{E}[e^{\eta q(Y)}|X,S=1]}.\] Taking logarithms gives, \[\mathrm{logit}\big{(}\Pr[S=0|X,Y=y]\big{)}=\mathrm{logit}\big{(}\Pr[S=0|X] \big{)}+\eta q(y)-\ln\left(\mathrm{E}[e^{\eta q(Y)}|X,S=1]\right), \tag{4}\] where for a real number \(0<u<1\), \(\mathrm{logit}(u)=\ln\big{(}u(1-u)^{-1}\big{)}.\) In other words, the exponential tilt model has an interpretation as an odds of a selection model, where selection depends, in addition to the measured covariates \(X\), on the outcome \(Y\) which is not observed when \(S=0\). ## 5 Identifiability, estimation, and inference ### Identifiability of the sensitivity analysis model In Appendix A, we show that, under our sensitivity analysis model, for a fixed \(\eta\), the risk in the target population for a non-nested design is identified by \[\phi(\eta)=\mathrm{E}\Bigg{[}\frac{\mathrm{E}\left[L(Y,h(X^{*},\widehat{\beta} ))e^{\eta q(Y)}|X,S=1\right]}{\mathrm{E}\left[e^{\eta q(Y)}|X,S=1\right]} \Bigg{|}S=0\Bigg{]}. \tag{5}\] Furthermore, in Appendix A, we show that, under our sensitivity analysis model, for a fixed \(\eta\), the risk in the target population for a nested design is identified by \[\psi(\eta)=\mathrm{E}[SL(Y,h(X^{*},\widehat{\beta}))]+\mathrm{E}\left[I(S=0) \frac{\mathrm{E}\left[L(Y,h(X^{*},\widehat{\beta}))e^{\eta q(Y)}|X,S=1\right] }{\mathrm{E}\left[e^{\eta q(Y)}|X,S=1\right]}\right]. \tag{6}\] The first term in expression (6) is independent of the sensitivity parameter \(\eta\) and represents the contribution of the sampled subset of the target population where the model is developed. Using data from participants from the source population does not rely on the sensitivity model (for them no assumption is in doubt) and thus their contribution to the overall analyses should not change with different values of the sensitivity parameter. Because \(\eta\) is not identifiable using the observed data \(\mathcal{O}\) we propose to use expressions (5) and (6) to conduct sensitivity analysis for a reasonable range of \(\eta\) values; in Section 6 we show how knowledge about the marginal probability of the outcome in the target population can be used to inform what range of \(\eta\) values to consider. 
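Before turning to estimation, the following minimal simulation sketch illustrates how the identification formula (5) responds to the sensitivity parameter \(\eta\) for a binary outcome with \(q\) the identity and the Brier loss. The covariate distribution, outcome model \(g\), and prediction model are hypothetical choices, and \(g\) is taken as known here rather than estimated from the source sample.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical target-population covariate sample (S = 0); one covariate X.
x_tgt = rng.normal(0.5, 1.0, 5000)

def g(x):
    """Pr[Y = 1 | X, S = 1]; assumed known (in practice, estimated from the source data)."""
    return 1.0 / (1.0 + np.exp(-(0.5 * x - 0.2)))

def h(x):
    """The prediction model h(X*, beta-hat) whose target-population performance is assessed."""
    return 1.0 / (1.0 + np.exp(-0.4 * x))

def brier(y, pred):
    return (y - pred) ** 2

def phi(eta):
    """Plug-in evaluation of expression (5) for binary Y and q the identity:
    the inner ratio equals (L(1,h) e^eta g + L(0,h)(1 - g)) / (e^eta g + 1 - g)."""
    gx, hx = g(x_tgt), h(x_tgt)
    num = brier(1.0, hx) * np.exp(eta) * gx + brier(0.0, hx) * (1.0 - gx)
    den = np.exp(eta) * gx + (1.0 - gx)
    return float(np.mean(num / den))

for eta in (-1.0, -0.5, 0.0, 0.5, 1.0):
    print(f"eta = {eta:+.1f}   target-population Brier risk: {phi(eta):.4f}")
```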
### Estimation in the sensitivity analysis model for non-nested designs The sample analog of expression (5) gives the following conditional loss estimator for the risk in the target population under a non-nested design: \[\widehat{\phi}_{cl}(\eta)=\frac{1}{n_{0}}\sum_{i=1}^{n}I(S_{i}=0)\widehat{b}( X_{i};\eta),\] where \(\widehat{b}(X;\eta)\) is an estimator for \[b(X;\eta)=\frac{\mathrm{E}\left[L(Y,h(X^{*},\widehat{\beta}))e^{\eta q(Y)}|X, S=1\right]}{\mathrm{E}\left[e^{\eta q(Y)}|X,S=1\right]}.\] When \(\eta=0\) (i.e., when the conditional transportability condition holds), the estimator \(\widehat{\phi}_{cl}(\eta)\) is equal to conditional loss estimator [29]. For binary \(Y\) and \(q\) as the identity function, we can estimate \(b(X;\eta)\) using \[\widehat{b}(X;\eta)=\left(\frac{L(1,h(X^{*},\widehat{\beta}))e^{\eta}\widehat {g}(X)+L(0,h(X^{*},\widehat{\beta}))(1-\widehat{g}(X))}{1+\widehat{g}(X)(e^{ \eta}-1)}\right),\] where \(\widehat{g}(X)\) is an estimator for \(\Pr[Y=1|X,S=1]\). For a continuous \(Y\), we can estimate \(b(X;\eta)\) using a weighted linear regression of \(L(Y,h(X^{*},\widehat{\beta}))\) on \(X\) using data from the source population with weights equal to \(e^{\eta q(Y)}\). In Supplementary Web Appendix B we show that the influence function for \(\phi(\eta)\) under the non-parametric model [30] of the observable data is \[\Phi^{1}(\eta) =\frac{1}{\Pr[S=0]}\Bigg{\{}I(S=0)\Bigg{\{}\frac{\operatorname{E}[L( Y,h(X^{*},\widehat{\beta}))e^{\eta q(Y)}|X,S=1]}{\operatorname{E}[e^{\eta q(Y)}|X,S=1]} -\phi(\eta)\Bigg{\}}\] \[+\frac{I(S=1)\Pr[S=0|X]e^{\eta q(Y)}}{\Pr[S=1|X]\operatorname{E}[ e^{\eta q(Y)}|X,S=1]}\times\Bigg{\{}L(Y,h(X^{*},\widehat{\beta}))-\frac{ \operatorname{E}[L(Y,h(X^{*},\widehat{\beta}))e^{\eta q(Y)}|X,S=1]}{ \operatorname{E}[e^{\eta q(Y)}|X,S=1]}\Bigg{\}}.\] The influence function \(\Phi^{1}(\eta)\) suggests the augmented estimator \[\widehat{\phi}_{aug}(\eta)=\frac{1}{n_{0}}\sum_{i=1}^{n}\left(I(S_{i}=0) \widehat{b}(X_{i};\eta)+\frac{I(S_{i}=1)(1-\widehat{p}(X_{i}))e^{\eta q(Y_{i} )}}{\widehat{p}(X_{i})\widehat{c}(X_{i};\eta)}\times\left(L(Y_{i},h(X_{i}^{*},\widehat{\beta}))-\widehat{b}(X_{i};\eta)\right)\right),\] where \(\widehat{p}(X)\) is an estimator for \(\Pr[S=1|X]\) and \(\widehat{c}(X;\eta)\) is an estimator for \(\operatorname{E}[e^{\eta q(Y)}|X,S=1]\). The conditional loss estimator \(\widehat{\phi}_{cl}(\eta)\) is a special case of the augmented estimator with \(\widehat{p}(X_{i})=1\) for all \(i\). And if the conditional transportability condition holds (i.e., \(\eta=0\)), then the augmented estimator is identical to the doubly robust estimator developed in [29]. To study the large sample properties of \(\widehat{\phi}_{aug}(\eta)\) we define, for arbitrary functions \(b^{\prime}(X)\), \(c^{\prime}(X)\), \(p^{\prime}(X)\), and \(\gamma^{\prime}\), the function \[H(X, S,Y;b^{\prime}(X),c^{\prime}(X),p^{\prime}(X),\gamma^{\prime})\] \[=\gamma^{\prime}\left(I(S=0)b^{\prime}(X)+\frac{I(S=1)(1-p^{ \prime}(X))e^{\eta q(Y)}\left(L(Y,h(X^{*},\widehat{\beta}))-b^{\prime}(X) \right)}{p^{\prime}(X)c^{\prime}(X)}\right).\] For a random variable \(W\) we define \(\mathbb{P}_{n}(W)=\frac{1}{n}\sum_{i=1}^{n}W_{i}\) and \(\mathbb{G}_{n}(W)=\sqrt{n}(\mathbb{P}_{n}(W)-\operatorname{E}[W])\). 
Using this notation, we can write the augmented estimator as \[\widehat{\phi}_{aug}(\eta)=\mathbb{P}_{n}(H(X,S,Y;\widehat{b}(X;\eta),\widehat {c}(X;\eta),\widehat{p}(X),n/n_{0})).\] In Supplementary Web Appendix C.1 we prove the following theorem about the large-sample properties of the augmented estimator: **Theorem 1**.: _Let \(b^{*}(X;\eta)\), \(c^{*}(X;\eta)\), and \(p^{*}(X)\) be the asymptotic limits of \(\widehat{b}(X;\eta)\), \(\widehat{c}(X;\eta)\), and \(\widehat{p}(X)\), respectively. Under conditions B1-B5 listed in Supplementary Web Appendix C.1, the aug mented estimator \(\widehat{\phi}_{aug}(\eta)\)_ * _Is consistent, that is,_ \(\widehat{\phi}_{aug}(\eta)\stackrel{{ P}}{{\longrightarrow}}\phi(\eta)\)_;_ * _Has the asymptotic representation_ \[\sqrt{n}(\widehat{\phi}_{aug}(\eta)-\phi(\eta))=\mathbb{G}_{n}(H(X,S,Y;b^{*}(X ;\eta),c^{*}(X;\eta),p^{*}(X),\Pr[S=0]^{-1}))+Rem+o_{P}(1),\] (7) _where the reminder term satisfies_ \[Rem\leq O_{P}\Bigg{(}1+\sqrt{n}\Bigg{|}\frac{\mathrm{E}[L(Y,h(X^ {*},\widehat{\beta}))e^{\eta q(Y)}|X,S=1]}{\mathrm{E}[e^{\eta q(Y)}|X,S=1]}- \widehat{b}(X;\eta)\Bigg{|}\Bigg{|}_{2}^{2}\times||\Pr[S=1|X]-\widehat{p}(X)| |_{2}^{2}\\ +\sqrt{n}\Bigg{|}\frac{\mathrm{E}[L(Y,h(X^{*},\widehat{\beta}))e ^{\eta q(Y)}|X,S=1]}{\mathrm{E}[e^{\eta q(Y)}|X,S=1]}-\widehat{b}(X;\eta) \Bigg{|}\Bigg{|}_{2}^{2}\times||\mathrm{E}[e^{\eta q(Y)}|X,S=1]-\widehat{c}(X; \eta)||_{2}^{2}\Bigg{)}.\] Theorem 1 shows that the augmented estimator has the rate of convergence \[\sqrt{n}(\widehat{\phi}_{aug}(\eta)-\phi(\eta)) \leq O_{P}\Bigg{(}1+\sqrt{n}\Bigg{|}\Bigg{|}\frac{\mathrm{E}[L(Y,h (X^{*},\widehat{\beta}))e^{\eta q(Y)}|X,S=1]}{\mathrm{E}[e^{\eta q(Y)}|X,S=1]} -\widehat{b}(X;\eta)\Bigg{|}\Bigg{|}_{2}^{2}\times||\Pr[S=1|X]-\widehat{p}(X)| |_{2}^{2}\] \[+\sqrt{n}\Bigg{|}\Bigg{|}\frac{\mathrm{E}[L(Y,h(X^{*},\widehat{ \beta}))e^{\eta q(Y)}|X,S=1]}{\mathrm{E}[e^{\eta q(Y)}|X,S=1]}-\widehat{b}(X; \eta)\Bigg{|}\Bigg{|}_{2}^{2}\times||\,\mathrm{E}[e^{\eta q(Y)}|X,S=1]- \widehat{c}(X;\eta)||_{2}^{2}\Bigg{)}.\] This implies that if the combined rate of \(\widehat{b}(X;\eta)\) and \(\widehat{p}(X)\) is at least \(\sqrt{n}\) and the combined rate of \(\widehat{b}(X;\eta)\) and \(\widehat{c}(X;\eta)\) is at least \(\sqrt{n}\), then \(\widehat{\phi}_{aug}(\eta)\) is \(\sqrt{n}\) convergent. Thus, \(\widehat{\phi}_{aug}(\eta)\) can be \(\sqrt{n}\) convergent even if the estimators \(\widehat{b}(X;\eta)\), \(\widehat{p}(X)\), and \(\widehat{c}(X;\eta)\) converge at a slower rate than \(\sqrt{n}\) as long as the two conditions on the combined rate of convergence hold (rate robustness [31]). Hence, augmented estimator can be used with data-adaptive estimators (such as GAMs) while still allowing for asymptotically valid inference [32]. In contrast, the conditional loss estimator inherits the rate of convergence of \(\widehat{b}(X;\eta)\). Assumption \(B1\) in Supplementary Web Appendix C.1 suggests that only one of \(\widehat{b}(X;\eta)\) or \((\widehat{p}(X),\widehat{c}(X;\eta))\) need to be consistent in order for the augmented estimator to be consistent, but not both (model robustness). This result, however, is not as useful as it might appear as implementation of both \(\widehat{b}(X;\eta)\) and \(\widehat{c}(X;\eta)\) relies on specifying the relationship between the outcome and the covariates in the sample from the source population. 
For example, in the special case of a binary \(Y\) and with \(q\) as the identity function we can write \(\widehat{c}(X;\eta)=e^{\eta}\widehat{g}(X)+1-\widehat{g}(X)\) and the augmented estimator as \[\widehat{\phi}_{aug}(\eta)=\frac{1}{n_{0}}\sum_{i=1}^{n}\left(I(S_{i}=0)\widehat{b}(X_{i};\eta)+\frac{I(S_{i}=1)(1-\widehat{p}(X_{i}))e^{\eta Y_{i}}}{\widehat{p}(X_{i})(e^{\eta}\widehat{g}(X_{i})+1-\widehat{g}(X_{i}))}\left(L(Y_{i},h(X_{i}^{*},\widehat{\beta}))-\widehat{b}(X_{i};\eta)\right)\right), \tag{8}\] with \[\widehat{b}(X;\eta)=\left(\frac{L(1,h(X^{*},\widehat{\beta}))e^{\eta}\widehat{g}(X)+L(0,h(X^{*},\widehat{\beta}))(1-\widehat{g}(X))}{e^{\eta}\widehat{g}(X)+1-\widehat{g}(X)}\right). \tag{9}\] So both \(\widehat{b}(X;\eta)\) and \(\widehat{c}(X;\eta)\) rely on specifying \(\widehat{g}(X)\). The following theorem, proved in Supplementary Web Appendix C, shows that if \(\widehat{\phi}_{aug}(\eta)\) is implemented using expression (8) with \(b(X;\eta)\) estimated using expression (9), then consistency relies on correctly specifying the model for \(\Pr[Y=1|X,S=1]\) but does not rely on correctly specifying the model for \(\Pr[S=1|X]\).

**Theorem 2**.: _If \(\widehat{g}(X)\stackrel{{ P}}{{\longrightarrow}}\Pr[Y=1|X,S=1]\) and \(\phi(\eta)\) is estimated using expression (8) with \(b(X;\eta)\) estimated using expression (9), then \(\widehat{\phi}_{aug}(\eta)\stackrel{{ P}}{{\longrightarrow}}\phi(\eta)\) whether the model for \(\Pr[S=1|X]\) is correctly specified or not._

The augmented estimator \(\widehat{\phi}_{aug}(\eta)\) (for binary, count, and continuous outcomes) has the advantage over the conditional loss estimator of being able to accommodate more flexible modeling of the nuisance parameters \(\mathrm{E}[Y|X,S=1]\) and \(\Pr[S=1|X]\) while allowing for asymptotically valid inference [32]. This is important because subject matter knowledge is often inadequate for specifying parametric models for \(\mathrm{E}[Y|X,S=1]\) and \(\Pr[S=1|X]\).

### Alternative parameterization of the selection model

If we parameterize the sensitivity analysis using expression (4) \[\mathrm{logit}\big{(}\Pr[S=0|X,Y=y]\big{)}=a(X;\eta)+\eta q(y),\] where \(a(X;\eta)=\text{logit}\big{(}\Pr[S=0|X]\big{)}-\ln(\text{E}[e^{\eta q(Y)}|X,S=1])\), the augmented estimator can be written as \[\widehat{\phi}_{aug}(\eta)=\frac{1}{n_{0}}\sum_{i=1}^{n}\left(I(S_{i}=0)\widehat{b}(X_{i};\eta)+I(S_{i}=1)e^{\widehat{a}(X_{i};\eta)+\eta q(Y_{i})}\left(L(Y_{i},h(X_{i}^{*},\widehat{\beta}))-\widehat{b}(X_{i};\eta)\right)\right), \tag{10}\] where \(\widehat{a}(X;\eta)\) is an estimator for \(a(X;\eta)\). In addition to estimating \(b(X;\eta)\), implementation of the augmented estimator using expression (10) also requires estimating \(a(X;\eta)\). To estimate \(a(X;\eta)\) we start by noting that [19, 6] \[\text{E}\left[\frac{I(S=1)e^{a(X;\eta)+\eta q(Y)}}{\Pr[S=0]}-1\right]=0.\] If we posit a parametric model \(a(X,\theta;\eta)\) for \(a(X;\eta)\), where \(\theta\) is a finite dimensional parameter, then \(\theta\) can be estimated using a generalized method of moments estimator [33], which for a correctly specified parametric model is consistent under mild conditions [34]. In Supplementary Web Appendix C we prove the following double robustness property of \(\widehat{\phi}_{aug}(\eta)\) when the alternative parameterization is used. 
**Theorem 3**.: _If at least one of \(\widehat{b}(X;\eta)\) or \(\widehat{a}(X;\eta)\) is consistent, then the augmented estimator given by expression (10) in non-nested designs is consistent (i.e., \(\widehat{\phi}_{aug}(\eta)\stackrel{{ P}}{{\longrightarrow}}\phi(\eta)\))._

Because the approach for estimating \(\theta\) relies on specifying a parametric model for \(a(X;\eta)\), the estimator cannot be used in combination with more flexible, data-adaptive modeling of \(\widehat{b}(X;\eta)\) and \(\widehat{a}(X;\eta)\).

### Estimation in the sensitivity analysis model for nested designs

For nested designs, using the sample analog of expression (6) gives the conditional loss estimator for the risk in the target population under a nested design \[\widehat{\psi}_{cl}(\eta)=\frac{1}{n}\sum_{i=1}^{n}I(S_{i}=1)L(Y_{i},h(X_{i}^{*},\widehat{\beta}))+\frac{1}{n}\sum_{i=1}^{n}I(S_{i}=0)\widehat{b}(X_{i};\eta).\] The first term on the right-hand side collects the contributions of observations with \(S=1\); in the second term, each summand estimates the conditional risk under our sensitivity analysis model for an observation with \(S=0\). In Supplementary Web Appendix B we derive the non-parametric influence function under a nested design and the corresponding augmented estimator is \[\widehat{\psi}_{aug}(\eta)=\frac{1}{n}\sum_{i=1}^{n}\Bigg{(}S_{i}L(Y_{i},h(X_{i}^{*},\widehat{\beta}))+I(S_{i}=0)\widehat{b}(X_{i};\eta)+\frac{I(S_{i}=1)(1-\widehat{p}(X_{i}))e^{\eta q(Y_{i})}\left(L(Y_{i},h(X_{i}^{*},\widehat{\beta}))-\widehat{b}(X_{i};\eta)\right)}{\widehat{p}(X_{i})\widehat{c}(X_{i};\eta)}\Bigg{)}. \tag{11}\] In Supplementary Web Appendix D we show that the augmented estimator for nested designs has the same robustness and rate of convergence properties as the augmented estimator for non-nested designs.

## 6 Selecting sensitivity parameter values using external information

When performing sensitivity analysis, selection of an appropriate range of the sensitivity parameter can be challenging. Here we show how to base this choice on background knowledge of the marginal probability of the outcome in the target population. Suppose that on the basis of substantive knowledge or prior research the expectation of the outcome in the target population \(\mathrm{E}[Y|S=0]=\mu\) is known. Then, even without individual-level outcome information from the target population sample, we may estimate \(\eta\) as the solution to the sample analog of the following population estimating equation: \[\int\int y\frac{e^{\eta q(y)}f_{Y|X,S}(y|x,s=1)}{\mathrm{E}[e^{\eta q(Y)}|X=x,S=1]}f_{X|S}(x|s=0)dydx-\mu=0. \tag{12}\] For example, for binary outcome \(Y\) and setting \(q\) as the identity function, we can search for the \(\eta\) value that solves \[\sum_{i=1}^{n}I(S_{i}=0)\left\{\frac{e^{\eta}\widehat{g}(X_{i})}{e^{\eta}\widehat{g}(X_{i})+\{1-\widehat{g}(X_{i})\}}-\mu\right\}=0. \tag{13}\] Of course, perfect knowledge of \(\mu\) may not be available in practical applications, but if a good enough approximation to the true value of \(\mu\) can be obtained, it helps anchor the sensitivity analysis by suggesting a reasonable range of \(\eta\) values to explore. 
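As a concrete illustration of how equation (13) can be solved in practice, the sketch below uses a standard one-dimensional root finder. It assumes a binary \(Y\), \(q\) the identity, and \(\widehat{g}(X)\) strictly between 0 and 1; the bracket \([-20,20]\) and all names are our own choices.

```python
import numpy as np
from scipy.optimize import brentq

def eta_from_target_prevalence(g_target, mu):
    """Solve the sample analogue of (13) for eta (binary Y, q = identity).

    g_target : estimates of Pr[Y = 1 | X, S = 1] evaluated at the covariates
               of the target-population sample (S = 0)
    mu       : assumed outcome prevalence E[Y | S = 0] in the target population
    """
    def moment(eta):
        tilted = np.exp(eta) * g_target / (np.exp(eta) * g_target + 1 - g_target)
        return tilted.mean() - mu
    # moment is increasing in eta and changes sign for 0 < mu < 1,
    # so a wide sign-change bracket such as [-20, 20] suffices in practice
    return brentq(moment, -20.0, 20.0)
```

Solving this equation for several plausible values of \(\mu\) (for instance the endpoints of a range such as \([\widehat{\mu}/2,2\widehat{\mu}]\), as in the application below) then yields a corresponding range of \(\eta\) values to explore.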
For the nested design, if the marginal outcome prevalence in the target population, \(\text{E}[Y]=\alpha\), is known, we may estimate \(\eta\) as the solution to the sample analog of the following population estimating equation: \[\int\int yf_{Y|X,S}(y|x,s=1)f_{X}(x)\left(\Pr[S=1|X=x]+\frac{e^{\eta q(y)}\Pr[S=0|X=x]}{\text{E}[e^{\eta q(Y)}|X=x,S=1]}\right)dydx-\alpha=0.\]

## 7 Sensitivity analysis using the Coronary Artery Surgery Study data

We used data from the Coronary Artery Surgery Study (CASS) to illustrate the sensitivity analysis methods.

**Study design and data:** CASS [35] included a randomized trial of treatments for stable coronary artery disease that was nested within a cohort study of trial-eligible individuals. Of a total of 2099 trial-eligible individuals, 780 were included in the randomized trial and 1319 declined to be randomized and were included in an observational study. In the trial, participants were randomly assigned to coronary artery surgery plus medical therapy (hereafter referred to as the surgery arm) versus medical therapy alone; the same treatments were used among non-randomized participants. Risk stratification is commonly of interest in randomized trials; one approach is to develop a risk model using observational data and then apply the estimated risk model in the randomized trial (e.g., to examine heterogeneity of treatment effects over predicted risk [36, 37]). Here, to illustrate the methods, we used the 430 participants in the observational part of CASS that received surgery as the sample from the source population and used covariate data from the participants in the randomized component that received surgery as the sample from the target population. We used death within 10 years from study entry as the outcome of interest. For simplicity, and because a previous analysis of the same data has shown limited impact of adjusting for missing data [38], we restricted the analysis to participants with complete covariate information (368 randomized and 955 non-randomized). Table 1 shows the distribution of the baseline covariates stratified by randomization status.

**Implementation:** We randomly split the dataset of non-randomized participants into two disjoint and approximately equally sized datasets. The data from the first dataset were used to fit the prediction model \(h(X^{*},\beta)\), a logistic regression model that included all the covariates listed in Table 1 as linear main effects (on the logit scale). The data from the second dataset (here used as the source population sample) were combined with covariate information from the randomized participants (here used as the sample from the target population) and the combined dataset was used to estimate the Brier risk in the target population using expression (8). The vector of covariates \(X\), used to satisfy the conditional transportability condition, included all the covariates listed in Table 1 (i.e., in this analysis we set \(X=X^{*}\)). The nuisance function \(c(X;\eta)\) was estimated using the formula \(\widehat{c}(X;\eta)=e^{\eta}\widehat{g}(X)+(1-\widehat{g}(X))\) and \(\widehat{b}(X;\eta)\) was estimated using expression (9). The estimators for \(\Pr[Y=1|X,S=1]\) and \(\Pr[S=1|X]\) needed to obtain \(\widehat{c}(X;\eta)\) and \(\widehat{b}(X;\eta)\) were based on logistic regression models with linear main effects of the variables listed in Table 1, and we used the non-parametric bootstrap with 1,000 bootstrap replicates to estimate Wald-style 95% point-wise confidence intervals. 
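To make the implementation described above concrete, here is a minimal sketch of the augmented estimator of expression (8), with \(b(X;\eta)\) computed from expression (9), the Brier loss, and both nuisance models fit by main-effects logistic regression. This is our own illustrative code, not the code used for the CASS analysis.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def augmented_estimator(X, S, Y, pred, eta):
    """Augmented estimate of phi(eta) for binary Y with q the identity (expression (8))."""
    src, tgt = S == 1, S == 0
    g = LogisticRegression(max_iter=1000).fit(X[src], Y[src]).predict_proba(X)[:, 1]
    p = LogisticRegression(max_iter=1000).fit(X, S).predict_proba(X)[:, 1]  # Pr[S=1 | X]
    c = np.exp(eta) * g + 1 - g                    # estimate of E[exp(eta*Y) | X, S=1]
    b = ((1 - pred) ** 2 * np.exp(eta) * g + pred ** 2 * (1 - g)) / c       # expression (9)
    contrib = np.zeros(len(S), dtype=float)
    contrib[tgt] = b[tgt]
    w = (1 - p[src]) * np.exp(eta * Y[src]) / (p[src] * c[src])             # tilted odds weights
    contrib[src] = w * ((Y[src] - pred[src]) ** 2 - b[src])                 # Brier loss minus b
    return contrib.sum() / tgt.sum()
```

The nested-design estimator of expression (11) differs only by adding the term \(S_{i}L(Y_{i},h(X_{i}^{*},\widehat{\beta}))\) to each summand and dividing by \(n\) rather than \(n_{0}\).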
The 10-year risk (cumulative incidence proportion) among non-randomized participants was \(\widehat{\mu}=0.186\) and we selected \(\eta\) based on the range of the prevalence rate \([\widehat{\mu}/2,2\widehat{\mu}]=[0.093,0.372]\) using expression (13) and we implemented the augmented estimator using \(\eta\) values in the corresponding range with increments of 0.05 (resulting in a range of \(\eta\) from \([-0.95,1.25]\)). As a stability analysis, we i) estimated \(\Pr[Y=1|X,S=1]\) and \(\Pr[S=1|X]\) using generalized additive models with the main effects of all continuous covariates (age and ejection fraction) modeled using B-splines (basis splines) and ii) used the jackknife to estimate the 95% point-wise confidence intervals, with the same range of \(\eta\) as the main analysis. **Results:** Figure 1 shows estimates of the Brier risk in the target population and associated 95% confidence intervals over a range of values of \(\eta\) when the non-parametric bootstrap was used for confidence interval construction and \(\Pr[Y=1|X,S=1]\) and \(\Pr[S=1|X]\) were estimated using linear main effects logistic regression models. The estimates for the Brier risk ranged from 0.11 to 0.27. Figures 2, 3, and 4 in Supplementary Web Appendix E show the results of the stability analysis using a generalized additive model to estimate both \(\Pr[Y=1|X,S=1]\) and \(\Pr[S=1|X]\) and using the jackknife to construct confidence intervals. Generalized additive models produced results similar to those produced by the logistic regression models; the jackknife resulted in slightly narrower confidence intervals compared to the non-parametric bootstrap. \begin{table} \begin{tabular}{l l l} \hline \hline & \(S=0\) & \(S=1\) \\ \hline Number of patients & 337 & 430 \\ Age & 51.42 (7.21) & 51.29 (7.68) \\ History of angina & 264 (78.3) & 360 (83.7) \\ Taken beta-blocker regularly & 152 (45.1) & 241 (56.0) \\ Taken diuretic regularly & 58 (17.2) & 60 (14.0) \\ Ejection fraction & 60.61 (12.78) & 60.20 (11.96) \\ Employed full-time & 237 (70.3) & 286 (66.5) \\ Type of job & & \\ High physical labor job & 136 (40.4) & 158 (36.7) \\ Low mental labor job & 119 (35.3) & 134 (31.2) \\ High mental labor job & 82 (24.3) & 138 (32.1) \\ Left ventricular wall score & 7.42 (2.83) & 7.08 (2.73) \\ Taken nitrates regularly & 194 (57.6) & 253 (58.8) \\ History of MI & 189 (56.1) & 236 (54.9) \\ Female & 34 (10.1) & 39 (9.1) \\ Smoking status & & \\ Never smoked & 56 (16.6) & 76 (17.7) \\ Former smoker & 148 (43.9) & 211 (49.1) \\ Current smoker & 133 (39.5) & 143 (33.3) \\ High limitation of activities & 157 (46.6) & 201 (46.7) \\ High recreational activity & 207 (61.4) & 285 (66.3) \\ Confirmed hypertension & 104 (30.9) & 109 (25.3) \\ Confirmed diabetes & 0.03 (0.16) & 0.03 (0.17) \\ LMCA percent obstruction & 3.66 (10.80) & 7.95 (17.16) \\ PLMA percent obstruction & 36.35 (38.26) & 48.18 (39.81) \\ Any diseased proximal vessels & 205 (60.8) & 317 (73.7) \\ Systolic blood pressure & 129.13 (17.85) & 129.85 (17.83) \\ \hline \hline \end{tabular} LMCA = left main coronary artery; MI = myocardial infarction; PLMA = proximal left anterior artery. For continuous variables we report the mean (standard deviation); for binary variables we report the number of patients (percentage). \end{table} Table 1: Baseline characteristics in CASS (August 1975 to December 1996). \(S=0\) indicates randomized participants that received surgery and \(S=1\) indicates people in the observational component of CASS that received surgery. 
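For completeness, the sensitivity analysis itself is simply a loop over a grid of \(\eta\) values with resampling for the confidence intervals. A generic sketch (our own; the risk estimator is passed in as a function, for instance the augmented estimator sketched above):

```python
import numpy as np

def sensitivity_analysis(data, etas, estimator, n_boot=1000, seed=0):
    """Point estimates and Wald-style 95% CIs over a grid of sensitivity parameters.

    data      : dict of aligned numpy arrays, e.g. {"X": ..., "S": ..., "Y": ..., "pred": ...}
    etas      : iterable of sensitivity-parameter values
    estimator : callable (data, eta) -> risk estimate in the target population
    """
    rng = np.random.default_rng(seed)
    n = len(data["S"])
    results = {}
    for eta in etas:
        point = estimator(data, eta)
        reps = []
        for _ in range(n_boot):
            idx = rng.integers(0, n, n)                        # resample rows with replacement
            reps.append(estimator({k: v[idx] for k, v in data.items()}, eta))
        se = float(np.std(reps, ddof=1))
        results[eta] = (point, point - 1.96 * se, point + 1.96 * se)
    return results
```

In the CASS analysis this would be run with \(\eta\) on a grid with increments of 0.05 over the range implied by \([\widehat{\mu}/2,2\widehat{\mu}]\).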
Figure 1: Sensitivity analysis using CASS data. The values used for the sensitivity parameter (\(\eta\)) are on the x-axis and the corresponding estimates of the Brier risk calculated using the augmented estimator for a non-nested design are on the y-axis (\(\widehat{\phi}_{aug}(\eta)\)). The nuisance functions \(\Pr[Y=1|X,S=1]\) and \(\Pr[S=1|X]\) were estimated using logistic regression models and 95% confidence intervals were calculated using the non-parametric bootstrap with \(1,000\) bootstrap replicates. The solid line connects point estimates and the gray lines are point-wise 95% confidence intervals.

## 8 Discussion

We considered the problem of estimating model performance in a target population that differs from the source population used for model development or model evaluation, when information on covariates, but not outcomes, is available from the target population. In much of the literature studying this setting, methods for tailoring prediction models and for evaluating performance in the target population rely on a conditional exchangeability assumption that the available covariates are sufficient to render the outcome independent of population (source or target population). When subject matter knowledge is insufficient to determine whether the assumption is plausible, analysts need to evaluate how its violations would impact the findings. Here, we developed a global sensitivity analysis approach to violations of the conditional exchangeability condition using an exponential tilt model. We derived two sensitivity analysis estimators: a plug-in estimator and an augmented estimator that is obtained from the non-parametric influence function under the sensitivity analysis model. We suggested an approach for selecting a reasonable range of values for the sensitivity parameters based on background knowledge about the prevalence rate in the target population. Last, we applied the methods to data on individuals with stable ischemic heart disease undergoing coronary revascularization surgery.

Our approach addresses a key limitation of methods for transporting prediction models and assessing their performance in a target population. Future research could address issues such as missing data (other than the outcome data in the target population), failure-time outcomes [39], and measurement error, or extensions to more complex measures of model performance such as the area under the receiver operating characteristic curve [40]. The methods proposed here have the advantage of relating the sensitivity parameter to the marginal probability of the outcome in the target population, allowing investigators to choose an initial sensitivity parameter value by drawing on background knowledge about the target population. Depending on how sharp this knowledge is, analysts can choose to expand the sensitivity analysis to cover an appropriately dispersed set of additional sensitivity parameter values around the one implied by the postulated marginal probability in the target population.
2301.03176
A note on infinite series whose terms involve truncated degenerate exponentials
The degenerate exponentials play an important role in recent study on degenerate versions of many special numbers and polynomials, the degenerate gamma function, the degenerate umbral calculus and the degenerate q-umbral calculus. The aim of this note is to consider infinite series whose terms involve truncated degenerate exponentials together with several special numbers and to find either their values or some other expressions of them as finite sums.
Dae San Kim, Hye Kyung Kim, Taekyun Kim
2023-01-09T05:24:59Z
http://arxiv.org/abs/2301.03176v1
# A note on infinite series whose terms involve truncated degenerate exponentials ###### Abstract. The degenerate exponentials play an important role in recent study on degenerate versions of many special numbers and polynomials, the degenerate gamma function, the degenerate umbral calculus and the degenerate \(q\)-umbral calculus. The aim of this note is to consider infinite series whose terms involve truncated degenerate exponentials together with several special numbers and to find either their values or some other expressions of them as finite sums. Key words and phrases:truncated degenerate exponentials; degenerate Stirling numbers of the second kind; generalized falling factorials 2010 Mathematics Subject Classification: 11B83; 11B65; 11B73 ## 1. Introduction The degenerate exponentials play an important role in recent investigations on degenerate versions of many special numbers of polynomials (see [1]). Many of them are introduced by replacing the ordinary exponentials by the degenerate exponentials in their generating functions. These include the degenerate Stirling numbers of the second, the degenerate Bernoulli polynomials, the degenerate Euler polynomials, the partially degenerate Bell polynomials, and the degenerate central factorial numbers, and so on. Not only that, the degenerate gamma function is introduced by replacing the ordinary exponential by the degenerate exponential in the integral representation of the usual gamma function (see [18]). Furthermore, as a degenerate version of the 'classical' umbral calculus, the \(\lambda\)-umbral calculus (also called degenerate umbral calculus) is developed again by making the same replacement in the generating function of the Sheffer sequences. As it turns out, the degenerate umbral calculus (see [11]) is more convenient than the umbral calculus when dealing with degenerate special numbers and polynomials. In the same vein, the \(\lambda\)-\(q\)-umbral calculus (also called degenerate \(q\)-umbral calculus) is recently introduced by replacing the \(q\)-exponential by the \(\lambda\)-\(q\)-exponential (see [19]). In conclusion, we may say that study of degenerate versions has been very fruitful (see [2-4,10-19,21]). The aim of this note is to consider several infinite series whose terms involve the truncated degenerate exponentials, \(e_{\lambda}(y)-\frac{(1)_{h,\lambda}}{1}y-\cdots-\frac{(1)_{h,\lambda}}{n!}y^ {n},\ (n\geq 0)\), and to find either their values or some other expressions of them as finite sums. Some of these infinite series also involve other special numbers, namely binomial coefficients, the generalized falling factorials (see (2)) and the degenerate Stirling numbers of the second kind (see (4), (5)). For any \(\lambda\in\mathbb{R}\), the degenerate exponentials are defined \[e_{\lambda}^{x}(t)=\sum_{n=0}^{\infty}\frac{(x)_{n,\lambda}}{n!}t^{n},\quad \text{and}\quad e_{\lambda}(t)=e_{\lambda}^{1}(t)=\sum_{n=0}^{\infty}\frac{( 1)_{n,\lambda}}{n!}t^{n}, \tag{1}\] where the generalized falling factorials are given by \[(x)_{0,\lambda}=1,\quad(x)_{n,\lambda}=x(x-\lambda)(x-2\lambda)\cdots(x-(n-1) \lambda),\quad(n\geq 1),\quad(\text{see }[10,15]). \tag{2}\] From (1), we note that \(\lim\limits_{\lambda\to 0}e_{\lambda}^{x}(t)=e^{x}\). The Stirling numbers of the second kind are given by \[x^{n}=\sum_{k=0}^{n}S_{2}(n,k)(x)_{k},\quad(n\geq 0),\quad(\text{see }[10]), \tag{3}\] where \((x)_{k}=x(x-1)\cdots(x-k+1),\quad(k\geq 1),\quad(x)_{0}=0\). 
In [10], the degenerate Stirling numbers of the second kind are defined by \[(x)_{n,\lambda}=\sum_{k=0}^{n}S_{2,\lambda}(n,k)(x)_{k},\quad(n\geq 0). \tag{4}\] From (4), we note that \(\lim\limits_{\lambda\to 0}S_{2,\lambda}(n,k)=S_{2}(n,k)\). By (4), we easily get \[\frac{1}{k!}\Big{(}e_{\lambda}(t)-1\Big{)}^{k}=\sum_{n=k}^{\infty}S_{2, \lambda}(n,k)\frac{t^{n}}{n!},\quad(k\geq 0),\quad(\text{see }[10,16,17]). \tag{5}\] The backward difference operator \(\bigtriangledown\) is defined as \[\bigtriangledown f(x)=f(x)-f(x-1),\quad(\text{see }[17]). \tag{6}\] From (6), we note that \[\binom{x-1}{n-1}=\bigtriangledown\binom{x}{n}=\binom{x}{n}-\binom{x-1}{n}, \quad(n\geq 1). \tag{7}\] Thus, by (7), we get \[\binom{x}{n}=\binom{x+1}{n}-\binom{x}{n-1},\quad(n\geq 0),\quad(\text{see }[3,4,5,6,7]). \tag{8}\] In addition, the degenerate Bell polynomials are defined by \[\phi_{n,\lambda}(x)=e^{-x}\sum_{k=0}^{\infty}\frac{(k)_{n,\lambda}}{k!}x^{k}= \sum_{k=0}^{n}S_{2,\lambda}(n,k)x^{k},\quad(n\geq 0),\quad(\text{see }[10,16]).\] ## 2. Infinite series whose terms involve truncated degenerate exponentials In the section, we will consider infinite series whose terms involve truncated degenerate exponentials. We first observe that \[\frac{1}{x-1} \bigg{(}e_{\lambda}(xy)-e_{\lambda}(y)\bigg{)}=\sum_{k=1}^{ \infty}\frac{(1)_{k,\lambda}}{k!}y^{k}\bigg{(}\frac{x^{k}-1}{x-1}\bigg{)}\] \[=\sum_{k=0}^{\infty}\frac{(1)_{k+1,\lambda}}{(k+1)!}y^{k+1}\sum_ {n=0}^{k}x^{n}=\sum_{n=0}^{\infty}x^{n}\sum_{k=n+1}^{\infty}\frac{(1)_{k, \lambda}}{k!}y^{k}\] \[=\sum_{n=0}^{\infty}x^{n}\bigg{(}e_{\lambda}(y)-1-\frac{(1)_{1, \lambda}}{1!}y-\frac{(1)_{2,\lambda}}{2!}y^{2}\cdots-\frac{(1)_{n,\lambda}}{n! }y^{n}\bigg{)}. \tag{9}\] Taking the limit as \(x\to 1\) in (9), we have \[\sum_{n=0}^{\infty}\bigg{(}e_{\lambda}(y)-1-\frac{(1)_{1,\lambda}}{1!}y-\frac{(1)_{2,\lambda}}{2!}-\cdots-\frac{(1)_{n,\lambda}}{n!}y^{n}\bigg{)}\] \[=\lim_{x\to 1}\sum_{k=1}^{\infty}\frac{(1)_{k,\lambda}}{k!}y^{k} \bigg{(}\frac{x^{k}-1}{x-1}\bigg{)}=\sum_{k=1}^{\infty}\frac{(1)_{k,\lambda}}{ k!}y^{k}k\] \[=y\sum_{k=0}^{\infty}\frac{(1-\lambda)_{k,\lambda}}{k!}y^{k}=ye_{ \lambda}^{1-\lambda}(y)=\frac{y}{1+\lambda y}e_{\lambda}(y). \tag{10}\] Therefore, by (9) and (10), we obtain the following theorem. **Theorem 2.1**.: _The following identities hold true._ \[\frac{1}{x-1}\big{(}e_{\lambda}(xy)-e_{\lambda}(y)\big{)}\] \[=\sum_{n=0}^{\infty}\bigg{(}e_{\lambda}(y)-1-\frac{(1)_{1, \lambda}}{1!}y-\frac{(1)_{2,\lambda}}{2!}y^{2}-\cdots-\frac{(1)_{n,\lambda}}{ n!}y^{n}\bigg{)}x^{n},\] \[\frac{y}{1+\lambda y}e_{\lambda}(y)\] \[=\sum_{n=0}^{\infty}\bigg{(}e_{\lambda}(y)-1-\frac{(1)_{1, \lambda}}{1!}y-\frac{(1)_{2,\lambda}}{2!}y^{2}-\cdots-\frac{(1)_{n,\lambda}}{ n!}y^{n}\bigg{)}.\] The degenerate hyperbolic cosine is defined by \[\cosh_{\lambda}(x)=\frac{e_{\lambda}(-x)+e_{\lambda}(x)}{2}.\] Note that \(\lim_{\lambda\to 0}\cosh_{\lambda}(x)=\cosh(x)\). The next corollary is immediate from Theorem 2.1. 
**Corollary 2.2**.: _The following identities hold true._ \[\sum_{n=1}^{\infty}\bigg{(}e_{\lambda}(1)-1-\frac{(1)_{1,\lambda} }{1!}-\frac{(1)_{2,\lambda}}{2!}-\cdots-\frac{(1)_{n,\lambda}}{n!}\bigg{)}x^ {n}=\frac{e_{\lambda}(x)-xe_{\lambda}(1)}{x-1}+1,\] \[\sum_{n=1}^{\infty}\bigg{(}e_{\lambda}(1)-1-\frac{(1)_{1,\lambda }}{1!}-\frac{(1)_{2,\lambda}}{2!}-\cdots-\frac{(1)_{n,\lambda}}{n!}\bigg{)}=1 -\frac{\lambda}{1+\lambda}e_{\lambda}(1),\] _and_ \[\sum_{n=1}^{\infty}\bigg{(}e_{\lambda}(1)-1-\frac{(1)_{1,\lambda} }{1!}-\frac{(1)_{2,\lambda}}{2!}-\cdots-\frac{(1)_{n,\lambda}}{n!}\bigg{)}(-1 )^{n}=1-\cosh_{\lambda}(1).\] From (8), we note that \[\sum_{n=0}^{\infty}\binom{n}{p}\bigg{(}e_{\lambda}(y)-1-\frac{(1)_{1, \lambda}}{1!}y-\frac{(1)_{2,\lambda}}{2!}y^{2}-\cdots-\frac{(1)_{n,\lambda}}{n!} y^{n}\bigg{)}\] \[=\sum_{n=p}^{\infty}\binom{n}{p}\sum_{k=n+1}^{\infty}\frac{(1)_{k, \lambda}}{k!}y^{k}=\sum_{k=p+1}^{\infty}\frac{(1)_{k,\lambda}}{k!}y^{k}\sum_{n =p}^{k-1}\binom{n}{p}\] \[=\sum_{k=0}^{\infty}\frac{(1)_{k+p+1,\lambda}}{(k+p+1)!}y^{k+p+1} \binom{k+p+1}{p+1}=\frac{y^{p+1}(1)_{p+1,\lambda}}{(p+1)!}\sum_{k=0}^{\infty} \frac{(1-(p+1)\lambda)_{k,\lambda}}{k!}y^{k}\] \[=\frac{y^{p+1}}{(p+1)!}(1)_{p+1,\lambda}e_{\lambda}^{1-(p+1) \lambda}(y)=\frac{y^{p+1}}{(p+1)!}(1)_{p+1,\lambda}(1+\lambda y)^{-(p+1)}e_{ \lambda}(y). \tag{11}\] Therefore, by (11), we obtain the following theorem. **Theorem 2.3**.: _For \(p\geq 0\), we have_ \[\sum_{n=0}^{\infty}\binom{n}{p}\bigg{(}e_{\lambda}(y)-1-\frac{(1 )_{1,\lambda}}{1!}y-\frac{(1)_{2,\lambda}}{2!}y^{2}-\cdots-\frac{(1)_{n, \lambda}}{n!}y^{n}\bigg{)}\] \[\qquad=\frac{y^{p+1}}{(p+1)!}(1)_{p+1,\lambda}(1+\lambda y)^{-(p+ 1)}e_{\lambda}(y).\] _Especially, for \(y=1\), we obtain_ \[\sum_{n=0}^{\infty}\binom{n}{p}\bigg{(}e_{\lambda}(1)-1-\frac{(1 )_{1,\lambda}}{1!}-\frac{(1)_{2,\lambda}}{2!}-\cdots-\frac{(1)_{n,\lambda}}{n! }\bigg{)}\] \[\qquad=\frac{(1)_{p+1,\lambda}}{(p+1)!}(1+\lambda)^{-(p+1)}e_{ \lambda}(1).\] From Theorem 2.3, we note that \[\sum_{n=0}^{\infty}(n)_{p}\bigg{(}e_{\lambda}(y)-1-\frac{(1)_{1, \lambda}}{1!}y-\frac{(1)_{2,\lambda}}{2!}y^{2}-\cdots-\frac{(1)_{n,\lambda}}{ n!}y^{n}\bigg{)}\] \[\qquad=\frac{y^{p+1}}{p+1}(1)_{p+1,\lambda}(1+\lambda y)^{-(p+1) }e_{\lambda}(y). \tag{12}\] By (4) and (12), we get \[\sum_{n=0}^{\infty}(n)_{p,\lambda}\bigg{(}e_{\lambda}(y)-1-\frac {(1)_{1,\lambda}}{1!}y-\frac{(1)_{2,\lambda}}{2!}y^{2}-\cdots-\frac{(1)_{n, \lambda}}{n!}y^{n}\bigg{)}\] \[=\sum_{n=0}^{\infty}\sum_{k=0}^{p}S_{2,\lambda}(p,k)(n)_{k}\bigg{(} e_{\lambda}(y)-1-\frac{(1)_{1,\lambda}}{1!}y-\frac{(1)_{2,\lambda}}{2!}y^{2}- \cdots-\frac{(1)_{n,\lambda}}{n!}y^{n}\bigg{)}\] \[=\sum_{k=0}^{p}S_{2,\lambda}(p,k)\frac{y^{k+1}}{k+1}(1)_{k+1, \lambda}(1+\lambda y)^{-(k+1)}e_{\lambda}(y). \tag{13}\] Therefore, by (13), we obtain the following theorem. 
**Theorem 2.4**.: _For \(p\geq 0\), we have_ \[\sum_{n=0}^{\infty}(n)_{p,\lambda}\left(e_{\lambda}(y)-1-\frac{(1)_{ 1,\lambda}}{1!}y-\frac{(1)_{2,\lambda}}{2!}y^{2}-\cdots-\frac{(1)_{n,\lambda}}{ n!}y^{n}\right)\] \[=\sum_{k=0}^{p}S_{2,\lambda}(p,k)\frac{y^{k+1}}{k+1}(1)_{k+1, \lambda}(1+\lambda y)^{-(k+1)}e_{\lambda}(y).\] _In particular, for \(y=1\), we get_ \[\sum_{n=0}^{\infty}(n)_{p,\lambda}\left(e_{\lambda}(1)-1-\frac{(1 )_{n,\lambda}}{1!}-\frac{(1)_{2,\lambda}}{2!}-\cdots-\frac{(1)_{n,\lambda}}{n!}\right)\] \[=\sum_{k=0}^{p}S_{2,\lambda}(p,k)\frac{(1)_{k+1,\lambda}}{k+1}(1+ \lambda)^{-(k+1)}e_{\lambda}(1).\] From (5), we note that \[\sum_{n=0}^{\infty}S_{2,\lambda}(n,k)\frac{t^{n}}{n!}=\sum_{n=k}^ {\infty}S_{2,\lambda}(n,k)\frac{t^{n}}{n!}=\frac{1}{k!}\big{(}e_{\lambda}(t)-1 \big{)}^{k}\] \[=\frac{1}{k!}\sum_{j=0}^{k}\binom{k}{j}(-1)^{k-j}e_{\lambda}^{j} (t)=\frac{1}{k!}\sum_{j=0}^{k}\binom{k}{j}(-1)^{k-j}\sum_{n=0}^{\infty}\frac{( j)_{n,\lambda}}{n!}t^{n}\] \[=\sum_{n=0}^{\infty}\left(\frac{1}{k!}\sum_{j=0}^{k}\binom{k}{j}( -1)^{k-j}(j)_{n,\lambda}\right)\frac{t^{n}}{n!}. \tag{14}\] Comparing the coefficients on both sides of (14), we obtain \[S_{2,\lambda}(n,k)=\frac{1}{k!}\sum_{j=0}^{k}\binom{k}{j}(-1)^{k-j}(j)_{n, \lambda},\quad(n,k\geq 0). \tag{15}\] Taking the limit as \(\lambda\to 0\) in (15), we have \[S_{2}(n,k)=\frac{1}{k!}\sum_{j=0}^{k}\binom{k}{j}(-1)^{k-j}j^{n},\quad(n,k \geq 0). \tag{16}\] By using (16), we derive the following: \[\frac{1}{k!}\sum_{j=0}^{k}\binom{k}{j}(-1)^{k-j}\frac{e_{\lambda} \left(jy\right)-e_{\lambda}\left(y\right)}{j-1}\] \[=\frac{1}{k!}\sum_{j=0}^{k}\binom{k}{j}(-1)^{k-j}\sum_{n=1}^{ \infty}\frac{(1)_{n,\lambda}}{n!}y^{n}\left(\frac{j^{n}-1}{j-1}\right)\] \[=\frac{1}{k!}\sum_{j=0}^{k}\binom{k}{j}(-1)^{k-j}\sum_{n=1}^{ \infty}\frac{(1)_{n,\lambda}}{n!}y^{n}\sum_{l=0}^{n-1}j^{l}\] \[=\sum_{n=1}^{\infty}\frac{(1)_{n,\lambda}}{n!}y^{n}\sum_{l=0}^{n- 1}\frac{1}{k!}\sum_{j=0}^{k}\binom{k}{j}(-1)^{k-j}j^{l}\] \[=\sum_{n=1}^{\infty}\frac{(1)_{n,\lambda}}{n!}y^{n}\sum_{l=0}^{n- 1}S_{2}(l,k)=\sum_{l=0}^{\infty}S_{2}(l,k)\sum_{n=l+1}^{\infty}\frac{(1)_{n, \lambda}}{n!}y^{n}. \tag{17}\] Therefore, by (17), we obtain the following theorem. **Theorem 2.5**.: _For \(k\geq 0\), we have_ \[\sum_{n=0}^{\infty} S_{2}(n,k)\bigg{(}e_{\lambda}(y)-1-\frac{(1)_{1,\lambda}}{1!}y- \frac{(1)_{2,\lambda}}{2!}y^{2}-\cdots-\frac{(1)_{n,\lambda}}{n!}y^{n}\bigg{)}\] \[=\frac{1}{k!}\sum_{j=0}^{k}\binom{k}{j}(-1)^{k-j}\frac{e_{\lambda} \left(jy\right)-e_{\lambda}\left(y\right)}{j-1}.\] _In particular, for \(y=1\), we get_ \[\sum_{n=0}^{\infty} S_{2,\lambda}(n,k)\bigg{(}e_{\lambda}(1)-1\frac{(1)_{1, \lambda}}{1!}-\frac{(1)_{2,\lambda}}{2!}-\cdots-\frac{(1)n,\lambda}{n!}\bigg{)}\] \[=\frac{1}{k!}\sum_{j=0}^{k}\binom{k}{j}(-1)^{k-j}\frac{e_{\lambda} \left(j\right)-e_{\lambda}(1)}{j-1}.\] **Remark 2.6**.: _We may naturally consider the following problem. For any \(k\geq 0\), find the value of_ \[\sum_{n=0}^{\infty}S_{2,\lambda}(n,k)\bigg{(}e_{\lambda}(y)-1-\frac{(1)_{1, \lambda}}{1!}y-\frac{(1)_{2,\lambda}}{2!}y^{2}-\cdots-\frac{(1)_{n,\lambda}}{ n!}y^{n}\bigg{)}.\] **Remark 2.7**.: _Much work has been done as to degenerate and truncated theories. These theories have some applications to mathematics, engineering and physics. Researchers interested in these may refer to [1-22]._ ## 3. 
Conclusion In this note, we studied infinite series whose terms involve the truncated degenerate exponentials together with binomial coefficients, the generalized falling factorials and the degenerate Stirling numbers of the second kind and determined either their values or some other expressions of them as finite sums. In recent years, we have witnessed that study of degenerate versions yielded many fascinating and fruitful results. We would like to continue to study degenerate versions of many special numbers and polynomials and to find some applications of them to physics, science and engineering.
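As a quick numerical sanity check of the second identity in Theorem 2.1, one can compare both sides using the closed form \(e_{\lambda}(t)=(1+\lambda t)^{1/\lambda}\), valid for \(|\lambda t|<1\). The short script below is ours; the values \(\lambda=0.3\) and \(y=0.7\) are arbitrary.

```python
import math

def deg_exp(t, lam):
    """Degenerate exponential e_lambda(t) = (1 + lam*t)**(1/lam)."""
    return (1.0 + lam * t) ** (1.0 / lam)

def falling(n, lam):
    """Generalized falling factorial (1)_{n, lambda}."""
    prod = 1.0
    for j in range(n):
        prod *= 1.0 - j * lam
    return prod

def truncated_tail(y, lam, n):
    """e_lambda(y) minus its Taylor polynomial of degree n."""
    partial = sum(falling(k, lam) * y ** k / math.factorial(k) for k in range(n + 1))
    return deg_exp(y, lam) - partial

lam, y = 0.3, 0.7
lhs = sum(truncated_tail(y, lam, n) for n in range(60))   # the series of Theorem 2.1
rhs = y / (1.0 + lam * y) * deg_exp(y, lam)               # its claimed closed form
print(lhs, rhs)
```

The two printed values agree up to floating-point error, as Theorem 2.1 predicts.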
2308.14101
Superpixels algorithms through network community detection
Community detection is a powerful tool from complex networks analysis that finds applications in various research areas. Several image segmentation methods rely for instance on community detection algorithms as a black box in order to compute undersegmentations, i.e. a small number of regions that represent areas of interest of the image. However, to the best of our knowledge, the efficiency of such an approach w.r.t. superpixels, that aim at representing the image at a smaller level while preserving as much as possible original information, has been neglected so far. The only related work seems to be the one by Liu et. al. (IET Image Processing, 2022) that developed a superpixels algorithm using a so-called modularity maximization approach, leading to relevant results. We follow this line of research by studying the efficiency of superpixels computed by state-of-the-art community detection algorithms on a 4-connected pixel graph, so-called pixel-grid. We first detect communities on such a graph and then apply a simple merging procedure that allows to obtain the desired number of superpixels. As we shall see, such methods result in the computation of relevant superpixels as emphasized by both qualitative and quantitative experiments, according to different widely-used metrics based on ground-truth comparison or on superpixels only. We observe that the choice of the community detection algorithm has a great impact on the number of communities and hence on the merging procedure. Similarly, small variations on the pixel-grid may provide different results from both qualitative and quantitative viewpoints. For the sake of completeness, we compare our results with those of several state-of-the-art superpixels algorithms as computed by Stutz et al. (Computer Vision and Image Understanding, 2018).
Anthony Perez
2023-08-27T13:13:28Z
http://arxiv.org/abs/2308.14101v1
# Superpixels algorithms through network community detection ###### Abstract Community detection is a powerful tool from complex networks analysis that finds applications in various research areas. Roughly speaking, it aims at grouping together nodes of a network that are densely connected while having few links with other groups. Several image segmentation methods rely for instance on community detection algorithms as a black box in order to compute _undersegmentation_[1; 2; 3; 4; 5; 6; 7; 8], _i.e._ a small number of regions that represent areas of interest of the image. However, to the best of our knowledge, the efficiency of such an approach w.r.t. _superpixels_, that aim at representing the image at a smaller level while preserving as much as possible original information, has been neglected so far. The only related work seems to be the one by Liu _et al._[9] that developed a superpixels algorithm using a so-called modularity maximization approach, leading to relevant results. We note that the algorithm used is a variant of a well-known community detection algorithm that has however not been tested in a context other than image segmentation. We follow this line of research by studying the efficiency of superpixels computed by state-of-the-art community detection algorithms on a 4-connected pixel graph, so-called _pixel-grid_. We first detect communities on such a graph and then apply a simple _merging_ procedure that allows to obtain the desired number of superpixels. As we shall see, such methods result in the computation of relevant superpixels as emphasized by both qualitative and quantitative experiments, according to different widely-used metrics based on ground-truth comparison or on superpixels only. We observe that the choice of the community detection algorithm has a great impact on the number of communities and hence on the merging procedure. Similarly, small variations on the pixel-grid may provide different results from both qualitative and quantitative viewpoints. For the sake of completeness, we compare our results with those of several state-of-the-art superpixels algorithms as computed by Stutz _et al._[10]. keywords: image segmentation, superpixels, community detection + Footnote †: journal: ## 1 Introduction In many real-life applications such as image segmentation or video analysis one may need to preprocess the image at hand in order to achieve great performances. One of the most natural such preprocessing is the computation of so-called _superpixels_ or _oversegmentations_ that represent the image at a smaller level while preserving as much as possible original information. Notable applications of superpixels include moving-object tracking, content-based image retrieval, biomedical imaging, indoor scene understanding, clothes parsing and convolutional neural networks. We refer the reader to recent comprehensive surveys for relevant references and more information on the topic [11; 10; 12]. We note that the literature mentions both the notion of superpixels and oversegmentation. According to Stutz _et al._[10], the main difference lies in the possibility to control both the number of generated segments and their compactness, which leads to _superpixels_ when present and to _oversegmentation_ otherwise. As we shall see afterward community detection algorithms usually do not encompass an explicit way to control the number of segments but the merging procedure used does provide such a control. 
Moreover, the compactness can be adjusted by considering different graphs from the image. Hence the required characteristics for superpixels are fulfilled by this method: the segments form moreover a partition of the pixels, represent connected sets of pixels with great compactness and boundary adherence (see Stutz _et al._[10]). As we shall discuss later, computing superpixels is not efficient at the moment due to the implementation choices made for the sake of reproduciblity. _Our contribution._ We illustrate that the use of community detection algorithms on the most natural graph one can imagine, namely the pixel graph with 4-connectivity (so-called _pixel-grid_, see Figure 1) yields superpixels that achieve state-of-the-art results w.r.t. several metrics [10], both objective and subjective (_i.e._ depending on ground-truth segmentations). We will also consider graphs with a given radius \(r\), meaning that pixels are considered neighbors with all pixels at distance at most \(r\). This work complements a previous analysis of Liu _et al._[9] who used a variant of a well-known community detection algorithm on a similar graph to compute superpixels. We note here that such a variant has not been studied in a context other than image segmentation and that it seems to deeply rely on the particular structure of the pixel-grid. Our study com pares three well-studied community detection algorithms on the same pixel-grid, namely Label Propagation[13], Louvain[14] and InfoMap[15]. We emphasize that using different community detection algorithms on the same graph may lead to great differences on the outputted superpixels, mainly from the qualitative viewpoint. For the sake of reproducibility, we rely on the evaluation of state-of-the-art methods proposed by Stutz _et al._[10]. As noticed by the authors, superpixel algorithms are often compared to other approaches with undisclosed or default parameter settings and with variating implementation of metrics. In particular, Stutz _et al._[10, Appendix D] provide a thorough parameter optimization analysis. All the experiments proposed in our work rigourously reproduce the work of Stutz _et al._[10] for which both implementation and plot files are available free of charge1. The results presented illustrate the relevance of this approach compared to many state-of-the-art algorithms using different principles. Footnote 1: [https://github.com/davidstutz/cviu2018-superpixels](https://github.com/davidstutz/cviu2018-superpixels) _Related work._ There are a tremendous number of superpixels algorithm in the literature that rely on many different techniques. For instance, gradient-ascent and graph-based methods have been proposed for computing superpixels. In both cases some variations are possible but the main idea remains the same: starting with a primary grouping of pixels that is then refined until some convergence criterion is reached for the former; computing a graph based on pixels of the image for the latter (see [11]). We hereafter focus on graph-based methods due to their relevance with our work. In the last decades, many works took advantage of graph theoretic tools to compute segmentation algorithms. This is for instance the case of _graph cuts_ which rely on flow algorithms on the pixel-grid to compute hard segmentations of images [16]. Another notable use of the pixel-grid is the work of Felzenszwalb and Huttenlocher [17] who used minimum spanning trees algorithms to compute efficient segmentations. 
More recently, community detection algorithms have been used in several works as a tool to compute undersegmentations [1; 2; 3; 4; 5; 6; 7; 8]. In most cases the used algorithm relies either on a large graph or on a presegmented graph augmented with some features. For the former method, let us mention the work of Nguyen _et al._[7] who relied on the Louvain algorithm [14] on a pixel-grid of radius 20 and on a merging procedure to obtain the sought segmentation. Regarding the latter approach, Mourchid _et al._[6] first computed superpixels using Mean-Shift [18] as their initial segmentation and then used color and texture-based features to obtain their final segmentation through community detection algorithms. However, all aforementioned works aim at computing _undersegmentations_ and thus do not evaluate intermediate results that may yield oversegmentations meeting many requirements of superpixels. Note that this phenomenon is actually noticed in the work of Nguyen _et al._[7]. Very recently, Liu _et al._[9] proposed a superpixels algorithm using a 8-connected pixel-grid with radius 1. The superpixels were detected using a greedy modularity maximization, a measure that quantifies the quality of a given community structure. Their algorithm heavily relies on the strong structural properties of pixel-grids and may thus not lead to relevant communities for other graphs. The final undersegmentation was also computed with a merging procedure. _Outline._ We begin by giving a general picture of the approach with a particular focus on community detection algorithms (Section 2). We next turn our attention to experimental results, both qualitative (Section 4.2) and quantitative (Section 4.1). Before doing so, we thoroughly describe datasets (Section 3.1), metrics (Section 3.2) and methods (Section 3.3) used in our comparative studies. Finally, we give insights on the impact of the community detection algorithms used in Section 5 and conclude this work by some perspectives Section 6. ## 2 Description of the framework Throughout the paper we consider \((n\times n)\)-sized images with pixels \(I=\{p_{1,1}\ldots,p_{n,n}\}\). As observed in previous works [8; 10], the \(L^{\textit{s}}\)_a*b*_ space is the closest to the human perception and is hence the one chosen for our study. _Building the graph._ A graph is a pair \(G=(V,E)\) where \(V\) denotes its vertex set and \(E\subseteq[V]^{2}\) its edge set. Here \([V]^{2}\) denotes the set of all pairs of elements of \(V\). Unless stated otherwise the considered graphs are simple (without self-loops nor multiedges), undirected and _weighted_, meaning that every graph \(G=(V,E)\) comes together with a weight function \(\omega:E\rightarrow\mathbb{R}^{+}\). We consider as weight function a _similarity measure_ that will be described Eq.1. We consider the simplest way to obtain a graph from an image, that is by considering pixels as neighbors using the 4-connectivity model within a given distance. More precisely, given an integer \(r\geqslant 1\), called _radius_, we define the \(r\)_-pixel grid_ graph of image \(I\) as the graph \(G_{r}=(V_{r},E_{r})\) where \(V_{r}=I\) and there is an edge \(pp^{\prime}\) in \(E_{r}\) if the corresponding pixels are both on the same row or the same column of \(I\) and at distance at most \(r\) from each other. (Obviously edges with out-of-bounds indices are not considered in this process.) Note that the 1-pixel grid is simply the graph obtained from the direct 4-connected neighborhood of every pixel. 
In this work we will consider values of \(r\) ranging from 1 to 10, the best results being achieved for \(r=5\) in most cases. For the sake of comparison, Nguyen _et al._[7] used a 20-pixel grid to compute their undersegmentation, which implies that the used graph was significantly larger. Moreover, they removed edges whose similarity was below some fixed threshold \(\rho\) but considered unweighted graphs. In our setting, we remove edges in a similar manner but we weight the remaining edges accordingly. We use a similarity measure based on the channel differences between pixels (or the mean of a region for a presegmented image), that is, a Gaussian-type radial basis function: \[\omega(p,p^{\prime})=\exp\frac{-|p-p^{\prime}|^{2}}{2\cdot\sigma^{2}} \tag{1}\] where \(\sigma\) is a parameter that defines how _close_ two regions must be for the corresponding edge weight to be significant. There are actually several graphs considered in the literature, some preserving all edges in a given radius (which corresponds to a threshold \(\rho=0\)) while other approaches preserve edges above a given threshold only. Moreover, some authors consider weighted graphs [9] while others use the weight as a binary mask to remove or preserve unweighted edges [7]. As we shall see in Table 1 and Section 5, these choices may have a great impact on pixel-grids and on computed communities.

_Computing communities._ The next step for computing superpixels is to detect communities on the given \(r\)-pixel grid. Roughly speaking, a _community structure_ of a given graph is a partition2 of its vertex set such that every part is densely connected while there are few edges between two distinct parts. One may think of communities in social networks as a way to gather individuals that are similar according to some properties. Regarding image analysis, the intuition is that pixels that are similar (_e.g._ regarding some color features) should be contained in the same community. There exist many community detection algorithms that sometimes exhibit different behaviors and that may hence capture different properties of the graph [19]. Our study will use three algorithms for the sake of comparison, that we briefly describe hereafter. Footnote 2: Let us mention that communities can sometimes be defined as _overlapping_, a feature that does not suit our purpose since we aim at computing partitions of pixels of the image. * Label Propagation [13] is an iterative algorithm that assigns labels to vertices, corresponding to communities. The initial step of the algorithm labels every vertex \(\{v_{1},\ldots,v_{n}\}\) of an \(n\)-vertex graph with its corresponding index \(1\leqslant i\leqslant n\). Labels are then propagated by considering vertices in a random order and giving to each vertex the majority label of its neighbors. The process stops when all vertices are labeled with the majority label of their respective neighborhoods. A particular feature of this algorithm is that two consecutive runs may end up with rather different community structures. * Louvain[14] is an agglomerative algorithm that greedily optimizes the so-called _modularity_ of the graph, a measure that quantifies the quality of a given community structure. Roughly speaking, the algorithm starts from an existing partition into communities (_e.g._ the singleton partition) and computes the _gain_ obtained by moving any vertex to a different community. 
It stops when no valuable move remains and then merges each community into a single vertex, repeating this process until no valuable move is made. * InfoMap relies on a flow-based and information theoretic method called the map equation [15]. Quoting the work of Rosvall _et al._[20]: _The map equation specifies the theoretical limit of how concisely we can describe the trajectory of a given walker in the network. With such a random walker as a proxy for real flow, minimizing the map equation over all possible network partitions reveals important aspects of network structure with respect to the dynamics of the network._ Rosvall _et al._[20] moreover emphasize that methods based on the map equation and on modularity maximization may yield really different community structures, making the study of those algorithms well-suited for our purpose.

_Merging communities._ The final step of the procedure is to reduce further the number of superpixels produced by the algorithms. A similar procedure was used in several works [7; 9] to produce image segmentations based on both the size and the similarity of superpixels. As noticed for instance by Liu _et al._[9], the sought number of superpixels \(K\) can be explicitly managed by merging all communities with size smaller than \(\frac{n^{2}}{K}\). Note that there are many criteria that can be considered for merging initial oversegmentations, such as the size of regions and the similarity between regions. We choose in this work to focus on a merging procedure based on the sizes of the communities, namely by merging any small enough region with its closest neighboring one w.r.t. the similarity defined in Eq. (1), until the number of desired superpixels is reached. To that aim, we compute a _Region Adjacency Graph_ with radius 1. We note that, in order to avoid too many very small superpixels, we begin by merging communities the size of which is less than \(\frac{n^{2}}{10\cdot K}\). Obviously, this approach allows us to control the number of outputted regions, thus meeting the requirements of a superpixels algorithm. See Figure 1 for an example of the framework. Let us mention here that Liu _et al._[9] chose a different approach in their work by merging small regions with the _largest_ neighboring one.

Figure 1: Different steps of the algorithm starting from an image from BSDS500. The lower part of the resulting image has 200 superpixels while the upper part shows 1000 regions.

_Details of implementation._ We conducted all experiments using python3 with scikit-image for dealing with images, networkx and networkit for most graph-related operations and the module Infomap. Moreover, metrics are replicated from the C++ implementation of Stutz _et al._[10]. Source code is available on a public github repository. As mentioned in the introductory section, the aim of this study is to highlight the relevance of community detection to compute superpixels. We hence did not aim at achieving great performance in computing the segmentations. We will discuss this matter more thoroughly in Section 6.

## 3 Experimental setup

### Datasets

We now give a brief description of the datasets used in our experiments as well as references and links to retrieve the _raw_ datasets. Let us recall that we closely follow the work of Stutz _et al._[10, Section 4] and use datasets that have been preprocessed. Most of the following paragraphs are extracted from [10].

_BSDS500_ [21]. The Berkeley Segmentation Dataset 500 is one of the earliest that has been used for superpixel algorithm evaluation. 
It contains 500 images that come together with at least 5 ground-truth segmentations for every image. We consider the 200 images that are part of the test set. Following the work of Stutz _et al._[10], we choose for each image the ground truth providing the _worst_ score for a given metric and average such values over all images.

_SBD_ [22]. The Stanford Background Dataset combines 715 images from several datasets, implying that the sizes, qualities and scenes vary between images. The scenes tend to be more complex than those of the BSDS500 dataset. The used dataset provided by Stutz _et al._[10] has been preprocessed to ensure connected segments in ground truth and contains 477 randomly chosen images.

_NYUV2_ [23]. The NYU Depth Dataset V2 contains 1449 images. The labels provided by Silberman _et al._[23] ensure connected segments and the data has been further preprocessed by Stutz _et al._[10] (following Ren and Bo [24]) to remove small unlabeled regions. Note that the provided ground truth is of lower quality than in the BSDS500 dataset. The preprocessed dataset contains 399 randomly chosen images.

_SUN RGB-D_ [25]. The SUN RGB-D dataset is made of 10335 images combined from several datasets [26, 27], including NYUV2 (which has here been excluded). Images have been acquired through several devices [25] and ground truth has been preprocessed by Stutz _et al._[10] in a similar manner as for NYUV2. The preprocessed dataset contains 400 images chosen at random.

_Pixel-grids._ Table 1 provides some information about the pixel-grid obtained for different radii and threshold values on the BSDS500 dataset. We use \(G_{r}^{\rho}\) to denote an \(r\)-pixel grid where only edges with a weight greater than \(\rho\) are preserved. Note that due to the threshold some vertices may become isolated and are thus not added to the graph. We also illustrate in Figure 2 the impact the radius has on computed superpixels. We will discuss these differences further in Section 4.

\begin{table} \begin{tabular}{c c c c c} \hline \hline & vertices & edges & weight & CO (Table 2) \\ \hline \hline \(G_{1}^{0}\) & 154401 & 308000 & 254752 & 0.31892 \\ \(G_{1}^{0.98}\) & 97808 & 117528 & 116846 & 0.19405 \\ \(G_{2}^{0}\) & 154401 & 615198 & 480182 & 0.39695 \\ \(G_{2}^{0.98}\) & 103765 & 199446 & 198228 & 0.18930 \\ \(G_{5}^{0}\) & 154401 & 1531980 & 1079130 & 0.50955 \\ \(G_{5}^{0.98}\) & 110143 & 377524 & 375030 & 0.17446 \\ \hline \hline \end{tabular} \end{table} Table 1: Number of vertices, edges and total edge weight for \(r\)-pixel grids with \(r\) in \(\{1,2,5\}\) and \(\rho\) in \(\{0,0.98\}\). Values are rounded. The last column indicates the compactness for 1000 superpixels on BSDS500.

Figure 2: Impact of the radius for a sample image from BSDS500. A larger radius implies larger superpixels on background areas with more small regions around objects and shapes. All images are computed with a threshold \(\rho=0.98\) and \(K=1000\).

Finally, the last column gives the Compactness (see Section 3.2) on the BSDS500 dataset for \(K=1000\) superpixels. As one can observe, the best values are achieved for a radius \(r=2\)_and_ a small threshold. This property actually holds regardless of the number of sought superpixels and thus allows one to control the compactness of the superpixels. However, all other metrics presented in Table 2 have worse values for such combinations of radius and threshold. We hence choose not to include them in the remainder of our study. Nonetheless, let us mention that Stutz _et al._[10, Fig. 
8] provides Compactness values for all considered datasets. For 1000 superpixels, the best value is achieved by **TPS** with 0.56146 while **SEEDS** and **ETPS** are the two lowest with 0.08952 and 0.16260, respectively. This enlightens that results provided by InfoMap are also relevant w.r.t. Compactness. ### Evaluation metrics There are many metrics that can be used to assess the quality of superpixels algorithms, mostly relying on a ground truth segmentation of the image at hand. In this work we choose four metrics that have been introduced in various articles [28; 29; 30; 31]. Our choice is here again guided by Stutz _et al._[10, Section 5] who give a detailed analysis of many metrics and ultimately focus on these ones. In the remaining of this section and in a slight abuse of notation we let \(I(p_{i,j})\), \(1\leqslant i\leqslant j\leqslant n\) denote the intensity of pixel \(p_{i,j}\). We moreover let \(\mathcal{S}=\{S_{1},\ldots,S_{p}\}\) and \(\mathcal{G}=\{G_{1},\ldots,G_{t}\}\) be partitions of a same image \(I\) into superpixels and ground truth segmentation, respectively. All metrics are summarized Table2. We give hereafter a brief overview of such metrics as presented by Stutz _et al._[10]. _Boundary recall_ (Rec) is widely used to assess the quality of a superpixel segmentation, and more precisely boundary adherence given ground truth. _Under-segmentation error_ (UE) aims at measuring the overlap of superpixels with multiple, nearby ground truth segments. We note that the original formulation for Under-Segmentation Error was given by Levinshtein _et al._[32] as follows: \[\mathrm{UE}_{Levin}(\mathcal{G},\mathcal{S})=\frac{1}{|\mathcal{G}|}\cdot \sum_{G_{i}}\frac{\big{(}\sum_{S_{j}\cap G_{i}\neq\emptyset}|S_{j}|\big{)}-|G_ {i}|}{|G_{i}|}\] However, as observed for instance in [28; 33] such a definition penalizes superpixels overlapping only slightly with neighboring ground truth segments and is not constrained to lie in \([0,1]\). The adapted version chosen by Stutz _et al._[10] has been proposed by Neubert and Protzel [28]. _Explained variation_ (EV) [31] quantifies the quality of a superpixel segmentation by assessing boundary adherence without relying on human annotations. High EV scores means better superpixels explanation of the variation of the image. _Compactness_ (CO) has been introduced by Schick _et al._[30] to evaluate the compactness of superpixels. Compactness compares the area of a superpixel \(S_{j}\) with the area of a circle with the same perimeter \(P(S_{j})\). High CO value means better compactness. ### Methods for comparison We now briefly describe the state-of-the art methods used in our experiments. We based our choices on the work of Stutz _et al._[10] which highlighted the efficiency of chosen methods w.r.t. metrics defined Section3.2. In order to present a comparative study as meaningful as possible, we selected a couple of the best methods for different superpixels paradigms, namely density, graph, path, clustering and energy optimization based methods. The presentation of such methods is inspired by [10, Section 3]. _Density-based._ Such algorithms _perform mode-seeking in a computed density image and usually do not offer control on the number of superpixels or their compactness_[10]. Yet tuning the parameters can allow to achieve such objectives. We compare the presented method to **EAMS**[18] and **QS**[34]. 
_Graph-based._ The common feature of these methods is to extract a graph from the image at hand and then to use graph-theoretic tools to compute superpixels. Many tools have been used, such as network flows (graph cuts [16]) and minimum spanning trees [17]. Hence the main difference lies in the method used to compute the superpixels, the graph usually being based on a pixel (dis)similarity measure. We compare the presented method to **FH**[17] and **ERS**[29]. _Path-based._ Methods based on path detection compute superpixels by creating paths between seeds that respect some criteria. We compare the presented method to the **TPS**[35] algorithm, which relies on edge detection. _Clustering-based._ These algorithms are inspired by clustering methods such as \(k\)-means and are close to the approach we consider here. As for path-based methods, clustering-based superpixel algorithms use seeds and several pieces of image-related information as features to group pixels together. As noted by Stutz _et al._[10], the resulting superpixels may be disconnected and further postprocessing is needed to ensure connectivity. We compare the presented method to **SLIC**[33]. _Energy optimization-based._ Starting from a regular grid, pixels are then exchanged with neighboring superpixels w.r.t. some energy function. These methods achieve high accuracy and efficiency. We compare the presented method to **SEEDS**[36] and **ETPS**[37]. ## 4 Experimental results We focus in this section on superpixels obtained using InfoMap, which provides the best trade-off between the quality of the qualitative and quantitative comparisons. We will see in Section 5 that using different community detection algorithms such as Label Propagation or Louvain may lead to slightly worse quantitative results with a significant qualitative difference. Moreover, one advantage of such an algorithm compared to Label Propagation or Louvain is that it gives consistent community structures from one image to another, allowing better control of the number of superpixels in the merging procedure. **Parameters.** Besides the parameters specific to the chosen community detection algorithms, this approach does not require a lot of parameter tuning. More precisely, one needs to fix values for the radius of the pixel-grid \(r\), the threshold \(\rho\) relative to Eq. 1 used to decide which edges to preserve, the number of desired superpixels and the parameter \(\alpha\) in Eq. 1. The latter has been experimentally chosen as \(\alpha=125\). We refer to Table 1 for the size of the pixel-grid according to both radius and threshold values. Our experiments show that choosing a combination of \(r=5\) and \(\rho=0.98\) provides relevant superpixels while maintaining a graph of reasonable size. We will present in Section 5 some results using different radii and observe that their performance is a bit worse. Similarly, metrics computed with smaller threshold values were almost always significantly worse and are thus not reported here, with the notable exception of Compactness, which can be improved using smaller values for \(\rho\) (the best being 0, meaning not filtering the pixel-grid; recall Table 1). Regarding the merging procedure, the only parameter needed is the desired number of superpixels, which varies from 200 to 5000 with different increments: steps of 200 up to 1000 and steps of 500 up to 2500. We also compute 5000 superpixels. ### Quantitative comparison We first turn our attention to a quantitative comparison with the methods mentioned in Section 3.3 on the datasets described in Section 3.1.
We compare our results with results extracted from the work of Stutz _et al._[10]. Recall that we compute the metrics on superpixels generated using InfoMap on a 5-pixel grid and with \(\rho=0.98\) as threshold for all datasets, which provides the most relevant results. However, we will see in Figure 5 that small variations of the radius may have an impact on the computed metrics. As one can see in Figure 3, this approach produces relevant results for all metrics. The recall values lie in the top three methods on any dataset. A similar observation holds for UE values, while the best EV is achieved for a large number of superpixels. We also display the standard deviation of the metrics for all datasets. For the sake of readability, we do not display standard deviations for state-of-the-art methods. Such values can be found in [10, Fig. 9] and indicate that **SEEDS** and **ETPS** tend to have smaller standard deviations than InfoMap for Rec. On the other hand, the displayed values for UE and EV are rather similar to the ones in [10, Fig. 9]. We note that for the SBD dataset the mean number of superpixels when \(K=5000\) is actually 4984. This comes from the fact that for a few images, InfoMap computed a small number of communities and thus the method does not produce the exact number of superpixels. This is the case for exactly five images, and the number of superpixels produced for three of them is less than 4000. Figure 3: Quantitative comparison on the datasets described in Section 3.1. Values for other methods are reported from Stutz _et al._[10]. The results presented for this framework use a threshold value \(\rho=0.98\) and a radius \(r=5\). We display in Figure 4 a qualitative comparison with the methods that present the best result w.r.t. the chosen metrics as seen in Section 4.1. On the other hand, their regularity and compactness are not as clearly marked as those of **SLIC**. Note however that, as shown in Table 1 and Figure 2, the above properties can be adjusted by using different radius and threshold values. Hence, depending on the image at hand, a simple adjustment of these values may lead to significant differences between superpixels. ## 5 The impact of community detection In this section we discuss the impact the choice of the community detection algorithm may have on the computed superpixels. We limit our study to the BSDS500 dataset, which provides meaningful insights. We begin by showing in Table 3 the number of communities computed by each algorithm for radii \(r\in\{2,5\}\) and thresholds \(\rho\in\{0,0.98\}\) used for building the pixel-grid. The difference in the number of computed communities along with their sizes emphasizes a difference in the regularity and the sizes of said communities. This has a great impact on the merging procedure that provides the final superpixels. **Quantitative difference between algorithms.** As one may observe in Figure 5, the considered methods provide relevant results with respect to the chosen metrics. While all methods produce relevant outcomes, InfoMap outperforms the other considered algorithms. To emphasize the impact the radius may have on each algorithm, we choose \(K=1000\) as the number of computed superpixels. Figure 4: Qualitative comparison with selected methods, namely **SEEDS**, **SLIC** and **ETPS**, on selected datasets with a number of superpixels of \(K=1000\). Figure 5: Quantitative comparison between Label Propagation, Louvain and InfoMap on the BSDS500 dataset. The results presented use a threshold value of \(0.98\) and are shown with radii \(\{1,2,5,10\}\).
The differences observed on the metrics are actually similar for all numbers of superpixels and across datasets. An interesting observation is that using a 10-pixel grid improves both Rec and EV but not UE, while increasing the size of the graph. Even if performance is not taken into account in this study, we note that increasing the radius may slow down the computation of communities and hence does not seem to be well-suited for this framework. As mentioned previously, the set of communities computed by each algorithm shows significant variations that may explain the difference in computed metrics. This is also emphasized by a qualitative comparison of each algorithm. **Qualitative difference between algorithms.** Figure 6 shows that InfoMap tends to exhibit more compact and regular superpixels. In particular, on images with a clearly identified background, both Label Propagation and Louvain tend to group the background into a few large superpixels. This is mainly due to the ability of InfoMap to produce smaller communities (recall Table 3), which hence results in a regular initial oversegmentation for which the merging procedure yields relevant results. ## 6 Conclusion In this work we analyzed the relevance of superpixels computed through community detection algorithms and a merging procedure on a simple graph extracted from the image at hand. As mentioned in Section 1, this approach has previously been used to compute _undersegmentation_ in several works [1; 2; 3; 4; 5; 6; 7; 8] but, to the best of our knowledge, its use for the computation of superpixels has been neglected so far, the work of Liu _et al._[9] being the only notable exception. We hence highlight the relevance of this approach by providing both qualitative and quantitative results w.r.t. state-of-the-art methods. One interesting property of this approach is its ability to evolve in time: novel community detection algorithms are frequently introduced (see _e.g._[38]) and any improvement may also improve this approach. Notice that in this work we chose to rely on three well-studied and widely used algorithms to ensure the relevance of the presented results. Hence, as one of the most straightforward extensions, one may consider the analysis of more recent community detection algorithms in order to better understand the impact of such a choice. In particular, it would be interesting to use a community detection algorithm that allows control over the number of computed communities. Another approach is to consider the graph extracted from the image. We here made the choice to work on a very simple graph, in particular using a weight function that only relies on color similarities. It would be interesting to study the impact of a more intricate similarity measure, for instance accounting for the histogram of oriented gradients as in [7] or with a trade-off between feature similarity and Euclidean distance as in [39]. We would like to mention that we also conducted experiments without weighting the graph, which yielded less relevant results. This seems to indicate that choosing a different weight function may have some impact on the results. Finally, the choices made for the merging procedure may result in different outcomes. We studied a basic strategy that provides relevant results, and it could be interesting to study further the impact of such a procedure. We note here that we tried another approach that merged really small regions first and then neighboring remaining regions if their similarity is above the similarity threshold \(\rho=0.98\).
Quite expectedly, this approach provided similar results and does not seem to lead to any improvement. **Acknowledgments.** The author is deeply grateful to David Stutz for fruitful discussions and for pointing out the availability of the LaTeX source of the paper _Superpixels: An evaluation of the state-of-the-art_ [10]. \begin{table} \begin{tabular}{l c c||c c||c c} \(G_{2}^{0}\) & \multicolumn{2}{c}{\#Communities} & \multicolumn{2}{c}{Max. size} & \multicolumn{2}{c}{Min. size} \\ & \(mean\) & \((std)\) & \(mean\) & \((std)\) & \(mean\) & \((std)\) \\ \hline LP & 5760 & (3228) & 2579 & (1231) & 1.2 & (0.39) \\ Louvain & 79 & (29) & 5840 & (1653) & 97 & (234) \\ InfoMap & 2780 & (634) & 206 & (25) & 2 & (0.34) \\ \hline \(G_{5}^{0}\) & \multicolumn{2}{c}{\#Communities} & \multicolumn{2}{c}{Max. size} & \multicolumn{2}{c}{Min. size} \\ & \(mean\) & \((std)\) & \(mean\) & \((std)\) & \(mean\) & \((std)\) \\ \hline LP & 1924 & (1308) & 10940 & (6120) & 1.79 & (0.41) \\ Louvain & 46 & (19) & 10759 & (3786) & 180 & (563) \\ InfoMap & 1054 & (335) & 684 & (93) & 2.06 & (0.31) \\ \end{tabular} \end{table} Table 3: Statistics of the communities computed by the algorithms on the BSDS500 dataset. The number of communities as well as the maximum and minimum community sizes are reported. For all values we show the mean and standard deviation. Figure 6: Qualitative comparison of the output superpixels for selected datasets and community detection algorithms.
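As a compact companion to the approach studied in this paper, the following Python sketch builds an \(r\)-pixel grid and derives an initial oversegmentation from the detected communities. It is only an illustrative approximation and not the authors' implementation: a Gaussian colour similarity stands in for the exact weight function of Eq. 1, Louvain from networkx replaces InfoMap, and the merging procedure that reduces the regions to exactly \(K\) superpixels is omitted.

```python
import numpy as np
import networkx as nx

def build_pixel_grid(image, r=5, rho=0.98, alpha=125.0):
    """r-pixel grid: one vertex per pixel, an edge between pixels at Euclidean
    distance at most r, kept only when its weight exceeds rho.  The Gaussian
    colour similarity below is a stand-in for the paper's Eq. 1 (unoptimized)."""
    img = np.asarray(image, dtype=np.float64) / 255.0
    h, w = img.shape[:2]
    # half-neighbourhood so that each unordered pixel pair is considered once
    offsets = [(di, dj) for di in range(r + 1) for dj in range(-r, r + 1)
               if (di, dj) > (0, 0) and di * di + dj * dj <= r * r]
    G = nx.Graph()
    for i in range(h):
        for j in range(w):
            for di, dj in offsets:
                ni, nj = i + di, j + dj
                if 0 <= ni < h and 0 <= nj < w:
                    weight = np.exp(-alpha * np.sum((img[i, j] - img[ni, nj]) ** 2))
                    if weight > rho:  # thresholded edges; isolated pixels are never added
                        G.add_edge(i * w + j, ni * w + nj, weight=weight)
    return G

def initial_oversegmentation(image, r=5, rho=0.98, alpha=125.0, seed=0):
    """Community detection on the pixel grid; returns a per-pixel label map."""
    h, w = np.asarray(image).shape[:2]
    G = build_pixel_grid(image, r=r, rho=rho, alpha=alpha)
    communities = nx.community.louvain_communities(G, weight="weight", seed=seed)
    labels = np.full(h * w, -1, dtype=np.int64)
    for k, comm in enumerate(communities):
        labels[list(comm)] = k
    nxt = len(communities)
    for idx in np.flatnonzero(labels < 0):  # pixels dropped by the threshold
        labels[idx] = nxt
        nxt += 1
    # the paper's merging procedure would now fuse these regions down to K superpixels
    return labels.reshape(h, w)
```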
2303.16464
Lipschitzness Effect of a Loss Function on Generalization Performance of Deep Neural Networks Trained by Adam and AdamW Optimizers
The generalization performance of deep neural networks with regard to the optimization algorithm is one of the major concerns in machine learning. This performance can be affected by various factors. In this paper, we theoretically prove that the Lipschitz constant of a loss function is an important factor to diminish the generalization error of the output model obtained by Adam or AdamW. The results can be used as a guideline for choosing the loss function when the optimization algorithm is Adam or AdamW. In addition, to evaluate the theoretical bound in a practical setting, we choose the human age estimation problem in computer vision. For assessing the generalization better, the training and test datasets are drawn from different distributions. Our experimental evaluation shows that the loss function with a lower Lipschitz constant and maximum value improves the generalization of the model trained by Adam or AdamW.
Mohammad Lashkari, Amin Gheibi
2023-03-29T05:33:53Z
http://arxiv.org/abs/2303.16464v3
Lipschitzness Effect of a Loss Function on Generalization Performance of Deep Neural Networks Trained by Adam and AdamW Optimizers ###### Abstract The generalization performance of deep neural networks with regard to the optimization algorithm is one of the major concerns in machine learning. This performance can be affected by various factors. In this paper, we theoretically prove that the Lipschitz constant of a loss function is an important factor to diminish the generalization error of the output model obtained by Adam or AdamW. The results can be used as a guideline for choosing the loss function when the optimization algorithm is Adam or AdamW. In addition, to evaluate the theoretical bound in a practical setting, we choose the human age estimation problem in computer vision. For assessing the generalization better, the training and test datasets are drawn from different distributions. Our experimental evaluation shows that the loss function with a lower Lipschitz constant and maximum value improves the generalization of the model trained by Adam or AdamW. + Footnote †: Amirkabir University of Technology (Tehran Polytechnic) ## 1 Introduction The adaptive moment estimation (Adam) algorithm is one of the most widely used optimizers for training deep learning models. Adam is an efficient algorithm for stochastic optimization, based on adaptive estimates of first-order and second-order moments of the gradient [1]. The method is computationally efficient and has low memory usage. Adam is much more stable than stochastic gradient descent (SGD) and the experiments of [1] show that it is faster than previous stabilized versions of SGD, such as SGDNesterov [2], RMSProp [3] and AdaGrad [4], at minimizing the loss function in the training phase. It has recently been used in several machine learning problems and performs well. Thus, any improvement in the generalization performance of a model trained by Adam is essential. One of the main concerns in machine learning is the generalization performance of deep neural networks (DNNs). A generalization measurement criterion is the generalization error, which is defined as the difference between the true risk and the empirical risk of the output model [5]. One established way to derive an upper bound on the generalization error of machine learning models is the notion of uniform stability [5; 6; 7]. Roughly speaking, the uniform stability measures the difference in the error of the output model caused by a slight change in the training set. The pioneering work of [6] shows that if a deterministic learning algorithm is more stable, then the generalization error of the ultimate model achieves a tighter upper bound. In the following work of [7], Hardt _et al._ extend the notion of uniform stability to randomized learning algorithms to derive an upper bound for the expected generalization error of a DNN trained by SGD. They prove that SGD is more stable, provided that the number of iterations is sufficiently small. In the recent work of [5], Ali Akbari _et al._ derive a high probability generalization error bound instead of an expected generalization error bound. They demonstrate that if SGD is more uniformly stable, then the generalization error bound is tighter. They also proved the direct relationship between the uniform stability of SGD and loss function properties, i.e.
its Lipschitzness, resulting in the generalization error connection to the Lipschitz constant of a loss function. In our work, Adam is central instead of SGD. We distinguish the relationship between the uniform stability of Adam and Lipschitzness of a loss function. Through this way, we connect the generalization error of a DNN trained by Adam to the properties of a loss function including its Lipschitzness. Subsequently, we assess the generalization performance of a DNN trained by AdamW optimizer which decouples weight decay from estimates of moments to make the regularization technique more effective [8]. We connect the uniform stability of AdamW and the generalization error of a DNN trained by it to Lipschitzness of a loss function. In Experiments, we evaluate our theoretical results in the human age estimation problem. Human age estimation is one of the most significant topics in a wide variety of applications such as age-specific advertising, customer profiling, or recommending new things. However, we are facing many challenges to solve this problem. Face makeup, insufficient light, skin color, and unique features of each person are the factors that can affect the accuracy of the model. Based on these reasons, collecting more data cannot necessarily reduce the generalization error of the final model. The practical results show that choosing a stable loss function can improve the accuracy of the model trained by Adam or AdamW. ## 2 Related Work There is a variety of approaches to derive upper bounds for the generalization error including algorithmic stability [5; 6; 7; 9; 10; 11], robustness [12; 13], and PAC-Bayesian Theory [14; 15]. Each of these approaches theoretically analyzes some effective factors and gives the researchers some information which can enhance the generalization performance of deep learning models. The notion of uniform stability was firstly introduced in [6] for deterministic algorithms. It was extended to randomized algorithms in the work of [7] to derive an expected upper bound for the generalization error which is directly related to the number of training epochs of SGD. Recently, based on the uniform stability definition of SGD, the generalization error of a DNN trained by it, with high probability, is upper-bounded by a vanishing function which is directly related to the Lipschitz constant and the maximum value of a loss function [5]. In our work, we analyze the uniform stability for Adam and AdamW and its relationship with the Lipschitz constant of a loss function. We show that the loss function proposed in [5] stabilizes the training process and reduces the generalization error when the optimization algorithm is Adam or AdamW. ## 3 Preliminaries Let \(X\) and \(Y\subseteq\mathbb{R}^{\mathbb{M}}\) be the input and output spaces of a problem respectively, and \(F\) be the set of all mappings from \(X\) to \(Y\). A learning problem is to find \(f^{\theta}:X\to Y\), parameterized by \(\theta\in H\) where \(H\) is the set of all possible values for the neural network parameters. Assume \(\ell:Y\times Y\rightarrow\mathbb{R}^{+}\) denotes the loss function of the problem. The goal of a learning algorithm is to minimize the true risk \(R_{true}(f^{\theta})\coloneqq\mathbb{E}_{\mathrm{(x,y)}}\left[\ell(f^{\theta }(\mathrm{x}),y)\right]\) where \(\mathrm{(x,y)}\in X\times Y\): \[f^{\theta}_{true}=\operatorname*{argmin}_{f^{\theta}\in F}R_{true}(f^{\theta}). 
\tag{1}\] Since the distribution of \(X\times Y\) is unknown; \(f^{\theta}_{true}\) cannot be found in the equation (1). Hence, we have to estimate the true risk. Let \(S\in(X\times Y)^{N}\) be the training set. The true risk is estimated by the empirical risk \(R_{emp}(f^{\theta})\coloneqq\frac{1}{N}\sum_{i=1}^{N}\ell(f^{\theta}(\mathrm{ x_{i}}),\mathrm{y_{i}})\) in which, \(N=|S|\) and \(\mathrm{(x_{i},y_{i})}\in S\). In current deep learning algorithms, training the model means minimizing \(R_{emp}(f^{\theta})\). In the rest of this paper, in the theorems and proofs, the loss function is denoted by \(\ell(\mathrm{y},\mathrm{y})\) where \(\mathrm{\hat{y}}\) is the prediction vector and \(\mathrm{y}\) is the target vector. **Definition 3.1** (Partition): _Suppose that \(S\) is a training set of size \(N\). Let \(1<k<N\) be a number that \(N\) is divisible by \(k\) (if it is not possible, we repeat a sample enough to make divisibility possible). A partition of \(S\), which we denote by \(B_{S}=\{B_{1},B_{2},\ldots,B_{k}\}\), is a set of \(k\) subsets of \(S\) such that every sample is in exactly one set and the size of each subset is \(\frac{N}{k}\)._ We use the definition 3.1 to formalize the training process of deep learning models mathematically. Assume \(S\) is the training set and \(B_{S}=\{B_{1},B_{2},\ldots,B_{k}\}\) is a partition of it. Each element of \(B_{S}\) represents a mini-batch of \(S\). Without loss of generality we suppose that in each iteration of the optimization algorithm, a mini-batch \(B_{i}\in B_{S}\) is randomly selected to the parameters be updated. This is done by the algorithm using a random sequence \(R=(r_{1},r_{2},\ldots,r_{T})\) of indices of elements in \(B_{S}\), where \(T\) is the number of iterations. We use \(f^{\theta}_{B_{S},R}\) to denote the output model of the optimization algorithm, applied to a partition \(B_{S}\) and a random sequence \(R\). **Definition 3.2** (Generalization Error): _Given a partition \(B_{S}\) of a training set \(S\) and a sequence \(R\) of random indices of \(B_{S}\) elements, the generalization error of \(f^{\theta}_{B_{S},R}\) trained by an arbitrary optimization algorithm, is defined as \(E(f^{\theta}_{B_{S},R})=R_{true}(f^{\theta}_{B_{S},R})-R_{emp}(f^{\theta}_{B_ {S},R})\)._ **Definition 3.3** (Lipschitzness): _Let \(Y\subseteq\mathbb{R}^{\mathrm{M}}\) be the output space of a problem. A loss function \(\ell(\hat{\mathrm{y}},\mathrm{y})\) is \(\gamma\)-Lipschitz with regard to its first argument, if \(\forall\,\mathrm{y}_{1},\mathrm{y}_{2}\in Y\), we have:_ \[\left|\ell(\mathrm{y}_{1},\mathrm{y})-\ell(\mathrm{y}_{2},\mathrm{y})\right| \leq\gamma\left\|\mathrm{y}_{1}-\mathrm{y}_{2}\right\|,\] _where \(\left\|.\right\|\) is the \(L_{2}\) norm._ As mentioned before, uniform stability of the optimization algorithm is effective to the generalization performance of the ultimate model \(f^{\theta}_{B_{\mathrm{S}},R}\)[5]. We follow the uniform stability definition of work [7] to link Lipschitzness of the loss function to the generalization error of \(f^{\theta}_{B_{\mathrm{S}},R}\). For simplicity, moving forward, we denote \(f^{\theta}_{B_{\mathrm{S}},R}\) by \(f_{B_{\mathrm{S}},R}\) and \(E(f^{\theta}_{B_{\mathrm{S}},R})\) by \(E(f_{B_{\mathrm{S}},R})\). Along with the notion of uniform stability which we define in Section 5, another concept called bounded difference condition (BDC) affects the generalization error [5]: **Definition 3.4** (Bdc): _Consider two numbers \(k,T\in\mathbb{N}\). 
If \(G:\{1,2,\ldots,k\}^{T}\rightarrow\mathbb{R}^{+}\), is a measurable function and for \(R,R^{\prime}\in Dom(G)\) which are different only in two elements, constant \(\rho\) exists such that_ \[\sup_{R,R^{\prime}}\left|G(R^{\prime})-G(R)\right|\leq\rho,\] _then, \(G(.)\) holds bounded difference condition (BDC) with the constant \(\rho\). We use the \(\rho\)-BDC expression to denote that a function holds this condition with the constant \(\rho\)._ In Definition 3.4, we assumed the slight change in the input to be the difference in two elements, which we will see its reason in the proof of the theorems. Intuitively, if a function satisfies the above condition, its value does not differ much due to a slight change in the input. Such functions are dense around their expectation with respect to the input random sequence \(R\)[16]. ## 4 Formulation of Age Estimation Problem Our problem in the experimental part is human age estimation. Let \((\mathrm{x},y)\) be a training sample where \(\mathrm{x}\) is the input image of a person's face and \(y\in\mathbb{N}\) is the corresponding age label. Due to the correlation of the neighboring ages, classification methods based on single-label learning [17] are not efficient because these methods ignore this correlation. Also, regression-based models are not stable to solve this problem [5]. According to the aforementioned reasons, another method based on label distribution learning (LDL) framework which was firstly introduced in the work of [18], is used for this problem [5]. In this method \(y\) is replaced by \(\mathrm{y}=[y_{1},y_{2},\ldots,y_{\mathrm{M}}]\in\mathbb{R}^{\mathrm{M}}\) where \(y_{i}\) is the probability of facial image \(\mathrm{x}\) belongs to class \(i\). As usual, \(\mathrm{y}\) is assumed to be a normal distribution, centering at \(y\) and standard deviation \(\sigma\) which controls the spread of the distribution [18]. Therefore, the output space, \(Y\) is a subset of \(\mathbb{R}^{\mathrm{M}}\) and our objective is to find \(f^{\theta}\) which maps \(\mathrm{x}\) to \(\mathrm{y}\in Y\). ### Loss Functions for Age Estimation Problem Let \((\mathrm{x},\mathrm{y})\in S\) be a training instance where \(\mathrm{x}\) represents the facial image and \(\mathrm{y}\in\mathbb{R}^{\mathrm{M}}\) is the corresponding label distribution. Consider \(\hat{\mathrm{y}}=f^{\theta}(\mathrm{x})\), representing the estimated label distribution by \(f^{\theta}\). To obtain \(f^{\theta}\), a convex loss function named Kullbeck-Leibler (KL) divergence has been widely utilized. The KL loss function is defined as below: \[\ell_{KL}(\hat{\mathrm{y}},\mathrm{y})=\sum_{m=1}^{\mathrm{M}}y_{m}\log(\frac{ y_{m}}{\hat{y}_{m}}).\] As an alternative to KL, another convex loss function called Generalized Jeffries-Matusita (GJM) distance has been proposed in [5] under the LDL framework, defined as \[\ell_{GJM}(\hat{\mathrm{y}},\mathrm{y})=\sum_{m=1}^{\mathrm{M}}y_{m}\left|1- \left(\frac{\hat{y}_{m}}{y_{m}}\right)^{\alpha}\right|^{\frac{1}{2}},\] where \(\alpha\in(0,1]\). According to the experiments of [5], the best value of \(\alpha\) for good generalization is \(0.5\). It has been proved that if \(\alpha=0.5\), then the Lipschitz constant and the maximum value of GJM are less than the Lipschitz constant and the maximum value of KL respectively 1[5]. ## 5 Uniform Stability and Generalization Error Analysis The notion of uniform stability was firstly introduced in [6] for deterministic learning algorithms. 
They demonstrate that smaller stability measure of the learning algorithm, the tighter generalization error is. However, their stability measure is limited to deterministic algorithms and is not appropriate for randomized learning algorithms such as Adam. Therefore, we follow [5, 7] to define the uniform stability measure for randomized optimization algorithms generally: **Definition 5.1** (Uniform Stability): _Let \(S\) and \(S^{\prime}\) denote two training sets drawn from a distribution \(\mathbb{P}\). Suppose that \(B_{S}\) and \(B_{S^{\prime}}\) of equal size k, are two partitions of \(S\) and \(S^{\prime}\) respectively, which are different in only one element (mini-batch). Consider a random sequence \(R\) of \(\{1,2,\ldots k\}\) to select a mini-batch at each iteration of an optimization algorithm, \(A_{opt}\). If \(f_{B_{S},R}\) and \(f_{B_{S^{\prime}},R}\) are output models obtained by \(A_{opt}\) with the same initialization, then \(A_{opt}\) is \(\beta\)-uniformly stable with regard to a loss function \(\ell\), if_ \[\forall S,S^{\prime}\ \ \sup_{(\mathrm{x},\mathrm{y})}\mathbb{E}_{R}\left[| \ell(f_{B_{S^{\prime}},R}(\mathrm{x}),\mathrm{y})-\ell(f_{B_{S},R}(\mathrm{x} ),\mathrm{y})|\right]\leq\beta.\] To evaluate the uniform stability of Adam and AdamW in order to prove its link to loss function properties, a lemma named **Growth recursion** which has been stated in [7] for SGD is central to our analysis. In the following, we state this lemma for an arbitrary iterative optimization algorithm, but before stating the lemma, we need some definitions. As we know, gradient-based optimization algorithms are iterative, and in each iteration, the network parameters are updated. Consider \(H\) as the set of all possible values for the network parameters. Let \(A_{opt}\) be an arbitrary iterative optimization algorithm that runs \(T\) iterations. In the \(t\)-th iteration, the update that is computed in the last command of the loop for the network parameters, is a function \(A^{t}:H\to H\) mapping \(\theta_{t-1}\) to \(\theta_{t}\) for each \(1\leq t\leq T\). We call \(A^{t}\) the **update rule** of \(A_{opt}\). Let's define two characteristics of an update rule: The update rule, \(A^{t}(.)\) is \(\sigma\)**-bounded** if \[\sup_{\theta\in H}\left\|\theta-A^{t}(\theta)\right\|\leq\sigma, \tag{2}\] and it is \(\tau\)**-expensive** if \[\sup_{\theta,\,\theta^{\prime}\in H}\frac{\left\|A^{t}(\theta)-A^{t}(\theta^{ \prime})\right\|}{\left\|\theta-\theta^{\prime}\right\|}\leq\tau, \tag{3}\] where \(\left\|.\right\|\) is the \(L_{2}\) norm. **Lemma 5.2** (Growth recursion): _[_7_]_ _Given two training set \(S\) and \(S^{\prime}\), suppose that \(\theta_{0},\theta_{1},\ldots,\theta_{T}\) and \(\theta^{\prime}_{0},\theta^{\prime}_{1},\ldots\theta^{\prime}_{T}\) are two updates of network parameters with update rules \(A^{t}_{S}\) and \(A^{t}_{S^{\prime}}\), running on \(S\) and \(S^{\prime}\) respectively such that for each \(1\leq t\leq T\), \(\theta_{t}=A^{t}_{S}(\theta_{t-1})\) and \(\theta^{\prime}_{t}=A^{t}_{S^{\prime}}(\theta^{\prime}_{t-1})\). 
If \(A^{t}_{S}\) and \(A^{t}_{S^{\prime}}\) are both \(\tau\)-expensive and \(\sigma\)-bounded, then for \(\Delta_{t}=\left\|\theta_{t}-\theta^{\prime}_{t}\right\|\), we have:_ * _If_ \(A^{t}_{S}=A^{t}_{S^{\prime}}\) _then_ \(\Delta_{t}\leq\tau\Delta_{t-1}\)_._ * _If_ \(A^{t}_{S}\neq A^{t}_{S^{\prime}}\) _then_ \(\Delta_{t}\leq\Delta_{t-1}+2\sigma\)__2_._ Footnote 2: In the work of [7], this inequality has been written as \(\Delta_{t}\leq\min(1,\tau)\Delta_{t-1}+2\sigma\) which is less than \(\Delta_{t}+2\sigma\) that we just need in the proofs of the theorems. We state the proof of Lemma 5.2 in Appendix A. In Subsection 5.1, we discuss the uniform stability of Adam to upper-bound the generalization error of a DNN trained by it. Subsequently, In Subsection 5.2, we state different theorems for the uniform stability of AdamW and the generalization error because AdamW exploits decoupled weight decay, and its update parameters statement is different from Adam. ### Adam Optimizer Let \(\ell(f^{\theta};B)\) represents the computation of a loss function on an arbitrary mini-batch, \(B=\{(\mathrm{x}_{i},\mathrm{y}_{i})\}_{i=1}^{b}\), which we use at each iteration to update parameters in order to minimize \(R_{emp}(f^{\theta})\): \[\ell(f^{\theta};B)=\frac{1}{b}\sum_{i=1}^{b}\ell(f^{\theta}(\mathrm{x}_{i}), \mathrm{y}_{i}),\] in which \(\theta\) is the parameters and \(b\) is the batch size. Let \(g(\theta)=\nabla_{\theta}\ell(f^{\theta};B)\) where \(\nabla_{\theta}\) is the gradient. For \(t\geq 1\) suppose that \(m_{t}\), \(v_{t}\) are estimates of the first and second moments respectively: \[m_{t} =\beta_{1}\cdot m_{t-1}+(1-\beta_{1})\cdot g(\theta_{t-1});\ m_{0 }=0, \tag{4}\] \[v_{t} =\beta_{2}\cdot v_{t-1}+(1-\beta_{2})\cdot g^{2}(\theta_{t-1});\ v _{0}=0, \tag{5}\] where \(\beta_{1},\beta_{2}\in(0,1)\) are exponential decay rates and the multiply operation is element-wise. Let \(\widehat{m}_{t}=m_{t}/(1-\beta_{1}^{t})\) and \(\widehat{v}_{t}=v_{t}/(1-\beta_{2}^{t})\) be the bias-corrected estimates; Adam computes the parameters update using \(\widehat{m}_{t}\) adapted by \(\widehat{v}_{t}\): \[\theta_{t}=\theta_{t-1}-\eta\cdot\frac{\widehat{m}_{t}}{(\sqrt{\widehat{v}_{t}} +\epsilon)},\] where \(\eta\) is the learning rate and \(\epsilon=10^{-8}\). Based on what we discussed so far, to evaluate the uniform stability of Adam, we need to formulate its update rule. Given \(\beta_{1},\beta_{2}\in(0,1)\) for each \(1\leq t\leq T\) let \[\hat{M}(m_{t-1},\theta) =\frac{\beta_{1}\cdot m_{t-1}+(1-\beta_{1})\cdot g(\theta)}{1- \beta_{1}^{t}}, \tag{6}\] \[\hat{V}(v_{t-1},\theta) =\frac{\beta_{2}\cdot v_{t-1}+(1-\beta_{2})\cdot g^{2}(\theta)}{ 1-\beta_{2}^{t}}, \tag{7}\] where \(m_{t-1}\) and \(v_{t-1}\) are the biased estimates for the first and second moments of the gradient at the previous step respectively as we explained in the equations (4) and (5). Adam's update rule is obtained as follows: \[A^{t}(\theta)=\theta-\eta\cdot\left(\frac{\hat{M}(m_{t-1},\theta)}{\sqrt{\hat {V}(v_{t-1},\theta)}+\epsilon}\right), \tag{8}\] where \(\eta\) is the learning rate and the division operation is element-wise. We use the following lemma in the proof of Theorem 5.4: **Lemma 5.3**.: _Let \(m_{t-1}=\beta_{1}\cdot m_{t-2}+(1-\beta_{1})\cdot g(\theta_{t-2})\) such that \(\beta_{1}\in(0,1)\) is constant and \(m_{0}=0\). Let \(\ell(\mathrm{y},\mathrm{y})\) be \(\gamma\)-Lipschitz. 
Then for all \(t\geq 1\) and \(\theta\in H\), we have \(\left\|\hat{M}(m_{t-1},\theta)\right\|\leq\gamma\)._ The proof of Lemma 5.3 is available in Appendix A. Now we can state the theorems which link the generalization error with the loss function properties. In Theorem 5.4 we assess the stability measures including the uniform stability and in Theorem 5.5, we drive an upper bound for the generalization error of a DNN trained by Adam. **Theorem 5.4**.: _Assume Adam is executed for \(T\) iterations with a learning rate \(\eta\) and batch size \(b\) to minimize the empirical risk in order to obtain \(f_{B_{g},R}\). Let \(\ell(\mathrm{y},\mathrm{y})\) be convex and \(\gamma\)-Lipschitz. Then, Adam is \(\beta\)-uniformly stable with regard to the loss function \(\ell\), and for each \((\mathrm{x},\mathrm{y})\), \(\ell(f_{B_{g},R}(\mathrm{x}),\mathrm{y})\) holds the \(\rho\)-BDC with respect to \(R\). Consequently, we have_ \[\beta\leq\frac{2\eta}{c}\cdot\frac{bT\gamma^{2}}{N},\quad\rho\leq\frac{8\eta}{ c}\cdot\left(\frac{b\gamma}{N}\right)^{2},\] _in which \(c\in(0,1)\) is a constant number and \(N\) is the size of the training set._ **Proof.** Consider Adam's update rule, \(A^{t}(.)\) in the equation (8). In order to prove that \(A^{t}(.)\) satisfies the conditions of Lemma 5.2, \(\sigma\)-boundedness and \(\tau\)-expensiveness of \(A^{t}(.)\) are needed to be evaluated. From the formula (2), we have: \[\left\|\theta-A^{t}(\theta)\right\|=\left\|\eta\cdot\left(\frac{\hat{M}(m_{t- 1},\theta)}{\sqrt{\hat{V}(v_{t-1},\theta)}+\epsilon}\right)\right\|\] where \(m_{t-1}\) and \(v_{t-1}\) are the biased estimates for \(\mathbb{E}\left[g\right]\) and \(\mathbb{E}\left[g^{2}\right]\geq 0\) in the \(t\)-th step respectively. Therefore: \[\left\|\eta\cdot\left(\frac{\hat{M}(m_{t-1},\theta)}{\sqrt{\hat {V}(v_{t-1},\theta)}+\epsilon}\right)\right\| \leq\eta\cdot\left\|\frac{\hat{M}(m_{t-1},\theta)}{\epsilon}\right\| \tag{9}\] \[\leq\frac{\eta\gamma}{\epsilon}. \tag{10}\] Because \(\epsilon>0\) and \(\hat{V}(v_{t-1},\theta)\geq 0\), we deduced the inequality (9). In the inequality (10), Lemma 5.3 has been applied, which implies that, \(A^{t}(.)\)\(\sigma\)-bounded such that \(\sigma\leq\frac{\eta\gamma}{\epsilon}\). Now, we check the \(\tau\)-expensiveness condition: we know that for all \(\theta\in H\), \(\frac{\hat{M}(m_{t-1},\theta)}{\sqrt{\hat{V}(v_{t-1},\theta)}}\simeq\pm 1\) because \(|\mathbb{E}[g]|/\sqrt{\mathbb{E}[g^{2}]}\leq 1\). On the other hand \(\ell(\mathrm{y},\mathrm{y})\) is convex. Thus, for two updates of network parameters \(\theta_{t-1}\) and \(\theta_{t-1}^{\prime}\) in an arbitrary iteration \(t\) with the same initialization, by choosing a sufficiently small learning rate, the two vectors \(\frac{\hat{M}(m_{c-1},\theta_{t-1})}{\sqrt{V(v_{t-1},\theta_{t-1})}}\) and \(\frac{\hat{M}(m_{c-1},\theta_{t-1}^{\prime})}{\sqrt{V(v_{t-1},\theta_{t-1}^{ \prime})}}\) are approximately equal. Thus, by substituting \(A^{t}(.)\) in the formula (3), it is concluded that, \(A^{t}(.)\) is \(1\)-expensive. Let \(B_{S}\) and \(B_{S^{\prime}}\) having equal size \(k\), be two partitions of training sets \(S\) and \(S^{\prime}\) respectively, such that \(B_{S}\) and \(B_{S^{\prime}}\) are different in only one mini-batch. 
Let \(\theta_{0},\theta_{1},\ldots,\theta_{T}\) and \(\theta_{0}^{\prime},\theta_{1}^{\prime},\ldots\theta_{T}^{\prime}\) be two parameters updates obtained from training the network by Adam with update rules \(A_{S}^{t}\) and \(A_{S^{\prime}}^{t}\) respectively where \(A_{S}^{t}\) runs on \(B_{S}\) and \(A_{S^{\prime}}^{t}\) runs on \(B_{S^{\prime}}\) with the same random sequence \(R\) such that \(\theta_{0}=\theta_{0}^{\prime}\). Let two mini-batches \(B\) and \(B^{\prime}\) have been selected for updating the parameters in the \(t\)-th iteration. If \(B=B^{\prime}\), then \(A_{S^{\prime}}^{t}=A_{S}^{t}\) else \(A_{S^{\prime}}^{t}\neq A_{S}^{t}\). \(B=B^{\prime}\) occurs with probability \(1-\frac{1}{k}\) and the opposite occurs with probability \(\frac{1}{k}\). At the beginning of the proof, we demonstrated that \(A^{t}(.)\) (for an arbitrary training set) is \(\sigma\)-bounded and \(1\)-expensive. Let \(\Delta_{t}=\|\theta_{t}-\theta_{t}^{\prime}\|\), from Lemma 5.2, we have: \[\Delta_{t} \leq(1-\frac{1}{k})\Delta_{t-1}+\frac{1}{k}\left(\Delta_{t-1}+ \frac{2\eta\gamma}{\epsilon}\right)\] \[=\Delta_{t-1}+\frac{1}{k}\cdot\frac{2\eta\gamma}{\epsilon}.\] We know \(k=\frac{N}{b}\). Therefore, solving the recursive relation gives \[\Delta_{T}\leq\Delta_{0}+2T\eta\cdot\frac{\gamma}{k\epsilon}=2\eta\cdot\frac {bT\gamma}{N\epsilon}.\] Let \(\theta_{T,i}\) are the effective parameters of \(\theta_{T}\) on the \(i\)-th neuron of the last layer with \(M\) neurons. notation \(\langle.,.\rangle\) is inner product and \(\left[f(i)\right]_{i=1}^{M}\) for an arbitrary function \(f\), denotes the vector \(\left[f(1),f(2),\ldots,f(M)\right]\). Now we proceed to prove Adam's uniform stability. According to Definition 5.1, we have: \[\mathbb{E}_{R}\left(\left|\ell(f_{B_{S^{\prime}},R}(\mathrm{x}), \mathrm{y})-\ell(f_{B_{S},R}(\mathrm{x}),\mathrm{y})\right|\right)\] \[\leq\mathbb{E}_{R}\left(\gamma\left\|f_{B_{S^{\prime}},R}( \mathrm{x})-f_{B_{S},R}(\mathrm{x})\right\|\right)\] \[=\gamma\mathbb{E}_{R}\left(\left\|\left[\left(\theta_{T,i}^{ \prime},\mathrm{x}\right)\right]_{i=1}^{M}-\left[\left(\theta_{T,i},\mathrm{x }\right)\right]_{i=1}^{M}\right\|\right)\] \[\leq\gamma\mathbb{E}_{R}\left(\|\theta_{T}^{\prime}-\theta_{T}\|\right) \tag{11}\] \[=\gamma\mathbb{E}_{R}\left[\Delta_{T}\right]\] \[\leq 2\eta\cdot\frac{bT\gamma^{2}}{N\epsilon}. \tag{12}\] In the inequality (11), we assumed \(\|\mathrm{x}\|\leq 1\); that is the re-scaling technique that is common in computer vision. In the last inequality, \(\epsilon\) is a constant number between \(0\) and \(1\). After showing the relation between the uniform stability of Adam and the Lipschitz constant of the loss function, we evaluate the bounded difference condition for the loss function with respect to the random sequence and a fixed training set. Suppose that \(R\) and \(R^{\prime}\) are two random sequences of batch indices to update the parameters in which only the location of two indices has been changed; that is if \(R=(\ldots,i,\ldots,j,\ldots)\) then \(R^{\prime}=(\ldots,j,\ldots,i,\ldots)\). Without loss of generality, assume \(1\leq i\leq\frac{k}{2}\) and \(\frac{k}{2}+1\leq j\leq k\). The probability of selecting two identical batches in the \(t\)-th iteration is \(1-\frac{4}{Tk^{2}}\). 
Thus, two updates of neural network parameters as \(\theta_{0}^{R},\theta_{1}^{R},\ldots,\theta_{T}^{R}\) and \(\theta_{0}^{R^{\prime}},\theta_{1}^{R^{\prime}},\ldots,\theta_{T}^{R^{\prime}}\) are made with the same initialization, \(\theta_{0}^{R}=\theta_{0}^{R^{\prime}}\). Let \(\Delta_{t}=\left\|\theta_{t}^{R}-\theta_{t}^{R^{\prime}}\right\|\). From Lemma 5.2, we have: \[\Delta_{T}\leq\frac{8}{Tk^{2}}\cdot\frac{\eta T\gamma}{\epsilon}=\frac{8}{k^{2}} \cdot\frac{\eta\gamma}{\epsilon}.\] According to Definition 3.4, we have: \[\left|\ell(f_{B_{S},R^{\prime}}(\mathrm{x}),\mathrm{y})-\ell(f_{B _{S},R}(\mathrm{x}),\mathrm{y})\right|\] \[\leq\gamma\left\|f_{B_{S},R^{\prime}}(\mathrm{x})-f_{B_{S},R}( \mathrm{x})\right\|\] \[=\gamma\left\|\left[\left(\theta_{T,i}^{R^{\prime}},\mathrm{x} \right)\right]_{i=1}^{M}-\left[\left(\theta_{T,i}^{R},\mathrm{x}\right)\right]_{i =1}^{M}\right\|\] \[\leq\gamma\left\|\theta_{T}^{R^{\prime}}-\theta_{T}^{R}\right\| \tag{13}\] \[=\gamma\Delta_{T}\] \[\leq\frac{8}{k^{2}}\cdot\frac{\eta\gamma^{2}}{\epsilon}. \tag{14}\] The inequality (13) has been obtained similar to (12). Replacing \(k\) by \(\frac{N}{b}\) in the inequality (14) leads to the inequality in the proposition. **Theorem 5.5**.: _Let \(\ell(\mathrm{y},\mathrm{y})\) with the maximum value of \(L\) be convex and \(\gamma\)-Lipschitz. Assume Adam is run for \(T\) iterations with a learning rate \(\eta\) and batch size \(b\) to obtain \(f_{B_{S},R}\). Then we have the following upper bound for \(E(f_{B_{S},R})\) with probability at least \(1-\delta\):_ \[E(f_{B_{S},R})\leq\frac{2\eta}{c}\left(4\left(\frac{b\gamma}{N}\right)^{2} \sqrt{T\ log(2/\delta)}+\frac{bT\gamma^{2}}{N}\left(1+\sqrt{2N\log(2/\delta)} \right)\right)+L\sqrt{\frac{\log(2\delta)}{2N}}, \tag{15}\] _in which \(c\in(0,1)\) is a constant number and \(N\) is the size of the training set._ **Proof.** In the work of [5], an upper bound for the generalization error of the output model trained by any optimization algorithm \(A_{opt}\) is established with probability at least \(1-\delta\), under the condition \(A_{opt}\) satisfies uniform stability measure with bound \(\beta\) and for each \(\mathrm{(x,y)}\), \(\ell(f_{B_{S},R}\mathrm{(x,y)}\mathrm{)}\) holds the \(\rho\)-BDC with regard to \(R\)3: Footnote 3: In the assumptions of the main theorem in the work of [5], it has been stated that the model trained by stochastic gradient descent, but by studying the proof, we realize that their argument can be extended to any iterative algorithm that is \(\beta\)-uniformly stable because, in their proof, the upper bound has been derived independently of the update rule of stochastic gradient descent. The proof is available at [http://proceedings.mlr.press/v139/akbari21a/akbari21a-supp.pdf](http://proceedings.mlr.press/v139/akbari21a/akbari21a-supp.pdf). \[E(f_{B_{S},R})\leq\rho\sqrt{T\log(2/\delta)}+\beta(1+\sqrt{2N\log(2/\delta)}) +L\sqrt{\frac{\log(2/\delta)}{2N}}. \tag{16}\] By combining Theorem 5.4 and the inequality (16), we have the following upper bound with probability \(1-\delta\): \[E(f_{B_{S},R})\leq\frac{2\eta}{c}\left(4\left(\frac{b\gamma}{N}\right)^{2} \sqrt{T\ log(2/\delta)}+\frac{bT\gamma^{2}}{N}\left(1+\sqrt{2N\log(2/\delta)} \right)\right)+L\sqrt{\frac{\log(2\delta)}{2N}}, \tag{17}\] where \(c\in(0,1)\) is a constant number. \(\Box\) Theorem 5.5 shows how the generalization error bound of deep learning models trained by Adam depends on the Lipschitz constant \(\gamma\) and the maximum value \(L\). 
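To make the update rules under discussion concrete, the following minimal NumPy sketch implements one Adam step following equations (4)-(8), together with the AdamW variant of equation (19) analyzed in the next subsection. The function signatures are our own; the default values mirror the settings used in the experiments of Section 6 (\(\eta=2\times 10^{-5}\), \(\beta_{1}=0.9\), \(\beta_{2}=0.999\), \(\epsilon=10^{-8}\), weight decay \(0.9\)), and the gradient is assumed to be supplied by the caller.

```python
import numpy as np

def adam_step(theta, grad, m, v, t, eta=2e-5, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update: moment estimates (4)-(5), bias correction, and the
    adaptive parameter update of equation (8)."""
    m = beta1 * m + (1.0 - beta1) * grad
    v = beta2 * v + (1.0 - beta2) * grad ** 2
    m_hat = m / (1.0 - beta1 ** t)          # bias-corrected first moment
    v_hat = v / (1.0 - beta2 ** t)          # bias-corrected second moment
    theta = theta - eta * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

def adamw_step(theta, grad, m, v, t, eta=2e-5, beta1=0.9, beta2=0.999,
               eps=1e-8, lam=0.9, alpha_t=1.0):
    """One AdamW update, equation (19): the weight decay lam is decoupled from
    the adaptive gradient term and scaled by the schedule multiplier alpha_t."""
    m = beta1 * m + (1.0 - beta1) * grad
    v = beta2 * v + (1.0 - beta2) * grad ** 2
    m_hat = m / (1.0 - beta1 ** t)
    v_hat = v / (1.0 - beta2 ** t)
    theta = theta - alpha_t * (eta * m_hat / (np.sqrt(v_hat) + eps) + lam * theta)
    return theta, m, v
```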
Furthermore, the inequality (15) implies the sensitivity of the generalization error to the batch size; when the batch size grows, \(E(f_{B_{S},R})\) increases. On the other hand, from the basics of machine learning, we know that if the batch size is too small, the parameter updates are very noisy. Thus, an appropriate value should be chosen for the batch size according to the training set size. As we mentioned in Section 4, for the KL and GJM losses we have \(\gamma_{GJM}\leq\gamma_{KL}\) and \(L_{GJM}\leq L_{KL}\)[5]. Hence, following Theorem 5.5 we have the following corollary: **Corollary 5.6**.: _Let \(f^{KL}_{B_{S},R}\) and \(f^{GJM}_{B_{S},R}\) be the output models trained by the Adam optimizer using the KL and GJM loss functions respectively, with the partition \(B_{S}\) obtained from the training set \(S\). We have_ \[E(f^{GJM}_{B_{S},R})\leq E(f^{KL}_{B_{S},R}).\] **Proof.** We know that if \(\alpha=0.5\), then the Lipschitz constant and the maximum value of GJM are less than the Lipschitz constant and the maximum value of KL respectively [5]. So, under the same settings for the hyper-parameters of Adam and the same initialization, from Theorem 5.5, we have: \[E(f^{GJM}_{B_{S},R})\leq E(f^{KL}_{B_{S},R}).\] \(\Box\) ### AdamW Optimizer The objective of regularization techniques is to control the domain of the network parameters in order to prevent over-fitting. \(L_{2}\)-regularization, which exploits the \(L_{2}\) norm of the parameter vector, is more practical than \(L_{1}\) because it keeps the loss function differentiable and convex. In what follows, we study \(L_{2}\)-regularization and note its effect on SGD and Adam. The lack of a significant effect of this technique on Adam led to AdamW [8]. Let \(\ell^{reg}(f^{\theta};B)\) be a regularized loss function computed on a mini-batch, \(B=\{(\mathrm{x}_{i},\mathrm{y}_{i})\}_{i=1}^{b}\): \[\ell^{reg}(f^{\theta};B)=\frac{1}{b}\left(\sum_{i=1}^{b}\ell(f^{\theta}(\mathrm{x}_{i}),\mathrm{y}_{i})+\frac{\lambda}{2}\left\|\theta\right\|^{2}\right), \tag{18}\] where \(\left\|.\right\|\) is the \(L_{2}\) norm, \(\lambda\in\mathbb{R}^{+}\) is the weight decay and \(b\) is the batch size. According to the equation (18), to compute the parameter update in SGD, we have: \[\theta_{t}=\left(1-\frac{\eta\lambda}{b}\right)\theta_{t-1}-\frac{\eta}{b}\sum_{i=1}^{b}\nabla_{\theta}\ell(f^{\theta}(\mathrm{x}_{i}),\mathrm{y}_{i}).\] In SGD, minimizing the regularized loss function can improve the generalization of the output model. However, this technique cannot be effective in Adam because Adam uses adaptive gradients to update the parameters [8]. In AdamW, the weight decay is decoupled from the gradient-based optimization step. Let \(\widehat{m}_{t}\) and \(\widehat{v}_{t}\) denote the bias-corrected estimates introduced in Subsection 5.1. The parameter update is computed as follows: \[\theta_{t}=\theta_{t-1}-\alpha_{t}\left(\eta\cdot\frac{\widehat{m}_{t}}{(\sqrt{\widehat{v}_{t}}+\epsilon)}+\lambda\theta_{t-1}\right), \tag{19}\] where \(\alpha_{t}\) is the schedule multiplier. Equation (19) shows that AdamW updates the parameters in a different way than Adam. Hence, we need to state theorems specific to AdamW for the stability and the generalization error. Consider \(\hat{M}(m_{t-1},\theta)\) and \(\hat{V}(v_{t-1},\theta)\) in the equations (6) and (7).
According to the parameter update statement of AdamW in formula (19), AdamW's update rule is defined as \[A_{W}^{t}(\theta)=\theta-\alpha_{t}\left(\eta\cdot\frac{\hat{M}(m_{t-1},\theta)}{\sqrt{\hat{V}(v_{t-1},\theta)}+\epsilon}+\lambda\theta\right), \tag{20}\] where \(0<\alpha_{t}\lambda<1\), because otherwise the update occurs in the wrong direction, which means it moves away from the minimum. Consider the set of all possible values for the network parameters, \(H\subset\mathbb{R}^{K}\). Without loss of generality we can assume \(H\) is bounded 5. Let \(\left\|\theta\right\|_{\mathrm{sup}}=\sup_{\theta\in H}\left\|\theta\right\|\). Footnote 5: We know that the number of iterations \(T\) is finite. Therefore, the set of values visited by the parameters in the training stage is finite. So we can assume the set of all possible values is a bounded (possibly infinite) superset of the visited values. **Theorem 5.7**.: _Assume AdamW is executed for \(T\) iterations with a learning rate \(\eta\), batch size \(b\), weight decay \(\lambda\), and schedule multiplier \(\alpha_{t}\) to minimize the empirical risk in order to obtain \(f_{B_{S},R}\). Let \(\ell(\hat{\mathrm{y}},\mathrm{y})\) be convex and \(\gamma\)-Lipschitz. Then, AdamW is \(\beta\)-uniformly stable with regard to the loss function \(\ell\), and for each \((\mathrm{x},\mathrm{y})\), \(\ell(f_{B_{S},R}(\mathrm{x}),\mathrm{y})\) holds the \(\rho\)-BDC with respect to \(R\). Consequently, we have_ \[\beta\leq\frac{2bT}{N}\sum_{t=1}^{T}\alpha_{t}\left(\frac{\eta\gamma^{2}}{c}+\gamma\lambda\left\|\theta\right\|_{\mathrm{sup}}\right),\quad\rho\leq\frac{8b^{2}}{N^{2}}\sum_{t=1}^{T}\alpha_{t}\left(\frac{\eta\gamma^{2}}{c}+\gamma\lambda\left\|\theta\right\|_{\mathrm{sup}}\right),\] _in which \(c\in(0,1)\) is a constant number and \(N\) is the size of the training set._ **Proof.** First, we check the \(\sigma\)-boundedness of \(A_{W}^{t}(\theta)\): \[\left\|\theta-A_{W}^{t}(\theta)\right\| =\left\|\alpha_{t}\left(\eta\cdot\frac{\hat{M}(m_{t-1},\theta)}{\sqrt{\hat{V}(v_{t-1},\theta)}+\epsilon}+\lambda\theta\right)\right\|\] \[\leq\alpha_{t}\eta\cdot\left\|\frac{\hat{M}(m_{t-1},\theta)}{\epsilon}\right\|+\alpha_{t}\lambda\left\|\theta\right\|\] \[=\alpha_{t}\left(\eta\cdot\frac{\left\|\hat{M}(m_{t-1},\theta)\right\|}{\epsilon}+\lambda\left\|\theta\right\|\right)\] \[\leq\alpha_{t}\left(\frac{\eta\gamma}{\epsilon}+\lambda\left\|\theta\right\|_{\mathrm{sup}}\right). \tag{21}\] By applying Lemma 5.3, we conclude the inequality (21), which shows that \(A_{W}^{t}(\theta)\) is \(\sigma\)-bounded. Now we evaluate the \(\tau\)-expensiveness of AdamW. According to the formula (3), we have \[\frac{\left\|A_{W}^{t}(\theta)-A_{W}^{t}(\theta^{\prime})\right\|}{\left\|\theta-\theta^{\prime}\right\|}\] \[=\frac{\left\|-\alpha_{t}\left(\eta\cdot\frac{\hat{M}(m_{t-1},\theta)}{\sqrt{\hat{V}(v_{t-1},\theta)}+\epsilon}+\lambda\theta\right)+\alpha_{t}\left(\eta\cdot\frac{\hat{M}(m_{t-1},\theta^{\prime})}{\sqrt{\hat{V}(v_{t-1},\theta^{\prime})}+\epsilon}+\lambda\theta^{\prime}\right)+\theta-\theta^{\prime}\right\|}{\left\|\theta-\theta^{\prime}\right\|}. \tag{22}\] As said in the proof of Theorem 5.4, for every \(\theta\in H\), we have \(\frac{\hat{M}(m_{t-1},\theta)}{\sqrt{\hat{V}(v_{t-1},\theta)}}\simeq\pm 1\) because \(|\mathbb{E}[g]|/\sqrt{\mathbb{E}[g^{2}]}\leq 1\).
Therefore, the equation (22) is written as follows: \[\frac{\left\|-\alpha_{t}\lambda\theta+\alpha_{t}\lambda\theta^{ \prime}+\theta-\theta^{\prime}\right\|}{\left\|\theta-\theta^{\prime}\right\|} =\frac{\left\|\alpha_{t}\lambda(\theta^{\prime}-\theta)+\theta- \theta^{\prime}\right\|}{\left\|\theta-\theta^{\prime}\right\|}\] \[=\frac{\left|1-\alpha_{t}\lambda\right\|\left\|\theta-\theta^{ \prime}\right\|}{\left\|\theta-\theta^{\prime}\right\|}\] \[=\left|1-\alpha_{t}\lambda\right|<1. \tag{23}\] AdamW update rule in the equation (20) implies that \(0<\alpha_{t}\lambda<1\) which its consequent is the inequality (23). with an analogous demonstration to what we did in the proof of Theorem 5.4, i.e. considering update sequences and using Lemma 5.2 in order to evaluate the uniform stability and bounded difference condition according to their definitions, we conclude the following inequalities: \[\beta \leq\frac{2bT}{N}\sum_{t=1}^{T}\alpha_{t}\left(\frac{\eta\gamma^{ 2}}{\epsilon}+\gamma\lambda\left\|\theta\right\|_{\sup}\right),\] \[\rho \leq\frac{8b^{2}}{N^{2}}\sum_{t=1}^{T}\alpha_{t}\left(\frac{\eta \gamma^{2}}{\epsilon}+\gamma\lambda\left\|\theta\right\|_{\sup}\right).\] **Theorem 5.8**.: _Let \(\ell(\hat{\mathrm{y}},\mathrm{y})\) with the maximum value of \(L\) be convex and \(\gamma\)-Lipschitz. Assume AdamW is run for \(T\) iterations with a learning rate \(\eta\), batch size \(b\), weight decay \(\lambda\), and schedule multiplier \(\alpha_{t}\) to obtain \(f_{B_{S},R}\). Then we have the following upper bound for \(E(f_{B_{S},R})\) with probability at least \(1-\delta\):_ \[E(f_{B_{S},R})\leq\frac{2b}{N}\sum_{t=1}^{T}\alpha_{t}\left(\frac{\eta\gamma^{ 2}}{c}+\gamma\lambda\left\|\theta\right\|_{\sup}\right)\left(\frac{4b}{N} \sqrt{T\log(2/\delta)}+T\sqrt{2N\log(2/\delta)}\right)+L\sqrt{\frac{\log(2/ \delta)}{2N}}. \tag{24}\] _in which \(c\in(0,1)\) is a constant number and \(N\) is the size of the training set._ **Proof.** By combining the equation (16) and Theorem 5.7 we conclude the proposition. \(\Box\) The inequality (24) implies that the generalization error growth of a DNN trained by AdamW, is directly related to the Lipschitz constant and the maximum value of a loss function. Following Theorem 5.8 we have the following corollary for the KL and GJM loss functions: **Corollary 5.9**.: _Let \(f_{B_{S},R}^{KL}\) and \(f_{B_{S},R}^{GJM}\) be the output models trained by AdamW optimizer using the KL and GJM loss functions respectively using the partition \(B_{S}\) obtained from the training set \(S\). We have_ \[E(f_{B_{S},R}^{GJM})\leq E(f_{B_{S},R}^{KL}).\] **Proof.** The proposition is concluded by Theorem 5.8 and an analogous argument to Corollary 5.6. ## 6 Experimental Evaluation ### Datasets We use \(4\) datasets, including UTKFace [19], AgeDB [20], MegaAge-Asian [21], and FG-NET [22] to evaluate age estimation performance. UTKFace dataset contains \(23,708\) facial images, providing enough samples of all ages, ranging from \(0\) to \(116\) years-old. AgeDB contains \(16,488\) in-the-wild images in the age range from \(0\) to \(100\) years-old. MegaAge-Asian has been already split into MegaAge-Train and Mega-Age-Test datasets, containing \(40,000\) and \(3,945\) images respectively, belonging to Asian people with the age label in the range from \(1\) to \(69\) years-old. FG-NET dataset contains \(1,002\) facial images in the age range of \(0\) to \(69\) years. This dataset covers variations in pose, age expression, resolution, and lighting conditions. 
By collecting the samples from the UTKFace, MegaAge-Train, and AgeDB datasets whose ages are in the range from \(0\) to \(100\) years, we create a new dataset called UAM, which includes \(80,174\) images. We use UTKFace and UAM as the training sets. FG-NET, MegaAge-Test, and \(10\%\) randomly selected from AgeDB, called AgeDB-Test, are left as the test sets. ### Settings All images are pre-processed by the following procedures: face detection and alignment are done by available modules in the OpenCV package. All images are reshaped to the size of \(256\times 256\) and standard data augmentation techniques, including random cropping and horizontal flipping, are carried out during the training phase. We use two neural network architectures, VGG16 [23] and ResNet50 [24], pre-trained on the ImageNet [25] and VGGFace2 [26] datasets respectively, to estimate human age. The VGGFace2 dataset was created with the aim of estimating human pose and age. With the same seed, the last layer of these models is replaced with an M-neuron dense layer with random weights. The last layer of VGG16 is trained on UTKFace in \(5\) epochs and the last layer of ResNet50 is trained on UAM in \(15\) epochs. M is set to \(116\) in the VGG16 model and \(101\) in the ResNet50 model. We train the models via Adam and AdamW with learning rate \(2\times 10^{-5}\) for KL and \(10^{-4}\) for GJM 6. The batch size and AdamW's weight decay are set to \(64\) and \(0.9\) respectively. We set \(\beta_{1}=0.9\) and \(\beta_{2}=0.999\) for both Adam and AdamW, as the authors of [1] and [8] suggested. Footnote 6: In our experiments, when we set the learning rate to \(2\times 10^{-5}\) for the GJM loss, the ultimate model at the last epoch remained under-fit. ### Evaluation Metrics and Results As the first observation, we measure the generalization error estimate in the training steps of ResNet50 trained by Adam and AdamW, which is defined as \[\hat{E}(f_{B_{S},R})=|R_{train}(f_{B_{S},R})-R_{val}(f_{B_{S},R})|,\] where \(f_{B_{S},R}\) is the output model and \(R_{train}(f_{B_{S},R})\), \(R_{val}(f_{B_{S},R})\) are the averages of the loss values on the training and validation sets respectively. The results of this experiment are shown in Figure 1 and Figure 2. In the first epochs, the models are still under-fit and the loss is far from its minimum; therefore, \(\hat{E}(f_{B_{S},R})\) does not give us critical information about the generalization error, but in the remaining epochs, when the empirical loss of the models approaches its minimum, \(\hat{E}(f_{B_{S},R})\) can represent the generalization error. As can be seen in Figure 1(a) and Figure 2(a), after epoch \(5\) or \(6\) the generalization error estimate of the models trained by Adam and AdamW using the GJM loss function is lower than that of the models trained using the KL loss. In addition, we measure the generalization performance in terms of Mean Absolute Error (MAE) and Cumulative Score (CS). Consider the training set \(S\) and the test set \(S_{test}\in(X\times Y)^{D}\). Let \((\mathrm{x}_{k},y_{k})\in S_{test}\) represent a test example, where \(y_{k}\in\mathbb{R}\) is the label of the \(k\)-th example of the test set. Since we use label distribution learning, for each \((\mathrm{x},\mathrm{y})\in S\), \(\mathrm{y}\in\mathbb{R}^{\mathrm{M}}\) is the probability distribution corresponding to \(\mathrm{x}\).
Therefore, in the evaluation phase, the output of the model for the test example \(\mathrm{x}_{k}\) is the predicted probability distribution \(\hat{\mathrm{y}}_{k}=[\hat{y}_{k,1},\hat{y}_{k,2},\ldots,\hat{y}_{k,\mathrm{M}}]\). MAE is defined as \(\frac{1}{D}\sum_{k=1}^{D}|\hat{l}_{k}-l_{k}|\), where \(\hat{l}_{k}\) is the index of the largest element of \(\hat{\mathrm{y}}_{k}\) and \(l_{k}\) is the true label. CS is defined as \(\frac{D_{I}}{D}\times 100\%\), where \(D_{I}\) is the number of test samples such that \(|\hat{l}_{k}-l_{k}|<I\). Commonly, the value of \(I\) is set to 5 [5][27]. The results are reported in Tables 1-3. The ResNet50 models are more accurate than the VGG16 models because the version of VGG16 we use is pre-trained on the ImageNet dataset, which is not well suited to age estimation. Tables 1-3 show that when we train a DNN by Adam or AdamW, the GJM loss performs better than the KL loss.
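As a side note for reproducibility, the MAE and CS metrics defined above are straightforward to compute from the predicted label distributions. The following is a minimal sketch (our illustration, not the authors' code); NumPy, the function name, and the toy data are assumptions, while the argmax rule, the threshold \(I=5\), and the percentage form of CS follow the definitions in the text.

```python
import numpy as np

def mae_and_cs(pred_dists, true_labels, threshold=5):
    """Mean Absolute Error and Cumulative Score for label-distribution age estimation.

    pred_dists  : (D, M) array, predicted distribution y_hat_k for each test image
    true_labels : (D,) array with the true label l_k of each test image
    threshold   : the I in CS = D_I / D * 100%; set to 5 as in the paper
    """
    pred_dists = np.asarray(pred_dists, dtype=float)
    true_labels = np.asarray(true_labels, dtype=float)
    # l_hat_k is the index of the largest element of y_hat_k
    # (an index-to-age offset may be needed, depending on how the M bins are defined)
    pred_labels = pred_dists.argmax(axis=1)
    abs_err = np.abs(pred_labels - true_labels)
    mae = abs_err.mean()
    cs = (abs_err < threshold).mean() * 100.0  # share of samples with |l_hat - l| < I
    return mae, cs

# toy usage with M = 101 bins and D = 3 test samples (illustrative values only)
rng = np.random.default_rng(0)
dists = rng.random((3, 101))
dists /= dists.sum(axis=1, keepdims=True)
print(mae_and_cs(dists, true_labels=[25, 40, 63]))
```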
2303.01118
On Constructions and Enumeration of Vectorial Hyper-bent Functions in the $\cP\cS_{ap}^{\#}$ Class
The purpose of this paper is to give explicit constructions of vectorial hyper-bent functions in the $\mathcal{PS}_{ap}^{\#}$ class. It seems that explicit constructions were so far known only for very special cases. To this end, we present a sufficient and necessary condition for this family of vectorial functions to be hyper-bent. The conditions are expressed in terms of group rings. Using this characterization, explicit constructions of vectorial hyper-bent functions of the $\mathcal{PS}_{ap}^{\#}$ class via balanced functions are proposed. Furthermore, the exact number of vectorial hyper-bent functions in the $\mathcal{PS}_{ap}^{\#}$ class is found. The results improve some previous work. Moreover, we solve a problem of counting vectorial hyper-bent functions left by Muratovi\'c-Ribi\'c, Pasalic and Ribi\'c in [{\em IEEE Trans. Inform. Theory}, 60 (2014), pp. 4408-4413].
Jingkun Zhou, Chunming Tang, Fengrong Zhang
2023-03-02T10:00:58Z
http://arxiv.org/abs/2303.01118v1
On Constructions and Enumeration of Vectorial Hyper-bent Functions in the \(\mathcal{PS}_{ap}^{\#}\) Class ###### Abstract The purpose of this paper is to give explicit constructions of vectorial hyper-bent functions in the \(\mathcal{PS}_{ap}^{\#}\) class. It seems that the explicit constructions were so far known only for very special cases. To this end, we present a sufficient and necessary condition of this family of vectorial functions to be hyper-bent. The conditions are expressed in terms of group ring. Using this characterization, explicit constructions of vectorial hyper-bent functions of the \(\mathcal{PS}_{ap}^{\#}\) class via balanced functions are proposed. Furthermore, exact number of vectorial hyper-bent functions in the \(\mathcal{PS}_{ap}^{\#}\) class is found. The results improve some previous work. Moreover, we solve a problem of counting vectorial hyper-bent functions left by Muratovic-Ribic, Pasalic and Ribic in [_IEEE Trans. Inform. Theory_, 60 (2014), pp. 4408-4413]. Keywords: Boolean function, Bent function, Hyper-bent function, Vectorial function, Maximum nonlinearity. ## 1 Introduction A hyper-bent function, firstly introduced by A.M.Youssef and G. Gong [8] in 2001, is a Boolean function \(f:\ \mathbb{F}_{2^{n}}\rightarrow\mathbb{F}_{2}\) such that its extended Walsh-Hadamard transform \[\widehat{\chi}_{f}(\lambda,t)=\sum_{x\in\mathbb{F}_{2^{n}}}(-1)^{f(x)+\mathrm{ Tr}_{1}^{n}(\lambda x^{t})}\] only take the values \(\pm 2^{\frac{n}{2}}\), where \(\lambda\in\mathbb{F}_{2^{n}}\) and \(t\) is an integer coprime with \(2^{n}-1\). Hyper-bent functions are defined as a special class of bent functions for the purpose of avoiding approximation by a bijective monomial function. The class of hyper-bent functions proposed in [8] belong to the \(\mathcal{PS}_{ap}\) class of bent functions introduced by Dillon [10] and is the only known infinite class of hyper-bent functions up to now. The hyper-bent property of Boolean functions can be extended to vectorial functions. For a vectorial function \(F:\ \mathbb{F}_{2^{n}}\rightarrow\mathbb{F}_{2}^{k}\), we say \(F\) is hyper-bent if all nonzero combinations of the component functions of \(F\) are hyper-bent. That is, a vectorial function \[F(x)=(f_{1}(x),\ldots,f_{k}(x))\] is called hyper-bent if \(a_{1}f_{1}(x)+\cdots+a_{k}f_{k}(x)\) is a hyper-bent function for any choice of \(a_{i}\in\mathbb{F}_{2}\), where not all of the \(a_{i}\)'s are zero. The trace functions are useful tools for the study of bent functions. Carlet and Gaborit [9] have shown that hyper-bent Boolean functions of the \(\mathcal{PS}_{ap}^{\#}\) class are of the form \[f(x)=\mathrm{Tr}_{1}^{2m}\left(\sum_{i=1}^{2^{m}}a_{i}x^{i(2^{m}-1)}+a_{0} \right),\] where \(a_{i}\in\mathbb{F}_{2^{n}}\). Charpin and Gong [16] gave a characterization of hyper-bent Boolean functions on \(\mathbb{F}_{2^{2m}}\) of the form \(\sum_{r\in R}\mathrm{Tr}_{1}^{2m}(a_{r}x^{r(2^{m}-1)})\) in terms of Dickson polynomials and Kloosterman sums, where \(a_{r}\in\mathbb{F}_{2^{m}}\). See paper [17],[11] for recent progress on hyper-bent Boolean functions with Dillon-like exponents. By employing the Mobius transformation, Carlet et al. presented a characterization for the hyper-bentness property of functions with Dillon-like exponents with coefficients in the whole \(\mathbb{F}_{2^{2m}}\)[1]. In [6; 15] some necessary conditions of vectorial bent functions with Dillon-like exponents are given. 
In [7], the authors considered the vectorial case and obtained a sufficient and necessary condition for the function \(F(x)=\mathrm{Tr}_{m}^{2m}(\sum_{i=1}^{2^{m}}a_{i}x^{i(2^{m}-1)})\) to be a vectorial bent function. They also showed that each vectorial bent function of this form is vectorial hyper-bent. Besides, the authors counted the exact number of vectorial hyper-bent functions of the form \(F(x)=\mathrm{Tr}_{m}^{2m}(\sum_{i=1}^{2^{m}}a_{i}x^{i(2^{m}-1)}+c)\). They also left the question about the cardinality of vectorial hyper-bent functions for a general case \(k|n\) as an open problem. In this paper, we study the hyper-bent property of a family of vectorial functions \(F:\ \mathbb{F}_{2^{2m}}\rightarrow\mathbb{F}_{2}^{k}\) such that \(f(\gamma^{2^{m}+1}x)=f(x)\) hold for each nonzero combination \(f\) of the component functions of \(F\) and \(F(0)=0\). The notion of vectorial hyper-bent functions under this condition is a generalization of hyper-bent Boolean functions of the \(\mathcal{PS}_{ap}^{\#}\) class. We attain a sufficient and necessary condition of this family of vectorial functions to be hyper-bent. A numerical result for vectorial hyper-bent functions of a typical form is given, which solve the open problem on counting vectorial hyper-bent functions in [7],and a construction of vectorial hyper-bent functions of arbitrary dimension is obtained. The rest of this paper is organized as follows. In Section 2, we present some preliminaries on group rings and vectorial hyper-bent functions. In Section 3, we establish a sufficient and necessary condition for a class of vectorial functions to be hyper-bent. The number of vectorial hyper-bent functions of a typical form is counted and an explicit construction of vectorial hyper-bent functions is given by using balanced functions. And finally Section 4 concludes the paper. ## 2 Preliminaries ### Group ring and Fourier analysis In this section we introduce some basic results on group rings. **Definition 2.1**.: _Let \(G\) be an abelian group. The group ring \(\mathbb{Q}[G]\) is defined as the set of the formal sums of elements of \(G\) with coefficients in \(\mathbb{Q}\). The addition, the scalar multiplication and the multiplication in \(\mathbb{Q}[G]\) are respectively defined as follows:_ \[\sum_{g\in G}a_{g}\cdot g+\sum_{g\in G}b_{g}\cdot g =\sum_{g\in G}(a_{g}+b_{g})\cdot g,\] \[a\sum_{g\in G}a_{g}\cdot g =\sum_{g\in G}(aa_{g})\cdot g,\] _and_ \[\left(\sum_{g\in G}a_{g}\cdot g\right)\cdot\left(\sum_{g\in G}b_{g}\cdot g \right)=\sum_{g\in G}\left(\sum_{h\in G}a_{h}b_{gh^{-1}}\right)\cdot g.\] It becomes conventional to abuse the notation \(S\) as a subset of \(G\) and the corresponding element \(\sum_{s\in S}s\) in \(\mathbb{Q}[G]\) at the same time. A character \(\chi\) of an abelian group \(G\) is a group homomorphism from \(G\) to the multiplicative group of the complex field \(\mathbb{C}\). Denote by \(\hat{G}\) the set of all the characters of \(G\). Let \(A=\sum_{g\in G}a_{g}\cdot g\in\mathbb{Q}[G]\). The character sum of \(\chi\) on \(A\) is \(\chi(A)=\sum_{g\in G}a_{g}\chi(g)\). The following inversion formula tells that two elements in \(\mathbb{Q}[G]\) coincide if the values of character sums on them equal for each character. 
**Proposition 2.1**.: _Suppose \(A=\sum_{g\in G}a_{g}g\) is an element of the group ring \(\mathbb{Q}[G]\) for a finite abelian group \(G\), then the coefficients \(a_{g}\)'s of \(A\) can be computed explicitly by_ \[a_{g}=\frac{1}{|G|}\sum_{\chi\in\hat{G}}\chi(A)\chi(g^{-1}),\] _where \(\hat{G}\) denotes the character group of \(G\). In particular, if \(A,B\in\mathbb{Q}[G]\) satisfy \(\chi(A)=\chi(B)\) for all characters \(\chi\in\hat{G}\), then \(A=B\)._ Since the field \(\mathbb{F}_{2^{k}}\) are identical to \(\mathbb{F}_{2}^{k}\) as a vector space over \(\mathbb{F}_{2}\), there are two main approaches for describing the set of all characters of an elementary abelian group, one using the dot product and the other using the trace function. **Proposition 2.2**.: _Let \(k\) be a positive integer. (1) For each \(a=(a_{1},\ldots,a_{k})\in\mathbb{F}_{2}^{k}\), define the function \(\chi_{a}:\ \mathbb{F}_{2}^{k}\to\{\pm 1\}\) by_ \[\chi_{a}(x)=(-1)^{\sum_{i=1}a_{i}x_{i}}=(-1)^{\langle a,x\rangle}\] _for each \(x=(x_{1},\ldots,x_{k})\in\mathbb{F}_{2}^{k}\), where \(\langle a,x\rangle\) is the usual dot product. Then_ \[\widehat{\mathbb{F}_{2}^{k}}=\{\chi_{a}:a\in\mathbb{F}_{2}^{k}\}.\] _(2) For each \(a\in\mathbb{F}_{2^{k}}\), define the function \(\rho_{a}:\mathbb{F}_{2^{k}}\to\{\pm 1\}\) by_ \[\rho_{a}(x)=(-1)^{\mathrm{Tr}_{1}^{k}(ax)}\] _for each \(x\in\mathbb{F}_{2^{k}}\). Then the set \(\{\rho_{a}:a\in\mathbb{F}_{2^{k}}\}\) comprises all the characters of \(G\), where \(G\) is the additive group of \(\mathbb{F}_{2^{k}}\)._ ### Hyper-bent functions We establish two useful propositions which will be utilized in Section 3. Suppose that \(n=2m\) is an even positive integer. We have a straightforward partition of \(\mathbb{F}_{2^{n}}^{*}\) as follows: \[\mathbb{F}_{2^{n}}^{*}=\bigcup_{u\in U}u\mathbb{F}_{2^{m}}^{*},\] where \(U\) is the cyclic subgroup of \(\mathbb{F}_{2^{n}}^{*}\) of order \(2^{m}+1\). The following proposition is well known, and it can be found in a slightly different form in [18]. We give its proof here for the sake of completeness. **Proposition 2.3**.: _Let \(\gamma\) be a primitive element of \(\mathbb{F}_{2^{n}}\). Let \(f\) be a Boolean function defined on \(\mathbb{F}_{2^{n}}\) such that_ \[f(\gamma^{2^{m}+1}x)=f(x) \tag{1}\] _for every \(x\in\mathbb{F}_{2^{n}}\) and \(f(0)=0\). Then \(f\) is a hyper-bent function if and only if_ \[\sum_{u\in U}(-1)^{f(u)}=1,\] _In this case \(f\) is said to belong to the \(\mathcal{PS}_{qp}^{\#}\) class._ Proof.: Denote \(q=2^{m}\). Since \(\gamma^{q+1}\) is a primitive element of \(\mathbb{F}_{q}\), by (1) we see that for each \(u\in U\), the restriction of the function \(f\) to \(u\mathbb{F}_{q}^{*}\) is constant. Let \(i\) be an integer coprime with \(q^{2}-1\) and \(a\in\mathbb{F}_{q^{2}}^{*}\). Then we have \[\sum_{x\in\mathbb{F}_{q^{2}}^{*}}(-1)^{f(x)+\mathrm{Tr}_{1}^{n}(ax^{i})}=\sum _{u\in U}(-1)^{f(u)}\sum_{x\in u\mathbb{F}_{q}^{*}}(-1)^{\mathrm{Tr}_{1}^{n}( ax^{i})}=\sum_{u\in U}(-1)^{f(u)}\sum_{y\in\mathbb{F}_{q}^{*}}(-1)^{\mathrm{ Tr}_{1}^{n}(au^{i}y^{i})}.\] For \(a\in\mathbb{F}_{q^{2}}^{*},\;u\in U\) and \(y\in\mathbb{F}_{q}^{*}\), we have \[\mathrm{Tr}_{1}^{n}(au^{i}y^{i})=\mathrm{Tr}_{1}^{m}(\mathrm{Tr}_{m}^{n}(au^{i }y^{i}))=\mathrm{Tr}_{1}^{m}(au^{i}y^{i}+a^{q}u^{qi}y^{qi}),\] by transitivity of trace. 
Since \(u^{q}=u^{-1}\) and \(y^{q}=y\), we have \[\mathrm{Tr}_{1}^{n}(au^{i}y^{i})=\mathrm{Tr}_{1}^{m}(au^{i}y^{i}+a^{q}u^{qi}y ^{qi})=\mathrm{Tr}_{1}^{m}((au^{i}+a^{q}u^{-i})y^{i}).\] Then \(au^{i}+a^{q}u^{-i}=0\) if and only if \(u=a^{t(q-1)}\), where \(t\) is the multiplicative inverse of \(2i\) modulo \(q^{2}-1\). Set \(u_{0}=a^{t(q-1)}\). If \(u=u_{0}\), then for any \(y\in\mathbb{F}_{q}^{*}\), we have \(\mathrm{Tr}_{1}^{n}(au^{i}y^{i})=0\), and hence \[\sum_{y\in\mathbb{F}_{q}^{*}}(-1)^{\mathrm{Tr}_{1}^{n}(au^{i}y^{i})}=q-1.\] If \(u\in U\setminus\{u_{0}\}\), we have \(au^{i}+a^{q}u^{-i}\neq 0\). Then the set \[\{y\in\mathbb{F}_{q}^{*}:\mathrm{Tr}_{1}^{m}((au^{i}+a^{q}u^{-i})y^{i})=0\}\] has size \(q/2-1\) and the set \[\{y\in\mathbb{F}_{q}^{*}:\mathrm{Tr}_{1}^{m}((au^{i}+a^{q}u^{-i})y^{i})=1\}\] has size \(q/2\) as \(y\mapsto y^{i}\) is a bijection. So we deduce that \[\sum_{y\in\mathbb{F}_{q}^{*}}(-1)^{\mathrm{Tr}_{1}^{n}(au^{i}y^{i})}=1\times(q /2-1)+(-1)\times q/2=-1\] for each \(u\in U\setminus\{u_{0}\}\). Hence \[\sum_{x\in\mathbb{F}_{q^{2}}^{*}}(-1)^{f(x)+\mathrm{Tr}_{1}^{n}( ax^{i})} =\sum_{u\in U}(-1)^{f(u)}\sum_{y\in\mathbb{F}_{q}^{*}}(-1)^{\mathrm{ Tr}_{1}^{n}(au^{i}y^{i})}\] \[=(q-1)\cdot(-1)^{f(u_{0})}-\sum_{u\in U\setminus\{u_{0}\}}(-1)^{ f(u)}\] \[=q\cdot(-1)^{f(u_{0})}-\sum_{u\in U}(-1)^{f(u)}\] \[=\pm q-\sum_{u\in U}(-1)^{f(u)}.\] Therefore, we have \[\sum_{x\in\mathbb{F}_{q^{2}}^{*}}(-1)^{f(x)+\mathrm{Tr}_{1}^{n}(ax^{i})}=\pm q +1-\sum_{u\in U}(-1)^{f(u)}.\] Then by the definition of a hyper-bent function we conclude that \(f\) is a hyper-bent function if and only if \[\sum_{u\in U}(-1)^{f(u)}=1,\] which completes the proof of the proposition. **Proposition 2.4**.: _Let \(f\) be a hyper-bent function in \(\mathcal{PS}_{ap}^{\#}\) with \(f(0)=0\). Then there exists a function \(g\) from \(U\) to \(\mathbb{F}_{2}\) such that_ \[f(x)=g(x^{2^{m}-1})\] _for \(x\in\mathbb{F}_{2^{n}}^{*}\), and_ \[\sum_{u\in U}(-1)^{g(u)}=1.\] _Conversely, if \(g\) is a function from \(U\) to \(\mathbb{F}_{2}\) such that \(\sum_{u\in U}(-1)^{g(u)}=1\), then the function \(f\) from \(\mathbb{F}_{2^{n}}\) to \(\mathbb{F}_{2}\) such that \(f(x)=g(x^{2^{m}-1})\) for \(x\in\mathbb{F}_{2^{n}}^{*}\) and \(f(0)=0\) is a hyper-bent function from \(\mathcal{PS}_{ap}^{\#}\)._ Proof.: First assume that \(f\) is a hyper-bent function in \(\mathcal{PS}_{ap}^{\#}\) with \(f(0)=0\). Take \(s\) to be the multiplicative inverse of \(2^{m}-1\) modulo \(2^{m}+1\) and define \(g:\ U\to\mathbb{F}_{2}\) such that \(g(u)=f(u^{s})\) for each \(u\in U\). For \(x\in\mathbb{F}_{2^{n}}^{*}\), take \(u\in U\) and \(y\in\mathbb{F}_{q}^{*}\) such that \(x=uy\). Then we have \[f(x)=f(u)=f(u^{s(2^{m}-1)})=f((u^{2^{m}-1})^{s})=g(u^{2^{m}-1})=g(x^{2^{m}-1}).\] And we see that \[\sum_{u\in U}(-1)^{g(u)}=\sum_{u\in U}(-1)^{f(u^{s})}=\sum_{u\in U}(-1)^{f(u)}=1\] by Proposition 2.3 and \(\gcd(2^{m}+1,s)=1\). Now assume that \(g\) is a function from \(U\) to \(\mathbb{F}_{2}\) such that \(\sum_{u\in U}(-1)^{g(u)}=1\), and that \(f\) is a Boolean function of \(\mathbb{F}_{2^{n}}\) such that \(f(x)=g(x^{2^{m}-1})\) for \(x\in\mathbb{F}_{2^{n}}^{*}\) and \(f(0)=0\). Notice that \[f(\alpha^{q+1}x)=f(x)\] where \(\alpha\) is a primitive element of \(\mathbb{F}_{2^{n}}\), and that \[\sum_{u\in U}(-1)^{f(u)}=\sum_{u\in U}(-1)^{g(u^{2^{m}-1})}=\sum_{u\in U}(-1)^ {g(u)}=1\] as \(\gcd(2^{m}-1,2^{m}+1)=1\). By Proposition 2.3, \(f\) is a hyper-bent function from \(\mathcal{PS}_{ap}^{\#}\). The proof is now complete. 
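The criterion in Propositions 2.3 and 2.4 is easy to verify numerically on a small field. The sketch below is our own illustration (not part of the paper): it builds \(\mathbb{F}_{2^{4}}\) with the primitive polynomial \(x^{4}+x+1\), takes \(m=2\) (so \(q=4\) and \(|U|=5\)), defines \(f(x)=g(x^{2^{m}-1})\) from a function \(g\) on \(U\) with \(\sum_{u\in U}(-1)^{g(u)}=1\), and checks that the extended Walsh-Hadamard transform only takes the values \(\pm 2^{m}\) for every \(\lambda\) and every exponent \(t\) coprime with \(2^{n}-1\). The particular choice of \(g\) and all function names are assumptions made for the example.

```python
from math import gcd

N, M = 4, 2              # n = 2m = 4, so q = 2^m = 4
POLY = 0b10011           # x^4 + x + 1, primitive over F_2

def mul(a, b):
    """Multiplication in GF(2^4), polynomial-basis reduction by POLY."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & (1 << N):
            a ^= POLY
        b >>= 1
    return r

def power(a, e):
    r = 1
    for _ in range(e):
        r = mul(r, a)
    return r

def tr(a):
    """Absolute trace GF(2^4) -> F_2: a + a^2 + a^4 + a^8."""
    s, x = 0, a
    for _ in range(N):
        s ^= x
        x = mul(x, x)
    return s

gamma = 0b10                                         # the class of x, a primitive element
U = [power(gamma, 3 * i) for i in range(2**M + 1)]   # subgroup of order 2^m + 1 = 5

# any g taking the value 1 on exactly two elements of U gives sum_{u in U} (-1)^{g(u)} = 1
g = {u: (1 if i in (1, 3) else 0) for i, u in enumerate(U)}

def f(x):
    return 0 if x == 0 else g[power(x, 2**M - 1)]    # f(x) = g(x^{2^m - 1}), f(0) = 0

for t in range(1, 2**N - 1):
    if gcd(t, 2**N - 1) != 1:
        continue
    for lam in range(2**N):
        s = sum((-1) ** (f(x) ^ tr(mul(lam, power(x, t)))) for x in range(2**N))
        assert abs(s) == 2**M, (t, lam, s)
print("f is hyper-bent: the extended Walsh-Hadamard spectrum is +/- 4")
```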
## 3 Constructions and enumeration of vectorial hyper-bent functions Let \(n,m,\gamma\) and \(U\) be as in Section 2.2. We have a the following characterization of vectorial hyper-bent functions in the context of group rings. **Theorem 3.1**.: _Let \(n=2m\). Let \(F(x)\) be a vectorial function from \(\mathbb{F}_{2^{n}}\) to \(\mathbb{F}_{2}^{k}\) such that \(f(\gamma^{2^{m}+1}x)=f(x)\) hold for each nonzero combination \(f\) of the component functions of \(F\) and \(F(0)=0\). Then the following conditions are equivalent: (1) \(F\) is a vectorial hyper-bent function of dimension \(k\). (2) \(\sum_{u\in U}(-1)^{\langle v,F(u)\rangle}=1\) for all \(v\in\mathbb{F}_{2}^{k}\setminus\{0\}\). (3) \(\sum_{u\in U}F(u)=2^{m-k}H+0_{H}\) holds in the group ring \(\mathbb{Z}[H]\), where \(H\) is the additive group of \(\mathbb{F}_{2}^{k}\)._ Proof.: By the definition of a vectorial hyper-bent function, we deduced that \(F\) is a vectorial hyper-bent function if and only if \(\langle v,F(u)\rangle\) is bent for all \(v\in\mathbb{F}_{2}^{k}\setminus\{0\}\), which is equivalent to (2) by Proposition 2.3. Hence (1) and (2) are equivalent. Suppose \(\sum_{u\in U}F(u)=2^{m-k}H+0_{H}\) holds. Taking an arbitrary non-principal character \(\chi_{v}\) of \(H\), we see that \[\sum_{u\in U}(-1)^{\langle v,F(u)\rangle}=\sum_{u\in U}\chi_{v}(F(u))=\chi_{v} (2^{m-k}H+0_{H})=1,\] where \(\chi_{v}(x)=(-1)^{\langle v,x\rangle}\). Suppose \(\sum_{u\in U}(-1)^{\langle v,F(u)\rangle}=1\) for all \(v\in\mathbb{F}_{2}^{k}\setminus\{0\}\). Then \(\chi_{v}(\sum_{u\in U}F(u))=\chi_{v}(2^{m-k}H+0_{H})=1\) for each non-principal character \(\chi_{v}\). Also we have \(\psi(\sum_{u\in U}F(u)=\psi(2^{m-k}H+0_{H})=2^{m}+1\) where \(\psi\) denotes the principal of \(H\). Then by Proposition 2.1 we conclude that (3) holds. Thus (2) and (3) are equivalent and the proof is now complete. Since each hyper-bent Boolean function of \(\mathbb{F}_{2^{n}}\) from \(\mathcal{PS}^{\#}_{ap}\) are of the form \[f(x)=\operatorname{Tr}_{1}^{n}\left(\sum_{i=1}^{2^{m}}a_{i}x^{i(2^{m}-1)}+a_{0 }\right),\] any vectorial hyper-bent function from \(\mathbb{F}_{2^{n}}\) to \(\mathbb{F}_{2}^{k}\) satisfying the condition given in Theorem 3.1 has the following expression: \[F(x)=\left(\operatorname{Tr}_{1}^{n}\left(\sum_{i=1}^{2^{m}}a_{1,i}x^{i(2^{m}- 1)}+a_{1,0}\right),\ldots,\operatorname{Tr}_{1}^{n}\left(\sum_{i=1}^{2^{m}}a_{ k,i}x^{i(2^{m}-1)}+a_{k,0}\right)\right).\] Our next theorem counts the number of vectorial hyper-bent functions of this form. **Theorem 3.2**.: _Let \(n=2m\). Let \(\mathcal{N}_{n,k}\) denote the number of vectorial hyper-bent functions of the form_ \[F(x)=\left(\operatorname{Tr}_{1}^{n}\left(\sum_{i=1}^{2^{m}}a_{1,i}x^{i(2^{m}- 1)}+a_{1,0}\right),\ldots,\operatorname{Tr}_{1}^{n}\left(\sum_{i=1}^{2^{m}}a_ {k,i}x^{i(2^{m}-1)}+a_{k,0}\right)\right).\] _Then,_ \[\mathcal{N}_{n,k}=2^{k}\cdot\binom{2^{m}+1}{2^{m-k}+1}\cdot\prod_{i=1}^{2^{k}- 1}\binom{2^{m}-i\cdot 2^{m-k}}{2^{m-k}}.\] Proof.: We first consider the number \(N\) of vectorial hyper-bent functions \(F\) of the given form such that \(F(0)=0\). By Theorem 3.1 we deduce that \[N=\binom{2^{m}+1}{2^{m-k}+1}\cdot\prod_{i=1}^{2^{k}-1}\binom{2^{m}-i\cdot 2^ {m-k}}{2^{m-k}}.\] Since each vectorial hyper-bent function of the given form is a translation \(F(x)+r,\ r\in\mathbb{F}_{2^{k}}\), of a vectorial hyper-bent function \(F(x)\) of the given form such that \(F(0)=0\) and each such \(F(x)\) has exactly \(2^{k}\) translations, we obtain the desired result. 
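In practice, the count in Theorem 3.2 is a scaled multinomial coefficient: one chooses which \(2^{m-k}+1\) elements of \(U\) are sent to \(0\) and then splits the remaining elements into \(2^{k}-1\) blocks of size \(2^{m-k}\), one for each nonzero value. The short sketch below (our illustration, not part of the paper) evaluates \(\mathcal{N}_{n,k}\) both from the displayed product formula and from the equivalent factorial form, and prints a few small values.

```python
from math import comb, factorial

def n_hyperbent(n, k):
    """N_{n,k} from Theorem 3.2, written as the displayed product of binomial coefficients."""
    assert n % 2 == 0
    m = n // 2
    assert 1 <= k <= m
    q, s = 2**m, 2**(m - k)
    count = comb(q + 1, s + 1)
    for i in range(1, 2**k):
        count *= comb(q - i * s, s)
    return 2**k * count

def n_hyperbent_factorial(n, k):
    """Same count written as 2^k * (2^m + 1)! / ((2^{m-k} + 1)! * (2^{m-k}!)^(2^k - 1))."""
    m = n // 2
    q, s = 2**m, 2**(m - k)
    return 2**k * factorial(q + 1) // (factorial(s + 1) * factorial(s) ** (2**k - 1))

for (n, k) in [(4, 1), (4, 2), (6, 1), (6, 3), (8, 2), (8, 4)]:
    assert n_hyperbent(n, k) == n_hyperbent_factorial(n, k)
    print(f"N_{{{n},{k}}} = {n_hyperbent(n, k)}")
```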
Let \(k\) be a positive integer. Given an ordered basis \(A=(\alpha_{1},\ldots,\alpha_{k})\) of \(\mathbb{F}_{2^{k}}\) over \(\mathbb{F}_{2}\), its dual basis is defined to be a basis \(B=(\beta_{1},\ldots,\beta_{k})\) satisfying \[\operatorname{Tr}_{1}^{k}(\alpha_{i}\beta_{j})=\delta_{i,j}\text{ for }i,j=1,2, \ldots,k,\] where \(\delta_{i,j}\) denotes the Kronecker delta function. It is well known that each basis of \(\mathbb{F}_{2^{k}}\) over \(\mathbb{F}_{2}\) has a unique dual basis. Let \(n=2m\) and \(k\) be an integer with \(k|m\). Let us denote by \(\mathcal{HB}_{n,k}\) the set of all the hyper-bent functions from \(\mathbb{F}_{2^{n}}\) to \(\mathbb{F}_{2}^{k}\), of the form \[F(x)=\left(\operatorname{Tr}_{1}^{n}\left(\sum_{i=1}^{2^{m}}a_{1,i}x^{i(2^{m}- 1)}+a_{1,0}\right),\ldots,\operatorname{Tr}_{1}^{n}\left(\sum_{i=1}^{2^{m}}a_{ k,i}x^{i(2^{m}-1)}+a_{k,0}\right)\right),\] where \(a_{i,j}\in\mathbb{F}_{2^{n}}\) for \(i\in\{1,\ldots,k\}\) and \(j\in\{0,\ldots,2^{m}\}\). Let \(\widetilde{\mathcal{HB}}_{n,k}\) denote the set of all the hyper-bent functions from \(\mathbb{F}_{2^{n}}\) to \(\mathbb{F}_{2^{k}}\), of the form \[\widetilde{F}(x)=\operatorname{Tr}_{k}^{n}\left(\sum_{i=1}^{2^{m}}b_{i}x^{i(2 ^{m}-1)}+b_{0}\right),\] where \(b_{i}\in\mathbb{F}_{2^{n}}\) for \(i\in\{0,\ldots,2^{m}\}\). Let \(A=(\alpha_{1},\ldots,\alpha_{k})\) be a basis of \(\mathbb{F}_{2^{k}}\) over \(\mathbb{F}_{2}\) and \(B=(\beta_{1},\ldots,\beta_{k})\) its dual basis. We define a mapping \(\pi\) from \(\mathcal{HB}_{n,k}\) to \(\widetilde{\mathcal{HB}}_{n,k}\) as follows: \(\pi(f_{1}(x),\ldots,f_{k}(x))=\sum_{j=1}^{k}f_{j}(x)\alpha_{j}\), where \((f_{1}(x),\ldots,f_{k}(x))\in\mathcal{HB}_{n,k}\). And define a mapping \(\sigma\) from \(\widetilde{\mathcal{HB}}_{n,k}\) to \(\mathcal{HB}_{n,k}\) as follows: \(\sigma(\widetilde{F}(x))=\left(\operatorname{Tr}_{1}^{k}(\beta_{1}\widetilde {F}(x)),\ldots,\operatorname{Tr}_{1}^{k}(\beta_{k}\widetilde{F}(x))\right)\), where \(\widetilde{F}(x)\in\widetilde{\mathcal{HB}}_{n,k}\). A trivial verification shows that \(\pi\sigma(\widetilde{F}(x))=\widetilde{F}(x)\) and \(\sigma\pi(f_{1}(x),\ldots,\)\(f_{k}(x))=(f_{1}(x),\ldots,f_{k}(x))\) for \(\widetilde{F}(x)\in\widetilde{\mathcal{HB}}_{n,k}\) and \((f_{1}(x),\ldots,f_{k}(x))\in\mathcal{HB}_{n,k}\). Consequently, the hyper-bent functions from \(\widetilde{\mathcal{HB}}_{n,k}\) are exactly those elements of \(\mathcal{HB}_{n,k}\). **Remark 3.1**.: _Let \(n=2m\). The number of vectorial hyper-bent functions in \(\widetilde{\mathcal{HB}}_{n,m}\) is counted in [7], so our result is a generalization of the case \(k=m\)._ **Theorem 3.3**.: _Let \(n=2m\) and \(u_{0}\in U\setminus\{1\}\). Let \(T_{u_{0}}\) be the vectorial function defined on \(\mathbb{F}_{2^{n}}\) by_ \[T_{u_{0}}(x)=\operatorname{Tr}_{m}^{n}\left(u_{0}\sum_{i=1}^{2^{m-1}}x^{i(2^{m }-1)}\right).\] _Then \(T_{u_{0}}\) is a vectorial hyper-bent function._ Proof.: Denote \[g(u)=\operatorname{Tr}_{m}^{n}\left(u_{0}\sum_{i=1}^{2^{m-1}}u^{i}\right)\text { for }u\in U\] and notice that \(g(1)=0\). By Proposition 2.4, it suffices to show that \(g|_{U\setminus\{1\}}\) maps \(U\setminus\{1\}\) onto \(\mathbb{F}_{2^{m}}\). Suppose that there exist \(u_{1}\neq u_{2}\in U\setminus\{1\}\) such that \[\operatorname{Tr}_{m}^{n}\left(u_{0}\sum_{i=1}^{2^{m-1}}u_{1}^{i}\right)= \operatorname{Tr}_{m}^{n}\left(u_{0}\sum_{i=1}^{2^{m-1}}u_{2}^{i}\right). 
\tag{2}\] Now that \[\operatorname{Tr}_{m}^{n}\left(u_{0}\sum_{i=1}^{2^{m-1}}u_{1}^{i}\right) =\operatorname{Tr}_{m}^{n}\left(u_{0}u_{1}\frac{1+u_{1}^{2^{m-1 }}}{1+u_{1}}\right)\] \[=u_{0}u_{1}\frac{1+u_{1}^{2^{m-1}}}{1+u_{1}}+u_{0}^{2^{m}}u_{1}^{2 ^{m}}\frac{1+u_{1}^{2^{2m-1}}}{1+u_{1}^{2^{m}}}\] \[=u_{0}u_{1}\frac{1+u_{1}^{2^{m-1}}}{1+u_{1}}+u_{0}^{-1}\frac{1+u_ {1}^{-2^{m-1}}}{1+u_{1}},\] we compute that \[\left(\mathrm{Tr}_{m}^{n}\left(u_{0}\sum_{i=1}^{2^{m-1}}u_{1}^{i} \right)\right)^{2} =u_{0}^{2}u_{1}^{2}\frac{1+u_{1}^{-1}}{1+u_{1}^{2}}+u_{0}^{-2}\frac {1+u_{1}}{1+u_{1}^{2}}\] \[=u_{0}^{2}\frac{1+u_{1}^{-1}}{1+u_{1}^{-2}}+u_{0}^{-2}\frac{1+u_{1 }}{1+u_{1}^{2}}\] \[=\frac{u_{0}^{2}}{1+u_{1}^{-1}}+\frac{u_{0}^{-2}}{1+u_{1}}.\] Similarly, we have \[\left(\mathrm{Tr}_{m}^{n}\left(u_{0}\sum_{i=1}^{2^{m-1}}u_{2}^{i} \right)\right)^{2}=\frac{u_{0}^{2}}{1+u_{2}^{-1}}+\frac{u_{0}^{-2}}{1+u_{2}}.\] Then (2) implies that \[\frac{u_{0}^{2}}{1+u_{1}^{-1}}+\frac{u_{0}^{-2}}{1+u_{1}}=\frac{u_{0}^{2}}{1+ u_{2}^{-1}}+\frac{u_{0}^{-2}}{1+u_{2}},\] that is, \[\frac{u_{0}^{2}u_{1}+u_{0}^{-2}}{1+u_{1}}=\frac{u_{0}^{2}u_{2}+u_{0}^{-2}}{1+u _{2}}.\] Then \[(u_{0}^{2}u_{1}+u_{0}^{-2})(1+u_{2})=(u_{0}^{2}u_{2}+u_{0}^{-2})(1+u_{1}).\] It follows that \[(u_{0}^{2}+u_{0}^{-2})(u_{1}+u_{2})=0.\] Since \(u_{0}\neq 1\), we have \(u_{0}^{2}\neq u_{0}^{-2}\). Hence \(u_{1}=u_{2}\), contradictory. Then \(g|_{U\setminus\{1\}}\) is injective, and since \(U\setminus\{1\}\) and \(\mathbb{F}_{2^{m}}\) are of equal size it is also surjective. The proof is now complete. A function \(h\) from \(\mathbb{F}_{2^{m}}\) to \(\mathbb{F}_{2}^{k}\) is called _balanced_ if \(\#\{x\in\mathbb{F}_{2^{m}}:h(x)=b\}=2^{m-k}\) for any \(b\in\mathbb{F}_{2}^{k}\). We have the following straightforward construction of vectorial hyper-bent functions from \(\mathbb{F}_{2^{2m}}\) to \(\mathbb{F}_{2}^{k}\). **Theorem 3.4**.: _Let \(T_{u_{0}}\) be defined as in Theorem 3.3. Let \(h\) be a balanced function from \(\mathbb{F}_{2^{m}}\) to \(\mathbb{F}_{2}^{k}\) with \(h(0)=0\). Then \(h(T_{u_{0}}(x))\) is a vectorial hyper-bent function from \(\mathbb{F}_{2^{n}}\) to \(\mathbb{F}_{2}^{k}\)._ Proof.: It follows directly from Theorem 3.1 and Theorem 3.3. Finally we present some infinite classes of hyper-bent functions employing permutation polynomials and binary m-sequences. A polynomial \(h(X)\in\mathbb{F}_{2^{m}}[X]\) is called a permutation polynomial (PP) of \(\mathbb{F}_{2^{m}}\) if the associated polynomial function \(h:x\mapsto f(x)\) from \(\mathbb{F}_{2^{m}}\) to itself is a permutation of \(\mathbb{F}_{2^{m}}\). A very important class of polynomials whose permutation behavior is well understood is the class of Dickson polynomials, which we will define below. We recall that the \(r\)-th binary Dickson polynomial \(D_{r}(x)\in\mathbb{F}_{2}[x]\) is defined by \[D_{r}(x)=\sum_{i=0}^{\lfloor r/2\rfloor}\frac{r}{r-i}\binom{r-i}{i}x^{r-2i},\] where \(\lfloor r/2\rfloor\) denotes the largest integer less than or equal to \(r/2\). For \(r=0\), we set \(D_{0}(x)=0\). The first eight binary Dickson polynomials are \[\begin{array}{cc}D_{0}(x)=0,&D_{1}(x)=x,&D_{2}(x)=x^{2},&D_{3}(x)=x^{3}+x,&D_ {4}(x)=x^{4},\\ D_{5}(x)=x^{5}+x^{3}+x,&D_{6}(x)=x^{6}+x^{2},&D_{7}(x)=x^{7}+x^{5}+x.\end{array}\] We write \(x=\frac{1}{y}\) with \(y\neq 0\) an indeterminate. 
Then binary Dickson polynomials can often be rewritten (also referred as functional expression) as \[D_{r}(x)=D_{r}\left(y+\frac{1}{y}\right)=y^{r}+\frac{1}{y^{r}}.\] For any non-zero positive integers \(r\) and \(s\), Dickson polynomials satisfy: \[D_{r}(D_{s}(x))=D_{rs}(x).\] The PPs among the Dickson polynomials have been completely classified. We state the following theorem due to Nobauer [19]. Dickson in his 1896 Ph. D. thesis observed and partially proved the theorem. **Theorem 3.5**.: _The Dickson polynomial \(D_{r}(x)\) is a permutation polynomial of \(\mathbb{F}_{2^{m}}\) if and only if \(\gcd(r,2^{2m}-1)=1\)._ Combining Theorem 3.4 with Theorem 3.5 gives the following construction of vectorial hyper-bent functions from Dickson polynomials. **Corollary 3.1**.: _Let \(n=2m\) and let \(r\) be a positive integer such that \(\gcd(r,2^{2m}-1)=1\). Let \(T_{u_{0}}\) be defined as in Theorem 3.3 and let \(D_{r}(x)\) be the \(r\)-th binary Dickson polynomial. Then \(D_{r}(T_{u_{0}}(x))\) is a vectorial hyper-bent function from \(\mathbb{F}_{2^{n}}\) to \(\mathbb{F}_{2^{m}}\)._ Any permutation binomial or permutation trinomial proposed in [14] can be plugged into Theorem 3.4 to obtain a vectorial hyper-bent function from \(\mathbb{F}_{2^{n}}\) to \(\mathbb{F}_{2^{m}}\). The binary m-sequences of period \(2^{m}-1\) are the sequences of elements in \(\mathbb{F}_{2}\) of the form \(\left\{\mathrm{Tr}_{1}^{m}\left(\gamma^{di+t}\right)\right\}_{i\in\mathbb{Z}}\) where \(\gamma\) is a generator of \(\mathbb{F}_{2^{m}}^{*}\), \(t\) is an integer, and the _decimation_\(d\) has \(\gcd(d,2^{m}-1)=1\). The crosscorrelation \(C_{d}(t)\) between the two m-sequences \(\{\mathrm{Tr}_{1}^{m}\left(\gamma^{i}\right)\}_{i\in\mathbb{Z}}\) and \(\left\{\mathrm{Tr}_{1}^{m}\left(\gamma^{di}\right)\right\}_{i\in\mathbb{Z}}\) can be described by the following exponential sum by using the trace function representation \[\begin{array}{ll}C_{d}(t)&=\sum_{i=0}^{2^{m}-2}(-1)^{\mathrm{Tr}\left(\gamma ^{t+i}+\gamma^{di}\right)}\\ &=\sum_{x\in\mathbb{F}_{2^{m}}^{*}}(-1)^{\mathrm{Tr}\left(x^{d}+cx\right)}, \end{array}\] where \(c=\gamma^{t}\). We say that \(C_{d}(t)\) is \(v\)-valued to mean that \(\#\left\{C_{d}(t):t\in\mathbb{Z}\right\}=v\). It was shown by Katz [13] that \(C_{d}(t)\) always takes on \(-1\) as one of the values if the crosscorrelation function \(C_{d}(t)\) is three-valued. Thus, we have the following construction of hyper-bent functions from three-valued m-sequences. **Corollary 3.2**.: _Let \(n=2m\) and \(u_{0}\in U\setminus\{1\}\). Let \(d\) be an arbitrary positive integer such that \(\gcd(d,2^{m}-1)=1\) and \(C_{d}(t)\) is three-valued. Then there always exists \(\lambda\in\mathbb{F}_{2^{m}}^{*}\) such that \(\operatorname{Tr}_{1}^{m}\left(\left(\operatorname{Tr}_{m}^{n}\left(u_{0}\sum_ {i=1}^{2^{m-1}}x^{i(2^{m}-1)}\right)\right)^{d}\right)+\operatorname{Tr}_{1}^ {n}\left(\lambda u_{0}\sum_{i=1}^{2^{m-1}}x^{i(2^{m}-1)}\right)\) is a hyper-bent function._ For binary m-sequences of length \(2^{m}-1\), the following is a complete list of all decimations known to give three-valued crosscorrelation. It is a challenging and open problem to decide whether this list is complete. 1. Gold [5]: \(d=2^{k}+1\), \(\frac{m}{\gcd(k,m)}\) odd. 2. Kasami [20]: \(d=2^{2k}-2^{k}+1\), \(\frac{m}{\gcd(k,m)}\) odd. 3. Cusick and Dobbertin [2]: \(d=2^{\frac{m}{2}}+2^{\frac{m+2}{4}}+1\), \(m\equiv 2\pmod{4}\). 4. Cusick and Dobbertin [2]: \(d=2^{\frac{m+2}{2}}+3\), \(m\equiv 2\pmod{4}\). 5. 
Canteaut, Charpin and Dobbertin [3]: \(d=2^{\frac{m-1}{2}}+3\), \(m\) odd. 6. Dobbertin [4], Hollmann and Xiang [12]: \[d=\left\{\begin{array}{ll}2^{\frac{m-1}{2}}+2^{\frac{m-1}{4}}-1,&m\equiv 1 \pmod{4}\\ 2^{\frac{m-1}{2}}+2^{\frac{3m-1}{4}}-1,&m\equiv 3\pmod{4}\end{array}\right..\] ## 4 Conclusion In this paper we are devoted to deducing a sufficient and necessary condition of vectorial hyper-bent functions which is a generalization for case \(k=n/2\) of Theorem 1 in [7]. We also get a numerical result for the number \(\mathcal{N}_{n,k}\) of vectorial hyper-bent functions of the form \[F(x)=\left(\operatorname{Tr}_{1}^{n}\left(\sum_{i=1}^{2^{m}}a_{1,i}x^{i(2^{m} -1)}+a_{1,0}\right),\ldots,\operatorname{Tr}_{1}^{n}\left(\sum_{i=1}^{2^{m}}a _{k,i}x^{i(2^{m}-1)}+a_{k,0}\right)\right),\] which generalizes the result of Theorem 4 in [7]. By Theorem 3.4, the problem of searching for vectorial hyper-bent functions \(\mathbb{F}_{2^{n}}\) to \(\mathbb{F}_{2}^{k}\) is reduced to the one of finding balanced functions from \(\mathbb{F}_{2^{n}}\) to \(\mathbb{F}_{2}^{k}\). **Acknowledgement.** This work was supported by National Natural Science Foundation of China under Grant Nos. 12171428, 12231015 and 61972400.
2307.14784
New approach to designing functional materials for stealth technology: Radar experiment with bilayer absorbers and optimization of the reflection loss
Microwave power absorption by a two-layer system deposited on a metallic surface has been studied in the experimental setup emulating the response to a radar signal. Layers containing hexaferrite and iron powder in a dried paint of thickness under 1mm have been used. The data is analyzed within a theoretical model derived for a bilayer system from the transmission line theory. A good agreement between experimental and theoretical results is found. The advantage of using a bilayer system over a single-layer system has been demonstrated. How the maximum microwave absorption (minimum reflection loss) can be achieved through the optimization of the filling factors and thicknesses of the two layers is shown.
Jaume Calvo de la Rosa, Aleix Bou Comas, Joan Manel Hernandez, Pilar Marin, Jose Maria Lopez-Villegas, Javier Tejada, Eugene M. Chudnovsky
2023-07-27T11:26:43Z
http://arxiv.org/abs/2307.14784v2
New approach to designing functional materials for stealth technology: Radar experiment with bilayer absorbers and optimization of the reflection loss ###### Abstract Microwave power absorption by a two-layer system deposited on a metallic surface has been studied in the experimental setup emulating the response to a radar signal. Layers containing hexaferrite and iron powder in a dried paint of thickness under 1mm have been used. The data have been analyzed within a theoretical model derived for a bilayer system from the transmission line theory. Good agreement between experimental and theoretical results have been found. The advantage of using a bilayer system over a single-layer system has been demonstrated. We show how the maximum microwave absorption (minimum reflection loss) can be achieved through the optimization of the filling factors and thicknesses of the two layers. \({}^{1}\)Departament de Fisica de la Materia Condensada, Universitat de Barcelona, Marti i Franques 1, 08028 Barcelona, Spain \({}^{2}\)Institut de Nanociencia i Nanocnologia (IN2UB), Universitat de Barcelona, 08028 Barcelona, Spain \({}^{3}\)Graduate Program in Physics and Initiative for the Theoretical Sciences, Graduate Center, The City University of New York, New York, NY 10016, USA \({}^{4}\) Instituto de Magnetismo Aplicado (IMA-UCM-ADIF), 28230 Madrid, Spain \({}^{5}\) Departamento de Fisica de Materiales, Facultad de Fisicas, Universidad Complutense de Madrid (UCM), 28040 Madrid, Spain \({}^{6}\) Departament d'Enginyeria Electronica i Biomedica, Universitat de Barcelona, 08028 Barcelona, Spain \({}^{7}\) Department of Physics and Astronomy, Herbert H. Lehman College, The City University of New York, Bronx, NY 10468-1589, USA ## 1 Introduction The absorption of microwaves by thin layers of composite materials with specially designed magnetic and dielectric properties has recently moved to the forefront of electromagnetic research by many physics, chemistry, and engineering labs, see, e.g., Refs. [1, 2, 3] and references therein. It has been largely driven by the military applications related to radar and stealth technology, as well as by the needs of medical, educational, and research facilities to shield rooms and equipment from the background microwave radiation that has been steadily growing in the last decades due to the increase in wireless communications. Magnetic systems have been among the most utilized materials for that purpose because ferromagnetic resonance naturally falls into the microwave frequency range. The research in this area has been focused on two aspects of microwave absorption. The first is related to the absorption properties of the material per se, and how effective it is in converting microwave power into heat. To provide a significant effect, the material must be dielectric. Otherwise, the absorption would be limited to the skin layer. If the material is comprised of metallic particles, they must be either coated with an insulating layer or embedded in the dielectric matrix to suppress the overall conductivity which would result in a significant reflection of the microwaves. The second aspect of microwave absorption, which is important for applications, arises when thin layers of the absorbing material are used. It is related to the interference of microwaves reflected by the two surfaces; the most inner surface being typically interfaced with a metal. 
In this case, the maximum absorption by the layer should occur at the wavelength that provides destructive interference of the two reflected waves. This effect, however, has limited use because it is sharply peaked at a certain wavelength, which stealth technology and other applications of microwave absorbers cannot rely upon. To have practical importance, the absorber must be broadband, see, e.g., Refs. [4, 5]. Our focus is on microwave absorption by magnetic bilayers. While single-layer systems have been intensively studied, with hundreds of articles entering the literature annually, experiments on bilayer systems and their theoretical analysis have been scarce. In Ref. [6] microwave absorption by nanocrystalline NiZn ferrite and iron microfibers forming single and double layers were studied in the frequency range of 2-18 GHz. The advantage of double layers in obtaining strong broadband absorption was demonstrated experimentally. A formula for the impedance of the double layer was used to fit the experimental data. Significant improvement in microwave absorption was observed. The enhancement of the absorption in the 2-18 GHz range by a double layer based upon nickel oxide and CoNiZnFeO ferrite composites, as well as the analysis of the results with the use of the impedance formula for the double layer, were reported in Ref. [7]. In Ref. [8] the improvement of microwave absorption properties in the 8-18 GHz range was demonstrated by using up to a 3mm double layer of carbon black/epoxy resin and NiZnFeO/epoxy resin. Microwave absorption by BaFeO and BaCoZnFeO multilayers in the 7-13 GHz range was studied and analyzed with the use of the multilayer impedance formula in Ref. [9]. The optimal absorption was achieved below 500 nm total thickness. The problem of thickness optimization of a double-layered microwave absorber containing magnetite particles was investigated in Ref. [10]. The procedure provided a significant increase in the reflection loss in the 8-12 GHz frequency range. Figure 1: Schematic representation of the anechoic chamber measurement method in our experiments. GHz radiation is generated by the emitter (e) before interacting with the sample in a reflection mode and being detected by the receiver (r). Here we report our effort to optimize both aspects of microwave absorption (by the material itself and due to the destructive interference) mentioned above by experimenting with bilayers based upon magnetic particles embedded in a dielectric medium. All previously reported results were obtained by studying electric and magnetic response to microwaves of small samples using a network analyzer, typically by means of waveguides or coaxial probes. The data were then plugged into the theoretical formula for the reflection loss by a layer and the prediction was made regarding the power absorption by a thin layer of the material. While the use of Maxwell equations for the absorbing medium is always justified, real systems always have features unaccounted for, such as, e.g., fluctuations in the composition and thickness of the absorbing layer. To account for such features, we directly measure the microwave response of a thin layer or bilayer of the absorbing material deposited on a macroscopic-size metallic surface, see Fig. 1. Such a method emulates real situations relevant to stealth and shielding technology. The paper is structured as follows. Materials and experimental setup are discussed in Section 2. 
The general expression for the reflection loss by a bilayer system, that we were unable to find in textbooks or published literature, is derived in Section 3. Our experimental results and their fit by theoretical formulas are presented in Section 4. Section 5 contains a discussion of the results and suggestions for applications of microwave absorbers. ## 2 Materials and Experimental Setup The preparation of the absorbing samples requires multiple components that need to be carefully processed in order to produce sheets with the desired properties. The most significant element in this preparation is the functional powder, which is the material that provides the specific dielectric and magnetic character to the sample. We worked with two types of powder materials: barium-hexaferrite ceramic materials (from now, noted as "_HF_") and soft magnetic iron particles (referred as "_Fe_"). The _Fe_ powders were provided by AMES enterprise and a complete structural and magnetic characterization may be found in our previous works [1, 2]. It consists of Fe-core particles of 100 \(\mu\)m in diameter with high permeability, which are widely used for kHz and MHz applications. The core's crystallinity is high, and no impurities are detected by XRD. On the other hand, the _HF_ powder samples were synthesized by our own at the laboratory, and have been selected for this work because they have extended reputation as microwave absorbers [4, 5, 6, 7]. ### Synthesis of functional powder The barium hexaferrite (BaFe\({}_{12}\)O\({}_{19}\)) powder was synthesized through conventional co-precipitation route. Stoichiometric amounts of Ba(NO\({}_{3}\))\({}_{2}\), Fe(NO\({}_{3}\))\({}_{3}\cdot 9\)H\({}_{2}\)O (Scharlab, as received) were dissolved in 300 mL of deionized water under continuous stirring at room temperature for two hours. Once complete dissolution was reached, a 2M solution of NaOH was added dropwise until reaching pH \(\sim\) 10. The brown precipitate was cleaned and separated from water by doing 3 cycles of 10 minutes of centrifugation at 3000 rpm with ethanol. The obtained solid part was dried 24 hours at 80\({}^{\circ}\)C and then ground to powder with a mortar before being heated at 900\({}^{\circ}\)C for 2 hours in a furnace. The final product was finally grounded again. ### Sheets preparation The functional powders need to be later processed in order to produce two-dimensional sheets, in such a way that the material can be deposited into a surface. To do so, the functional powder was mixed with paint (commercial Titan's Unilak white water enamel) in a 4%-96% weight fraction (_ffw_), respectively. The mixture was then manually deposited on top of a 50 \(\upmu\)m thickness polyester sheet of 25 cm \(\times\) 25 cm. The painted sheets were left drying at room temperature for 24 hours. For each type of functional powder, we prepared three different sheets varying the total thickness, as described in Table 1 below. Therefore, we are not only able to evaluate the effect of the intrinsic electromagnetic properties of the powder dispersed on it, but also to figure out the repercussion that a geometrical aspect has on the absorption procedure. Due to the manual deposition method used for the sheets' preparation and the elevated density of the magnetic powder, the obtained sheets do not show a homogeneous dispersion of magnetic particles along its surface. 
Consequently, all sheets show a larger concentration of particles at their central region compared to the perimeter, leading to an effective gradient of filling factor from the center to the borders. This will be treated by the model in Section 4. ### Characterization Powder x-ray diffraction (XRD) measurements were done by using a PANalytical X'Pert PRO MPD 0/0 Bragg-Brentano powder diffractometer of 240 mm radius using Cu K-alpha radiation (\(\lambda=1.5418\) A). The static magnetic properties from powder samples were measuring through a Quantum Design MPMS Superconducting QUantum Interference Device (SQUID) magnetometer. On the other hand, in order to measure the complex magnetic permeability and dielectric permittivity in the microwave region (from 0.05 to 20 GHz) we designed a coaxial probe by using two coaxial 3.5 mm connectors coupled to a Keysight E5071C ENA Series Network Analyzer. The system was electronically calibrated prior to the measurements and two-port \(S\) parameters were recorded along 1601 points from 0.05 to 20 GHz. Port-extension correction was also done to all the measurements to deal with all the potential signal phase shifts due to the probe dimensions. Finally, the experimental absorption of the sheets was measured inside an anechoic chamber. The sample was subjected on a metallic plate while two antennas were oriented in reflection configuration (see Fig. 1). An Agilent E8362B PNA Series Network Analyzer was connected to the antennas and the reflection loss was recorded along 1601 points from 0.5 GHz to 18 GHz. \begin{table} \begin{tabular}{|c|c|c|} \hline **Sheet ID** & **Contained powder ID** & **Total thickness (mm)** \\ \hline P1 & _HF_ & \(0.23\pm 0.07\) \\ \hline P2 & _HF_ & \(0.58\pm 0.05\) \\ \hline P3 & _HF_ & \(0.76\pm 0.09\) \\ \hline P4 & _Fe_ & \(0.32\pm 0.09\) \\ \hline P5 & _Fe_ & \(0.51\pm 0.08\) \\ \hline P6 & _Fe_ & \(0.70\pm 0.06\) \\ \hline \end{tabular} \end{table} Table 1: Description of the prepared sheets. Thickness values are obtained as the average of four measurements at the sheet’s contour. ## 3 Bi-layer system absorption model The Reflection-Loss of a single layer material has been successfully analyzed using _Transmission Line Theory_[8, 9] \[Z=Z_{0}\sqrt{\frac{\mu_{r}}{\varepsilon_{r}}}tanh\left[\left(j\,\frac{2\pi fd}{c} \right)\sqrt{\mu_{r}\varepsilon_{r}}\right]\] \[R_{L}(dB)=20\log\frac{\left|Z/Z_{0}-1\right|}{\left|Z/Z_{0}+1\right|}\] where \(f\) is the frequency of the microwave, \(c\) is the velocity of light in vacuum and \(d\) is the thickness of the material. In this section we want to find a similar expression for a bi-layer system. In the transmission line theory, the impedance of a material can be modelled as a series of resistors, inductors and capacitors which can be related to relative permeabilities and permitivities [8]. Following the derivation for a single layer impedance, \[\begin{cases}\frac{\partial v(x,t)}{\partial t}=-L(x)\frac{\partial i(x,t)} {\partial x}\\ \frac{\partial i(x,t)}{\partial t}=-C(x)\frac{\partial v(x,t)}{\partial x} \end{cases}\] In a single layer system, the capacity (\(C\)) and the induction (L) are constant. In our case they are thickness dependent due to the fact that there are two layers. The system of equations can be solved assuming oscillatory behavior of the magnitudes \(v(x,t)=V(x)\exp(\pm j\omega t)\) and \(i(x,t)=l(x)\exp(\pm j\omega t)\). 
Considering a bi-layer system with the first material \(C_{l}\) and \(L_{1}\) and the second material of thickness \(d_{2}\) characterized by \(C_{2}\) and \(L_{2}\) the solutions are: \[\begin{cases}\hskip 28.452756ptV_{1}(x)=A\exp\bigl{(}j\omega\sqrt{L_{1}C_{1}}x \bigr{)}+B\,\exp\bigl{(}-j\omega\sqrt{L_{1}C_{1}}x\bigr{)}\\ \hskip 28.452756ptI_{1}(x)=\sqrt{\frac{C_{1}}{L_{1}}}\bigl{(}-A\exp\bigl{(}j \omega\sqrt{L_{1}C_{1}}x\bigr{)}+B\,\exp\bigl{(}-j\omega\sqrt{L_{1}C_{1}}x \bigr{)}\bigr{)}\\ \hskip 28.452756ptV(x)=E\exp\bigl{(}j\omega\sqrt{L_{2}C_{2}}(x-d_{1})\bigr{)}+ F\,\exp\bigl{(}-j\omega\sqrt{L_{2}C_{2}}(x-d_{1})\bigr{)}\\ \hskip 28.452756ptI_{2}(x)=\sqrt{\frac{C_{2}}{L_{2}}}\bigl{(}-E\exp\bigl{(}j \omega\sqrt{L_{2}C_{2}}(x-d_{1})\bigr{)}+F\,\exp\bigl{(}-j\omega\sqrt{L_{2}C_{ 2}}(x-d_{1})\bigr{)}\bigr{)}\end{cases}\] Imposing continuity conditions for the voltage, \(V_{1}(d_{1})=V_{2}(d_{1})\), and for the current, \(I_{1}(d_{1})=I_{2}(d_{1})\), yields the general expression: \[Z=\frac{\overline{L_{2}}}{\sqrt{\frac{L_{2}}{C_{2}}}\frac{Z_{M}(1-\eta\tan( \omega\gamma_{1}d_{1})\tan(\omega\gamma_{2}(x-d_{1})))+Z_{L}(j\tan(\omega \gamma_{1}d_{1})+jtan(\omega\gamma_{2}(x-d_{1})))}{Z_{M}(-\eta\tan(\omega \gamma_{1}d_{1})-jtan(\omega\gamma_{2}(x-d_{1})))+Z_{L}(\tan(\omega\gamma_{1}d _{1})\tan(\omega\gamma_{2}(x-d_{1}))-\eta)}}\] where \(\eta=\sqrt{C_{1}L_{2}/L_{1}C_{2}}\) and \(\gamma_{i}=\sqrt{C_{i}L_{i}}\). The quantities \(Z_{M}\) and \(Z_{L}\) are the impedances of the material and the transmission line respectively and \(x>d_{1}\). The material impedance is set to zero. In \(\gamma_{i}=\frac{\sqrt{\epsilon_{i}\mu_{i}}}{c}\), where we have dropped the subindex r, \(\epsilon_{i}\) and \(\mu_{i}\) are relative permeability and permittivity respectively, and \(\sqrt{L_{i}/C_{i}}=\sqrt{\mu_{i}/\epsilon_{i}}Z_{0}\). We obtain the following expression (that reduces to the single layer impedance if the same material for both layers is considered): \[Z = -Z_{0}\frac{\sqrt{\frac{\mu_{1}}{\epsilon_{1}}}\tanh\left(j\frac{2 \pi\omega}{c}\sqrt{\epsilon_{1}\mu_{1}}d_{1}\right)+\sqrt{\frac{\mu_{2}}{ \epsilon_{2}}}\tanh\left(j\frac{2\pi\omega}{c}\sqrt{\epsilon_{2}\mu_{2}}d_{2} \right)}{1+\sqrt{\frac{\mu_{1}\epsilon_{2}}{\epsilon_{1}\mu_{2}}}\tanh\left(j \frac{2\pi\omega}{c}\sqrt{\epsilon_{1}\mu_{1}}d_{1}\right)\tanh\left(j\frac{2 \pi\omega}{c}\sqrt{\epsilon_{2}\mu_{2}}d_{2}\right)}\] ## 4 Results The synthesized hexaferrites were first chemically and crystallographically characterized by the XRD. Figure 2 shows the diffraction patterns obtained for each sample. The analysis of these patterns confirms the expected composition. No impurities or further structures were detected. The _HF_ and _Fe_ powder samples, together with the commercial paint, were all electromagnetically characterized in the GHz frequency range with the coaxial probe. The Nicolson-Ross-Weir (NRW) model [8, 9, 10] was used to deduce the complex magnetic permeability and dielectric permittivity from the measured complex _S_-parameters. In the case of the paint, which is a non-magnetic material, the NRW method has been adapted to pure dielectric conditions [11]. Figure 3 depicts the complex spectra obtained for each of the three materials, both for the electric and magnetic contributions. Figure 2: Experimental XRD pattern measured for _HF_ powder sample. 
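Before turning to the effective-medium modeling, we note that the single-layer and two-layer expressions derived in Section 3 are straightforward to evaluate numerically. The sketch below is our own illustration, not the code used for the figures: NumPy, the function names and the material parameters are assumptions, and the overall sign of the two-layer impedance is taken so that the expression reduces to the quoted single-layer formula when both layers are identical; the sign convention should be checked against the derivation above.

```python
import numpy as np

C0 = 299_792_458.0  # speed of light in vacuum, m/s

def z_single(f, eps, mu, d):
    """Normalized input impedance Z/Z0 of a single layer of thickness d backed by metal."""
    return np.sqrt(mu / eps) * np.tanh(1j * 2 * np.pi * f * d / C0 * np.sqrt(mu * eps))

def z_bilayer(f, eps1, mu1, d1, eps2, mu2, d2):
    """Normalized input impedance Z/Z0 of the two-layer stack of Section 3
    (layer 1 of thickness d1 against the metal, layer 2 of thickness d2 facing the wave)."""
    t1 = np.tanh(1j * 2 * np.pi * f * d1 / C0 * np.sqrt(eps1 * mu1))
    t2 = np.tanh(1j * 2 * np.pi * f * d2 / C0 * np.sqrt(eps2 * mu2))
    num = np.sqrt(mu1 / eps1) * t1 + np.sqrt(mu2 / eps2) * t2
    den = 1 + np.sqrt(mu1 * eps2 / (eps1 * mu2)) * t1 * t2
    return num / den

def reflection_loss_db(z_over_z0):
    """R_L(dB) = 20 log10 |(Z/Z0 - 1) / (Z/Z0 + 1)|."""
    return 20 * np.log10(np.abs((z_over_z0 - 1) / (z_over_z0 + 1)))

# illustrative, dispersion-free placeholder parameters (not the measured spectra)
f = np.linspace(0.5e9, 18e9, 1601)       # 0.5-18 GHz, 1601 points as in the measurement setup
eps_hf, mu_hf = 6.0 - 0.8j, 1.6 - 0.5j   # placeholder eps_r, mu_r for the HF-type sheet
eps_fe, mu_fe = 9.0 - 1.2j, 2.2 - 0.9j   # placeholder eps_r, mu_r for the Fe-type sheet
rl = reflection_loss_db(z_bilayer(f, eps_hf, mu_hf, 0.76e-3, eps_fe, mu_fe, 0.51e-3))
print("minimum R_L over 0.5-18 GHz: %.1f dB at %.2f GHz" % (rl.min(), f[rl.argmin()] / 1e9))
```

Sweeping `d1` and `d2` over a grid while feeding in measured, frequency-dependent spectra is what produces thickness maps of the kind discussed later in this section.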
The effective electromagnetic properties of the sheets (which contain a mixture of paint and functional powder) have been modeled by using the Maxwell-Garnett (MG) model [12], [13]: \[\frac{\varepsilon_{eff}-\varepsilon_{h}}{\varepsilon_{eff}+2\varepsilon_{h}}=ff_{ v}\frac{\varepsilon_{i}-\varepsilon_{h}}{\varepsilon_{i}+2\varepsilon_{h}}\] where \(\varepsilon_{eff}\) refers to the effective (paint + powder) permittivity, while \(\varepsilon_{h}\) and \(\varepsilon_{i}\) refer to the host (paint) and inclusion (powder) permittivities. The same conversion has been done for the permeability. Given that this model depends on the volume filling factor (_ffv_), reference densities of 1.50 g/cm\({}^{3}\), 5.30 g/cm\({}^{3}\) and 7.87 g/cm\({}^{3}\) have been used for the paint, ceramic and metallic components, respectively, for converting the weight filling factor (_ffw_) to the volumetric one (_ffv_). The resulting effective electromagnetic properties are, obviously, dependent on the filling factor. Given that our sheets are not uniform and therefore have a concentration gradient in the radial direction, we have run this computation for different _ffv_ (i.e., _ffw_) to produce a clear picture of the electromagnetic properties of the samples. This modeling is represented in Figure 4, which shows the complex permittivity and permeability obtained for each mixture of materials and _ffw_ of 4%, 20%, 40%, 60% and 80%, which correspond to _ffv_ of 1.17%, 6.61%, 15.87%, 29.80% and 53.01% for samples containing HF (P1, P2 and P3), and \(\mathit{ff_{v}}\) of 0.79%, 4.55%, 11.27%, 22.23% and 43.26% for those with \(Fe\) powder (P4, P5 and P6). Figure 3: Complex permittivity (left column) and permeability (right column) for the paint (A and B), the hexaferrite (C and D) and the iron powder (E and F). Data obtained after processing the \(S\)-parameters through the NRW method. As it may be observed, for low filling factors both components have very similar electromagnetic properties compared to the pure paint, which is the major component. As the filling factor increases, the effective properties tend to be closer to the ones of the functional particles dispersed in the paint, and each type of layer starts to have a more particular behavior. As has been mentioned before, our sheets have a global \(\mathit{ff_{w}}=4\%\) but have a concentration gradient from the center to the edges. Therefore, at the center of the sheet \(\mathit{ff_{w}}>4\%\), where the interaction with the electromagnetic radiation is stronger. According to our estimations, the sheets have \(\mathit{ff_{w}}\sim 11\%\) in the 15\(\times\)15 cm\({}^{2}\) central region, \(\mathit{ff_{w}}\sim 25\%\) in the central 10\(\times\)10 cm\({}^{2}\), or even \(\mathit{ff_{w}}\sim 70\%\) in the 6\(\times\)6 cm\({}^{2}\) central part. Therefore, we must later consider with our model the possibility of having these considerable amounts of loading in the region with the strongest electromagnetic interaction. With these complex permittivity and permeability data, we first validate the \(R_{L}\) results obtained for each individual layer. To do so, we compare the \(R_{L}\) experimentally measured in the anechoic chamber with the \(R_{L}\) calculated by the _Transmission Line Theory_ using the sheets' electromagnetic properties measured before. This comparison is represented in Figure 5, with the first row of subplots devoted to sheets P1-P3 (paint \(+\)\(\mathit{HF}\)) and the second one to P4-P6 (paint \(+\)\(\mathit{Fe}\)).
In each case, first the comparison between the experimental and calculated values is done for the three thicknesses prepared at the laboratory. On the right side, a 2D \(R_{L}\) simulation for a wider and continuous range of thicknesses is done in order to have a broad view of the expected absorption of each sheet, even outside of the experimental domain. The agreement between the experimental and the calculated \(R_{L}\) data has been verified from \(\mathit{ff_{w}}=4\%\) to 80%. Figure 4: Modeling of the mixture’s effective properties by applying the MG model to the permittivity and permeability data of each of the components using \(\mathit{ff_{w}}\) of 4%, 20%, 40%, 60% and 80%. One may immediately see that none of the layers presents significant electromagnetic absorption when irradiated with microwaves between 0.5 and 18 GHz. The spectra are flat and the small deviation from zero is attributed to experimental noise. The calculations done with the single-layer _Transmission Line Theory_ reinforce these observations, as no absorption is expected in this frequency range for any of the compositions used. The 2D simulations provide some additional information of interest. According to these predictions, we should not observe any absorption peaks below 18 GHz for samples thinner than 0.8 mm, as happened in the real radar experiments. Moreover, these results suggest that slightly thicker samples would start to absorb radiation in this frequency range. This prediction agrees with the experimental data, which seem to show the start of a peak at the top frequency limit of the thicker sample, for both types of samples. These simulations also predict that absorptions up to 12.5 and 15.0 dB would be feasible for the _HF_ and _Fe_ sheets, respectively. Given that both types of sheets have a predominant paint content (and thus their electromagnetic properties do not diverge that much), the resultant \(R_{L}\) spectra are also similar, though small changes are appreciated due to the effect of the functional powder. Now that single-layer systems have been discussed, we move to double-layer systems, which represent a much less explored territory in the literature, both theoretically and experimentally. To do so, we again provide a detailed comparison between the data measured in the anechoic chamber and the predictions of our own model using the measured complex electromagnetic properties. Let us start by analyzing a system consisting of the combination of the thickest _HF_ layer (P3) with the _Fe_ sheets of different thicknesses, i.e., a system of two layers with different electromagnetic properties and varying thickness. Figure 6 below shows the experimental data, measured in the anechoic chamber, together with the calculation from the bilayer model for _ff\({}_{w}\)_ = 20%, 40%, 60% and 80%. Figure 5: Comparison between the experimental and calculated data for the sheets containing _HF_ (top row) and _Fe_ (bottom row) particles. Left side plots correspond to the comparison for the specific thicknesses prepared at the laboratory, while on the right side there is a 2D simulation of the expected \(R_{L}\) for sheets between 0.1 and 1.5 mm. Solid lines correspond to experimental data, while dashed lines correspond to the calculated values. As a first observation, it must be highlighted that the model is capable, in all cases, of reproducing with reasonable accuracy the peak position, amplitude and shape from the thickness and complex permittivity and permeability data.
Looking in more detail, we may observe that - as is expected from the gradient in particles' concentration - the agreement is not so good for low filling factors. However, when the filling factor is increased, the theoretical and the experimental data match. The best agreement is reached for \(\textit{ff}_{w}=60\%\), giving us an idea about the real filling factor at the center of the sheet. If the \(\textit{ff}_{w}\) becomes too high (80%), the agreement deteriorates. For cases (A) P3+P4 and (B) P3+P5 the agreement is quite extraordinary, while in (C) P3+P6 the model tends to predict the peak that seems to move to the frequency range above 18 GHz. From Figure 6(A) it is possible to extract a few interesting conclusions. As one may observe, if the filling factor is too high (80%) the agreement worsens and the amplitude of the calculated peak is reduced. This highlights the importance of optimizing and selecting the appropriate filling factor in the material design process and during the computational simulation of \(R_{L}\). In the first case, this is a crucial aspect, just increasing the magnetic powder load does not always result in the increase of the power absorption. It is necessary to optimize the filling factors based on the electromagnetic properties of each component, to reach the adequate effective permittivity and permeability that create an impedance match between the two layers of specific thickness. Such an optimization within our theoretical model is shown in Fig. 7. Figure 6: Comparison between the experimental (solid black) \(R_{L}\) measured in the anechoic chamber and the \(R_{L}\) data calculated by the model (dashed) with \(\textit{ff}_{w}=20\%\), 40%, 60% and 80%, for the bilayered systems (A) P3+ P4, (B) P3+P5 and (C) P3+P6. The simulations shown in the figure above are a powerful tool for design processes of high absorption materials. In the first case, Figure 7(A), represents the average reflection loss produced by the sample all along the analyzed frequency range. Therefore, it is affected by the peak base but not only by the maximum peak amplitude, leading to logic smaller \(R_{L}\). Nonetheless, this figure is of great importance because it shows an overall absorption behavior of the material along the frequency range, considering both the peak amplitude and width. On the other hand, Figure 7(B) represents the maximum absorption (minimum \(R_{L}\)) achieved for each combination of thicknesses between the two layers. As it may be observed, losses around 40 - 50 dBs may be easily achieved with a first layer of thickness between 0.5 and 1 mm and below 0.5 mm in the second one. As stated before, the constant increase of thickness does not lead to a permanent increase in absorbed power. Thus, optimizing the relative thickness between layers is a crucial factor do design materials with the best possible shielding capabilities. Figure 7: 2D simulation of the (A) average \(R_{L}\) and (B) minimum \(R_{L}\) (maximum absorption) as a function of the thickness of each layer in a bi-layer system. \(x\) refers to the thickness of the first layer, while \(d_{TOT}\) is the total thickness of the bilayer. Conclusions We have studied microwave absorption by a two-layer system containing powders of iron and barium hexaferrite particles. Layers under 1-mm thickness of dried paint containing the particles were deposited on a metallic surface. The reflection loss has been studied in an experiment emulating response to a radar signal. 
This differs from the experiments reported in the literature, where the reflection loss was not measured directly but calculated from data on the frequency dependence of the real and imaginary parts of the permittivity and permeability. While such experiments are valuable, they ignore many features present in the reflection of a radar signal, such as the geometry of the reflecting system and the inhomogeneity of its parameters. We began by establishing that the paint material, which dominated the composition of the layers, was not responsible for any reflection loss. We then studied microwave absorption by single layers of different thicknesses and filling factors. The measured reflection loss from single layers was compared with a theoretical formula and good agreement was found. The magnitude of the reflection loss for single layers was nevertheless low at frequencies up to 18 GHz over the whole range of parameters used. This changed dramatically when we moved to a bilayer system within the same parameter ranges as for the single layers. In some cases the reflection loss grew ten-fold, while the total thickness of the bilayer system remained under 1 mm. The data were analyzed using a theoretical expression for a bilayer system derived by the methods of transmission line theory, and good agreement between experimental and theoretical results was found. The ratio of the thicknesses of the two layers, as an additional optimization parameter in the impedance match, has been shown to play a pivotal role in the enhancement of the reflection loss. ## 6 Acknowledgments: The work at the University of Barcelona has been supported by the U.S. Air Force Office of Scientific Research (AFOSR) through grant No. FA8655-22-1-7049. The work at CUNY has been supported by the AFOSR through grant No. FA9550-20-1-0299. The authors also acknowledge AMES enterprise for their collaboration and for providing the necessary materials.
2308.04312
Interpretable Goal-Based model for Vehicle Trajectory Prediction in Interactive Scenarios
The abilities to understand the social interaction behaviors between a vehicle and its surroundings while predicting its trajectory in an urban environment are critical for road safety in autonomous driving. Social interactions are hard to explain because of their uncertainty. In recent years, neural network-based methods have been widely used for trajectory prediction and have been shown to outperform hand-crafted methods. However, these methods suffer from their lack of interpretability. In order to overcome this limitation, we combine the interpretability of a discrete choice model with the high accuracy of a neural network-based model for the task of vehicle trajectory prediction in an interactive environment. We implement and evaluate our model using the INTERACTION dataset and demonstrate the effectiveness of our proposed architecture to explain its predictions without compromising the accuracy.
Amina Ghoul, Itheri Yahiaoui, Anne Verroust-Blondet, Fawzi Nashashibi
2023-08-08T15:00:12Z
http://arxiv.org/abs/2308.04312v1
# Interpretable Goal-Based model for Vehicle Trajectory Prediction in Interactive Scenarios ###### Abstract The abilities to understand the social interaction behaviors between a vehicle and its surroundings while predicting its trajectory in an urban environment are critical for road safety in autonomous driving. Social interactions are hard to explain because of their uncertainty. In recent years, neural network-based methods have been widely used for trajectory prediction and have been shown to outperform hand-crafted methods. However, these methods suffer from their lack of interpretability. In order to overcome this limitation, we combine the interpretability of a discrete choice model with the high accuracy of a neural network-based model for the task of vehicle trajectory prediction in an interactive environment. We implement and evaluate our model using the INTERACTION dataset and demonstrate the effectiveness of our proposed architecture to explain its predictions without compromising the accuracy. ## I Introduction Predicting the future motion of a dynamic agent in an interactive environment is crucial many fields and especially in autonomous driving. However, this task is challenging as it depends on various factors such as the agent's intention or the interaction with his surroundings. Because of these uncertainties, future motion of agents are inherently multimodal. To ensure safe predictions, the agent needs to take into account the dynamics of the surroundings and timely predict their motions in near future to avoid collisions. To address the task of forecasting vehicle motion, many studies use neural network-based model. One major drawback of these methods is the lack of interpretability. In fact, although data-driven approaches achieve outstanding performance in various tasks, it is hard to trust and interpret their predictions. For this reason, in recent years, developing models that can understand social interactions and forecast future trajectories has been an active and challenging area of research. Early works designed hand-crafted methods based upon domain knowledge to forecast dynamic agents trajectories, either with physics-based models such as Social Forces [1], or with pattern-based models such as discrete choice modelling (DCM) [2]. These models, based on domain knowledge allow their predictions to be interpretable. The nature of vehicle movement is highly connected to the motion of other road users around them. They alter their paths according to their interactions with neighbors. Thus, the concept of social interaction has been highly evaluated and discussed in the existed studies [3]. Our interest in this problem stems from the fact that while interaction modeling has been well-investigated in existing studies, it's hard to interpret the learned social interactions. In these previous studies, variables in models are designed to learn latent behavioral characteristics and with no expectation to have practical implications. For example, the pooling methods [3] directly aggregate hidden states of all neighbors in a neighborhood to learn the connections between people. Thus, it's hard to understand what kind of social interactions is going on, how it varies among moving pedestrians and how it affects the future trajectories. Attention mechanism [4] can show the interests of agents in each neighbor by observing the learned distribution of attention, thus we know which agent have the greatest influence on agent. 
However, we still can't get a concrete pattern of the social interaction. Therefore, these neural network-based models suffer from the lack of interpretability regarding the model's decision-making process. To address these limitations, we propose to combine an interpretable discrete choice model with a neural network for the task of vehicle trajectory prediction. Our approach presents a way to easily validate NN models in safety critical applications, by using the interpretable pattern-based rules from the DCM. We conduct extensive experiments on the real-world INTERACTION dataset and we demonstrate the effectiveness of our method, while at the same time providing a rationale behind high-level decisions, an essential component required for safety-critical applications like autonomous systems. We also conduct a comparative study between two discrete choice models. ## II Related Work ### _Knowledge-based Models_ Early works address trajectory prediction problem using of knowledge-based methods. [5] use Kalman filter to predict vehicle future trajectory. Discrete choice modelling (DCM) uses a grid for selecting the next action relative to each individual. DCMs have been used to predict pedestrian's trajectories [6], and also for many applications in various fields such as facial expression recognition [7]. These knowledge-based methods allow interpretable outputs, but they usually fail to capture the complexity of agent-agent interactions and agent-scene interactions. Therefore, they have low prediction accuracy when predicting trajectories. ### _Data-driven Models_ In order to solve the low accuracy problem of knowledge-based models, in recent years, many studies tackle the task of motion prediction using neural network models [8, 9]. [3] introduced the social LSTM for pedestrian trajectory prediction. They encode the motion of each agent using an LSTM. Then, they extract the interactions between agents by sharing the hidden states between all the LSTMs corresponding to a set of neighboring pedestrians. MHA JAM [8] applies multi-head attention by considering a joint representation of the static scene and surrounding agents. The authors use each attention head to generate a distinct future trajectory to address multimodality of future trajectories. However, these data-driven methods lack the ability to output predictions that can be explained. ### _Interpretable Trajectory Prediction_ To adress the lack of interpretability in neural network-based models, recent studies focus on adding expert knowledge to deep learning models for trajectory prediction. Neumeier et al. [10] use an autoencoder where the decoder contains expert knowledge to produce an interpretable latent space in a vehicle trajectory prediction model, in a highway environment. Another way to encourage interpretability in trajectory prediction architectures is through discrete modes. For example, Brewitt et al. [11] propose a Goal Recognition method by Interpretable Trees (GRIT) where the "goal" is defined as many kinds of behavioral intentions, such as "straight-on", "turn left", "u-turn", and "stop", etc. This aims the goal recognition to be interpretable by humans. Kothari et al. [12] learn a probability distribution over possibilities in an interpretable discrete choice model for the task of pedestrian trajectory prediction. We use a similar approach for the task of vehicle trajectory prediction in an urban environment. 
However, unlike [12], we first predict the goal and then the whole trajectory for a prediction horizon greater than 1 second. To the best of our knowledge, we are the first to use a DCM to help model the behavior of vehicles in their interactions with their surroundings. In this paper we also consider and compare two types of discrete choice models describing the behavior of vehicles. ## III Method ### _Problem definition_ The goal is to predict the future trajectories of a target agent \(T:\hat{Y_{T}}=(\hat{x}_{T}^{t},\hat{y}_{T}^{t})\) from time \(t=t_{obs}+1\) to \(t=t_{f}\). We have as input of our model the track history of the target agent and the \(n\) neighboring agents in a scene defined as \(\textbf{X}=[X_{1},X_{2},...,X_{n}]\). Each agent \(i\) is represented by a sequence of its states, from time \(t=1\) to \(t=t_{obs}\). Each state is composed of a sequence of the agent relative coordinates \(x_{i}^{t}\) and \(y_{i}^{t}\), velocity \(v_{i}^{t}\), acceleration \(a_{i}^{t}\), heading \(\theta_{i}^{t}\). \[X_{i}^{t}=(x_{i}^{t},y_{i}^{t},v_{i}^{t},a_{i}^{t},\theta_{i}^{t}) \tag{1}\] The positions of each agent \(i\) are expressed in a frame where the origin is the position of the target agent at \(t_{obs}\). The y-axis is oriented toward the target agent's direction of motion and x-axis points to the direction perpendicular to it. ### _Discrete Choice Model_ Discrete choice models or DCMs are hand-crafted models used to explain or predict a choice from a set of alternatives \(K\) made by a decision-maker. DCMs are knowledge based models that have a high interpretability. However, despite having interpretable outputs, these models suffer from low prediction accuracy. For that reason, [12] proposed a model combining the high interpretability of the DCMs and the high accuracy of the neural network-based model to predict pedestrian's trajectories. In this paper, we present an architecture that can model the interactions between vehicles and their surroundings. We use the Random Utility Maximization (RUM) theory [13] that postulates that the decision-maker aims at maximizing the utility relative to their choice. The utility that an agent \(i\) chooses an alternative \(k\), is given as : \[U_{ik}=\sum_{d}\beta_{d}b_{dik}+\epsilon_{ik}, \tag{2}\] where \(\beta\) are the parameters associated with the explanatory variables \(b\) that describe the observed attributes of the choice alternative. We assume that the random terms \(\epsilon_{ik}\) are independently and identically distributed (i.i.d.) follow an Extreme Value Type I distribution with location parameter zero and the scale parameter 1. In our case, we propose and compare two utility functions \(u_{k}\) for an alternative \(k\). These functions are defined and explained in details in Section IV-C. The alternative \(k\) corresponds to the target agent's goal at timestep \(t_{f}\), extracted from a radial grid, similar to [12]. ### _Neural Networks Model_ For a target agent \(T\) at time \(t\), \(X_{T}^{t}\) is embedded using a fully connected layer to a vector \(e_{i}^{t}\) and encoded using an LSTM encoder, \[h_{i}^{t}=LSTM(h_{i}^{t-1},e_{i}^{t};W_{enc}), \tag{3}\] \(W_{enc}\) are the weights to be learned. The weights are shared between all agents in the scene. Then we build a social tensor similar to [8]. We define the interaction space of a target vehicle \(T\) as the area centered on its position at \(t_{obs}\) and oriented in its direction of motion. We divide this interaction space into a spatial grid of size \((M,N)\). 
The trajectory encoder states of the surrounding agents \(h_{i}^{t_{obs}}\) are placed at their corresponding positions in the 2D spatial grid, giving us a tensor \(F_{s}\) of size \((M,N,C_{h})\), where \(C_{h}\) is the size of the trajectory encoder state. We use the multi-head attention mechanism [14] to model the social interactions, where the target vehicle \(h_{T}^{t_{obs}}\) is processed by a fully connected layer to give the query and the social tensor is processed by \(1\times 1\) convolutional layer to give the keys and the values. We consider \(K\) attention heads where \(K\) attention heads are specialized to the \(K\) potential goals. For each attention head, we concatenate the output of the multi-head attention module \(A_{k}\) with the target vehicle trajectory encoder state \(h_{T}^{t_{obs}}\) to give a context representation \(z_{k}\) for \(k=1,...K\). \[z_{k}=Concat(h_{T}^{t_{obs}},A_{k}) \tag{4}\] In order to help the knowledge-based model DCM capture the long term dependencies and the complex interactions, we use the Learning Multinomial Logit (L-MNL) [15] framework. The goal selection probabilities is defined as : \[\pi(a_{k}|\textbf{X})=\frac{e^{s_{k}(\textbf{X})}}{\sum_{j\in K}e^{s_{j}( \textbf{X})}}, \tag{5}\] where \[s_{k}(\textbf{X})=u_{k}(\textbf{X})+z_{k}(\textbf{X}), \tag{6}\] where \(s_{k}(\textbf{X})\) represents the goal function containing the NN encoded terms, \(z_{k}(\textbf{X})\), as well as utility function \(u_{k}(\textbf{X})\), following the L-MNL framework. We consider \(L\) attention heads, for each attention head, we concatenate the output of the multi-head attention module \(A_{l}\) with the target vehicle trajectory encoder state \(h_{T}^{t_{obs}}\) to give a context representation \(c_{l}\) for \(l=1,...L\). \[c_{l}=Concat(h_{T}^{t_{obs}},A_{l}) \tag{7}\] We select the \(L\) best scored targets, and we concatenate their embedding to the output of the context representation \(c_{l}\) for \(l=1,...L\). Finally, the context vector \(c_{l}\) is fed to an LSTM Decoder which generates the predicted parameters of the distributions over the target vehicle's estimated future positions of each possible trajectory for next \(t_{f}\) time steps, \[\Theta_{l}^{t}=\Lambda(LSTM(h_{l}^{t-1},z_{l};W_{dec})), \tag{8}\] where \(W_{dec}\) are the weights to be learned, and \(\Lambda\) is a fully connected layer. Similar to [8], we also output the probability \(P_{l}\) associated with each mixture component. ### _Loss function_ Our proposed model (DCM-MHA-LSTM) outputs the means and variances \(\Theta_{l}^{t}=(\mu_{l}^{t},\Sigma_{l}^{t})\) of the Gaussian distributions for each mixture component at each time step. The loss for training the model is composed of a regression loss \(L_{reg}\) and two classification losses \(L_{score}\) and \(L_{cls}\). \(L_{reg}\) is the negative log-likelihood (NLL) similar to the one used in [8] and given by : \[L_{reg}=-min_{l}\sum_{t=t_{obs}+1}^{t_{obs}+t_{f}}log(\mathcal{N}(y^{t}|\mu_{l }^{t};\Sigma_{l}^{t}))). \tag{9}\] \(L_{score}\) is a cross entropy loss defined as : \[L_{score}=-\sum_{l=1}^{L}\delta_{l*}(l)log(P_{l}), \tag{10}\] where \(\delta\) is a function equal to 1 if \(l=l*\) and 0 otherwise. 
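To make the goal-scoring step of Eqs. (5)-(6) concrete, the following minimal NumPy sketch combines a hand-crafted utility term \(u_{k}\) with a learned context score \(z_{k}\) and normalises over the \(K\) candidate goals. The variable names, shapes, and numbers are our own illustrative choices, the \(\beta\) weights and scores are placeholders, and the actual model is trained end-to-end with the loss whose remaining classification term \(L_{cls}\) is given just below.

```python
import numpy as np

def softmax(s):
    s = s - s.max()                      # shift for numerical stability
    e = np.exp(s)
    return e / e.sum()

def goal_probabilities(betas, features, nn_scores):
    """betas:     (D,)   interpretable DCM coefficients (e.g. keep-direction, occupancy)
    features:  (K, D) explanatory variables b_{dik} for each of the K candidate goals
    nn_scores: (K,)   learned context terms z_k from the attention/LSTM encoder
    Returns pi: (K,)  goal-selection probabilities as in Eq. (5)."""
    utilities = features @ betas         # u_k(X) of Eq. (2), one value per goal
    scores    = utilities + nn_scores    # s_k(X) = u_k(X) + z_k(X), Eq. (6)
    return softmax(scores)

# Toy example with K = 15 goals and D = 3 hand-crafted functions (hypothetical numbers).
rng       = np.random.default_rng(0)
betas     = np.array([-1.2, 0.4, -0.8])
features  = rng.normal(size=(15, 3))
nn_scores = rng.normal(size=15)
pi   = goal_probabilities(betas, features, nn_scores)
goal = int(np.argmax(pi))                # most likely discretised goal
```

Because the utility part of the score is a linear function of interpretable quantities, the estimated \(\beta\) coefficients can be read off directly after training, which is the interpretability mechanism exploited in the discussion of the estimated parameters later on.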
\(L_{cls}\) is also a cross entropy loss defined as : \[L_{cls}=-\sum_{k=1}^{K}\delta_{k*}(k)log(p_{k}), \tag{11}\] where \(p_{k}\) is the probability associated with the potential goal \(k\), \(\delta\) is a function equal to 1 if \(k=k*\) and 0 otherwise, \(k_{*}^{t}\) is the index of the potential goal most closely matching the endpoint of the ground truth trajectory. Finally, the loss is given by : \[L=L_{cls}+L_{reg}+L_{score}, \tag{12}\] ## IV Experiments ### _Dataset_ We evaluate our model on the INTERACTION [16] dataset. The INTERACTION dataset provides a large set of challenging intersection, roundabout, and highway merge scenarios. In total, the data is collected from 11 locations using drones or fixed cameras. Fig. 1: Architecture of the compared methods for trajectory prediction. The models take as inputs the past trajectories of the agents in the scene (MHA-LSTM), the target coordinates sampled from a radial grid (G-MHA-LSTM), as well as the input of the DCM model (DCM-MHA-LSTM). They output \(L\) trajectories. For more details see section III. ### _Compared Methods_ The experiment includes a comparison of different models: * **I) MHA-LSTM [4]:** This model only takes as inputs the past trajectories of the agents in the scene and outputs \(L\) trajectories with their associated probabilities (see the architecture in the red rectangle in Fig. 1). We use \(L=6\) attention heads. * **II) G-MHA-LSTM [17]:** We add to the previous model a radial grid representation from which we extract potential goals. We predict the goal and then the trajectories conditioned on the predicted goal. (see the architecture in the orange rectangle in Fig. 1). * **III) DCM-MHA-LSTM :** To predict the goal of the target agent, we combine the DCM and the neural network using the LMNL framework [15]. This model is described in Section III and the architecture is illustrated in the blue rectangle in Fig. 1. * **IV) ODCM-MHA-LSTM :** This model only uses the DCM to predict the goal of the target agent. **Goal set representations :** We also compare different types of radial grids. For the methods II), III) and IV), we compare our results for two types of radial grid : a **dynamic** grid (d) and a **fixed** one (f). Similar to [12], we build the dynamic grid by considering the target agent's current velocity \(v_{T}^{t_{obs}}\). If \(v_{T}^{t_{obs}}=0\), we replace it with an arbitrary value equals to \(0.5\ m.s^{-1}\). The fixed grid is built using the value \(v=5.83m.s^{-1}\), which corresponds to the mean of the velocities in the INTERACTION training set. ### _Compared DCMs_ We compare two types of DCMs for modelling the behavior of vehicle motion. For our case, the functions modelling vehicle motion phenomenon which we consider for goal selection in this work are: 1. _occupancy:_ directions containing neighbours in the vicinity are less desirable. 2. _keep direction:_ vehicles tend to maintain the same direction of motion. 3. _collision avoidance:_ when a neighbour vehicle's trajectory is head-on towards a potential goal, this goal becomes less desirable due to the chance of a collision. * **1) DCM 1 :** For the first DCM configuration, we use a utility function defined as: \[u_{k}(\textbf{X})=\beta_{dir}dir_{k}+\beta_{occ}occ_{k}+\beta_{col}col_{k}\] (13) Where the functions \(dir_{k}\), \(occ_{k}\), and \(col_{k}\) correspond respectively to _keep direction_, _occupancy_ and _collision avoidance_. These functions are defined in [2] and [6]. 
* **2) DCM 2 :** For the second DCM, the utility function is defined as : \[u_{k}(\textbf{X})=\beta_{dir}dir_{k}+\beta_{occup}occup_{k}\] (14) Where the function \(dir_{k}\) is the same as in (IV-C). For \(occup_{k}\), we use the same mathematical formula as the occupancy function in (IV-C), however, we don't consider the position of the neighbors at time \(t_{obs}\). Instead, we consider their predicted position at time \(t_{obs}+t_{f}\) using a Constant velocity model. We assume that before predicting his goal, the target agent first predicts the future positions of his surroundings according to their headings and current velocitites, and then avoids the zones that are expected to be crowded. While training this model, we calculate the \(occup_{k}\) function using the growth truth positions of the neighbors. ### _Implementation details_ We use \(K=15\) number of potential goals. Similar to [8], our interaction space is 40 m ahead of the target vehicle, 10 m behind and 25 m on each side. We consider the neighbors situated in the interaction space at \(t_{obs}\). We also take into account the neighbors that are susceptible of being in this space from time \(t_{obs}\) to \(t_{f}\). To do so, we predict the trajectories of all of the neighbors in the scene using a Constant Velocity model and if they have a predicted position in the interaction space, we consider them in our model. We argue that this representation allows to consider neighbors that are not situated in the grid at \(t_{obs}\) but that can appear in the grid from time \(t=t_{obs}+1\) to \(t=t_{f}\). without having to create a bigger interaction space which can be more computationally expensive. We use \(L+K=6+15\) parallel attention operations. We use a batch size of 64 and Adam optimizer. The model is implemented using PyTorch [18]. ## V Results ### _Evaluation metrics_ Our method for trajectory forecasting is evaluated with the following three error metrics: * **Minimum Average Displacement Error over k (\(minADE_{k}\))** : The average of pointwise L2 distances between the predicted trajectory and ground truth over the k most likely predictions. * **Minimum Final Displacement Error over k (\(minFDE_{k}\))** : The final displacement error (FDE) is the L2 distance between the final points of the prediction and ground truth. We take the minimum FDE over the k most likely predictions and average over all agents. * Groundtruth collision (Col-II)**[19]: This metric calculates the percentage of collision between the primary vehicle's prediction and the neighbors in the groundtruth future scene. ### _Comparison of Methods_ We compare the methods described in Section IV-B. The results are reported in Table I. DCM1 and DCM2 refers to the first (resp the second) type of DCM described in IV-C. (f) and (d) correspond to respectively, the fixed and the dynamic radial grid representation for the extraction of potential goals. We can see that adding the DCM module decrease the percentage of collisions. We can see that the models using a fixed grid perform slightly better than when using a dynamic one. Thus, we can conclude that adding the information about the velocity doesn't improve the results. For future work, we can try multiple grid configurations, which could potentially improve the results. We can see that using the DCM alone gives worst results as this is due to the fact that the DCM without the NN is not able to predict accurately the goal of the target agent, and therefore, is not able to predict accurate trajectories. 
We can see that when using the second type of DCM IV-C, the results of ADE/FDE are similar to the ones using the first type of DCM, however, we notice that the percentage of collisions is lower, indicating that in this case, the utility function is more appropriate to avoid collisions. ### _Comparison with the state-of-the-art_ We compare our approach with the state-of-the-art using the INTERACTION dataset. Our proposed model does not include any map information. In fact, our aim in this paper, is to study the social interactions between the target agent and his surroundings. Therefore, in Table II, are reported the results where we compare our approach with methods that do not use any map information as well such as DESIRE [20] and Multipath [21]. We can see that our method outperforms these two models. We then compare our approach against state-of-the-art methods that use map information SAN [22], TNT [9], ITRA [23] and ReCoG [24]. The results are reported in Table. III. The main scope is not to compare the approach to the currently best performing trajectory prediction networks. The scope here is to introduce a discrete choice model that provides interpretability, show its feasibility and evaluate the potential of its prediction performance. Nonetheless, our method still achieves competitive results against these methods. Our model is able to perform well while, unlike any of these methods, providing interpretability. ### _Interpretable outputs_ #### Iv-D1 Estimation of \(\beta\) We study the coefficients \(\beta\) of the utility function of our DCM obtained by training our model. The estimated parameters of both of the utility functions Eq. IV-C and Eq.IV-C are reported in Table. IV for a fixed radial grid. We can see that the all of the coefficients \(\beta_{dir}\) are negative. This means that the utility of an alternative is going to decrease when its angular position is more decentralised with respect to the current direction, respectively. This is coherent as vehicles tend to keep their direction. The \(\beta_{occ}\) parameters have a positive sign, implying that vehicles tend to prefer nearby spatial zones crowded by agents. This result is not coherent as we would expect the contrary. However, this result can be interpreted as in situations where a lot of agents are moving toward the same destination. The \(\beta_{cd}\) parameters have a negative sign. This means that vehicles tend to avoid zones where there are potential colliders. We can see that the coefficients \(\beta_{occup}\) and \(\beta_{dir}\) are negatives. Fig. 2: Qualitative illustration of the ability of our architecture to output high-level interpretable goals. The potential goals are shown in black and the predicted goal in shown in magenta. The ground truth trajectory is in red and the predicted trajectory is in cyan. Current neighbour positions are shown in blue and their past trajectories are shown in green. In the first row, the decision of the model is influenced by the neural-network (NN). In the second row, the decision of the model is strongly influenced by the keep direction map of the DCM. (Un)favourable potential goals are shown in green (red). This is coherent as vehicles tend to keep their direction and avoid occupied zones. #### V-C2 Interpretability of the Goals We demonstrate the ability of our network to output interpretable goals in Fig. 2. 
In addition to the predictions map, we illustrate the activation maps of the neural network (NN), the overall DCM, and the individual DCM functions for the first DCM configuration described in Section IV-C. In the first row of Fig. 2, the NN map has the strongest influence on the final decision. In the second row, however, the influence of the NN map is weaker and the decision is driven mostly by the DCM map, in particular by the keep-direction and collision-avoidance functions. The collision-avoidance map, especially, counteracts the influence of the NN map and prevents the model from making the wrong decision. We therefore observe that the DCM maps work well in conjunction with the NN map to provide interpretable outputs. ## VI Conclusion and future work In this paper we proposed an interpretable goal-based method for vehicle trajectory prediction. In this approach, the discretized goals are selected using both interpretable knowledge-based functions and neural network predictions from the scene. The method combines the high accuracy of neural networks with the ability to understand which vehicle motion rules contribute to the prediction of the goal. Through experiments on the INTERACTION dataset, we highlight both the interpretability and the accuracy of the predictions output by our model. As future work, we plan to add lane information to the radial grid in order to make our model more scene-compliant. Moreover, we plan to explore other types of utility functions for the DCM to better model the behaviour of vehicles in an interactive environment.
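As a compact reference for the evaluation protocol of Section V-A, the sketch below computes \(minADE_{k}\) and \(minFDE_{k}\) from a set of predicted trajectories and the ground truth. It is an illustrative sketch only: array shapes, names, and the toy numbers are ours, the predictions are assumed sorted by likelihood, and the ground-truth collision metric Col-II is omitted because it requires the full scene.

```python
import numpy as np

def min_ade_fde(preds, gt, k):
    """preds: (L, T, 2) predicted trajectories (sorted by likelihood) over T future steps.
    gt:    (T, 2)    ground-truth future trajectory.
    k:     number of most likely predictions kept.
    Returns (minADE_k, minFDE_k)."""
    topk  = preds[:k]                                   # (k, T, 2)
    dists = np.linalg.norm(topk - gt[None], axis=-1)    # (k, T) pointwise L2 errors
    ade   = dists.mean(axis=1).min()                    # best average displacement error
    fde   = dists[:, -1].min()                          # best final displacement error
    return ade, fde

# Toy usage with L = 6 hypothetical modes over a 3 s horizon sampled at 10 Hz.
rng   = np.random.default_rng(1)
gt    = np.cumsum(rng.normal(scale=0.5, size=(30, 2)), axis=0)
preds = gt[None] + rng.normal(scale=0.3, size=(6, 30, 2))
ade6, fde6 = min_ade_fde(preds, gt, k=6)
```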
2310.11233
Nearly half-flat $\rm{SU}(3)$-structures on $S^3\times S^3$
We study the $\rm{SU}(3)$-structure induced on an oriented hypersurface of a 7-dimensional manifold with a nearly parallel $\rm{G}_2$-structure. We call such $\rm{SU}(3)$-structures nearly half-flat. We characterise the left-invariant nearly half-flat structures on $S^3\times S^3$. This characterisation then helps us to systematically analyse nearly parallel $\rm{G}_2$-structures on an interval times $S^3\times S^3$.
Ragini Singhal
2023-10-17T13:03:50Z
http://arxiv.org/abs/2310.11233v2
# Nearly half-flat \(\mathrm{SU}(3)\)-structures on \(S^{3}\times S^{3}\) # Nearly half-flat \(\mathrm{SU}(3)\)-structures on \(S^{3}\times S^{3}\) Ragini Singhal **Abstract** We study the \(\mathrm{SU}(3)\)-structure induced on an oriented hypersurface of a \(7\)-dimensional manifold with a nearly parallel \(\mathrm{G}_{2}\)-structure. We call such \(\mathrm{SU}(3)\)-structures _nearly half-flat_. We characterise the left invariant nearly half-flat structures on \(S^{3}\times S^{3}\). This characterisation then help us to systematically analyse nearly parallel \(\mathrm{G}_{2}\)-structures on an interval times \(S^{3}\times S^{3}\). ###### Contents * 1 Introduction * 2 \(\mathrm{SU}(3)\)- and \(\mathrm{G}_{2}\)-structures * 3 Evolution equations from \(\mathrm{SU}(3)\) to \(\mathrm{G}_{2}\) * 4 Parameterising invariant nearly half-flat structures on \(S^{3}\times S^{3}\) * 5 The \(S^{3}\times S^{3}\) evolution equations ## 1 Introduction A \(\mathrm{G}_{2}\)-structure on a \(7\)-dimensional manifold \(M\) is defined by a non-degenarate \(3\)-form \(\varphi\) that induces a metric \(g_{\varphi}\), a cross-product \(\times_{\varphi}\), an orientation \(\mathrm{vol}_{\varphi}\) and thus a Hodge star \(*_{\varphi}\) on \(M\) (see [1]). The Riemannian manifold \(M\) with a \(\mathrm{G}_{2}\)-structure \(\varphi\) is called nearly \(\mathrm{G}_{2}\) if \(\varphi\) is a nearly parallel \(\mathrm{G}_{2}\)-structure that is, for some \(\lambda\neq 0\) \[d\varphi=\lambda*_{\varphi}\varphi. \tag{1.1}\] They were described as manifolds with weak holonomy \(\mathrm{G}_{2}\) by Gray in [1]. Manifolds with nearly parallel \(\mathrm{G}_{2}\)-structure have been studied for various reasons in mathematics as well as physics: in addition to the papers described in the introduction, see also [1, 2, 1, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]. One of the most interesting facts about nearly \(\mathrm{G}_{2}\)-manifolds is they are positive Einstein that is the Ricci curvature tensor is proportional to its metric tensor and the Ricci scalar is positive. In [13] the author classified the homogeneous Einstein \(7\)-manifolds with positive scalar curvature. Finding an inhomogeneous metric that satisfies this condition is quite challenging since the field equations for an appropriate metric ansatz are highly nonlinear partial differential equations. The cones over nearly parallel \({\rm G}_{2}\)-manifolds have holonomy contained in \({\rm Spin}(7)\) and can be \({\rm Spin}(7)\), \({\rm SU}(4)\), or \({\rm Sp}(2)\), when the dimension of the space of Killing spinors nearly parallel \({\rm G}_{2}\)-structure on the link is 1,2, or 3 respectively [10]. This property makes these spaces particularly important in the construction and understanding of manifolds with torsion-free \({\rm Spin}(7)\)-structures. In physics the interest in these manifolds arise from the fact that they appear as the internal manifold when one allows for a curved four-dimensional spacetime whilst still preserving maximal symmetry and supersymmetry in four dimensions [14]. Solutions of this type were extensively considered after the discovery of D = 11 supergravity and are known as Freund-Rubin spontaneous compactifications [12, 13, 15]. See [16] for a more detailed review. Thus finding new examples of nearly \({\rm G}_{2}\)-manifolds is an important problem. So far the only known examples of nearly \({\rm G}_{2}\)-manifolds are homogeneous [10] or they come from 3-Sasakian geometry [1]. 
In [10] the authors classified all the possible simply-connected, complete homogeneous nearly parallel \({\rm G}_{2}\)-manifolds and showed them to be isomorphic to one of : \((S^{7},g_{round})={\rm Spin}(7)/{\rm G}_{2}\), \((S^{7},\,g_{squashed})=\frac{{\rm Sp}(2)\times{\rm Sp}(1)}{{\rm Sp}(1)\times{ \rm Sp}(1)}\), \({\rm SO}(5)/{\rm SO}(3)\), \(M(3,2)=\frac{{\rm SU}(3)\times{\rm SU}(2)}{{\rm U}(1)\times{\rm SU}(2)}\), \(N(k,l)={\rm SU}(3)/S^{1}_{k,l}\ k,l\in\mathbb{Z}\), \(Q(1,1,1)={\rm SU}(2)^{3}/{\rm U}(1)^{2}\). Thus, in order to find new examples of nearly parallel \({\rm G}_{2}\)-manifolds one has to look for non-homogeneous examples. As a first step towards this goal, it would be natural to look for cohomogeneity-one examples of nearly parallel \({\rm G}_{2}\)-manifolds. So we are looking for a nearly parallel \({\rm G}_{2}\)-structure on manifold \((M,\varphi)\) that has a Lie group \(G\) acting on \(M\) such that it preserves \(\varphi\) and the generic orbits have dimension \(7-1=6\). In [11] the authors classified the connected Lie groups \(G\) that can act as cohomogeneity-one on nearly parallel \({\rm G}_{2}\)-manifolds. In the case when \(G\) is simple, they classified all the complete nearly parallel \({\rm G}_{2}\)-manifolds but unfortunately no new solutions were found in this case. According to the classification in [11] one of the possible non-simple Lie group that can act by cohomogeneity-one on nearly parallel \({\rm G}_{2}\)-manifolds is \({\rm SU}(2)^{2}\) up to finite quotients. This is equivalent to saying that the generic orbit of a complete cohomogeneity-one nearly \({\rm G}_{2}\)-manifold is isomorphic to \(S^{3}\times S^{3}\) or its quotient. In [12] the authors showed that the \({\rm G}_{2}\)-structure on 7-dimensional manifolds of type \(F_{1}\times F_{2}\times\mathbb{R}\) where \(F_{i}\) are complete, connected 3-folds with some special warped metrics is parallel or nearly-parallel if \(F_{i}\) are both isomorphic to \(S^{3}\). Thus, we study the \({\rm SU}(3)\)-structures on \(S^{3}\times S^{3}\) such that the induced \({\rm G}_{2}\)-structure on \(S^{3}\times S^{3}\times I\) is nearly parallel. In [12] the authors constructed the first complete examples of cohomogeneity-one nearly Kahler solution. Hopefully this characterization can also be used to achieved something similar in the nearly parallel \({\rm G}_{2}\)-structure case. The study of the hypersurfaces of \(\mathbb{R}^{7}\) with it's associated \({\rm G}_{2}\) cross product was initiated by Calabi [18], Gray [15] and later extended to manifolds with \({\rm G}_{2}\)-structures [11, 12, 13, 14, 15]. The Weingarten map of an oriented hypersurface inside a manifold with a \({\rm G}_{2}\)-structure can be completely described in terms of the intrinsic torsion forms of the hypersurface. This relation was used in [18] and [14] to describe the \({\rm SU}(3)\)-structure on an oriented hypersurface of a manifold with a \({\rm G}_{2}\)-structure. In this article we describe the invariant \({\rm SU}(3)\)-structure on the six-dimensional orientted hypersurface \(M^{6}\) of a nearly parallel \({\rm G}_{2}\)-manifold \(N^{7}\). The \({\rm SU}(3)\)-structure on a 6-dimensional manifold is defined by a 2-form \(\omega\) and a 3-form \(\gamma\). The nearly \({\rm G}_{2}\)-structure on \(N\) imposes some conditions on the intrinsic torsion of the \({\rm SU}(3)\)-structure on \(M\) and the \({\rm SU}(3)\)-structure thus obtained is known as _nearly half-flat_. 
The term nearly half-flat originates from the analogous situation where the \({\rm SU}(3)\)-structure on the hypersurface of a torsion-free \(\mathrm{G}_{2}\)-manifold is referred as _half-flat_. Nearly half-flat \(\mathrm{SU}(3)\)-structures were first introduced in [11] in the context of evolution equations on six-manifolds \(M\) leading to nearly parallel \(\mathrm{G}_{2}\)-structures on the product of \(M\) and an interval. In [10, 11, 12] the authors showed that one can construct nearly parallel \(\mathrm{G}_{2}\)-structures by lifting the nearly half-flat structures. This result is analogous to Hitchin's result in [13] that half-flat \(\mathrm{SU}(3)\)-structures on a six-dimensional manifold \(M\) can be lifted to parallel \(\mathrm{G}_{2}\)-structure on the product \(M\times\mathbb{R}\). In fact the nearly half-flat is a slight generalisation of the half-flat where the 3-form \(\gamma\) is not closed. Moreover if we impose \(\gamma\) to be closed, the nearly half-flat structure becomes half-flat. This is the same as setting \(\lambda=0\) in (1.1). In this article we concentrate on the case when the hypersurface of the nearly parallel \(\mathrm{G}_{2}\)-manifold is \(S^{3}\times S^{3}\) or it's finite quotient and describe the invariant nearly half-flat \(\mathrm{SU}(3)\)-structures on \(S^{3}\times S^{3}\). In [13] the authors describe the left-invariant half-flat \(\mathrm{SU}(3)\)-structures on \(S^{3}\times S^{3}\) using the representation theory of \(\mathrm{SO}(4)\) and matrix algebra. In the present article we follow a similar approach and show that we can describe any invariant nearly half-flat \(\mathrm{SU}(3)\)-structure in terms of two \(3\times 3\) matrices and two real constants satisfying some normalization and commutativity relations (see Theorem 4.1). In SS2 we give a brief introduction of \(\mathrm{SU}(3)\)-structures on 6-dimensional manifolds and the \(\mathrm{G}_{2}\)-structure on a 7-dimensional manifolds. We define the respective intrinsic torsion forms and alienate the classes of our particular interests. In SS3 we derive the evolution equations that describe the nearly \(\mathrm{G}_{2}\)-structure on \(M\times I\) evolving from the nearly half-flat \(\mathrm{SU}(3)\)-structure on \(M\). We then use the evolution equations to describe the nearly half-flat \(\mathrm{SU}(3)\)-structure in terms of the intrinsic torsion forms. In SS4 we specialise the theory curated in the previous sections to \(S^{3}\times S^{3}\) which makes the heart of this article. Matrix algebra is used to first parameterize invariant \(\mathrm{SU}(3)\)-structure on \(S^{3}\times S^{3}\) and then nearly half-flat \(\mathrm{SU}(3)\)-structures. Further we use this parameterization to describe the moduli space of invariant nearly half-flat \(\mathrm{SU}(3)\)-structures on \(S^{3}\times S^{3}\) in 4.2 and show that the moduli space is essentially a finite-dimensional symplectic quotient. This description is rather similar to half-flat case as described in [13]. The description of the invariant nearly half-flat \(\mathrm{SU}(3)\)-structure \((\omega,\gamma)\) in term of elementary matrices makes it rather elegant to construct \(\mathrm{SU}(3)\)-structures of specific torsion classes, for instance the \(\mathrm{SU}(3)\)-structure is always in \(\mathcal{W}_{1}+\mathcal{W}_{3}\) if both matrices are scalar multiple of identity. 
We can now express the intrinsic torsion forms \(w_{1},w_{2},w_{3}\) in terms of the algebraic data which makes it easier to construct examples of specific torsion as we do in SS4. Using this terminology we are able to produce nearly half-flat structures with strictly positive scalar curvature as well as with zero scalar curvature. In SS5 we describe the evolution equations in the special case of \(S^{3}\times S^{3}\) in terms of the two \(3\times 3\) matrices and real functions. The equations we have are in contrast with the half-flat case as presented in [13] as they are no longer Painleve equations. In fact the functions \(a\) and \(b\) are no longer constants with respect to \(t\) as compared to the half-flat case (see (5.1)). We also represent some of the known examples of nearly parallel \(\mathrm{G}_{2}\)-structures obtained from the invariant nearly half-flat \(\mathrm{SU}(3)\)-structure on \(S^{3}\times S^{3}\) such as the homogeneous nearly parallel \(\mathrm{G}_{2}\)-structure on the Berger space \(\mathrm{SO}(5)/\mathrm{SO}(3)\) and the sine cone metric on \(S^{1}\times S^{3}\times S^{3}\). Both of the solutions we present here has extra \(Z_{2}^{2}\) and \(\mathrm{SU}(2)\)-symmetry respectively. The algebraic setup introduced makes the description of these known nearly parallel \(\mathrm{G}_{2}\)-structures much more elegant and efficient to use. **Acknowledgements.** The author would like to thank Simon Salamon and Thomas Madsen for enumerable suggestions on the project. The author would also like to thank Benoit Charbonneau for improving the manuscript, and Lorenzo Foscolo, Spiro Karigiannis and Shubham Dwivedi for helpful discussions. A special mention to Simons Collaboration on Special holonomy in geometry, analysis and physics as most of the work on this project was undertaken while the author was a Simons Collaboration postdoc at King's College London. ## 2 \(\mathrm{SU}(3)\)- and \(\mathrm{G}_{2}\)-structures We are interested in studying invariant \(\mathrm{SU}(3)\)-structures on \(S^{3}\times S^{3}\). An \(\mathrm{SU}(3)\)-structure on a \(6\)-dimensional manifold \(M\) is defined by a pair \(\omega,\Omega\) where \(\omega\) is a symplectic form and \(\Omega\) is a complex \((3,0)\) form which satisfies the normalizarion condition \[\Omega\wedge\bar{\Omega}=-\frac{4i}{3}\omega^{3}.\] An \(\mathrm{SU}(3)\) reduction defines a circle of real \(3\)-forms \(\Gamma:=\{\cos\theta\mathrm{Re}\Omega+\sin\theta\mathrm{Im}\Omega,\theta\in \mathbb{R}\}\), and any \(\gamma\in\Gamma\) along with the \(2\)-form \(\omega\) defines an almost complex structure \(J\), the metric \(g\), and the orientation \(\mathrm{vol}_{g}\). Note that an \(\mathrm{SU}(3)\)-structure on a \(6\)-dimensional manifold can be defiend by a pair \((\omega,\gamma)\) where \(\gamma\in\Gamma\). indeed \(\gamma\) can determine \(J\) as well \(J\gamma\) such that \(\gamma+iJ\gamma\) is a complex holomorphic volume form of type \((3,0)\). See [10] for more details. Using \(\omega\) one can define the symplectic Hodge star \(\star:\Omega^{r}M\to\Omega^{6-r}M\) via the relation \[\alpha\wedge\star\beta=\omega(\alpha,\beta)\frac{\omega^{3}}{6}. \tag{2.1}\] Using the above defined symplectic Hodge star for \(p\in M\) we have \(P_{\gamma}\in\mathrm{End}(\mathrm{T}_{\mathrm{p}}^{*}\mathrm{M})\) defined by \[P_{\gamma}\colon v\mapsto-\frac{1}{2}\star(\gamma\wedge\star( \gamma\wedge\alpha).\] Then the endomorphism \(J_{\gamma}=(\det P_{\gamma})^{-\frac{1}{6}}P_{\gamma}\) defines an almost complex structure on \(M\). 
We write \(J\) instead of \(J_{\gamma}\) when there is no scope for confusion. When \(M\) is positively oriented an almost complex structure \(J_{\gamma}\) on \(M\) can be defined using any \(\gamma\in\Gamma\) as described in [10]. We define \(K_{\gamma}\in\mathrm{End}(\mathrm{TM})\otimes\Omega^{6}\mathrm{M}\cong\mathrm{ End}(\mathrm{TM})\) by \[X\mapsto K(X):=(X\lrcorner\gamma)\wedge\gamma\in\Omega^{5}M\cong TM \otimes\Omega^{6}M. \tag{2.2}\] Then \(J_{\gamma}=6K_{\gamma}/\omega^{3}\). The natural action of the Lie group \(\mathrm{SU}(3)\) on the tangent space of a \(6\)-dimensional manifold induces the following decomposition on the space of differential forms \(\Omega^{p}\) into irreducible \(\mathrm{SU}(3)\)-representations \(\Omega^{p}_{k}\) with pointwise dimension \(k\) (see [1]): \[\begin{split}\Omega^{2}&=\Omega^{2}_{1}\oplus\Omega ^{2}_{6}\oplus\Omega^{2}_{8},\\ \Omega^{3}&=\Omega^{3}_{\mathrm{Re}}\oplus\Omega^{3} _{\mathrm{Im}}\oplus\Omega^{3}_{6}\oplus\Omega^{3}_{12},\end{split} \tag{2.3}\] where each summand can be described in terms of the \(\mathrm{SU}(3)\)-structure as follows, \[\Omega^{2}_{1}=\mathbb{R}\omega,\] \[\Omega^{2}_{6} =\{\star(\alpha\wedge\gamma)\ |\ \alpha\in\Omega^{1}\}=\{\beta \in\Omega^{2}\ |\ J\beta=-\beta\},\] \[\Omega^{2}_{8} =\{\beta\in\Omega^{2}\ |\ \beta\wedge\gamma=0,\star\beta=-\beta \wedge\omega\}\] \[=\{\beta\in\Omega^{2}\ |\ J\beta=\beta,\beta\wedge\omega^{2}=0\},\] and \[\Omega^{3}_{\mathrm{Re}} =\mathbb{R}\gamma,\quad\Omega^{3}_{\mathrm{Im}}=\mathbb{R}J\gamma,\] \[\Omega^{3}_{6} =\{\alpha\wedge\omega\ |\ \alpha\in\Omega^{1}\}=\{\xi\in\Omega^{3}\ |\ \star\xi=\xi\},\] \[\Omega^{3}_{12} =\{\xi\in\Omega^{3}\ |\ \xi\wedge\omega=0,\xi\wedge\gamma=0,\xi \wedge J\gamma=0\}.\] The space of \(1,6\)-forms is irreducible and we can describe the space of \(4,5\)-forms via the isomorphism described by the Hodge star operator \(\star\). Using the decomposition in (2.3) and the relations between \(\omega,\gamma,J\gamma\) one can define the torsion forms of an \(\mathrm{SU}(3)\)-structure \((\omega,\gamma)\) on \(M\) in terms of the derivaties of the forms \(\omega,\gamma,J\gamma\). Since \(\gamma\wedge J\gamma=2/3\omega^{3}\), then for some \(\tau_{0},\nu_{0}\in\Omega^{0},\tau_{1},\nu_{1}\in\Omega^{1},\tau_{2},\nu_{2} \in\Omega^{2}_{8}\), and \(\tau_{3}\in\Omega^{3}_{12}\) we have \[d\omega =\tau_{0}\gamma+\nu_{0}J\gamma+\tau_{1}\wedge\omega+\tau_{3},\] \[d\gamma =\frac{2\nu_{0}}{3}\omega^{2}+\nu_{1}\wedge\gamma+\nu_{2}\wedge\omega, \tag{2.4}\] \[dJ\gamma =-\frac{2\tau_{0}}{3}\omega^{2}+J\nu_{1}\wedge\gamma+\tau_{2} \wedge\omega.\] The forms \(\nu_{i},\tau_{i}\) are known as the intrinsic torsion forms of the \(\mathrm{SU}(3)\)-structures ([1],[2],[3]) and defined the torsion \(T\) of the \(\mathrm{SU}(3)\)-structure which measures the failure of \(\mathrm{Hol}(\nabla^{\mathrm{LC}})\) to reduce to \(\mathrm{SU}(3)\). The torsion \(T\) of a \(G\)-structure lives in the space \(T^{*}M\otimes\mathfrak{g}^{\perp}\) which for \(G=\mathrm{SU}(3)\) turns out to be a \(42\)-dimensional space \[T^{*}M\otimes\mathfrak{su}(3)^{\perp}\cong\mathcal{W}^{\pm}_{1} \oplus\mathcal{W}^{\pm}_{2}\oplus\mathcal{W}_{3}\oplus\mathcal{W}_{4}\oplus \mathcal{W}_{5}.\] The \(\mathrm{SU}(3)\)-structure is torsion-free or Calabi-Yau if and only if \(T=0\) and is nearly Kahler if \(T\in\mathcal{W}^{-}_{1}\). For a more thorough description of these torsion classes see [11], [1]. 
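For orientation, the flat model makes the conventions above explicit; this worked example is added for illustration and is not part of the original text. On \(\mathbb{R}^{6}\) with an orthonormal coframe \(e^{1},\dots,e^{6}\), take
\[
\omega=e^{12}+e^{34}+e^{56},\qquad \Omega=(e^{1}+ie^{2})\wedge(e^{3}+ie^{4})\wedge(e^{5}+ie^{6}),
\]
so that
\[
\gamma=\mathrm{Re}\,\Omega=e^{135}-e^{146}-e^{236}-e^{245},\qquad J\gamma=\mathrm{Im}\,\Omega=e^{136}+e^{145}+e^{235}-e^{246},
\]
\[
\gamma\wedge J\gamma=4\,e^{123456}=\tfrac{2}{3}\,\omega^{3},\qquad \Omega\wedge\bar{\Omega}=-2i\,\gamma\wedge J\gamma=-\tfrac{4i}{3}\,\omega^{3},
\]
which verifies the normalization condition. Since all three forms are constant, \(d\omega=d\gamma=dJ\gamma=0\), so every torsion form in (2.4) vanishes and this \(\mathrm{SU}(3)\)-structure is torsion-free (Calabi-Yau).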
In this article we are interested in a special torsion class of \(\mathrm{SU}(3)\)-structures known as nearly half-flat which arise when the \(\mathrm{G}_{2}\)-structure on \(M\times I\) is a nearly parallel \(\mathrm{G}_{2}\)-structure. The torsion for a nearly half-flat structure lies is \(\mathcal{W}_{1}\oplus\mathcal{W}^{-}_{2}\oplus\mathcal{W}_{3}\). We describe this torsion class in detail in the following section (see Table 3). A \(\mathrm{G}_{2}\)-structure, on the other hand is defined by the reduction of the structure group of the frame bundle of a \(7\)-dimensional manifold \(N\) to the Lie group \(\mathrm{G}_{2}\subset\mathrm{SO}(7)\). The existence of a \(\mathrm{G}_{2}\)-structure on \(N\) is characterized by a positive \(3\)-form \(\varphi\) preserved by the action of \(\mathrm{G}_{2}\) on \(\Omega^{3}(N)\). Such a structure exists if and only if the manifold is orientable and spinnable, conditions which are respectively equivalent to the vanishing of the first and second Stiefel-Whitney classes. The \(3\)-form \(\varphi\) nonlinearly induces a Riemannian metric \(g_{\varphi}\) and an orientation \(\mathrm{vol}_{\varphi}\) on \(N\) and hence a Hodge star operator \(*_{\varphi}\). We denote the Hodge dual \(4\)-form \(*_{\varphi}\varphi\) by \(\psi\). Pointwise we have \(\left\|\varphi\right\|^{2}=\left\|\psi\right\|^{2}=7\), where the norm is taken with respect to the metric induced by \(\varphi\). Similar to \(\mathrm{SU}(3)\)-structure, a \(\mathrm{G}_{2}\)-structure on \(N\) induces a splitting of the spaces of differential forms on \(N\) into irreducible \(\mathrm{G}_{2}\) representations. The space of \(2\)-forms \(\Omega^{2}\) and \(3\)-forms \(\Omega^{3}\) decompose as \[\Omega^{2}=\Omega^{2}_{7}\oplus\Omega^{2}_{14},\] \[\Omega^{3}=\Omega^{3}_{1}\oplus\Omega^{3}_{7}\oplus\Omega^{3}_{27}.\] More precisely, we have the following description of the space of forms : \[\Omega^{2}_{7} =\{X\lrcorner\varphi\mid X\in\Gamma(TN)\}=\{\beta\in\Omega^{2} \mid*(\varphi\wedge\beta)=2\beta\},\] \[\Omega^{2}_{14} =\{\beta\in\Omega^{2}(N)\mid\beta\wedge\psi=0\}=\{\beta\in \Omega^{2}\mid*(\varphi\wedge\beta)=-\beta\}.\] Similarly, for 3-forms \[\Omega^{3}_{1} =\{f\varphi\mid f\in C^{\infty}(N)\},\] \[\Omega^{3}_{7} =\{X\lrcorner\psi\mid X\in\Gamma(TN)\}=\{*(\alpha\wedge\varphi) \mid\alpha\in\Omega^{1}\},\] \[\Omega^{3}_{27} =\{\eta\in\Omega^{3}(N)\ \mid\ \eta\wedge\varphi=0=\eta\wedge\psi\}.\] The decompositions of \(\Omega^{4}\) and \(\Omega^{5}\) are obtained by taking the respective Hodge stars with respect \(*_{\varphi}\). Given a \(\mathrm{G}_{2}\)-structure \(\varphi\) on \(M\), we can decompose \(d\varphi\) and \(d\psi\) according to the above decompositions. This defines the _torsion forms_, which are unique differential forms \(\tau_{0}\in\Omega^{0}\), \(\tau_{1}\in\Omega^{1}\), \(\tau_{2}\in\Omega^{2}_{14}\) and \(\tau_{3}\in\Omega^{3}_{27}\) such that (see [1]) \[d\varphi =\tau_{0}\psi+3\tau_{1}\wedge\varphi+*_{\varphi}\tau_{3},\] \[d\psi =4\tau_{1}\wedge\psi+*_{\varphi}\tau_{2}.\] Here the torsion lives in the 49-dimensional space \(T^{*}N\otimes\mathfrak{g}_{2}^{\perp}\) and is decomposed as follows \[T^{*}N\otimes\mathfrak{g}_{2}^{\perp}\cong\mathcal{X}_{0}\oplus\mathcal{X}_{1 }\oplus\mathcal{X}_{2}\oplus\mathcal{X}_{3}.\] These torsion forms give rise to the sixteen classes of \(\mathrm{G}_{2}\)-structures and \(T=0\) if and only if \(d\varphi=d\psi=0\) (see [11], [12]). 
The torsion class for which \(T\in\mathcal{X}_{0}\) is called nearly parallel \(\mathrm{G}_{2}\)-structure. **Definition 2.1**.: A \(\mathrm{G}_{2}\)-structure \(\varphi\) is **nearly parallel** if and only if there exists \(\lambda\neq 0\) such that \[d\varphi=\lambda\psi\quad\text{ and }\quad d\psi=0. \tag{2.5}\] In this case, \(T_{ij}=\dfrac{\lambda}{4}(g_{\varphi})_{ij}\). **Remark 2.2**.: If \(\varphi\) is a nearly \(\mathrm{G}_{2}\)-structure differentiating (2.5) gives \(d\lambda\wedge\psi=0\) which implies \(d\lambda=0\), as wedge product with \(\psi\) is an isomorphism from \(\Omega^{1}_{7}\) to \(\Omega^{5}_{7}\). Thus, if \(N\) is connected \(\lambda\) is a constant. In this article we are interested in parameterizing the \(\mathrm{SU}(3)\)-structures that arise on the equidistant orientable hypersurfaces of manifolds with nearly parallel \(\mathrm{G}_{2}\)-structures. ## 3 Evolution equations from \(\mathrm{SU}(3)\) to \(\mathrm{G}_{2}\) The exceptional Lie group \(\mathrm{G}_{2}\) is the group of automorphisms of the Octonions \(\mathbb{O}\) that preserves the splitting \(\mathbb{O}\cong\mathbb{R}+\mathrm{Im}\mathbb{O}\). Then one can define the Lie group \(\mathrm{SU}(3)\) as the subgroup of \(\mathrm{G}_{2}\) that preserves an imaginary unit octonion. This fact indicates the presence of an \(\mathrm{SU}(3)\)-structure on an orientable hypersurface of a manifold with \(\mathrm{G}_{2}\)-structure which led Calabi [12] and Gray [14] study the induced \(\mathrm{SU}(3)\)-structure on orientable hypersurfaces of \(\mathrm{Im}\mathbb{O}\). Let \(I\subset\mathbb{R}\) be an interval. Given an \(\mathrm{SU}(3)\)-structure \((\omega,\gamma)\) on \(M\), one can define a \(\mathrm{G}_{2}\)-structure \((\varphi,\psi)\) on \(I\times M\) by \[\varphi =dt\wedge\omega(t)+\gamma(t) \tag{3.1}\] \[\psi=*_{\varphi}\varphi =\frac{1}{2}\omega^{2}(t)-dt\wedge J\gamma(t).\] **Conditions on \(\omega,\gamma\) for nearly \(\mathrm{G}_{2}\)-structure.** Now suppose \(\varphi,\psi\) defines a nearly \(\mathrm{G}_{2}\)-structure, that is for some non-zero constant \(\lambda\in\mathbb{R}\) we have \[d\varphi=\lambda\psi.\] From (3.1) we get that \[d\varphi =dt\wedge(-d\omega(t)+\gamma^{\prime}(t))+d\gamma(t), \tag{3.2}\] \[d\psi =\frac{d\omega^{2}(t)}{2}+dt\wedge(\frac{1}{2}(\omega^{2})^{ \prime}(t)+dJ\gamma(t)). \tag{3.3}\] Thus \((\varphi,\psi)\) defines a nearly \(\mathrm{G}_{2}\)-structure if and only if \[d\omega(t) =\gamma^{\prime}(t)+\lambda J\gamma(t), \tag{3.4}\] \[d\gamma(t) =\frac{\lambda}{2}\omega^{2}(t)\] \[dJ\gamma(t) =-\frac{(\omega^{2})^{\prime}}{2}.\] Equation (3.4) immediately imply \(d\omega^{2}=0\) which further implies that \(\omega\wedge d\omega=0\). Thus we get \[0=\omega\wedge d\omega=\omega\wedge\gamma^{\prime}+\lambda\omega\wedge J\gamma.\] Since \(\omega\wedge J\gamma=0\) the above implies that \(\omega\wedge\gamma^{\prime}=0\). Since \(\gamma\wedge\omega=0\) we have \[d\gamma\wedge\omega=\gamma\wedge d\omega,\] which from (3.4) implies \[\frac{\lambda}{2}\omega^{3}=\gamma\wedge\gamma^{\prime}+\lambda(\gamma\wedge J \gamma)=\gamma\wedge\gamma^{\prime}+\frac{2}{3}\lambda\omega^{3}.\] Thus, \(\gamma\wedge\gamma^{\prime}=(-1/6)\lambda\omega^{3}\). Thus the 3-form \(\gamma^{\prime}\) is of the form \(\alpha\gamma-\frac{1}{4}\lambda J\gamma+\gamma_{12}^{\prime}\). Since we also have \(d\psi=0\), we get that \(dJ\gamma=-\omega^{\prime}\wedge\omega\). 
Writing \(\omega^{\prime}=p\omega+X\lrcorner\gamma+\omega_{8}^{\prime}\) and comparing torsion forms we get that \(p=2\alpha/3\) and \(X=0\). Summing up we get that for some \(w_{1}\in\Omega^{0},w_{3}\in\Omega^{3}_{12}\), and \(\hat{w_{2}}\in\Omega^{2}_{8}\), \[d\omega =w_{1}\gamma+\frac{3\lambda}{4}J\gamma+w_{3}, \tag{3.5}\] \[d\gamma =\frac{\lambda}{2}\omega^{2},\] \[dJ\gamma =-\frac{2}{3}w_{1}\omega^{2}+\hat{w_{2}}\wedge\omega.\] Thus it is clear that for nearly half-flat \(\mathrm{SU}(3)\)-structures the only non vanishing torsion forms are \(\tau_{0},\nu_{0},\tau_{3}\), and \(\nu_{2}\). But since \(\nu_{0}\) is a constant completely determined by the nearly parallel \(\mathrm{G}_{2}\)-structure, the dimension of the unknown torsion is given by \(\dim(\mathbb{R})+\dim(\Omega_{8}^{2})+\dim(\Omega_{12}^{3})=21\), which is rather similar to "_half-flat_" \(\mathrm{SU}(3)\)-structures ([13]) in the sense that \(W_{1}^{-}\) is the only extra non-vanishing torsion. Also observe that there are no necessarily closed forms as opposed to the half-flat case where \(\gamma\) is always closed. We call the \(\mathrm{SU}(3)\)-structures whose torsion is given by (3.5), "_nearly half-flat_" \(\mathrm{SU}(3)\)-structures. The condition \(d\gamma=\frac{\lambda}{2}\omega^{2}\) implies \(\nu_{1}=\nu_{2}=0\) in (2.4) and since \(d(\omega^{2})=0\) we get \(\tau_{1}=0\). Thus we can give the sufficient condition for an \(\mathrm{SU}(3)\)-structure on a \(6\)-dimensional manifold to be nearly half-flat. **Definition 3.1**.: An \(\mathrm{SU}(3)\)-structure \((\omega,\gamma)\) on a \(6\)-manifold \(N^{6}\) is nearly half-flat if for some non-zero real constant \(\lambda\) \[d\gamma=\frac{\lambda}{2}\omega^{2}.\] Now we can state the result originally proved in [15, Proposition 5.2] **Proposition 3.2**.: _Let \(I\subseteq\mathbb{R}\) parameterised by \(t\). A nearly half-flat structure \((\omega,\gamma)\) on \(N^{6}\) can be lifted to a nearly \(\mathrm{G}_{2}\)-structure \(\varphi=dt\wedge\omega+\gamma\) on \(N^{6}\times I\) if and only if \((\omega,\gamma)\) satisfy the evolution equations_ \[\begin{split}\gamma^{\prime}(t)&=d\omega(t)-\lambda J \gamma(t)\\ dJ\gamma(t)&=-\frac{(\omega^{2})^{\prime}}{2}.\end{split} \tag{3.6}\] Given an initial nearly half-flat structure [15] established the existence, uniqueness and naturality of a solution of the system (3.6) for all time. For compact manifolds, it is shown in [16] that a real-analytic solution of these evolution equations which is a nearly half-flat \(\mathrm{SU}(3)\)-structure for a time \(t=t_{0}\) already defines a nearly parallel \(\mathrm{G}_{2}\)-structure. In [15], the author extended the evolution equations to all possible signatures and gave a simplified proof for the properties of the solutions which also holds for non-compact manifolds. **Remark 3.3**.: If we put \(\lambda=0\) (i.e. torsion-free) in (3.5) we get back the half-flat conditions as we should! In the above notation for intrinsic torsion forms the scalar curvature \(s\) of the Levi-Civita connection for nearly half-flat \(\mathrm{SU}(3)\)-structures is given by \[s=\frac{10}{3}w_{1}^{2}+\frac{15\lambda^{2}}{8}-\frac{1}{2}|\hat{w}_{2}|^{2}- \frac{1}{2}|w_{3}|^{2}. 
\tag{3.7}\] \begin{table} \begin{tabular}{|c|c|} \hline \(\mathcal{W}_{1}^{+}\) & \(\mathcal{W}_{1}^{-}\) \\ \hline \(\mathcal{W}_{2}\) & \(\hat{\mathcal{W}}_{2}\) \\ \hline \(\mathcal{W}_{3}\) & \\ \hline \(\mathcal{W}_{4}\) & \\ \hline \(\mathcal{W}_{5}\) & \\ \hline \end{tabular} \end{table} Table 1: the non-zero torsion classes for nearly half-flat The general expression for the scalar curvature in terms of the instrinsic torsion of the \(\mathrm{SU}(3)\)-structure was derived in [1, Theorem 3.4]. **Remark 3.4**.: From (3.5) one can see that if \(w_{3}=0\) then \(d(d\omega)=w_{1}d\gamma+\frac{\lambda}{2c}dJ_{\gamma}=0\) which implies \(\hat{w}_{2}\wedge\omega=0\) and hence, \(\hat{w}_{2}=0\). Thus if \(w_{3}=0\) the nearly half-flat structure is in \(\mathcal{W}_{1}\). Moreover for a nearly half-flat structure \(\mathcal{W}_{1}^{-}\) is always non-zero so the only possible torsion classes for a nearly half-flat structre are \(\mathcal{W}_{1}^{-}(\text{nearly Kahler}),\mathcal{W}_{1},\mathcal{W}_{1}^{-}+ \mathcal{W}_{3},\mathcal{W}_{1}+\mathcal{W}_{3},\mathcal{W}_{1}^{-}+\hat{ \mathcal{W}}_{2}+\mathcal{W}_{3},\mathcal{W}_{1}+\hat{\mathcal{W}}_{2}+ \mathcal{W}_{3}\). Also note that of the torsion class for \((\omega,\gamma)\) is in \(\mathcal{W}_{1}\) then there exists a \(\tilde{\gamma}\in\{\cos(\theta)\gamma+\sin{(\theta)}J\gamma\mid\theta\in \mathbb{R}\}\) such that \((\omega,\tilde{\gamma})\) is nearly Kahler. Under some special circumstances a nearly half-flat structure \((\omega,\gamma)\) can be deformed to a half-flat structure. **Proposition 3.5**.: _Let \(\hat{w}_{2}=0\). Then for \(\theta=\tan^{-1}\left(\frac{3\lambda}{4w_{1}}\right)\), the \(\mathrm{SU}(3)\)-structure \((\omega,\cos(\theta)\gamma+\sin(\theta)J\gamma)\) is half-flat._ Proof.: Let \(\tilde{\gamma}=\cos(\theta)\gamma+\sin(\theta)J\gamma\). Since \(d(\omega^{2})=0\), the \(\mathrm{SU}(3)\)-structure \((\omega,\tilde{\gamma})\) is half-flat if and only if \(d\tilde{\gamma}=0\). If \(\hat{w}_{2}=0\) and \(w_{1}\neq 0\) \[d\tilde{\gamma} =\cos(\theta)d\gamma+\sin(\theta)dJ\gamma\] \[=\frac{\cos(\theta)\lambda}{2}\omega^{2}-\frac{2\sin(\theta)}{3} w_{1}\omega^{2}.\] Thus \(d\tilde{\gamma}=0\iff\tan(\theta)=\frac{3\lambda}{4w_{1}}\). As an immediate consequence of the above proposition we see that if \(w_{1}=0\), the \(\mathrm{SU}(3)\)-structure \((\omega,J\gamma)\) is half-flat. **Proposition 3.6**.: _Let \((\omega,\gamma)\) be a nearly half-flat \(\mathrm{SU}(3)\)-structure on \(M^{6}\) and let \(\tilde{\gamma}\in\Omega^{3}(M)\) be any closed, primitive \(3\)-form that is \(\tilde{\gamma}\wedge\omega=0\). Then \((\omega,\tilde{\gamma})\) defines a half-flat \(\mathrm{SU}(3)\)-structure._ Proof.: Since \(d\tilde{\gamma}=0\), from (2.4) we have that \[d\omega =\tau_{0}\tilde{\gamma}+\tau_{1}\wedge\omega+\tau_{3},\] \[d\tilde{J}\tilde{\gamma} =-\frac{2}{3}\tau_{0}\omega^{2}+\tau_{2}\wedge\omega.\] Equating \(d\omega\) from above and in (3.5) we get \[\tau_{1}\wedge\omega=w_{1}\gamma-\tau_{0}\tilde{\gamma}+\frac{3\lambda}{4}J \gamma+w_{3}-\tau_{3}\] Wedging with \(\omega\) both sides implies \(\tau_{1}\wedge\omega^{2}=0\implies\tau_{1}=0\). For a nearly half-flat structure \((\omega,\gamma)\) since \(d\omega^{2}=0\) for any \(\epsilon>0\) and parameter \(s\) the \(3\)-form \(\gamma_{s}:=\gamma+\epsilon sd\omega\) defines a primitive \(3\)-form such that \(d\gamma_{s}=\lambda/2\omega^{2}\). For \(\epsilon\) sufficiently small \(\gamma\) is also stable. 
**Proposition 3.7**.: _Given a nearly half-flat structure \((\omega,\gamma)\) on \(M^{6}\), for sufficiently small \(\omega\) and \(s\in I\subset\mathbb{R}\) the one parameter family of \(\mathrm{SU}(3)\)-structures given by_ \[(\omega,\gamma_{s}:=\gamma+\epsilon sd\omega)\] _is nearly half-flat for all \(s\in I\)._ ## 4 Parameterising invariant nearly half-flat structures on \(S^{3}\times S^{3}\) Let \(M\) denote the six dimensional manifold \(S^{3}\times S^{3}\). We will use the notation as used in [13]. The tangent bundle \(TM\) is trivial since \(M\) is a Lie group. Thus \(TM\cong M\times\mathbb{R}^{6}\cong M\times\mathfrak{so}(4)\cong M\times\mathfrak{ su}(2)\oplus\mathfrak{su}(2)\). We will denote by \(A,B\) the 2 copies of \(\mathfrak{su}(2)\) in the cotangent bundle at the identity, \(T^{*}_{\rm id}M\cong A\oplus B\). We choose bases \(\{e^{1},e^{3},e^{5}\}\) and \(\{e^{2},e^{4},e^{6}\}\) for \(A\) and \(B\) respectively such that \[de^{1}=e^{35},de^{2}=e^{46},de^{3}=-e^{15},de^{4}=-e^{26},de^{5}=e^{13},de^{6}=e ^{24}.\] We now describe the invariant nearly half-flat structures on \(S^{3}\times S^{3}\) in terms of \(3\times 3\) real matrices similar to SS3 in [13]. Using the same notation for \(T^{*}M=A\oplus B=:U\cong M_{3\times 3}(\mathbb{R})\) where \(A,B\cong\mathfrak{su}(2)\) we have \[\Omega^{2}M \cong\Omega^{2}A\oplus(A\otimes B)\oplus\Omega^{2}B,\] \[\Omega^{3}M \cong\Omega^{3}A\oplus(\Omega^{2}A\otimes B)\oplus(A\otimes\Omega ^{2}B)\oplus\Omega^{3}B.\] Since \(d(\omega^{2})=0\) from (3.5) we have \(\omega^{2}\in\Omega^{2}A\otimes\Omega^{2}B\) which implies \(\omega\in A\otimes B\). Thus \(\omega\) can be represented by a \(3\times 3\) real matrix \(P\) by \(\omega=\sum_{i,j=1}^{3}P_{ij}e^{2i-1}\wedge e^{2j}\). We define a 3-form \(\delta\in(\Omega^{2}A\otimes B)\oplus(A\otimes\Omega^{2}B)\) such that \(\delta\wedge\omega=0\) and \(d\delta=\omega^{2}\). A symmetric choice of \(\delta\) in \(A,B\) yields \[\delta=-\sum_{i,j=1}^{3}\rm Adj(P^{T})_{ij}(de^{2i-1}\wedge e^{2j}+e^{2i-1} \wedge de^{2j}).\] Since \(d\gamma=\lambda/2\omega^{2}\) for nearly half-flat structures the 3-form \(\gamma-\frac{\lambda}{2}\delta\) is closed and hence for some \(a,b\in\mathbb{R}\) and \(d\beta\in(\Omega^{2}A\otimes B)\oplus(A\otimes\Omega^{2}B)\) we can assume \(\gamma\) to be of the form \[\gamma=ae^{135}+be^{246}+d\beta+\frac{\lambda}{2}\delta.\] The 2-form \(\beta\in A\otimes B\) can be represented by a \(3\times 3\) real matrix \(Q\) and we have \(\beta=\sum_{i,j=1}^{3}Q_{ij}e^{2i-1}\wedge e^{2j}\). The \(\Omega^{2}A\otimes B,A\otimes\Omega^{2}B\) components of \(\gamma\) are given by matrices \(Q_{1},Q_{2}\) respectively where \[\begin{split} Q_{1}&=Q-\frac{\lambda}{2}\rm Adj(P^{ T}),\\ Q_{2}&=-Q-\frac{\lambda}{2}\rm Adj(P^{T}),\end{split} \tag{4.1}\] and we have \[\gamma=ae^{135}+be^{246}+\sum_{i,j=1}^{3}\left((Q_{1})_{ij}de^{2i-1}\wedge e^{ 2j}+(Q_{2})_{ij}e^{2i-1}\wedge de^{2j}\right).\] The identity \(\gamma\wedge\omega=0\) implies that \(Q^{T}P\) is symmetric which also implies \(Q_{i}^{T}P\) is symmetric for \(i=1,2\). By using (2.2) we compute the almost complex structure \(J_{\gamma}\) in terms of \(a,b,Q_{1},Q_{2}\). 
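Before writing out \(J_{\gamma}\) explicitly, we note that the claims \(d\delta=\omega^{2}\) and \(\delta\wedge\omega=0\) made above can be verified by machine. The following sketch (our own; the dictionary representation of invariant forms and all helper names are ours, not from the text) encodes only the structure equations displayed above and checks both identities for a generic symbolic matrix \(P\).

```python
# Exterior algebra on the invariant coframe e^1,...,e^6 of S^3 x S^3 (a minimal sketch).
# It checks, for a generic symbolic 3x3 matrix P, that the 3-form
#   delta = - sum_{i,j} Adj(P^T)_{ij} (de^{2i-1} ^ e^{2j} + e^{2i-1} ^ de^{2j})
# satisfies d(delta) = omega^2 and delta ^ omega = 0, where omega = sum P_{ij} e^{2i-1} ^ e^{2j}.
import sympy as sp

def wedge(a, b):
    """Wedge product; a form is stored as {increasing index tuple: coefficient}."""
    out = {}
    for I, cI in a.items():
        for J, cJ in b.items():
            if set(I) & set(J):
                continue
            K = list(I + J)
            sign = 1
            for i in range(len(K)):            # bubble sort, counting transpositions
                for j in range(len(K) - 1):
                    if K[j] > K[j + 1]:
                        K[j], K[j + 1] = K[j + 1], K[j]
                        sign = -sign
            key = tuple(K)
            out[key] = out.get(key, 0) + sign * cI * cJ
    return {k: sp.expand(v) for k, v in out.items() if sp.expand(v) != 0}

def add(a, b):
    out = dict(a)
    for k, v in b.items():
        out[k] = sp.expand(out.get(k, 0) + v)
    return {k: v for k, v in out.items() if v != 0}

def scale(c, a):
    return {k: c * v for k, v in a.items()}

e = lambda i: {(i,): sp.Integer(1)}

# structure equations: de^1 = e^35, de^2 = e^46, de^3 = -e^15, de^4 = -e^26, de^5 = e^13, de^6 = e^24
de = {1: wedge(e(3), e(5)), 2: wedge(e(4), e(6)),
      3: scale(-1, wedge(e(1), e(5))), 4: scale(-1, wedge(e(2), e(6))),
      5: wedge(e(1), e(3)), 6: wedge(e(2), e(4))}

def d(form):
    """Exterior derivative, extended to invariant forms as an antiderivation."""
    out = {}
    for I, c in form.items():
        for m in range(len(I)):
            term = {(): sp.Integer(1)}
            for pos, idx in enumerate(I):
                term = wedge(term, de[idx] if pos == m else e(idx))
            out = add(out, scale((-1) ** m * c, term))
    return out

P = sp.Matrix(3, 3, lambda i, j: sp.Symbol(f'P{i + 1}{j + 1}'))
AdjPT = (P.T).adjugate()

omega, delta = {}, {}
for i in range(3):
    for j in range(3):
        omega = add(omega, scale(P[i, j], wedge(e(2 * i + 1), e(2 * j + 2))))
        delta = add(delta, scale(-AdjPT[i, j],
                                 add(wedge(de[2 * i + 1], e(2 * j + 2)),
                                     wedge(e(2 * i + 1), de[2 * j + 2]))))

print(add(d(delta), scale(-1, wedge(omega, omega))))   # expected: {}  i.e.  d(delta) = omega^2
print(wedge(delta, omega))                             # expected: {}  i.e.  delta ^ omega = 0
```

Both prints are expected to return the empty dictionary, i.e. the zero form, confirming the two defining properties of \(\delta\) symbolically.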
For \(r=1,2,3\), \[Je^{2r-1} =\frac{1}{\det(P)}\left((ab-\operatorname{tr}(Q_{2}Q_{1}^{T}))e^{ 2r-1}+2\sum_{i=1}^{3}\left((Q_{2}Q_{1}^{T})_{ri}e^{2i-1}-(aQ_{2}-\operatorname{ Adj}(Q_{1}^{T}))_{ri}e^{2i}\right)\right),\] \[Je^{2r} =\frac{1}{\det(P)}\left(-(ab-\operatorname{tr}(Q_{1}^{T}Q_{2}))e ^{2r}+2\sum_{i=1}^{3}\left((bQ_{1}^{T}-\operatorname{Adj}(Q_{2}))_{ri}e^{2i-1} -(Q_{1}^{T}Q_{2})_{ri}e^{2i}\right)\right). \tag{4.2}\] For \(i=1,\ldots,6\), \[J^{2}e^{i}=\frac{1}{(\det P)^{2}}((ab-\operatorname{tr}(Q_{1}^{T}Q_{2}))^{2}+ 4(a\det Q_{2}+b\det Q_{1})-4\operatorname{tr}\left(\operatorname{Adj}(Q_{1}^{ T}Q_{2}))\right))e^{i}.\] Since \(J^{2}=-\mathrm{id}\) we have \[\det P=(-(ab-\operatorname{tr}(Q_{1}^{T}Q_{2}))^{2}-4(a\det Q_{2}+b\det Q_{1}) +4\operatorname{tr}\left(\operatorname{Adj}(Q_{1}^{T}Q_{2})\right))^{\frac{1} {2}}, \tag{4.3}\] which denotes the normalization condition for the nearly half-flat \(\mathrm{SU}(3)\)-structure. The space of invariant nearly half-flat structures can now be parameterized by \((\lambda,a,b,P,Q)\) satisfying the commutativity relation \(P^{T}Q=Q^{T}P\) and the normalization condition (4.3). We denote the set of invariant nearly half-flat structures corresponding to a fixed \(\lambda\in\mathbb{R}^{*}\) and a cohomology class \((a,b)\) for \(\gamma-\frac{\lambda}{2}\delta\) by \(\mathcal{H}_{\lambda,a,b}\). Using the isomorphism between \(M_{3\times 3}(\mathbb{R})\) the space of real \(3\times 3\) matrices and the space of real symmetric trace-free \(4\times 4\) matrices \(V\) we can describe the space \(\mathcal{H}_{\lambda,a,b}\) as the kernel of the \(\mathrm{SO}(4)\)-equivariant map described in [13, Section 3]. Under the isomorphism between \(M_{3\times 3}(\mathbb{R})\) and \(V\) the condition \(Q^{T}P-P^{T}Q=0\) can be written as \([Q,P]=0\). **Theorem 4.1**.: _The space of invariant nearly half-flat structures \(\mathcal{H}_{\lambda,a,b}\) can be described as the subspace of \(U\oplus U\)_ \[\{(Q,P)\in U\oplus U,\ |\ P,Q\text{ satisfy }\eqref{eq:M_3},\text{and }Q^{T}P=P^{T}Q,\}.\] With respect to the natural symplectic structure on \(T^{*}V\) the map \[\mu\colon V\otimes V \to\mathfrak{so}(4)\cong\Omega^{2}\mathbb{R}^{4}\] \[(A,B) \mapsto[A,B]\] defines a moment map for the Hamiltonian action of \(\mathrm{SO}(4)\) on \(T^{*}V\). The set \(H_{\lambda,a,b}\) then lies in the kernel of \(\mu\). **Corollary 4.2**.: _Modulo equivalent relations \(\mathcal{H}_{\lambda,a,b}\) is a subset of the singular symplectic quotient_ \[\frac{\mu^{-1}(0)}{\mathrm{SO}(4)}\cong\frac{\mathbb{R}^{3}\oplus\mathbb{R}^{ 3}}{S_{3}}.\] For describing the flow equations (3.4) in this matrix framework we need to compute \(J\gamma\in\mathcal{H}_{\lambda,a,b}\). 
To do this we make use of the following identity \[J\gamma(X,Y,Z)=\gamma(JX,JY,JZ)=-\gamma(JX,Y,Z),\] and can compute \[J\gamma= \frac{2}{\det P}\Big{(}(a\operatorname{tr}(Q_{1}^{T}Q_{2})-2\det Q _{1}-a^{2}b)e^{135}-(b\operatorname{tr}(Q_{1}^{T}Q_{2})-2\det Q_{2}-ab^{2})e^{ 246}\] \[-\sum_{i,j=1}^{3}((ab+\operatorname{tr}\left(Q_{1}^{T}Q_{2}\right) )\;Q_{1}-2a\;\text{Adj}(Q_{2}^{T})-2Q_{1}Q_{2}^{T}Q_{1})_{ij}de^{2i-1}\wedge e^ {2j}\] \[+\sum_{i,j=1}^{3}((ab+\operatorname{tr}\left(Q_{1}^{T}Q_{2}\right) )\;Q_{2}-2b\;\text{Adj}(Q_{1}^{T})-2Q_{2}Q_{1}^{T}Q_{2})_{ij}e^{2i-1}\wedge de^ {2j}\Big{)}.\] We denote by \[\begin{split} A&\coloneqq a\operatorname{tr}(Q_{1} ^{T}Q_{2})-2\det Q_{1}-a^{2}b,\\ B&\coloneqq-(b\operatorname{tr}(Q_{1}^{T}Q_{2})-2 \det Q_{2}-ab^{2}),\\ R_{1}&\coloneqq-((ab+\operatorname{tr}\left(Q_{1} ^{T}Q_{2}\right))\;Q_{1}-2a\;\text{Adj}(Q_{2}^{T})-2Q_{1}Q_{2}^{T}Q_{1}),\\ R_{2}&\coloneqq(ab+\operatorname{tr}\left(Q_{1} ^{T}Q_{2}\right))\;Q_{2}-2b\;\text{Adj}(Q_{1}^{T})-2Q_{2}Q_{1}^{T}Q_{2},\end{split} \tag{4.4}\] thus we can write \[J\gamma=\frac{2}{\det P}\Big{(}Ae^{135}+Be^{246}+\sum_{i,j=1}^{3} \big{(}(R_{1})_{ij}de^{2i-1}\wedge e^{2j}+(R_{2})_{ij}e^{2i-1}\wedge de^{2j} \big{)}\,\Big{)}.\] One can also check that \(\gamma\wedge J\gamma=2/3\omega^{3}\) using (4.3) and \(J\gamma\wedge\omega=0\) which uses the fact that \(P^{T}Q_{i}\) is symmetric for \(i=1,2\). The 4-form \[dJ\gamma=\frac{2}{\det P}\sum_{i,j=1}^{3}R_{ij}de^{2i-1}\wedge de ^{2j},\] where \[R=R_{1}+R_{2}=(ab+\operatorname{tr}\left(Q_{1}^{T}Q_{2}\right) )(Q_{2}-Q_{1})-2(b\text{Adj}(Q_{1}^{T})-a\text{Adj}(Q_{1}^{T}))-2Q_{2}Q_{1}^{ T}Q_{2}+2Q_{1}Q_{2}^{T}Q_{1}.\] **Remark 4.3**.: In [13] the authors did similar computations when the SU(3)-structure is half-flat or equivalently when \(\lambda=0\) which implies \(Q_{1}=-Q_{2}=Q\). Our results here matches that of in [13] for \(\lambda=0\). Also note that they used an isomorphism between the space of real \(3\times 3\) matrices and the space of real symmetric trace-free \(4\times 4\) matrices to simplify their expressions but we do not find it very useful in this case. The 6-form \(d\omega\wedge J\gamma=-dJ\gamma\wedge\omega=\frac{2}{\det P}\operatorname{tr} (P^{T}R)\operatorname{vol}_{6}\). From (3.5) this implies \[w_{1}=\frac{\operatorname{tr}(P^{T}R)}{2(\det P)^{2}}.\] Thus we can rewrite (3.5) as \[\frac{\det P}{2}P_{ij}(de^{2i-1}\wedge e^{2j}-e^{2i-1}\wedge de^{2j})= \left(\frac{\operatorname{tr}(P^{T}R)}{4\det P}a+\frac{3\lambda}{4 }A\right)e^{135}+\left(\frac{\operatorname{tr}(P^{T}R)}{4\det P}b+\frac{3 \lambda}{4}B\right)e^{246}\] \[+\left(\frac{\operatorname{tr}(P^{T}R)}{4\det P}Q_{1}+\frac{3 \lambda}{4}R_{1}\right)_{ij}de^{2i-1}\wedge e^{2j}\] \[+\left(\frac{\operatorname{tr}(P^{T}R)}{4\det P}Q_{2}+\frac{3 \lambda}{4}R_{2}\right)_{ij}e^{2i-1}\wedge de^{2j}+w_{3},\] \[R_{ij}de^{2i-1}\wedge de^{2j}=\frac{\operatorname{tr}(P^{T}R)}{3 \det P}\mathrm{Adj}(P^{T})_{ij}de^{2i-1}\wedge de^{2j}+\hat{w}_{2}\wedge\omega,\] and make the following observations **Proposition 4.4**.: _Let \((\omega,\gamma)\in\mathcal{H}_{\lambda,a,b}\)._ 1. _If_ \(w_{1}=w_{3}=\hat{w_{2}}=0\)_, the_ \(\mathrm{SU}(3)\)_-structure satisfies_ \[d\omega=\frac{3\lambda}{4}J\gamma,\quad d\gamma=\frac{\lambda}{2}\omega^{2}.\] _Thus the nearly half-flat structure is nearly Kahler if and only if_ \[A =B=0,\] \[R_{1} =-R_{2}=\frac{2\det P}{3\lambda}P.\] _Note that_ \(R=R_{1}+R_{2}\) _so_ \(R=0\) _in this case._ 2. _The torsion form_ \(w_{1}=0\) _if and only if_ \(\operatorname{tr}(P^{T}R)=0\)_._ 3. 
_The torsion form_ \(\hat{w_{2}}=0\) _that is the_ \(\mathrm{SU}(3)\)_-structure is "nearly" co-coupled if and only if_ \[R=\frac{\operatorname{tr}(P^{T}R)}{3\det P}\mathrm{Adj}(P^{T})\] 4. _The torsion form_ \(w_{3}=0\) _or the_ \(\mathrm{SU}(3)\)_-structure is "nearly" coupled if and only if_ \[A =-\frac{\operatorname{tr}(P^{T}R)}{3\lambda\det P}a,\quad B=- \frac{\operatorname{tr}(P^{T}R)}{3\lambda\det P}b,\] \[R_{1} =\frac{1}{3\lambda}\left(2\det PP-\frac{\operatorname{tr}(P^{T}R )}{\det P}\ Q_{1}\right),\quad R_{2}=-\frac{1}{3\lambda}\left(2\det PP+\frac{ \operatorname{tr}(P^{T}R)}{\det P}\ Q_{2}\right).\] We can now use the above algebraic framework to describe some examples of nearly half-flat \(\mathrm{SU}(3)\)-structures on \(S^{3}\times S^{3}\). _Nearly Kahler solution_: If we assume \(P=\mathrm{diag}(\mathrm{p}_{1},\mathrm{p}_{2},\mathrm{p}_{3})\), and \(Q=\mathrm{diag}(\mathrm{q}_{1},\mathrm{q}_{2},\mathrm{q}_{3})\) the nearly half-flat structure given by \((a,b,P,Q)\) solves the nearly Kahler equations for \(\lambda=4\) only when \[(P,Q)=\left(\pm\frac{1}{12\sqrt{3}}\mathrm{Id},0\right),\quad a=b=\frac{1}{108}. \tag{4.5}\] In the current framework the above solution represents the _unique_\(S^{3}\times S^{3}\)-invariant nearly Kahler solution ( compare with [23, Proposition 3]) _Examples of type \(\mathcal{W}_{1}^{+}+\mathcal{W}_{1}^{-}\)_: The nearly half-flat structure has torsion contained in \(\mathcal{W}_{1}\) if and only if \(R=-\frac{2}{3}w_{1}\mathrm{Adj}(P^{T})\). Furthermore if \(w_{1}=0\) the structure becomes nearly Kahler which we have already seen above so we assume \(w_{1}\neq 0\). Assuming \(P=p\mathrm{Id},Q=q\mathrm{Id},\lambda=4\) we obtain the following solutions * for \(p\in\left(0,\frac{\sqrt{3}}{36}\right)\) \[a=b=4p^{2},\quad q=\pm\frac{\sqrt{3\sqrt{3}p-108p^{2}}}{3}\] * for \(p\in\left(-\frac{\sqrt{3}}{36},0\right)\) \[a=b=4p^{2},\quad q=\pm\frac{\sqrt{-3\sqrt{3}p-108p^{2}}}{3}\] Since \(\hat{w}_{1}=\lambda/2\neq 0\), from (3.7) one can see that examples of this type has strictly positive scalar curvature given by \[s=\frac{10(\sqrt{3}-27p)}{3p}.\] _Examples of type \(\mathcal{W}_{1}^{-}+\mathcal{W}_{3}\)_: The nearly half-flat structure in this torsion form satisfy \(dJ\gamma=0\). For \(\lambda=4\), \(P=p\mathrm{Id},Q=q\mathrm{Id}\) and \(a>\frac{1}{256}\) the following nearly half flat structure has \(w_{1}=\hat{w}_{2}=0\) \[b=\frac{512a^{2}}{256a-1},\quad q=\frac{128a^{2}}{256a-1},\quad p=\pm 8a\sqrt{ \frac{1}{256a-1}}\] _Zero scalar curvature metric_ If we assume \(b=a=0,P=p\mathrm{Id}\) and \(Q=q\mathrm{Id}\) then the normalization condition (4.3) becomes \[3q^{4}-24q^{2}p^{4}+48p^{8}-p^{6}=0,\] which has the following solutions \[q=\pm\frac{p\sqrt{36p^{2}\pm 3\sqrt{3}p}}{3}.\] The nearly half-flat metric in this case is given by \[g=\frac{2(2p^{2}-q)^{2}}{p^{2}}e^{2i-1}\otimes e^{2i-1}+\frac{2(2p^{2}+q)^{2} }{p^{2}}e^{2i}\otimes e^{2i}+\frac{4p^{4}-q^{2}}{p^{2}}(e^{2i-1}\otimes e^{2i }+e^{2i}\otimes e^{2i-1}).\] One can compute the scalar curvature of the nearly half-flat structure metric by (3.7). For \(a=b=0\) and \(q=\pm\frac{p\sqrt{36p^{2}+3\sqrt{3}p}}{3}\) the scalar curvature takes the form \[s=\frac{2(72p^{4}+105p+5\sqrt{3})}{3p}.\] There are two values of \(p\) for which \(s=0\) but for only one of them \(q=\pm\frac{p\sqrt{36p^{2}+3\sqrt{3}p}}{3}\in\mathbb{R}\) hence we get one solution from this case. 
However, for \(q=\pm\frac{p\sqrt{36p^{2}-3\sqrt{3}p}}{3}\) the scalar curvature turns out to be \[s=\frac{2(72p^{4}+105p-5\sqrt{3})}{3p},\] and both the solutions for \(s=0\) are admissible. ## 5 The \(S^{3}\times S^{3}\) evolution equations We can now describe the flow equations for the nearly \(\mathrm{G}_{2}\)-structure on \(S^{3}\times S^{3}\times\mathbb{R}\) compatible with (3.1) in this matrix framework. From Proposition 3.2 we know that an invariant nearly half-flat structure \((\omega,\gamma)\in\mathcal{H}_{\lambda,a,b}\) on \(S^{3}\times S^{3}\) evolves to a nearly parallel \(\mathrm{G}_{2}\)-structure if and only if it satisfies (3.6). In terms of the matrices \(P,Q\) used to parameterize invariant nearly half-flat \(\mathrm{SU}(3)\)-structures on \(S^{3}\times S^{3}\), the evolution equations take the following form, where the \(Q_{i}\) and \(A,B,R_{i}\) are defined in (4.1) and (4.4) respectively. **Proposition 5.1**.: _The evolution equations for the flow \(t\mapsto(P(t),Q(t))\in\mathcal{H}_{\lambda,a(t),b(t)}\) are given by_ \[\begin{split} a^{\prime}&=-\frac{2\lambda}{\det P}A,\quad b^{\prime}=-\frac{2\lambda}{\det P}B\\ Q^{\prime}_{1}&=-\frac{2\lambda}{\det P}R_{1}+P\\ Q^{\prime}_{2}&=-\frac{2\lambda}{\det P}R_{2}-P \end{split} \tag{5.1}\] **Remark 5.2**.: In the half-flat case, that is when \(\lambda=0\), the parameters \(a,b\) are constant in \(t\), but in the nearly half-flat case the cohomology class \((a,b)\) evolves with time. ### Dynamic examples Below we use the matrix framework to describe some examples of nearly parallel \(\mathrm{G}_{2}\)-structures on \(S^{3}\times S^{3}\times I\) for \(I\subset\mathbb{R}\) parameterised by \(t\). #### 5.1.1 The homogeneous nearly \(\mathrm{G}_{2}\) metric on the Berger space Let \(B\coloneqq\frac{\mathrm{SO}(5)}{\mathrm{SO}(3)}\) be the Berger space. The homogeneous metric on \(B\) has a nearly parallel \(\mathrm{G}_{2}\)-structure. There is a cohomogeneity-one action of \(\mathrm{SO}(4)\) on \(B\) as first described in [11]. Under this action the principal orbits are hypersurfaces of \(B\) isomorphic to \(\mathrm{SO}(4)/\mathbb{Z}_{2}^{2}\cong\frac{\mathrm{S}^{3}\times\mathrm{S}^{3 }}{\mathbb{Z}_{2}^{3}}\). The Lie group \(\mathrm{SO}(3)\) is embedded into \(\mathrm{SO}(5)\) via the \(5\)-dimensional irreducible representation of \(\mathrm{SO}(3)\) on \(\mathrm{Sym}_{0}^{2}(\mathbb{R}^{3})\). If we denote by \(S_{i,j}\) the symmetric \(3\times 3\) matrix with \(1\) at the \((i,j)\) and \((j,i)\) entry and \(0\) elsewhere, then \[E_{1}:=\frac{\mathrm{diag}(1,1,-2)}{\sqrt{6}},\quad E_{2}:=\frac{\mathrm{diag} (1,-1,0)}{\sqrt{2}},\quad E_{3}:=S_{12},\quad E_{4}:=S_{13},\quad E_{5}:=S_{23}\] define a basis of \(\mathrm{Sym}_{0}^{2}(\mathbb{R}^{3})\cong\mathbb{R}^{5}\). The embedding of \(\mathrm{SO}(3)\) in \(\mathrm{SO}(5)\) is given by the conjugation action of \(\mathrm{SO}(3)\) on \(\mathrm{Sym}_{0}^{2}(\mathbb{R}^{3})\cong\mathbb{R}^{5}\). The group \(\mathrm{SO}(5)\) acts on \(\mathbb{R}^{5}\) via the usual left multiplication. We can define the group \(\mathrm{SO}(4)=\mathrm{SO}(4)_{\mathrm{E}_{1}}\subset\mathrm{SO}(5)\) as the subgroup preserving the \(E_{1}\) direction in \(\mathbb{R}^{5}\). Thus there is an action of \(\mathrm{SO}(4)\) on \(\mathbb{R}^{5}\). The generic stabilizer group for the group \(\mathrm{SO}(4)_{\mathrm{E}_{1}}\) is also given by \(\mathbb{Z}_{2}^{2}\cong\{\mathrm{diag}(1,1,ab,b,a):a,b\in\{\pm 1\}\}\), which preserves the \(E_{1},E_{2}\) directions in \(\mathbb{R}^{5}\).
The generic orbit is therefore given by \(\mathrm{SO}(4)/\mathbb{Z}_{2}^{2}\). The stabilizer of the identity coset \(x_{-}=\mathrm{id}.\mathrm{SO}(3)\) in \(B\) is the group \(K^{-}\cong\mathrm{O}(2)\) such that \(K_{0}^{-}\cong\mathrm{SO}(2)\) acts by angle \(2\theta\) in the \(E_{2},E_{3}\) plane and by angle \(\theta\) in the \(E_{4},E_{5}\) plane. Thus \(x_{-}.\mathrm{SO}(4)=\mathrm{SO}(4)/(\mathrm{SO}(2)\times\mathbb{Z}_{2})\) is a singular orbit for the action. For the other singular orbit, we need to follow the geodesic \(\gamma(t):=\cos(t)E_{1}+\sin(t)E_{2}\) transverse to all orbits. At \(t=\pi/3\) we get the second singular orbit, again isomorphic to \(\mathrm{SO}(4)/(\mathrm{SO}(2)\times\mathbb{Z}_{2})\). Thus the singular stabilizer groups for the action are both isomorphic to \(S^{1}\times\mathbb{Z}_{2}\). By some simple calculations one can compute that \[\mathrm{Stab}(\gamma(t))\cong\begin{cases}\mathbb{Z}_{2}\times\mathbb{Z}_{2}& t\in(0,\pi/3),\\ S(\mathrm{O}(2)\mathrm{O}(1))&t=0,\\ S(\mathrm{O}(1)\mathrm{O}(2))&t=\pi/3.\end{cases}\] With respect to the basis \(\{e^{1},e^{2},e^{3},e^{4},e^{5},e^{6},dt\}\) of \(\mathrm{SO}(4)/\mathbb{Z}_{2}^{2}\), where the \(e^{i}\) are as described in Section 4, the nearly parallel \(\mathrm{G}_{2}\)-structure satisfying \(d\varphi=\frac{6}{\sqrt{5}}\ast\varphi\) is given by \[\varphi= \frac{1}{\sqrt{5}}(\sin(t)e^{12}+\sin(t-2\pi/3)e^{34}+\sin(t+2 \pi/3)e^{56})\wedge dt-\frac{-7+2\cos(3t)}{20\sqrt{5}}e^{135}-\frac{7+2\cos(3 t)}{20\sqrt{5}}e^{246}\] \[+\frac{1}{5\sqrt{5}}\left(\cos(t)(e^{235}-e^{146})-3\sin(t-2\pi/ 3)\sin(t+2\pi/3)(e^{235}-e^{146})\right)\] \[+\frac{1}{5\sqrt{5}}\left(\cos(t-2\pi/3)(e^{145}-e^{236})-3\sin( t)\sin(t+2\pi/3)(e^{145}-e^{236})\right)\] \[+\frac{1}{5\sqrt{5}}\left(\cos(t+2\pi/3)(e^{136}-e^{245})-3\sin( t)\sin(t-2\pi/3)(e^{136}-e^{245})\right).\] In the matrix framework the nearly half-flat \(\mathrm{SU}(3)\)-structure on \(\mathrm{SO}(4)/\mathbb{Z}_{2}^{2}\) corresponding to the homogeneous nearly parallel \(\mathrm{G}_{2}\)-structure for \(\lambda=6/\sqrt{5}\) on \(B\) can then be expressed as \[a =-\frac{-7+2\cos(3t)}{20\sqrt{5}},\quad b=-\frac{7+2\cos(3t)}{20 \sqrt{5}},\] \[P =\frac{1}{\sqrt{5}}\mathrm{diag}\left(\sin(t),\sin(t-2\pi/3),\sin(t+2\pi/3)\right),\] \[Q =\frac{1}{5\sqrt{5}}\mathrm{diag}\left(\cos(t),\cos(t-2\pi/3),\cos(t+2\pi/3)\right).\] #### 5.1.2 Sine-cone solution We assume \(P=p(t)\mathrm{Id}\), \(Q=q(t)\mathrm{Id}\) with \(b(t)=a(t)\) and choose \(\lambda=4\). The normalization condition takes the following form \[48p^{8}(t)+(64a(t)-1)p^{6}(t)+24p^{4}(t)(a^{2}(t)-q^{2}(t))+48p^{2}(t)q^{2}(t)a (t)+3q^{4}(t)-6q^{2}(t)a^{2}(t)-a^{4}(t)=0.\] Setting the coefficient of \(e^{135}-e^{246}\) to zero in \(\gamma^{\prime}-d\omega+4J\gamma\) we get \[-\frac{16(a(t)-4p^{2}(t))((a(t)+2p^{2}(t))^{2}+3q^{2}(t))}{p^{3}(t)}=0,\] which implies either \(a(t)=4p^{2}(t)\) or \((a(t)+2p^{2}(t))^{2}+3q^{2}(t)=0\); but the latter solves the normalization condition if and only if \(p(t)=0\) and can be discarded.
Substituting \(a(t)=4p^{2}(t)\) in the normalization condition generates four possible solutions \[q(t)=\pm\frac{p(t)\sqrt{-108p(t)^{2}\pm 3\sqrt{3}p(t)}}{3}.\] Equating all the coefficients in \(\gamma^{\prime}-d\omega+4J\gamma\) to zero gives the following set of ODEs \[p^{\prime}(t)p^{4}(t) =24p^{4}(t)q(t)+2q^{3}(t),\] \[q^{\prime}(t)p(t) =p^{2}(t)-48q^{2}(t)-576p^{4}(t).\] Substituting \(q(t)\) in terms of \(p(t)\) in the above equations we get either \[p(t) =\pm\frac{\sqrt{3}}{72}(1+\sin(4t+c))\] \[q(t) =\pm\frac{\sqrt{3}}{864}|\cos(4t+c)|(1+\sin(4t+c)),\] or \[p(t) =\pm\frac{\sqrt{3}}{72}(-1+\sin(4t+c))\] \[q(t) =\pm\frac{\sqrt{3}}{864}|\cos(4t+c)|(-1+\sin(4t+c)).\] If we now assume the nearly half-flat structure at \(t=0\) to be the unique nearly Kahler solution presented in (4.5), we get the following solutions to the evolution equations \[a(t)=b(t)=\frac{\cos^{4}(2t)}{108},\quad p(t)=\frac{\sqrt{3}}{36}\cos^{2}(2t),\quad q(t)=\frac{\sqrt{3}}{216}\cos^{3}(2t)\sin(2t),\] or \[a(t)=b(t)=\frac{\cos^{4}(2t)}{108},\quad p(t)=-\frac{\sqrt{3}}{36}\cos^{2}(2t ),\quad q(t)=-\frac{\sqrt{3}}{216}\cos^{3}(2t)\sin(2t).\] If we denote by \(g_{NK}\) the metric induced by the nearly Kahler SU(3)-structure, then the metric \(g_{6}(t)\) corresponding to the above nearly half-flat structure at any time \(t\) is given by \[g_{6}(t)=\frac{\cos^{2}(2t)}{18}\sum_{i=1}^{3}((e^{2i-1})^{2}+(e^{2i})^{2}- \frac{1}{2}e^{2i-1}\otimes e^{2i}-\frac{1}{2}e^{2i}\otimes e^{2i-1})=\cos^{2} (2t)g_{NK}.\] From (3.1) the nearly G\({}_{2}\)-structure is given by \[\varphi= \frac{\sqrt{3}\cos^{2}(2t)}{36}(e^{12}+e^{34}+e^{56})\wedge dt+ \frac{\cos^{4}(2t)}{108}(e^{135}+e^{246})\] \[+\frac{\cos^{3}(2t)\cos(2t-2\pi/3)}{108}(e^{136}+e^{145}+e^{235}) +\frac{\cos^{3}(2t)\cos(2t+2\pi/3)}{108}(e^{146}+e^{236}+e^{245}).\] Reparametrizing \(s=2t+\pi/2\), the G\({}_{2}\)-metric \(g_{\varphi}(s)\) corresponding to the above G\({}_{2}\)-structure is (up to a scale) given by \[g_{\varphi}(s)=\sin^{2}(s)g_{NK}+(ds)^{2},\] which is the well-known sine-cone nearly \(\mathrm{G}_{2}\) metric. Note that the above solution is incomplete, as at \(s=0,\pi\) the metric shrinks to a point and becomes singular. At any time \(s\in(0,\pi)\) the only non-vanishing torsion form for the nearly half-flat structure is \(w_{1}=6\cot(s)\), which vanishes only at \(s=\pi/2\). One can also explicitly write down other known examples of nearly parallel \(\mathrm{G}_{2}\)-structures on \(S^{3}\times S^{3}\times I\), such as the homogeneous nearly parallel \(\mathrm{G}_{2}\)-structure on \(S^{7}\). The Lie group \(\mathrm{SO}(4)\) acts by cohomogeneity-one on \(S^{7}\subset\mathbb{R}^{8}\). The action of \(\mathrm{SO}(4)\) on \(\mathbb{R}^{8}\) is defined by the isotropy action on the tangent space of \(\mathrm{G}_{2}/\mathrm{SO}(4)\). As a complex representation of \(\mathrm{SU}(2)\times\mathrm{SU}(2)\), if \(V_{(k,l)}\) denotes the tensor product of the symmetric representations of weights \(k,l\) on the first and second \(\mathrm{SU}(2)\) factors respectively, the space \(\mathbb{R}^{8}\) can be written as \(V_{(1,0)}\otimes V_{(0,3)}\). Moreover, in [10] the authors showed that the total spaces of the \(\mathrm{SO}(3)\)-bundles of self-dual (anti-self-dual) \(2\)-forms over Hitchin's self-dual Einstein orbifolds [14] are smooth 3-Sasakian seven-dimensional manifolds. The action of \(\mathrm{SO}(3)\) on the base lifts to form a cohomogeneity-one \(\mathrm{SO}(3)\times\mathrm{SO}(3)\)-action on the total space.
One can describe the 2-parameter family of nearly parallel \(\mathrm{G}_{2}\)-structures induced by this 3-Sasakian structure using the matrix framework. Apart from this, one can also use this framework to study \(\mathrm{G}_{2}\)-instantons on \(S^{3}\times S^{3}\times I\) with respect to the \(\mathrm{G}_{2}\)-structure \((\varphi,\psi)\) defined in (2.5). In [10, Lemma 1] the authors showed that if \(\psi\) is closed then \(\mathrm{G}_{2}\)-instantons on \(S^{3}\times S^{3}\times I\) are in one-to-one correspondence with a 1-parameter family of connections \((a(t))_{t\in I}\) with curvature \(F_{a}(t)\) on \(S^{3}\times S^{3}\) that satisfies \[\dot{a}\wedge\frac{\omega^{2}}{2}-F_{a}\wedge J\gamma=0,\] along with the constraint \(F_{a}\wedge\omega^{2}=0\), which is shown to be compatible with the evolution. Previously, in [10] the authors used a similar framework to describe the \(\mathrm{SU}(2)^{2}\)-invariant \(\mathrm{G}_{2}\)-instantons on non-compact manifolds with holonomy \(\mathrm{G}_{2}\). A similar analysis can be done when the \(\mathrm{G}_{2}\)-structure is nearly parallel.
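As a closing sanity check on the matrix framework of Sections 4 and 5, the following sketch (our own, using sympy) verifies exactly that the invariant nearly Kahler data (4.5), taken with the \(+\) sign, \(\lambda=4\), \(a=b=\frac{1}{108}\), \(P=\frac{1}{12\sqrt{3}}\mathrm{Id}\), \(Q=0\), satisfies the normalization condition (4.3) and the nearly Kahler conditions of Proposition 4.4: \(A=B=0\) and \(R_{1}=-R_{2}=\frac{2\det P}{3\lambda}P\), so that in particular \(R=0\) and \(w_{1}=0\).

```python
# Exact verification (a sketch, not from the source) of the invariant nearly Kahler
# solution (4.5) against the formulas (4.1), (4.3), (4.4) quoted in the text above.
import sympy as sp

lam = sp.Integer(4)
a = b = sp.Rational(1, 108)
P = sp.eye(3) / (12 * sp.sqrt(3))
Q = sp.zeros(3, 3)

# (4.1)
Q1 = Q - lam / 2 * (P.T).adjugate()
Q2 = -Q - lam / 2 * (P.T).adjugate()

# (4.4)
A = a * (Q1.T * Q2).trace() - 2 * Q1.det() - a**2 * b
B = -(b * (Q1.T * Q2).trace() - 2 * Q2.det() - a * b**2)
R1 = -((a * b + (Q1.T * Q2).trace()) * Q1 - 2 * a * (Q2.T).adjugate() - 2 * Q1 * Q2.T * Q1)
R2 = (a * b + (Q1.T * Q2).trace()) * Q2 - 2 * b * (Q1.T).adjugate() - 2 * Q2 * Q1.T * Q2

# normalization condition (4.3)
rhs = sp.sqrt(-(a * b - (Q1.T * Q2).trace())**2
              - 4 * (a * Q2.det() + b * Q1.det())
              + 4 * (Q1.T * Q2).adjugate().trace())

print(sp.simplify(P.det() - rhs))                     # expected: 0
print(sp.simplify(A), sp.simplify(B))                 # expected: 0 0
print(sp.simplify(R1 - 2 * P.det() / (3 * lam) * P))  # expected: zero matrix
print(sp.simplify(R1 + R2))                           # expected: zero matrix, so R = 0 and w1 = 0
```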
2310.07451
Variational stabilization of degenerate p-elasticae
A new stabilization phenomenon induced by degenerate diffusion is discovered in the context of pinned planar $p$-elasticae. It was known that in the non-degenerate regime $p\in(1,2]$, including the classical case of Euler's elastica, there are no local minimizers other than unique global minimizers. Here we prove that, in stark contrast, in the degenerate regime $p\in(2,\infty)$ there emerge uncountably many local minimizers with diverging energy.
Tatsuya Miura, Kensuke Yoshizawa
2023-10-11T12:52:04Z
http://arxiv.org/abs/2310.07451v1
# Variational stabilization of degenerate \(p\)-elasticae ###### Abstract. A new stabilization phenomenon induced by degenerate diffusion is discovered in the context of pinned planar \(p\)-elasticae. It was known that in the non-degenerate regime \(p\in(1,2]\), including the classical case of Euler's elastica, there are no local minimizers other than unique global minimizers. Here we prove that, in stark contrast, in the degenerate regime \(p\in(2,\infty)\) there emerge uncountably many local minimizers with diverging energy. 2020 Mathematics Subject Classification: 49Q10, 53A04, and 33E05 ## 1. Introduction Nonlinear diffusion often provokes peculiar behavior of solutions, leading to new analytical challenges. In this paper we discover a strong variational stabilization phenomenon due to (1D) degenerate diffusion in the context of pinned planar \(p\)-elasticae. More precisely, we study the structure of local minimizers of the \(p\)-bending energy under the fixed-length constraint and the pinned boundary condition. Let \(p\in(1,\infty)\). The \(p\)-bending energy is a fundamental geometric energy defined by \[\mathcal{B}_{p}[\gamma]:=\int_{\gamma}|k|^{p}\,ds,\] where \(\gamma\) is a planar curve, \(k\) is the signed curvature, and \(s\) is the arclength parameter. For \(L>0\) we define \[W^{2,p}_{\mathrm{arc}}(0,L;\mathbf{R}^{2}):=\left\{\,\gamma\in W^{2,p}(0,L; \mathbf{R}^{2})\;\big{|}\;|\gamma^{\prime}|\equiv 1\;\right\},\] the set of arclength parametrized curves of length \(L\) in the \(W^{2,p}\)-Sobolev class. Given \(P_{0},P_{1}\in\mathbf{R}^{2}\) such that \(|P_{1}-P_{0}|<L\), we define the admissible space \(\mathcal{A}_{\mathrm{pin}}\) by \[\mathcal{A}_{\mathrm{pin}}=\mathcal{A}_{\mathrm{pin}}(P_{0},P_{1},L):=\left\{ \,\gamma\in W^{2,p}_{\mathrm{arc}}(0,L;\mathbf{R}^{2})\;\big{|}\;\gamma(0)=P_ {0},\;\gamma(L)=P_{1}\;\right\},\] equipped with the \(W^{2,p}\)-Sobolev metric. (Recall that \(W^{2,p}(0,L)\subset C^{1}([0,L])\).) As usual, \(\gamma\in\mathcal{A}_{\mathrm{pin}}\) is called a global minimizer (resp. local minimizer) if \(\mathcal{B}_{p}[\eta]\geq\mathcal{B}_{p}[\gamma]\) holds for all \(\eta\in\mathcal{A}_{\mathrm{pin}}\) (resp. for all \(\eta\in\mathcal{A}_{\mathrm{pin}}\) in a neighborhood of \(\gamma\)). Previous results ensure that there are infinitely many critical points, and among them global minimizers are unique up to isometries [35]. As for local minimality (stability), the pinned boundary condition allows any rotation of the tangent directions at the endpoints, so that critical points have a strong predisposition towards instability. Indeed, in the classical case \(p=2\), it was known that all but global minimizers are unstable; see e.g. Maddocks' linear stability analysis in 1984 [28]. For \(p\neq 2\), stability analysis is much more delicate since in general the \(p\)-bending energy involves a generic loss of regularity (where \(k=0\)) and is not compatible with standard theory.
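For readers who wish to experiment numerically, the following minimal sketch (our own, not part of the present analysis) approximates \(\mathcal{B}_{p}\) for a polygonal curve by summing \(|\text{turning angle}/\Delta s|^{p}\,\Delta s\) over interior vertices, and compares the result with the exact value \(L/R^{p}\) for a circular arc of radius \(R\) and length \(L\); all numerical parameters below are our own sample choices.

```python
# Minimal numerical sketch (ours, not from the paper): a discrete p-bending energy of a
# polygonal curve, tested on a circular arc of radius R and length L, where B_p = L / R**p.
import numpy as np

def discrete_bending_energy(points, p):
    seg = np.diff(points, axis=0)
    ds = np.linalg.norm(seg, axis=1)
    t = seg / ds[:, None]                                   # unit tangents of the edges
    cross = t[:-1, 0] * t[1:, 1] - t[:-1, 1] * t[1:, 0]
    dot = np.einsum('ij,ij->i', t[:-1], t[1:])
    turn = np.arctan2(cross, dot)                           # turning angle at each interior vertex
    h = 0.5 * (ds[:-1] + ds[1:])                            # arclength attributed to the vertex
    return float(np.sum(np.abs(turn / h) ** p * h))

R, L, p = 2.0, 3.0, 2.7                                     # sample values (our choice)
s = np.linspace(0.0, L, 2001)
arc = np.column_stack([R * np.cos(s / R), R * np.sin(s / R)])
print(discrete_bending_energy(arc, p), L / R**p)            # the two numbers nearly agree
```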
The classical case is the following: **Theorem 1.1** ([36]).: _If \(p\in(1,2]\), or if \(p\in(2,\infty)\) and \(|P_{1}-P_{0}|\leq\frac{1}{p-1}L\), then there are no local minimizers of \(\mathcal{B}_{p}\) in \(\mathcal{A}_{\mathrm{pin}}\) other than global minimizers._ Up to now it was completely open whether there exists a nontrivial local minimizer in the remaining case. In this paper we discover that the remaining case is surprisingly different from the classical case, proving that not only there emerge nontrivial local minimizers but also their energy can be arbitrarily large. **Theorem 1.2**.: _If \(p\in(2,\infty)\) and \(|P_{1}-P_{0}|>\frac{1}{p-1}L\), then there exists an uncountable family \(\{\gamma_{\delta}\}_{\delta}\) of local minimizers of \(\mathcal{B}_{p}\) in \(\mathcal{A}_{\mathrm{pin}}\) such that \(\sup_{\delta}\mathcal{B}_{p}[\gamma_{\delta}]=\infty\)._ This phenomenon is strongly due to degeneracy in the nonlinear diffusion term of the Euler-Lagrange equation. Our result shows for the first time that degeneracy induces new local minimizers in \(p\)-elastica theory. Even in a broader context of nonlinear diffusion, to the authors' knowledge, our theorem seems to provide the first (or at least unusual) example that degeneracy yields new local minimizers of arbitrarily high energy. Below we discuss this point in more details. Nonlinear diffusion equations are differential equations with nonlinear generalizations of the (linear) Laplacian \(\Delta\) in the top-order term. The \(p\)-Laplacian \(\Delta_{p}u:=\nabla\cdot(|\nabla u|^{p-2}\nabla u)\) is a typical example of a nonlinear diffusion operator, arising from the first variation of the \(p\)-Dirichlet energy \(\int_{\Omega}|\nabla u|^{p}dx.\) (See e.g. the books [13, 20, 24, 53] for details.) The case of \(p>2\) is said to be _degenerate_ since the ellipticity-related gradient term \(|\nabla u|^{p-2}\) in \(\Delta_{p}\) may vanish. In the degenerate case, solutions to elliptic equations involving \(\Delta_{p}\) may have nontrivial regions where the gradient vanishes, called _flat core_[49, 22], while solutions to parabolic equations may exhibit _slow diffusion_ with finite-speed propagation of free boundaries on the flat part [4, 21] (see also [10]). Similar phenomena are also observed for the porous medium equation \(u_{t}=\Delta(|u|^{m-1}u)\) with \(m>1\). Roughly speaking, degenerate diffusion makes solutions prefer to be flat, which can cause new stabilization effects. Degeneracy-induced transitions at \(p=2\) also appear in various ways, see e.g. Figalli-Zhang's recent stability result on the Sobolev inequality [16]. Our problem here is concerned with \(p\)_-elasticae_, i.e., critical points of the \(p\)-bending energy \(\mathcal{B}_{p}\) under the fixed-length constraint. The classical case \(p=2\) corresponds to the celebrated problem of Euler's elastica studied from the 18th century (see e.g. [45, 52, 23, 31, 47]). The quadratic bending energy arises from standard linear elasticity, while non-quadratic bending energies also appear in several contexts such as DNA cyclization or image processing. In fact, even for \(p\neq 2\), there are also many studies on \(p\)-elasticae (e.g. [34, 35, 26, 36, 54]) as well as related problems on the \(p\)-bending energy (e.g. [1, 2, 3, 4, 5, 6, 7, 8, 9, 11, 12, 14, 15, 17, 25, 29, 30, 37, 38, 39, 40, 41, 46, 30, 47, 48, 49, 50, 46, 41, 4, 46, 31, 4, 4, 41]). 
The Euler-Lagrange equation for \(p\)-elastica is formally given by \[p(|k|^{p-2}k)_{ss}+(p-1)|k|^{p}k-\lambda k=0,\quad\lambda\in\mathbf{R}. \tag{1.1}\] If \(p>2\), this equation also involves degeneracy where \(k\) vanishes. Here we point out that the top-order operator in (1.1) is rather of porous medium type (\(|u|^{p-2}u\))\({}^{\prime\prime}\), while in terms of tangential angle \(\theta\) the Euler-Lagrange equation involves exactly the \(p\)-Laplacian (\(|u^{\prime}|^{p-2}u^{\prime}\)) [54]: \[(|\theta_{s}|^{p-2}\theta_{s})_{s}=R\sin(\theta+\alpha).\] The degeneracy as well as singularity leads to generic loss of regularity and hence the analytical treatment of \(p\)-elastica is much more delicate than the classical one. Building on Watanabe's previous work [54], the authors recently obtained a complete classification of planar \(p\)-elasticae with explicit formulae and optimal regularity [34]. This in particular shows that in the non-degenerate regime \(p\in(1,2]\) the zero set of the curvature \(k\) is always discrete, while in the degenerate regime \(p\in(2,\infty)\) the zero set may contain nontrivial intervals, which are also called flat core in this context. In our subsequent study on the pinned boundary value problem [35], we classified all pinned \(p\)-elasticae and proved that any global minimizer is given by a convex arc, unique up to invariances. Stability of \(p\)-elasticae is much more delicate. In [36] the authors developed a new geometric method to prove general instability results, which in particular imply Theorem 1.1. In the remaining case of Theorem 1.1, our instability theory still partially works, implying that any local minimizer is necessarily either a global minimizer (convex arc) or a _quasi-alternating_ flat-core pinned \(p\)-elastica [36, Corollary 2.14]. However, it was totally open whether there indeed exists a stable quasi-alternating elastica. Our main result, from which Theorem 1.2 directly follows, resolves this problem in a generic sense: **Theorem 1.3** (Theorem 4.1).: _Every alternating flat-core pinned \(p\)-elastica is stable._ Roughly speaking, an _alternating_ flat-core \(p\)-elastica has segments and loops located alternatingly, with no loop touching an endpoint (see Definition 2.9 and Figure 1 (iii)). The quasi-alternating class additionally allows segments to degenerate to points whenever the two neighboring loops are in a same direction (Figure 1 (iv)). Therefore, the alternating class can naturally be regarded as a generic subclass of the quasi-alternating class (see also Remark 2.11). Among many others, we briefly mention some closely related results for reaction-diffusion equations involving the \(p\)-Laplacian and a logistic-type nonlinearity, originating from e.g. [18, 22] (see also [19] and references therein). It is well known Figure 1. Examples of flat-core pinned \(p\)-elasticae. (i), (ii) Unstable [36]. (iii) Stable (alternating, Theorem 1.3). (iv) Open (quasi-alternating, Problem 1.4). that those equations may have stationary solutions with flat parts if \(p>2\). Their (dynamical) stability is however not yet completely understood even in 1D, in particular in the sign-changing case (with many flat parts). Since such solutions can be parametrized by the position of the transition layers, one can regard the parameter spaces as simplices [18, Theorem 2.2]. 
In terms of this simplex, Takeuchi-Yamada [51] proved instability in a boundary case, and later Takeuchi [48] proved a partial stability in an interior case, restricting the neighborhood. Full stability in the interior case is a long-standing open problem. Our results may be regarded as certain variational counterparts, since the tangential angle of a flat-core (or borderline) elastica may be regarded as a transition layer (cf. [31] for \(p=2\)). In particular, our present result gives a full variational stability in the interior case. We now explain the key idea for the proof of Theorem 1.3. As mentioned, stability of \(p\)-elasticae cannot be tackled by using standard theory in contrast to the case of \(p=2\), cf. [27, 28, 44, 45]. (See also [42, 43] for similar subtlety in dynamical stability.) Our proof proceeds in a nonstandard way; the local-minimality issue is reduced to a new (global) minimization problem subject to an auxiliary free boundary problem through a geometric relaxation procedure. More precisely, we first cut an alternating flat-core pinned \(p\)-elastica at the top of each loop and also an interior point of each inner segment. Then we prove that each piece of curve has an independent minimizing property even after a perturbation. This implies the desired local minimality of the whole curve. In this procedure, a straightforward choice of the boundary condition for each piece would be a kind of clamped boundary conditions (prescribing up to first order) in order to retain the admissibility of the whole curve. However, for such a choice it is usually hard to detect minimizers and compute their energy, cf. [31], even though there are explicit formulae for critical points. Our main ingenuity here lies in the delicate choice of the relaxed boundary condition, which we call the _hooked boundary condition_. This choice turns out to be very well suited: On one hand, it is so relaxed that we can use additional natural boundary conditions for restricting all critical points in "computable" forms. Here we heavily rely on our previous classification theory [34, 35], which describes \(p\)-elasticae in terms of \(p\)-elliptic functions introduced by the authors and of \(p\)-elliptic integrals introduced by Watanabe [54] with reference to Takeuchi's work [50]. Armed with those tools and resulting monotonicity properties, we can explicitly obtain unique global minimizers and represent their energy by complete \(p\)-elliptic integrals (Theorem 3.10). On the other hand, it is _not_ so relaxed that the minimality of the original curve cannot be recovered, even though the above relaxation allows our admissible set to include even discontinuous competitors (Lemma 4.2). The last minimality is eventually reduced to an elementary argument involving Jensen's inequality, which however crucially reflects the effect of degeneracy. It would be worth mentioning that the core of our relaxation technique is in the same spirit of some recent studies in different contexts, although the details are quite independent; an isoperimetric inequality for multiply-winding curves by [33], and on a Li-Yau type inequality in terms of the bending energy [32] as well as the \(p\)-bending energy [35]. The main characteristic here is that local minimality is reduced to global minimality. This point is subtle, and accordingly our study does not cover the full quasi-alternating class, leaving the following problem open: **Problem 1.4**.: Is every quasi-alternating flat-core pinned \(p\)-elastica stable? 
This paper is organized as follows: In Section 2 we prepare notation for \(p\)-elliptic functions and recall some known results for \(p\)-elasticae. In Section 3 we introduce the hooked boundary condition and classify all hooked \(p\)-elasticae (Theorem 3.7). Uniqueness of minimal hooked \(p\)-elasticae will be also deduced. In Section 4 we complete the proof of Theorems 1.2 and 1.3. ### Acknowledgements The first author is supported by JSPS KAKENHI Grant Numbers 18H03670, 20K14341, and 21H00990, and by Grant for Basic Science Research Projects from The Sumitomo Foundation. The second author is supported by JSPS KAKENHI Grant Number 22K20339. ## 2. Preliminary In this section we first recall from [34] the definitions and fundamental properties of \(p\)-elliptic integrals and functions, and also some known facts for planar \(p\)-elasticae. Throughout this paper, let \(e_{1},e_{2}\in\mathbf{R}^{2}\) be the canonical basis, and \(p\in(1,\infty)\) unless specified otherwise. For an arclength parametrized planar curve \(\gamma:[0,L]\to\mathbf{R}^{2}\), the function \(\theta:[0,L]\to\mathbf{R}\) denotes the tangential angle \(\partial_{s}\gamma=(\cos\theta,\sin\theta)\), and \(k:[0,L]\to\mathbf{R}\) denotes the (counterclockwise) signed curvature \(k=\partial_{s}\theta\). In addition, \(R_{\phi}\) denotes the counterclockwise rotation matrix through angle \(\phi\in\mathbf{R}\). ### \(p\)-Elliptic integrals and functions Now we recall the definitions of \(p\)-elliptic integrals introduced in [54]. Note that there are two types of generalizations, for example \(\mathrm{E}_{1,p}\) and \(\mathrm{E}_{2,p}\), but here we only use the first type such as \(\mathrm{E}_{1,p}\). **Definition 2.1**.: The _incomplete \(p\)-elliptic integral of the first kind_\(\mathrm{F}_{1,p}(x,q)\) of modulus \(q\in[0,1)\) is defined for \(x\in\mathbf{R}\) by \[\mathrm{F}_{1,p}(x,q):=\int_{0}^{x}\frac{|\cos\phi|^{1-\frac{2}{p}}}{\sqrt{1-q ^{2}\sin^{2}\phi}}\,d\phi,\] and also the corresponding _complete \(p\)-elliptic integral_\(\mathrm{K}_{1,p}(q)\) by \[\mathrm{K}_{1,p}(q):=\mathrm{F}_{1,p}(\pi/2,q).\] For \(q=1\), they are defined by \[\mathrm{F}_{1,p}(x,1):=\int_{0}^{x}\frac{d\phi}{|\cos\phi|^{\frac{2}{p}}}, \quad\text{where}\;\begin{cases}x\in(-\frac{\pi}{2},\frac{\pi}{2})&\text{if }\;\;1<p\leq 2,\\ x\in\mathbf{R}&\text{if }\;\;p>2,\end{cases}\] and \[\mathrm{K}_{1,p}(1)=\mathrm{K}_{p}(1):=\begin{cases}\infty&\text{if }\;\;1<p\leq 2,\\ \int_{0}^{\frac{\pi}{2}}\frac{d\phi}{(\cos\phi)^{\frac{2}{p}}}<\infty&\text{ if }\;\;p>2.\end{cases}\] Also, the _incomplete \(p\)-elliptic integral of the second kind_\(\mathrm{E}_{1,p}(x,q)\) of modulus \(q\in[0,1]\) is defined for \(x\in\mathbf{R}\) by \[\mathrm{E}_{1,p}(x,q):=\int_{0}^{x}\sqrt{1-q^{2}\sin^{2}\phi}\,|\cos\phi|^{1- \frac{2}{p}}\,d\phi,\] and also the corresponding _complete \(p\)-elliptic integral_\(\mathrm{E}_{1,p}(q)\) by \[\mathrm{E}_{1,p}(q):=\mathrm{E}_{1,p}(\pi/2,q).\] Note that, by definition and periodicity, for any \(x\in\mathbf{R}\), \(q\in[0,1)\), and \(n\in\mathbf{Z}\), \[\begin{split}\mathrm{E}_{1,p}(x+n\pi,q)&=\mathrm{E}_{ 1,p}(x,q)+2n\mathrm{E}_{1,p}(q),\\ \mathrm{F}_{1,p}(x+n\pi,q)&=\mathrm{F}_{1,p}(x,q)+2n \mathrm{K}_{1,p}(q).\end{split} \tag{2.1}\] The following lemma will play fundamental roles. **Lemma 2.2** ([54, Lemma 2]).: _Let \(Q_{p}:[0,1)\to\mathbf{R}\) be defined by_ \[Q_{p}(q):=2\frac{\mathrm{E}_{1,p}(q)}{\mathrm{K}_{1,p}(q)}-1,\quad q\in[0,1).\] _Then \(Q_{p}\) is strictly decreasing on \([0,1)\). 
Moreover, \(Q_{p}\) satisfies \(Q_{p}(0)=1\) and_ \[\lim_{q\uparrow 1}Q_{p}(q)=\begin{cases}-1&\text{if}\ \ 1<p\leq 2,\\ -\frac{1}{p-1}&\text{if}\ \ p>2.\end{cases}\] Next we recall \(p\)-elliptic and \(p\)-hyperbolic functions introduced in [34]. Here we focus on those we will use later; for example, we define \(\mathrm{sech}_{p}\) only for \(p>2\) (see [34, Definition 3.8] for \(p\in(1,2]\)). **Definition 2.3**.: Let \(q\in[0,1]\). The _amplitude function_\(\mathrm{am}_{1,p}(x,q)\) with modulus \(q\) is defined by the inverse functions of \(\mathrm{F}_{1,p}(x,q)\), i.e., for \(x\in\mathbf{R}\), \[x=\int_{0}^{\mathrm{am}_{1,p}(x,q)}\frac{|\cos\phi|^{1-\frac{2}{p}}}{\sqrt{1- q^{2}\sin^{2}\phi}}\,d\phi.\] The _\(p\)-elliptic sine_\(\mathrm{sn}_{p}(x,q)\) with modulus \(q\) is defined by \[\mathrm{sn}_{p}(x,q):=\sin\mathrm{am}_{1,p}(x,q),\quad x\in\mathbf{R}.\] The _\(p\)-elliptic cosine_\(\mathrm{cn}_{p}(x,q)\) with modulus \(q\) is defined by \[\mathrm{cn}_{p}(x,q):=|\cos\mathrm{am}_{1,p}(x,q)|^{\frac{2}{p}-1}\cos \mathrm{am}_{1,p}(x,q),\quad x\in\mathbf{R}. \tag{2.2}\] For \(p>2\), the _\(p\)-hyperbolic secant_\(\mathrm{sech}_{p}\,x\) is defined by \[\mathrm{sech}_{p}\,x:=\begin{cases}\mathrm{cn}_{p}(x,1),&x\in(-\mathrm{K}_{p} (1),\mathrm{K}_{p}(1)),\\ 0,&x\in\mathbf{R}\setminus(-\mathrm{K}_{p}(1),\mathrm{K}_{p}(1)).\end{cases} \tag{2.3}\] Moreover, the _\(p\)-hyperbolic tangent_\(\tanh_{p}x\) is defined by \[\tanh_{p}x:=\int_{0}^{x}(\mathrm{sech}_{p}\,t)^{p}dt,\quad x\in\mathbf{R}.\] **Proposition 2.4** ([34, Proposition 3.10]).: _Let \(\mathrm{cn}_{p}\) and \(\mathrm{sech}_{p}\) be given by (2.2) and (2.3), respectively. Then the following statements hold:_ 1. _For_ \(q\in[0,1)\)_,_ \(\mathrm{cn}_{p}(\cdot,q)\) _is an even_ \(2\mathrm{K}_{1,p}(q)\)_-antiperiodic function on_ \(\mathbf{R}\) _and, in_ \([0,2\mathrm{K}_{1,p}(q)]\)_, strictly decreasing from_ \(1\) _to_ \(-1\)_._ 2. _Let_ \(p>2\)_. Then_ \(\mathrm{sech}_{p}\) _is an even nonnegative function on_ \(\mathbf{R}\)_, and strictly decreasing in_ \([0,\mathrm{K}_{p}(1))\)_. Moreover,_ \(\mathrm{sech}_{p}\,0=1\) _and_ \(\mathrm{sech}_{p}\,x\to 0\) _as_ \(x\uparrow\mathrm{K}_{p}(1)\)_. In particular,_ \(\mathrm{sech}_{p}\) _is continuous on_ \(\mathbf{R}\)_._ In particular, this with \(\mathrm{cn}_{p}(\mathrm{K}_{1,p}(q),q)=0\) implies that \[Z_{p,q}:=\{\,x\in\mathbf{R}\mid\mathrm{cn}_{p}(x,q)=0\,\}=(2\mathbf{Z}+1) \mathrm{K}_{1,p}(q). \tag{2.4}\] ### \(p\)-Elasticae In this paper we call an arclength parametrized planar curve \(p\)_-elastica_ if its signed curvature \(k:[0,L]\to\mathbf{R}\) (parametrized by the arclength) belongs to \(L^{\infty}(0,L)\) and if there is \(\lambda\in\mathbf{R}\) such that \(k\) satisfies (EL) \[\int_{0}^{L}\Big{(}p|k|^{p-2}k\varphi^{\prime\prime}+(p-1)|k|^{p}k\varphi- \lambda k\varphi\Big{)}ds=0\] for any \(\varphi\in C_{\mathrm{c}}^{\infty}(0,L)\). Equation (EL) is a weak form of (1.1) and appears as the Euler-Lagrange equation for critical points of \(\mathcal{B}_{p}\) under the fixed-length constraint. As opposed to \(p=2\), the \(p\)-elasticae are not necessarily smooth. However at least the curvature is continuous, and also the curvature to a suitable power has more regularity enough to solve an Euler-Lagrange equation in the classical sense. **Proposition 2.5** ([34, Theorem 1.7 and Lemma 4.3]).: _Let \(p\in(1,\infty)\). If an arclength parametrized curve \(\gamma:[0,L]\to\mathbf{R}^{2}\) is a \(p\)-elastica, then \(\gamma\) has the signed curvature \(k\) such that \(k\in C([0,L])\). 
In addition, \(w:=|k|^{p-2}k\in C^{2}([0,L])\) and_ \[pw^{\prime\prime}+(p-1)|w|^{\frac{2}{p-1}}w-\lambda|w|^{\frac{2-p}{p-1}}w=0 \quad\text{in $[0,L]$}. \tag{2.5}\] Throughout this paper we also crucially use our explicit formulae for \(p\)-elasticae, focusing on those with vanishing curvature. To describe them, we introduce the following notation on a concatenation of curves. For \(\gamma_{j}:[a_{j},b_{j}]\to\mathbf{R}^{2}\) with \(L_{j}:=b_{j}-a_{j}\geq 0\), we define \(\gamma_{1}\oplus\gamma_{2}:[0,L_{1}+L_{2}]\to\mathbf{R}^{2}\) by \[(\gamma_{1}\oplus\gamma_{2})(s):=\begin{cases}\gamma_{1}(s+a_{1}),&s\in[0,L_{ 1}],\\ \gamma_{2}(s+a_{2}-L_{1})+\gamma_{1}(b_{1})-\gamma_{2}(a_{2}),&s\in[L_{1},L_{1 }+L_{2}].\end{cases}\] We inductively define \(\gamma_{1}\oplus\cdots\oplus\gamma_{N}:=(\gamma_{1}\oplus\cdots\oplus\gamma_{ N-1})\oplus\gamma_{N}\). We also write \[\bigoplus_{j=1}^{N}\gamma_{j}:=\gamma_{1}\oplus\cdots\oplus\gamma_{N}.\] **Proposition 2.6** ([34, Theorems 1.2, 1.3]).: _Let \(L>0\) and \(\gamma\in W^{2,p}_{\mathrm{arc}}(0,L;\mathbf{R}^{2})\) be a \(p\)-elastica whose signed curvature \(k\) has a zero in \([0,L]\). Then, up to similarity (i.e., translation, rotation, reflection, and dilation) and reparametrization, the curve \(\gamma\) is represented by \(\gamma(s)=\gamma_{*}(s+s_{0})\) with some \(s_{0}\in\mathbf{R}\), where \(\gamma_{*}:\mathbf{R}\to\mathbf{R}^{2}\) is one of the arclength parametrized curves \(\gamma_{\ell}\), \(\gamma_{w}\), \(\gamma_{f}\) defined as follows:_ * (Linear \(p\)-elastica)__\(\gamma_{\ell}(s):=(s,0)\)_._ * (Wavelike \(p\)-elastica) _For some_ \(q\in(0,1)\)_,_ (2.6) \[\gamma_{w}(s):=\gamma_{w}(s,q)=\begin{pmatrix}2\mathrm{E}_{1,p}(\mathrm{am}_{1,p}(s,q),q)-s\\ -q\frac{p}{p-1}|\,\mathrm{cn}_{p}(s,q)|^{p-2}\,\mathrm{cn}_{p}(s,q)\end{pmatrix}.\] _In this case, the tangential angle is given by_ \(\theta_{w}(s)=2\arcsin(q\,\mathrm{sn}_{p}(s,q))\) _and the signed curvature by_ \(k_{w}(s)=2q\,\mathrm{cn}_{p}(s,q)\)_._ * (Flat-core \(p\)-elastica: \(p>2\)) _For some integer_ \(N\geq 1\)_, signs_ \(\sigma_{1},\ldots,\sigma_{N}\in\{+,-\}\)_, and nonnegative numbers_ \(L_{1},\ldots,L_{N}\geq 0\)_,_ \[\gamma_{f}:=\bigoplus_{j=1}^{N}(\gamma_{\ell}^{L_{j}}\oplus\gamma_{b}^{\sigma_ {j}}),\] _where_ \(\gamma_{b}^{\pm}:[-{\rm K}_{p}(1),{\rm K}_{p}(1)]\to{\bf R}^{2}\) _and_ \(\gamma_{\ell}^{L_{j}}:[0,L_{j}]\to{\bf R}^{2}\) _are defined by_ \[\gamma_{b}^{\pm}(s)=\begin{pmatrix}2\tanh_{p}s-s\\ \mp\frac{p}{p-1}({\rm sech}_{p}\,s)^{p-1}\end{pmatrix},\quad\gamma_{\ell}^{L_ {j}}(s)=\begin{pmatrix}-s\\ 0\end{pmatrix}.\] _The curves_ \(\gamma_{b}^{\pm}(s)\) _have the tangential angles_ \(\theta_{b}^{\pm}(s)=\pm 2\operatorname{am}_{1,p}(s,1)\) _and the signed curvatures_ \(k_{b}^{\pm}(s)=\pm 2\operatorname{sech}_{p}s\) _for_ \(s\in[-{\rm K}_{p}(1),{\rm K}_{p}(1)]\)_. In particular, the signed curvature of_ \(\gamma_{f}\) _is given by_ \[k_{f}(s)=\sum_{j=1}^{N}\sigma_{j}2\operatorname{sech}_{p}(s-s_{j}),\quad\text {where}\quad s_{j}=(2j-1){\rm K}_{p}(1)+\sum_{i=1}^{j}L_{i}. \tag{2.7}\] _Remark 2.7_.: The vanishing curvature condition rules out orbitlike and borderline \(p\)-elasticae obtained in [34, Theorems 1.2, 1.3]. _Remark 2.8_.: The curve \(\gamma_{b}^{+}\) looks like a (finite) loop as in Figure 2. In particular, the tangential angle \(\theta_{b}^{+}:[-{\rm K}_{p}(1),{\rm K}_{p}(1)]\) is strictly monotone from \(-\pi\) to \(\pi\). By [34, Lemma 5.7] and symmetry we also deduce that \(\gamma_{b}^{+}(\pm{\rm K}_{p}(1))=\mp\frac{{\rm K}_{p}(1)}{p-1}e_{1}\). 
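The endpoint identity in Remark 2.8 and the \(q\uparrow 1\) limit in Lemma 2.2 can be confirmed numerically from the definitions above: for \(p>2\) both reduce to the ratio \(\int_{0}^{\pi/2}(\cos\phi)^{2-\frac{2}{p}}d\phi\,\big{/}\int_{0}^{\pi/2}(\cos\phi)^{-\frac{2}{p}}d\phi=\frac{p-2}{2(p-1)}\), since \(\mathrm{K}_{p}(1)\) is the second integral and, after the substitution \(\psi=\mathrm{am}_{1,p}(t,1)\), \(\tanh_{p}(\mathrm{K}_{p}(1))=\mathrm{E}_{1,p}(1)\) is the first. The short script below (our own sketch, scipy-based) evaluates both via the classical formula \(\int_{0}^{\pi/2}\cos^{a}\phi\,d\phi=\frac{\sqrt{\pi}}{2}\Gamma(\frac{a+1}{2})/\Gamma(\frac{a}{2}+1)\).

```python
# Numerical confirmation (a sketch, not from the source) of Remark 2.8 and of the
# q -> 1 limit in Lemma 2.2 for p > 2, via the cosine-moment formula
#   int_0^{pi/2} cos^a(phi) dphi = (sqrt(pi)/2) * Gamma((a+1)/2) / Gamma(a/2 + 1).
import numpy as np
from scipy.special import gamma
from scipy.integrate import quad

def cos_moment(a):
    return 0.5 * np.sqrt(np.pi) * gamma((a + 1) / 2) / gamma(a / 2 + 1)

for p in (2.5, 3.0, 7.0):
    Kp1 = cos_moment(-2 / p)        # K_p(1) = F_{1,p}(pi/2, 1), cf. Definition 2.1
    T = cos_moment(2 - 2 / p)       # tanh_p(K_p(1)) = E_{1,p}(1): same integral after substitution
    # cross-check the non-singular integral by direct quadrature
    val, _ = quad(lambda x: np.cos(x) ** (2 - 2 / p), 0, np.pi / 2)
    assert abs(T - val) < 1e-7
    # Remark 2.8: the first coordinate of gamma_b^+(K_p(1)) is 2*tanh_p(K_p(1)) - K_p(1) = -K_p(1)/(p-1)
    assert np.isclose(2 * T - Kp1, -Kp1 / (p - 1))
    # Lemma 2.2: Q_p(q) = 2*E_{1,p}(q)/K_{1,p}(q) - 1  ->  -1/(p-1)  as q -> 1
    assert np.isclose(2 * T / Kp1 - 1, -1 / (p - 1))
    print(p, 2 * T - Kp1, -Kp1 / (p - 1))
```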
In addition, we call \(\gamma\in\mathcal{A}_{\rm pin}\) a _pinned \(p\)-elastica_ if \(\gamma\) is a \(p\)-elastica with \(k(0)=k(L)=0\). This is the standard first-order necessary condition to be a local minimizer of \(\mathcal{B}_{p}\) in \(\mathcal{A}_{\rm pin}\) (see [35] for details). By [35, Theorem 1.1] we know that there are flat-core pinned \(p\)-elasticae. To be more precise, suppose that \(p>2\) and \(|P_{1}-P_{0}|\in[\frac{1}{p-1}L,L)\). Then \(\gamma\in\mathcal{A}_{\rm pin}\) is a pinned \(p\)-elastica if there are \(N\in{\bf N}\), \(r\in[\frac{1}{p-1},1)\), \(\boldsymbol{\sigma}=(\sigma_{1},\ldots,\sigma_{N})\in\{+,-\}^{N}\), and \(\boldsymbol{L}=(L_{1},\ldots,L_{N+1})\in[0,\infty)^{N+1}\) such that, up to similarity and reparametrization, the curve \(\gamma\) is given by \[\gamma_{\rm flat}:=\bigg{(}\bigoplus_{j=1}^{N}\big{(}\gamma_{\ell}^{L_{j}} \oplus\gamma_{b}^{\sigma_{j}}\big{)}\bigg{)}\oplus\gamma_{\ell}^{LN_{N+1}}, \tag{2.8}\] and in addition, the numbers \(p,r,N\) and \(\boldsymbol{L}\) satisfy \[\sum_{j=1}^{N+1}L_{j}=2N\frac{r-\frac{1}{p-1}}{1-r}{\rm K}_{p}(1). \tag{2.9}\] Figure 2. The profile of the loop \(\gamma_{b}^{+}\). Notice that the length \(\bar{L}\) of \(\gamma_{\text{flat}}\) is given by \(\bar{L}=2N\text{K}_{p}(1)+\sum_{j=1}^{N+1}L_{j}\), and \[\begin{split}\gamma_{\text{flat}}(\bar{L})-\gamma_{\text{flat}}(0 )&=-\bigg{(}\frac{2N}{p-1}\text{K}_{p}(1)+\sum_{j=1}^{N+1}L_{j} \bigg{)}e_{1},\\ \gamma_{\text{flat}}^{\prime}(\bar{L})=\gamma_{\text{flat}}^{ \prime}(0)&=-e_{1}.\end{split} \tag{2.10}\] In particular, since \(\gamma\in\mathcal{A}_{\text{pin}}\), we need to have \(r=\frac{|P_{0}-P_{1}|}{L}\). On the other hand, \(\boldsymbol{\sigma}\) is arbitrary, and also \(N\) and \(\boldsymbol{L}\) are arbitrary whenever (2.9) holds. Finally we recall the definition of the class of alternating flat-core \(p\)-elasticae introduced in [36, Section 6.3]. **Definition 2.9** (Alternating flat-core).: Let \(p>2\), \(\frac{1}{p-1}L<|P_{1}-P_{0}|<L\), and \(N\in\mathbf{N}\). We call \(\gamma\in\mathcal{A}_{\text{pin}}\) an _\(N\)-loop alternating flat-core \(p\)-elastica_ if, up to similarity and reparametrization, the curve \(\gamma\) is of the form (2.8) for \(r=\frac{|P_{0}-P_{1}|}{L}\), for some \(\boldsymbol{\sigma}=(\sigma_{1},\dots,\sigma_{N})\in\{+,-\}^{N}\), and for some strictly positive numbers \(\boldsymbol{L}=(L_{1},\dots,L_{N+1})\in(0,\infty)^{N+1}\) satisfying (2.9). Thanks to the strict positivity of \(\boldsymbol{L}\), any alternating flat-core \(p\)-elastica has the segments and the loops alternately (Figure 1 (iii)). Recall that this point is very delicate and important in view of stability -- in fact, by [36, Section 6.3] if either * \(L_{1}=0\) or \(L_{N+1}=0\) (Figure 1 (i)), or * there is \(1<j<N+1\) such that \(L_{j}=0\) and \(\sigma_{j-1}\neq\sigma_{j}\) (Figure 1 (ii)), then the corresponding curve is unstable under the pinned boundary condition. _Remark 2.10_.: The equality case \(\frac{L}{p-1}=|P_{1}-P_{0}|\) is slightly delicate in view of stability. In this case, the set \(\mathcal{A}_{\text{pin}}\) does admit flat-core pinned \(p\)-elasticae, but each of them needs to have a loop touching an endpoint, thus unstable by [36, Proposition 6.3]. This with [36, Corollaries 2.12 and 2.14] implies Theorem 1.1. _Remark 2.11_.: Relation (2.9) forms an \(N\)-simplex. 
Hence if \(|P_{0}-P_{1}|>\frac{L}{p-1}\), the space of flat-core pinned \(p\)-elasticae in \(\mathcal{A}_{\text{pin}}\) can be written as a disjoint union \(\bigcup_{N=1}^{\infty}E_{N}\), where each \(E_{N}\) is isomorphic to \(\{-1,1\}^{N}\times\Delta^{N}\). Here \(\Delta^{N}\) stands for the standard \(N\)-simplex. From this point of view, the alternating class corresponds to the interior part of \(\Delta^{N}\). The quasi-alternating class additionally contains a part of the boundary \(\partial\Delta^{N}\). The remaining part consists of unstable solutions. ## 3. Hooked \(p\)-elasticae Given \(p\in(1,\infty)\) and \(0<\ell<L\), we define a class of curves subject to a free boundary condition, which we call the _hooked boundary condition_, by \[\mathcal{A}_{\text{hook}} =\mathcal{A}_{\text{hook}}(\ell,L)\] \[:=\left\{\,\gamma\in W^{2,p}_{\text{arc}}(0,L;\mathbf{R}^{2})\, \big{|}\,\left(\gamma(L)-\gamma(0)\right)\cdot e_{1}=\ell,\ \gamma^{\prime}(L)=-e_{1}\,\right\}.\] As explained in the introduction, we will decompose an alternating flat-core \(p\)-elastica into a finite family of curves in \(\mathcal{A}_{\text{hook}}\) in order to reduce our stability problem to (global) minimization problems for those curves. In this paper we define hooked \(p\)-elasticae as follows: **Definition 3.1** (Hooked \(p\)-elastica).: We call \(\gamma\in\mathcal{A}_{\text{hook}}\) a _hooked \(p\)-elastica_ if \(\gamma\) is a \(p\)-elastica with curvature \(k\) such that the function \(w:=|k|^{p-2}k\in C^{2}([0,L])\) satisfies \(w(0)=w^{\prime}(L)=0\). The above definition is, as in the pinned case, the natural first-order necessary condition to be a (local) minimizer of \(\mathcal{B}_{p}\) in \(\mathcal{A}_{\mathrm{hook}}\). We postpone detailed arguments for this fact to Appendix A since we can argue similarly to our previous study of pinned \(p\)-elasticae [35] up to minor modifications. However, we stress that the condition \(w^{\prime}(L)=0\) appears (not in the pinned case but) only in the hooked case and makes the derivation slightly more delicate, but plays a very important role in our reduction process. In the following, we first classify all the possible hooked \(p\)-elasticae in Section 3.1. We then prove unique existence of minimal hooked \(p\)-elasticae and obtain the explicit minimal energy in Section 3.2. ### Classification for hooked \(p\)-elasticae In this subsection we will deduce classification of hooked \(p\)-elasticae. Combining definition of hooked \(p\)-elasticae with our previous classification result, we obtain the following **Lemma 3.2**.: _Let \(\gamma\in\mathcal{A}_{\mathrm{hook}}\) be a hooked \(p\)-elastica. Then \(\gamma\) is either a wavelike \(p\)-elastica or a flat-core \(p\)-elastica. In addition, the signed curvature \(k\) satisfies the following additional boundary condition:_ \[k(0)=0,\quad k(L)\neq 0,\quad k^{\prime}(L)=0. \tag{3.1}\] _Remark 3.3_ (Differentiability of the curvature).: By Proposition 2.5, the signed curvature is differentiable whenever \(k\neq 0\), and hence \(k^{\prime}(L)\) makes sense in (3.1). Proof of Lemma 3.2.: By \(w(0)=0\) in Definition 3.1 we deduce that \(k(0)=0\). This with Proposition 2.6 implies that \(\gamma\) is either linear, wavelike, or flat-core. In addition, the linear case is clearly ruled out by the hooked boundary condition. We next prove \(k(L)\neq 0\) by contradiction, so suppose \(k(L)=0\). This assumption together with Definition 3.1 implies \(w(L)=w^{\prime}(L)=0\). Recall that \(w\) satisfies (2.5) in the classical sense. 
Then, by the known classification [34, Theorem 4.1] on the corresponding Cauchy problem for equation (2.5), we see that either \(k\equiv 0\) or \(k\) is of flat-core type, i.e., of the form (2.7). Since \(\gamma\) is not linear, it is flat-core. Therefore, it suffices to check that there is no flat-core \(p\)-elastica satisfying both \(k(0)=k(L)=0\) and the hooked boundary condition. This follows by our previous classification of pinned \(p\)-elasticae [35, Theorem 1.1]; in fact, if \(k(0)=k(L)=0\) and \(\gamma\) is a flat-core \(p\)-elastica, then up to a similar transformation the curve \(\gamma\) is of the form (2.8). Hence by (2.10) the vectors \(\gamma^{\prime}(L)\) and \(\gamma(L)-\gamma(0)\) are in the same direction, which contradicts our hooked boundary condition. This implies \(k(L)\neq 0\). Finally, by Remark 3.3, we now have \(w^{\prime}(L)=(p-1)|k(L)|^{p-2}k^{\prime}(L)\), which together with the fact that \(w^{\prime}(L)=0\) implies \(k^{\prime}(L)=0\). The proof is complete. Now we go into more details. First we consider the wavelike case in Lemma 3.2. Recall from Proposition 2.6 that the curvature of wavelike \(p\)-elasticae is given in terms of \(\mathrm{cn}_{p}\). In order to characterize the points where \(k^{\prime}\) vanishes, we prepare the following **Lemma 3.4**.: _Let \(q\in(0,1)\). Then_ \[\big{\{}\,x\in\mathbf{R}\,\big{|}\,\,\frac{\partial}{\partial x}\,\mathrm{cn} _{p}(x,q)=0\,\big{\}}=\begin{cases}\mathbf{Z}\mathrm{K}_{1,p}(q),&\text{ if }\ p\in(1,2),\\ 2\mathbf{Z}\mathrm{K}_{1,p}(q),&\text{ if }\ p\in[2,\infty).\end{cases}\] Proof.: By definition of \(\operatorname{cn}_{p}\) we obtain \[\frac{\partial}{\partial x}\operatorname{cn}_{p}(x,q)=-\frac{2}{p}|\cos\operatorname {am}_{1,p}(x,q)|^{\frac{4}{p}-2}\sin\operatorname{am}_{1,p}(x,q)\sqrt{1-q^{2} \sin^{2}\operatorname{am}_{1,p}(x,q)}\] for \(x\in\mathbf{R}\setminus Z_{p,q}\), where \(Z_{p,q}\) is given by (2.4). Since \(\frac{4}{p}-2>0\) is equivalent to \(p<2\), if \(p\in(1,2)\), then \(\frac{\partial}{\partial x}\operatorname{cn}_{p}(x,q)\) is well defined as well as \(x\in Z_{p,q}\). Thus we see that \(\frac{\partial}{\partial x}\operatorname{cn}_{p}(x,q)=0\) holds if and only if \[\begin{cases}\cos\operatorname{am}_{1,p}(x,q)=0&\text{or}\quad\sin \operatorname{am}_{1,p}(x,q)=0&\text{for}\ \ p\in(1,2),\\ \sin\operatorname{am}_{1,p}(x,q)=0&\text{for}\ \ p\in[2,\infty).\end{cases} \tag{3.2}\] Since \(\operatorname{am}_{1,p}\) is the inverse of the strictly increasing and periodic function \(\operatorname{F}_{1,p}\), cf. (2.1), for any \(n\in\mathbf{Z}\) and \(x\in\mathbf{R}\), \[\operatorname{am}_{1,p}(x+2n\mathrm{K}_{1,p}(q),q)=\operatorname{am}_{1,p}(x, q)+n\pi. \tag{3.3}\] This together with \(\operatorname{am}_{1,p}(\operatorname{K}_{1,p}(q),q)=\pi/2\) and \(\operatorname{am}_{1,p}(0,q)=0\) implies that \[\{\,x\in\mathbf{R}\mid\cos\operatorname{am}_{1,p}(x,q)=0\,\}=(2 \mathbf{Z}+1)\mathrm{K}_{1,p}(q),\] \[\{\,x\in\mathbf{R}\mid\sin\operatorname{am}_{1,p}(x,q)=0\,\}=2 \mathbf{Z}\mathrm{K}_{1,p}(q).\] This with (3.2) completes the proof. This lemma together with (2.4) directly implies the following **Corollary 3.5**.: _Let \(p\in(1,\infty)\), \(q\in(0,1)\), and \(x\in\mathbf{R}\). Then \(\operatorname{cn}_{p}(x,q)\neq 0\) and \(\frac{\partial}{\partial x}\operatorname{cn}_{p}(x,q)=0\) hold simultaneously if and only if \(x\in 2\mathbf{Z}\mathrm{K}_{1,p}(q)\)._ Now we turn to the flat-core case in Lemma 3.2. In view of \(k(L)\neq 0\), any hooked flat-core \(p\)-elastica has a loop part around \(s=L\). 
Recall from Proposition 2.6 that the curvature of the loop part is given in terms of \(\operatorname{sech}_{p}\big|_{[-\mathrm{K}_{p}(1),\mathrm{K}_{p}(1)]}\). In order to characterize the property that \(k(L)\neq 0\) and \(k^{\prime}(L)=0\) in (3.1), we prepare the following

**Lemma 3.6**.: _Let \(p>2\) and \(x\in(-\mathrm{K}_{p}(1),\mathrm{K}_{p}(1))\). Then \(\operatorname{sech}^{\prime}_{p}x=0\) holds if and only if \(x=0\)._

Proof.: Note that the differentiability of \(\operatorname{sech}_{p}\) on \(\mathbf{R}\setminus\{\pm\mathrm{K}_{p}(1)\}\) is already shown in [34, Theorem 3.16]. Let \(x\in(-\mathrm{K}_{p}(1),\mathrm{K}_{p}(1))\). By direct computation we have
\[\operatorname{sech}^{\prime}_{p}x=-\frac{2}{p}\big(\cos\operatorname{am}_{1,p}(x,1)\big)^{\frac{4}{p}-1}\sin\operatorname{am}_{1,p}(x,1),\]
and also \(|\operatorname{am}_{1,p}(x,1)|<\pi/2\), so that \(\operatorname{sech}^{\prime}_{p}x=0\) if and only if \(x=0\).

Now we are in a position to state the classification of hooked \(p\)-elasticae with an explicit parametrization in terms of \(\gamma_{w}\), \(\gamma_{b}^{\pm}\), and \(\gamma_{\ell}\) introduced in Proposition 2.6.

**Theorem 3.7** (Classification of hooked \(p\)-elasticae).: _Let \(p\in(1,\infty)\) and \(0<\ell<L\). Let \(\gamma\in\mathcal{A}_{\mathrm{hook}}\) be a hooked \(p\)-elastica._

1. _If_ \(p\in(1,2]\)_, or if_ \(p\in(2,\infty)\) _and_ \(\ell\in(0,\frac{1}{p-1}L)\)_, then, up to vertical translation and reflection,_ \(\gamma\) _is given by_ (3.4) \[\gamma(s)=\tfrac{1}{\alpha_{n}}R_{\pi}\Big(\gamma_{w}(\alpha_{n}s+\mathrm{K}_{1,p}(q),q)-\gamma_{w}(\mathrm{K}_{1,p}(q),q)\Big),\quad s\in[0,L],\] \[\alpha_{n}:=\frac{(2n-1)\mathrm{K}_{1,p}(q)}{L},\] _for some_ \(n\in\mathbf{N}\)_, where_ \(q\in(0,1)\) _is the unique solution of_ (3.5) \[2\frac{\mathrm{E}_{1,p}(q)}{\mathrm{K}_{1,p}(q)}-1=-\frac{\ell}{L}.\]
2. _If_ \(p\in(2,\infty)\) _and_ \(\ell\in[\frac{1}{p-1}L,L)\)_, then, up to vertical translation (and reflection),_ \(\gamma\) _is given by_ (3.6) \[\gamma(s)=\frac{1}{\bar{\alpha}_{n}}R_{\pi}\,\Gamma_{n}\left(\bar{\alpha}_{n}s\right),\quad s\in[0,L],\] (3.7) \[\bar{\alpha}_{n}:=(2n-1)\frac{1}{L-\ell}\frac{p-2}{p-1}\mathrm{K}_{p}(1),\] _for some_ \(n\in\mathbf{N}\)_, where_ \(\Gamma_{n}\) _is the arclength parametrized curve defined by_ (3.8) \[\Gamma_{1}:=\gamma_{\ell}^{L_{1}}\oplus\left(\gamma_{b}^{\sigma_{1}}\big|_{[-\mathrm{K}_{p}(1),0]}\right),\qquad\Gamma_{n}:=\Big(\bigoplus_{j=1}^{n-1}\big(\gamma_{\ell}^{L_{j}}\oplus\gamma_{b}^{\sigma_{j}}\big)\Big)\oplus\gamma_{\ell}^{L_{n}}\oplus\left(\gamma_{b}^{\sigma_{n}}\big|_{[-\mathrm{K}_{p}(1),0]}\right)\quad(n\geq 2),\] _for some_ \(\sigma_{1},\ldots,\sigma_{n}\in\{+,-\}\) _and_ \(L_{1},\ldots,L_{n}\geq 0\) _such that_ (3.9) \[\sum_{j=1}^{n}L_{j}=(2n-1)\frac{\frac{\ell}{L}-\frac{1}{p-1}}{1-\frac{\ell}{L}}\mathrm{K}_{p}(1).\]

Proof.: Note first that \(\gamma\) is either wavelike or flat-core, by Proposition 2.6 and Lemma 3.2. In what follows, we mainly prove the following propositions.

* (Case 1) \(\gamma\) is wavelike if and only if the assertion in (i) holds.
* (Case 2) \(\gamma\) is flat-core if and only if the assertion in (ii) holds.

Along the way, we also observe that Case 1 occurs if and only if \(\frac{\ell}{L}<\frac{1}{p-1}\) (i.e., either \(p\in(1,2]\), or \(p\in(2,\infty)\) and \(\ell\in(0,\frac{1}{p-1}L)\)). This automatically means that Case 2 occurs if and only if \(\frac{\ell}{L}\geq\frac{1}{p-1}\) (i.e., \(p\in(2,\infty)\) and \(\ell\in[\frac{1}{p-1}L,L)\)).

**Case 1** (Wavelike \(p\)-elasticae).: Let \(\gamma\in\mathcal{A}_{\mathrm{hook}}\) be a wavelike hooked \(p\)-elastica. Up to a vertical translation, we may assume that \(\gamma(0)=(0,0)\).
Then Proposition 2.6 implies that \(\gamma\) is given by \[\gamma(s)=\frac{1}{\alpha}AR_{\phi}(\gamma_{w}(\alpha s+s_{0},q)-\gamma_{w}(s_ {0})) \tag{3.10}\] for some \(q\in(0,1)\), \(s_{0}\in\mathbf{R}\), \(\alpha>0\), \(\phi\in[0,2\pi)\), and \(A\in\{I,J\}\), where \(I\) denotes the identity and \(J\) denotes the vertical reflection \(P\mapsto P-2(P\cdot e_{2})e_{2}\), both given by \(2\times 2\) matrices. By Proposition 2.6, the curvature of \(\gamma\) is of the form \(k(s)=\pm 2\alpha q\,\mathrm{cn}_{p}(\alpha s+s_{0},q)\). Now we use the boundary conditions in Lemma 3.2. Since \(k(0)=0\), by (2.4) we have \(s_{0}\in(2\mathbf{Z}+1)\mathrm{K}_{1,p}(q)\); by periodicity we may assume that \(s_{0}\in\{\mathrm{K}_{1,p}(q),-\mathrm{K}_{1,p}(q)\}\); by symmetry, up to reflection (i.e., changing \(A\) if necessary) we may eventually assume that \[s_{0}=\mathrm{K}_{1,p}(q). \tag{3.11}\] Moreover, since \(k(L)\neq 0\) and \(k^{\prime}(L)=0\), by Corollary 3.5 we have \(\alpha L+\mathrm{K}_{1,p}(q)\in 2\mathbf{Z}\mathrm{K}_{1,p}(q)\), or equivalently \(\alpha L\in(2\mathbf{Z}-1)\mathrm{K}_{1,p}(q)\). Since \(\alpha,L>0\), this means that there is some \(n\in\mathbf{N}\) such that \[\alpha=\frac{(2n-1)\mathrm{K}_{1,p}(q)}{L}\ (=\alpha_{n}). \tag{3.12}\] By (3.11), (3.12), and the formula for \(\theta_{w}\) in Proposition 2.6, we also deduce that \[|\theta(L)|=\phi+\theta_{w}(\alpha L+s_{0})=\phi+2\arcsin(q\operatorname{sn}_{p} (2n\mathrm{K}_{1,p}(q),q))=\phi\pmod{2\pi\mathbf{Z}}.\] Since \(\gamma^{\prime}(L)=-e_{1}\) by definition of \(\mathcal{A}_{\mathrm{hook}}\), we need to have \[\phi=\pi. \tag{3.13}\] The necessary conditions in (3.10)-(3.13) already imply (3.4). Moreover, in addition to (3.10)-(3.13), by using \((\gamma(L)-\gamma(0))\cdot e_{1}=\ell\) and also formula (2.6), we deduce that \[-\frac{L}{(2n-1)\mathrm{K}_{1,p}(q)}\Big{(}2\mathrm{E}_{1,p}( \mathrm{am}_{1,p}(2n\mathrm{K}_{1,p}(q),q),q)-2n\mathrm{K}_{1,p}(q)\] \[-2\mathrm{E}_{1,p}(\mathrm{am}_{1,p}(\mathrm{K}_{1,p}(q),q),q)+ \mathrm{K}_{1,p}(q)\Big{)}=\ell.\] By \(\mathrm{am}_{1,p}(\mathrm{K}_{1,p}(q),q)=\frac{\pi}{2}\), (2.1), and (3.3), we thus find that (3.5) is also necessary to hold. By Lemma 2.2, equation (3.5) has a solution \(q\in(0,1)\) (if and) only if \(\frac{\ell}{L}<\frac{1}{p-1}\), and such a solution is unique if exists. In summary, if \(\gamma\) is a wavelike \(p\)-elastica, then the assertion in (i) holds true, and also necessarily \(\frac{\ell}{L}<\frac{1}{p-1}\). Conversely, if the assertion in (i) holds true, then it is clear that \(\gamma\) needs to be a wavelike \(p\)-elastica, while it is also necessary that \(\frac{\ell}{L}<\frac{1}{p-1}\) by Lemma 2.2. **Case 2** (Flat-core \(p\)-elasticae).: Let \(\gamma\in\mathcal{A}_{\mathrm{hook}}\) be a flat-core hooked \(p\)-elastica. Up to translation, we may assume that \(\gamma(0)=(0,0)\). Then, by Proposition 2.6, \[\gamma(s)=\frac{1}{\alpha}R_{\phi}\big{(}\gamma_{f}(\alpha s+s_{0})-\gamma_{f }(s_{0})\big{)},\quad\gamma_{f}=\bigoplus_{j=1}^{m}\big{(}\gamma_{\ell}^{L_{j }}\oplus\gamma_{b}^{\sigma_{j}}\big{)}, \tag{3.14}\] for some \(m\in\mathbf{N}\), \(s_{0}\in\mathbf{R}\), \(\alpha>0\), \(\phi\in[0,2\pi)\), \(\boldsymbol{\sigma}=(\sigma_{1},\ldots,\sigma_{m})\in\{+,-\}^{m}\), and \(\boldsymbol{L}=(L_{1},\ldots,L_{m})\in[0,\infty)^{m}\). (The vertical reflection corresponds to the replacement of \(\boldsymbol{\sigma}\) with \(-\boldsymbol{\sigma}:=(-\sigma_{1},\ldots,-\sigma_{m})\).) Now we use the boundary condition in Lemma 3.2. 
Since the curvature of \(\gamma\) vanishes at \(s=0\), by changing \(m\), \(\boldsymbol{\sigma}\), and \(\boldsymbol{L}\) if necessary, we may assume that \(0\leq s_{0}\leq L_{1}/\alpha\), and then, by replacing \(L_{1}\) with \(L_{1}-\alpha s_{0}\), we may assume that \(s_{0}=0\) in (3.14). Thus the form (3.14) can be reduced to \[\gamma(s)=\frac{1}{\alpha}R_{\phi}\gamma_{f}(\alpha s),\quad\gamma_{f}= \bigoplus_{j=1}^{m}\big{(}\gamma_{\ell}^{L_{j}}\oplus\gamma_{b}^{\sigma_{j}} \big{)}. \tag{3.15}\] By Proposition 2.6, the curvature of \(\gamma\) is given by \(k(s)=\alpha k_{f}(\alpha s)\), where \[k_{f}(s)=\sum_{j=1}^{m}2\sigma_{j}\operatorname{sech}_{p}(s-s_{j}),\quad s_{j }:=(2j-1)\mathrm{K}_{p}(1)+\sum_{i=1}^{j}L_{i}.\] Since \(k(L)\neq 0\) and \(k^{\prime}(L)=0\), we deduce from the above form combined with Lemma 3.6 that \(\alpha L=s_{n}\) for some \(n\in\{1,\ldots,m\}\), which is equivalent to \[\alpha=\frac{1}{L}\Big{(}(2n-1)\mathrm{K}_{p}(1)+\sum_{j=1}^{n}L_{j}\Big{)} \tag{3.16}\] for some \(n\in\{1,\ldots,m\}\). This also means that \(\gamma_{f}\) in (3.15) satisfies \(\gamma_{f}(\alpha L)=\gamma_{b}^{\sigma_{n}}(0)\), and hence \(\gamma_{f}\) coincides with \(\Gamma_{n}\) defined by (3.8). Next we use the original hooked boundary condition. Let \(\theta\) be the tangential angle of \(\gamma\). Recall from Remark 2.8 (and Figure 2) that for the loop \(\gamma_{b}^{\sigma_{j}}\) satisfies \(\theta_{b}^{\sigma_{j}}(\mathrm{K}_{p}(1))-\theta_{b}^{\sigma_{j}}(-\mathrm{K}_ {p}(1))=2\sigma_{j}\pi\) while the half loop \(\gamma_{b}^{\sigma_{n}}|_{[-\mathrm{K}_{p}(1),0]}\) satisfies \(\theta_{b}^{\sigma_{j}}(0)-\theta_{b}^{\sigma_{j}}(-\mathrm{K}_{p}(1))=\sigma_{n }\pi\). This together with \(\theta(0)=\pi\ (\mathrm{mod}\ 2\pi\mathbf{Z})\) implies \[\theta(L)=\phi+\sigma_{n}\pi+2\pi\sum_{j=1}^{n-1}\sigma_{j}\pmod{2\pi\mathbf{ Z}},\] where the last sum is interpreted as \(0\) if \(n=1\). Since \(\gamma^{\prime}(L)=-e_{1}\) by definition of \(\mathcal{A}_{\mathrm{hook}}\), we need to have \[\phi=\pi. \tag{3.17}\] By Remark 2.8 and symmetry (cf. Figure 2) we have \(\gamma_{b}^{\pm}(\mathrm{K}_{p}(1))-\gamma_{b}^{\pm}(-\mathrm{K}_{p}(1))=- \frac{2}{p-1}\mathrm{K}_{p}(1)e_{1}\) and \(\gamma_{b}^{\pm}(0)-\gamma_{b}^{\pm}(-\mathrm{K}_{p}(1))=-\frac{1}{p-1}\mathrm{ K}_{p}(1)e_{1}\). Combining these with \((\gamma(L)-\gamma(0))\cdot e_{1}=\ell\) from definition of \(\mathcal{A}_{\mathrm{hook}}\), we deduce that \[\frac{1}{\alpha}\Big{(}\frac{2n-1}{p-1}\mathrm{K}_{p}(1)+\sum_{j=1}^{n}L_{j} \Big{)}=\ell. \tag{3.18}\] Combining this with (3.16), we find that \(\alpha\) in (3.15) is equal to \(\bar{\alpha}_{n}\) defined by (3.7), and we need to have (3.9). Consequently, the necessary conditions in (3.14)-(3.18) imply (3.6) and (3.7). Now we find that if \(\gamma\) is a flat-core \(p\)-elastica, then the assertion in (ii) holds true. As in Case 1, the converse is obvious. (We can also directly observe that \(\frac{L}{p-1}\leq\ell\) is necessary and sufficient for Case 2, but logically we need not verify it.) The proof is complete. ### Global minimizers Thanks to Theorem 3.7 we can detect global minimizers by comparing the energy of all hooked \(p\)-elasticae. We first prove the existence of global minimizers by the standard direct method. 
**Proposition 3.8**.: _Given \(p\in(1,\infty)\) and \(0<\ell<L\), there exists a solution to the following minimization problem:_
\[\min_{\gamma\in\mathcal{A}_{\mathrm{hook}}}\mathcal{B}_{p}[\gamma].\]

Proof.: Let \(\{\gamma_{j}\}_{j\in\mathbf{N}}\subset\mathcal{A}_{\mathrm{hook}}\) be a minimizing sequence of \(\mathcal{B}_{p}\) in \(\mathcal{A}_{\mathrm{hook}}\), i.e.,
\[\lim_{j\to\infty}\mathcal{B}_{p}[\gamma_{j}]=\inf_{\gamma\in\mathcal{A}_{\mathrm{hook}}}\mathcal{B}_{p}[\gamma]. \tag{3.19}\]
We may suppose that, up to translation, \(\gamma_{j}(0)=(0,0)\). By (3.19), there is \(C>0\) such that \(\mathcal{B}_{p}[\gamma_{j}]\leq C\), and this together with the fact that \(\|\gamma_{j}^{\prime\prime}\|_{L^{p}}^{p}=\mathcal{B}_{p}[\gamma_{j}]\) yields a uniform estimate of \(\|\gamma_{j}^{\prime\prime}\|_{L^{p}}\). Using \(|\gamma_{j}^{\prime}|\equiv 1\) and \(\gamma_{j}(0)=(0,0)\), we also obtain bounds on the \(W^{1,p}\)-norm. Therefore, \(\{\gamma_{j}\}_{j\in\mathbf{N}}\) is uniformly bounded in \(W^{2,p}(0,L;\mathbf{R}^{2})\), so that there is a subsequence (without relabeling) that converges in the weak \(W^{2,p}\) and \(C^{1}\) topologies. Thus the limit curve \(\gamma_{\infty}\) satisfies \(\gamma_{\infty}\in W^{2,p}(0,L;\mathbf{R}^{2})\), \(|\gamma_{\infty}^{\prime}|\equiv 1\), \((\gamma_{\infty}(L)-\gamma_{\infty}(0))\cdot e_{1}=\ell\), and \(\gamma_{\infty}^{\prime}(L)=-e_{1}\), which implies that \(\gamma_{\infty}\in\mathcal{A}_{\mathrm{hook}}\). In addition, since \(\gamma_{\infty}\) is parametrized by its arclength, the weak lower semicontinuity of \(\|\cdot\|_{L^{p}}\) ensures that
\[\mathcal{B}_{p}[\gamma_{\infty}]=\|\gamma_{\infty}^{\prime\prime}\|_{L^{p}}^{p}\leq\liminf_{j\to\infty}\|\gamma_{j}^{\prime\prime}\|_{L^{p}}^{p}=\liminf_{j\to\infty}\mathcal{B}_{p}[\gamma_{j}].\]
Thus we see that \(\gamma_{\infty}\) is a minimizer of \(\mathcal{B}_{p}\) in \(\mathcal{A}_{\mathrm{hook}}\).

We also recall the following lemma to calculate the \(p\)-bending energy of hooked \(p\)-elasticae.

**Lemma 3.9** ([35, Lemma 4.2]).: _For each \(q\in(0,1)\) (including \(q=1\) if \(p>2\)),_
\[\int_{0}^{\mathrm{K}_{1,p}(q)}|\operatorname{cn}_{p}(s,q)|^{p}\,ds=\frac{1}{q^{2}}\mathrm{E}_{1,p}(q)+\Big(1-\frac{1}{q^{2}}\Big)\mathrm{K}_{1,p}(q).\]

We now prove the main theorem for global minimizers.

**Theorem 3.10** (Minimal hooked \(p\)-elasticae).: _Let \(p\in(1,\infty)\), \(0<\ell<L\), and \(\gamma\in\mathcal{A}_{\mathrm{hook}}\)._

1. _If_ \(p\in(1,2]\)_, or if_ \(p\in(2,\infty)\) _and_ \(\ell\in(0,\frac{1}{p-1}L)\)_, then for the unique solution_ \(q\in(0,1)\) _to (_3.5_),_ (3.20) \[\mathcal{B}_{p}[\gamma]\geq\frac{(2q)^{p}\,\mathrm{K}_{1,p}(q)^{p-1}}{L^{p-1}}\Big(\frac{1}{q^{2}}\mathrm{E}_{1,p}(q)+\Big(1-\frac{1}{q^{2}}\Big)\mathrm{K}_{1,p}(q)\Big),\] _where equality holds if and only if_ \(\gamma\) _is given by (_3.4_) with_ \(n=1\)_, up to vertical translation and reflection._
2. _If_ \(p\in(2,\infty)\) _and_ \(\ell\in[\frac{1}{p-1}L,L)\)_, then_ (3.21) \[\mathcal{B}_{p}[\gamma]\geq 2^{p}\mathrm{K}_{1,p}(1)^{p-1}\mathrm{E}_{1,p}(1)\left(\frac{p-2}{p-1}\right)^{p-1}\frac{1}{(L-\ell)^{p-1}},\] _where equality holds if and only if_ \(\gamma\) _is given by (_3.6_) with_ \(n=1\)_, up to vertical translation (and reflection)._

Proof.: The existence of minimizers follows from Proposition 3.8. Fix any minimizer \(\gamma\in\mathcal{A}_{\mathrm{hook}}\). Then \(\gamma\) is a hooked \(p\)-elastica (cf. Appendix A). We divide the proof into two cases along the classification of hooked \(p\)-elasticae in Theorem 3.7. First we consider case (i).
In this case, up to translation and reflection, \(\gamma\) is given by (3.4) for some \(n\in\mathbf{N}\), and the signed curvature \(k\) of \(\gamma\) is \[k(s)=2q\alpha_{n}\operatorname{cn}_{p}\big{(}\alpha_{n}s+\mathrm{K}_{1,p}(q), q\big{)},\quad s\in[0,L].\] Therefore we have \[\mathcal{B}_{p}[\gamma]=(2q)^{p}\alpha_{n}^{p-1}(2n-1)\int_{0}^{\mathrm{K}_{1,p}(q)}|\operatorname{cn}_{p}(x,q)|^{p}\,dx,\] where we used the symmetry and periodicity of \(\operatorname{cn}_{p}\) in Proposition 2.4. The case \(n=1\) corresponds to a unique minimizer, and Lemma 3.9 implies (3.20). Next we address case (ii). In this case, up to a vertical translation (and reflection), the curve \(\gamma\) is given by (3.6) with (3.7) for some \(n\in\mathbf{N}\). Then, the signed curvature \(k\) of \(\gamma\) is \[k(s)=\bar{\alpha}_{n}k_{f}(\bar{\alpha}_{n}s),\quad k_{f}(s)=\sum_{j=1}^{n}2 \sigma_{j}\operatorname{sech}_{p}(s-s_{j}),\] where \(s_{j}:=(2j-1)\mathrm{K}_{p}(1)+\sum_{i=1}^{j}L_{i}\). Since \(s_{n}=\bar{\alpha}_{n}L\) holds as in the proof of Theorem 3.7, we have \[\mathcal{B}_{p}[\gamma]= \sum_{j=1}^{n-1}\int_{s_{j}-\mathrm{K}_{p}(1)}^{s_{j}+\mathrm{K} _{p}(1)}2^{p}\bar{\alpha}_{n}^{p-1}|\operatorname{sech}_{p}(s-s_{j})|^{p}\,ds\] \[\qquad+\int_{s_{n}-\mathrm{K}_{p}(1)}^{s_{n}}2^{p}\bar{\alpha}_{ n}^{p-1}|\operatorname{sech}_{p}(s-s_{n})|^{p}\,ds\] (the first sum is interpreted as \(0\) if \(n=1\)). From the symmetry and periodicity of \(\operatorname{sech}_{p}\) in Proposition 2.4 we deduce that \[\mathcal{B}_{p}[\gamma]=2^{p}\bar{\alpha}_{n}^{p-1}(2n-1)\int_{0}^{\operatorname {K}_{p}(1)}|\operatorname{sech}_{p}s|^{p}\,ds.\] The case \(n=1\) corresponds to a unique minimizer, and Lemma 3.9 (with \(q=1\)) implies (3.21). _Remark 3.11_.: By symmetry the same result also holds for \(\mathcal{A}_{\operatorname{hook}}\) replaced with \[\mathcal{A}^{\prime}_{\operatorname{hook}} =\mathcal{A}^{\prime}_{\operatorname{hook}}(\ell,L)\] \[:=\left\{\,\gamma\in W^{2,p}_{\operatorname{arc}}(0,L;\mathbf{R }^{2})\,\,\big{|}\,(\gamma(L)-\gamma(0))\cdot e_{1}=\ell,\ \gamma^{\prime}(0)=-e_{1}\,\right\}.\] _Remark 3.12_.: For the proof of Theorem 1.3 (Theorem 4.1) in the next section, we will use Theorem 3.10 only in the case that \(p\in(2,\infty)\) and \(\ell\in[\frac{1}{p-1}L,L)\). However, our full classification results in this section would be useful to highlight the idiosyncrasy of the degenerate case under consideration. ## 4. Stability of alternating flat-core \(p\)-elasticae In this section, we prove the desired stability of alternating flat-core \(p\)-elasticae. More precisely, by applying Theorem 3.10 we prove the following **Theorem 4.1**.: _Let \(p\in(2,\infty)\), \(P_{0},P_{1}\in\mathbf{R}^{2}\), and \(L>0\) such that \(\frac{L}{p-1}<|P_{1}-P_{0}|<L\). Let \(N\in\mathbf{N}\) and \(\gamma\in\mathcal{A}_{\operatorname{pin}}(P_{0},P_{1},L)\) be an \(N\)-loop alternating flat-core \(p\)-elastica (see Definition 2.9). Then \(\gamma\) is a local minimizer of \(\mathcal{B}_{p}\) in \(\mathcal{A}_{\operatorname{pin}}(P_{0},P_{1},L)\)._ The following is a key lemma deduced from Theorem 3.10. **Lemma 4.2**.: _Let \(p\in(2,\infty)\), \(L>0\), \(\ell\in(\frac{1}{p-1}L,L)\), and \(N\) be a positive integer. 
Let \(\{\gamma_{i}\}_{i=1}^{N}\) be an \(N\)-tuple of curves \(\gamma_{i}\in\mathcal{A}_{\operatorname{hook}}(\ell_{i},L_{i})\cup \mathcal{A}^{\prime}_{\operatorname{hook}}(\ell_{i},L_{i})\) with \(L_{i}>0\) and \(\ell_{i}\in[\frac{1}{p-1}L_{i},L_{i})\) such that \(\sum_{i=1}^{N}L_{i}=L\) and \(\sum_{i=1}^{N}\ell_{i}=\ell\). Then_ \[\sum_{i=1}^{N}\mathcal{B}_{p}[\gamma_{i}]\geq\frac{C_{p}N^{p}}{(L-\ell)^{p-1}},\] _where \(C_{p}:=2^{p}\operatorname{K}_{1,p}(1)^{p-1}\operatorname{E}_{1,p}(1)(\frac{p- 2}{p-1})^{p-1}\)._ Proof.: For each \(i\) we apply Theorem 3.10 or Remark 3.11 to deduce that \[\sum_{i=1}^{N}\mathcal{B}_{p}[\gamma_{i}]\geq C_{p}\sum_{i=1}^{N}(L_{i}-\ell_{ i})^{1-p}.\] Then Jensen's inequality applied to the convex function \(x\mapsto x^{1-p}\) implies that \[C_{p}\sum_{i=1}^{N}(L_{i}-\ell_{i})^{1-p}\geq C_{p}N\left(\frac{1}{N}\sum_{i=1 }^{N}(L_{i}-\ell_{i})\right)^{1-p}=C_{p}N^{p}(L-\ell)^{1-p},\] where in the last equality we have used \(\sum_{i=1}^{N}L_{i}=L\) and \(\sum_{i=1}^{N}\ell_{i}=\ell\). We are now in a position to complete the proof of Theorem 4.1. Proof of Theorem 4.1.: Fix any alternating flat-core \(p\)-elastica \(\gamma\). Up to similarity and reparametrization, we may only consider the case that \[\gamma=R_{\pi}\bigg{(}\Big{(}\bigoplus_{j=1}^{N}\big{(}\gamma_{\ell}^{L_{j}} \oplus\gamma_{b}^{\sigma_{j}}\big{)}\Big{)}\oplus\gamma_{\ell}^{L_{N+1}}\bigg{)}, \tag{4.1}\] for some \(N\in\mathbf{N}\), \(\{\sigma_{j}\}_{j=1}^{N}\subset\{+,-\}\), and \(L_{1},\dots,L_{N+1}>0\). In this case, we have \(\gamma\in\mathcal{A}_{\mathrm{pin}}(P_{0},P_{1},L)\) with \(P_{0}:=(0,0)\), \(P_{1}:=(\ell,0)\), \(\ell:=\frac{2N}{p-1}\mathrm{K}_{p}(1)+\sum_{j=1}^{N+1}L_{j}\), and \(L:=2N\mathrm{K}_{p}(1)+\sum_{j=1}^{N+1}L_{j}\). In particular, in the same way as the proof of Theorem 3.10 (ii), we can explicitly compute \[\mathcal{B}_{p}[\gamma]=\frac{C_{p}(2N)^{p}}{(L-\ell)^{p-1}}, \tag{4.2}\] where \(C_{p}=2^{p}\mathrm{K}_{1,p}(1)^{p-1}\mathrm{E}_{1,p}(1)(\frac{p-2}{p-1})^{p-1}\) as in Lemma 4.2. Now we prove that the above \(\gamma\) is a local minimizer of \(\mathcal{B}_{p}\) in \(\mathcal{A}_{\mathrm{pin}}\). Take an arbitrary sequence \(\{\gamma_{n}\}_{n\in\mathbf{N}}\subset\mathcal{A}_{\mathrm{pin}}\) such that \(\gamma_{n}\to\gamma\) in \(W^{2,p}(0,L;\mathbf{R}^{2})\), and hence also in \(C^{1}([0,L];\mathbf{R}^{2})\). It suffices to prove that \(\mathcal{B}_{p}[\gamma_{n}]\geq\mathcal{B}_{p}[\gamma]\) holds for all large \(n\). Choose a partition \(\{s_{i}\}_{i=1}^{2N+1}\) of \([0,L]\) as follows (see Figure 3). Let \(s_{1}:=0\), \(s_{2N+1}:=L\), \(s_{2i}:=(2i-1)\mathrm{K}_{p}(1)+\sum_{m=1}^{i}L_{m}\) for \(i\in\{1,\dots,N\}\), which corresponds to the midpoint of the \(i\)-th loop \(\gamma_{b}^{\sigma_{i}}\), and \(s_{2i-1}:=2(i-1)\mathrm{K}_{p}(1)+\sum_{m=1}^{i-1}L_{m}+\frac{1}{2}L_{i}\) for \(i\in\{2,\dots,N\}\), which corresponds to the midpoint of the \(i\)-th segment \(\gamma_{\ell}^{L_{i}}\). Let \[\tilde{L}_{i}:=s_{i+1}-s_{i},\quad\tilde{\ell}_{i}:=\big{(}\gamma(s_{i+1})- \gamma(s_{i})\big{)}\cdot e_{1}\quad(1\leq i\leq 2N). \tag{4.3}\] Then, by formula (4.1) (cf. 
Figures 2 and 3) we deduce that \[\tfrac{1}{p-1}\tilde{L}_{i}<\tilde{\ell}_{i}<\tilde{L}_{i}\quad(1\leq i\leq 2 N), \tag{4.4}\] and also that the tangent vector \(\gamma^{\prime}\) and the curvature \(k\) of \(\gamma\) satisfy \[\gamma^{\prime}(s_{2i})=-e_{1},\quad|k(s_{2i})|=2\neq 0\quad(1\leq i\leq N).\] This together with the \(C^{1}\)-convergence \(\gamma_{n}\to\gamma\) implies that we can pick sequences \(\{s_{2i,n}\}_{n=1}^{\infty}\subset[0,L]\) such that \[\gamma^{\prime}_{n}(s_{2i,n})=-e_{1},\quad\lim_{n\to\infty}s_{2i,n}=s_{2i} \quad(1\leq i\leq N). \tag{4.5}\] Now, for all large \(n\) we can define a partition \(\{s_{i,n}\}_{i=1}^{2N+1}\) of \([0,L]\) by taking \(s_{1,n}:=0\), \(s_{2N+1,n}:=L\), and \(s_{2i-1,n}:=s_{2i-1}\) for \(i\in\{2,\dots,N\}\). Let \[\tilde{L}_{i,n}:=s_{i+1,n}-s_{i,n},\quad\tilde{\ell}_{i,n}:=\big{(}\gamma_{n} (s_{i+1,n})-\gamma_{n}(s_{i,n})\big{)}\cdot e_{1}\quad(1\leq i\leq 2N). \tag{4.6}\] Then, since \(\gamma_{n}\in\mathcal{A}_{\mathrm{pin}}(P_{0},P_{1},L)\) with \((P_{1}-P_{0})\cdot e_{1}=\ell\), we have \[\sum_{i=1}^{2N}\tilde{L}_{i,n}=L,\quad\sum_{i=1}^{2N}\tilde{\ell}_{i,n}=\ell. \tag{4.7}\] In addition, by (4.3), (4.4), and the \(C^{1}\)-convergence we deduce that for all large \(n\), \[\tfrac{1}{p-1}\tilde{L}_{i,n}<\tilde{\ell}_{i,n}<\tilde{L}_{i,n}\quad(1\leq i \leq 2N). \tag{4.8}\] Therefore, in view of (4.5), (4.6), (4.7), and (4.8), for all large \(n\) the curve \(\gamma_{n}\in\mathcal{A}_{\rm pin}\) has a \(2N\) partition \(\{\gamma_{n}|_{[s_{i,n},s_{i+1,n}]}\}_{i=1}^{2N}\) as in the assumption of Lemma 4.2, and hence \[\mathcal{B}_{p}[\gamma_{n}]=\sum_{i=1}^{2N}\mathcal{B}_{p}[\gamma_{n}|_{[s_{i,n},s_{i+1,n}]}]\geq\frac{C_{p}(2N)^{p}}{(L-\ell)^{p-1}}=\mathcal{B}_{p}[ \gamma],\] where in the last part we used (4.2). The proof is complete. Proof of Theorem 1.2.: This immediately follows by Theorem 4.1. In particular, the uncountability follows by the freedom of \(\boldsymbol{L}\), while the divergence of the energy follows by taking \(N\to\infty\) in (4.2). _Remark 4.3_ (Rigidity).: In fact our argument can also imply the following rigidity: If a curve \(\gamma\in\mathcal{A}_{\rm pin}\) lies in a small neighborhood of a given alternating flat-core \(p\)-elastica \(\bar{\gamma}\), and if \(\mathcal{B}_{p}[\gamma]=\mathcal{B}_{p}[\bar{\gamma}]\), then \(\gamma\) is also an alternating flat-core \(p\)-elastica (possibly different from \(\bar{\gamma}\)). This is mainly due to the uniqueness in Theorem 3.10. ## Appendix A First variation arguments for hooked \(p\)-elasticae Here we check that any (local) minimizer of \(\mathcal{B}_{p}\) in \(\mathcal{A}_{\rm hook}(\ell,L)\) is a hooked \(p\)-elastica. Let \(W^{2,p}_{\rm imm}(0,1;\mathbf{R}^{2})\) denote the set of immersed \(W^{2,p}\)-curves, i.e., \[W^{2,p}_{\rm imm}(0,1;\mathbf{R}^{2}):=\left\{\,\gamma\in W^{2,p}(0,1; \mathbf{R}^{2})\,\right|\;|\gamma^{\prime}(t)|\neq 0\ \text{ for all }\ t\in[0,1]\,\right\},\] and define an immersed counterpart of \(\mathcal{A}_{\rm hook}\) by \[\mathcal{A}_{\rm hook}^{*} =\mathcal{A}_{\rm hook}^{*}(\ell,L)\] \[:=\left\{\,\gamma\in W^{2,p}_{\rm imm}(0,1;\mathbf{R}^{2})\, \right|\,(\gamma(1)-\gamma(0))\cdot e_{1}=\ell,\ \mathcal{L}[\gamma]=L,\ \mathbf{t}(L)=-e_{1}\,\right\},\] where \(\mathcal{L}\) denotes the length functional, i.e., \(\mathcal{L}[\gamma]:=\int_{\gamma}\,ds\), and \(\mathbf{t}:[0,L]\to\mathbf{S}^{1}\) denotes the unit tangent. 
It is clear that if \(\gamma\) is a (local) minimizer of \(\mathcal{B}_{p}\) in \(\mathcal{A}_{\rm hook}(\ell,L)\), then the curve \(\bar{\gamma}\) defined by \(\bar{\gamma}(t):=\gamma(Lt)\) for \(t\in[0,1]\) is a (local) minimizer of \(\mathcal{B}_{p}\) in \(\mathcal{A}_{\rm hook}^{*}\), thus in particular a critical point of \(\mathcal{B}_{p}\) in \(\mathcal{A}_{\rm hook}^{*}\). Here we define: * For \(\gamma\in\mathcal{A}_{\rm hook}^{*}\), we call a one-parameter family \(\varepsilon\mapsto\gamma_{\varepsilon}\in\mathcal{A}_{\rm hook}^{*}\)_admissible perturbation_ of \(\gamma\) in \(\mathcal{A}_{\rm hook}^{*}\) if \(\gamma_{0}=\gamma\) and if the derivative \(\left.\frac{d}{d\varepsilon}\gamma_{\varepsilon}\right|_{\varepsilon=0}\) exists. Figure 3. Decomposition of an alternating flat-core \(p\)-elastica. * We say that \(\gamma\in\mathcal{A}^{*}_{\mathrm{hook}}\) is a _critical point_ of \(\mathcal{B}_{p}\) in \(\mathcal{A}^{*}_{\mathrm{hook}}\) if for any admissible perturbation \((\varepsilon\mapsto\gamma_{\varepsilon})\) of \(\gamma\) in \(\mathcal{A}^{*}_{\mathrm{hook}}\) the first variation of \(\mathcal{B}_{p}\) vanishes: \[\frac{d}{d\varepsilon}\mathcal{B}_{p}[\gamma_{\varepsilon}]\Big{|}_{ \varepsilon=0}=0.\] Hence, in order to check that any (local) minimizer of \(\mathcal{B}_{p}\) in \(\mathcal{A}_{\mathrm{hook}}\) is a hooked \(p\)-elastica, it suffices to show that the arclength parametrized signed curvature \(k\) of any critical point of \(\mathcal{B}_{p}\) in \(\mathcal{A}^{*}_{\mathrm{hook}}\) satisfies the Euler-Lagrange equation (EL) and that \(w:=|k|^{p-2}k\) satisfies \(w(0)=w^{\prime}(L)=0\). (Recall \(w\in C^{2}([0,L])\) by Proposition 2.5.) First we deduce a weak form of the Euler-Lagrange equation for hooked \(p\)-elasticae. Note carefully that, although the general flow of the argument below is similar to our previous studies [34, 35], we need a slightly different approximation procedure since the boundary condition is of higher order. **Lemma A.1** (The Euler-Lagrange equation for hooked \(p\)-elasticae).: _Let \(\gamma\in\mathcal{A}^{*}_{\mathrm{hook}}\) be a critical point of \(\mathcal{B}_{p}\) in \(\mathcal{A}^{*}_{\mathrm{hook}}\). Then the (arclength parametrized) signed curvature \(k\) of \(\gamma\) satisfies \(k\in L^{\infty}(0,L)\) and there exists \(\lambda\in\mathbf{R}\) such that \(k\) satisfies (EL) for all \(\varphi\in W^{2,p}(0,L)\) with \(\varphi(0)=0\) and \(\varphi^{\prime}(L)=0\)._ Proof.: By the Lagrange multiplier method (cf. [55, Proposition 43.21]), there is a multiplier \(\lambda\in\mathbf{R}\) such that (A.1) \[\big{\langle}D\mathcal{B}_{p}[\gamma]+\lambda D\mathcal{L}[\gamma],h\big{\rangle}=0\] for all \(h\in W^{2,p}(0,1;\mathbf{R}^{2})\) with \((h(1)-h(0))\cdot e_{1}=0\) and \(h^{\prime}(1)\cdot e_{2}=0\), where \(D\mathcal{B}_{p}[\gamma]\) and \(D\mathcal{L}[\gamma]\) are the Frechet derivatives of \(\mathcal{B}_{p}\) and \(\mathcal{L}\) at \(\gamma\), respectively. By the known computation of the first derivative of \(\mathcal{B}_{p}\) (cf. 
[35, Lemmas A.3 and A.4]) and the change of variables \(\eta:=h\circ\sigma^{-1}\) with \(\sigma(t):=\int_{0}^{t}|\gamma^{\prime}|\), we can rewrite (A.1) in terms of the arclength parametrization \(\tilde{\gamma}\) of \(\gamma\in\mathcal{A}^{*}_{\mathrm{hook}}\): (A.2) \[\int_{0}^{L}\Big{(}(1-2p)|\tilde{\gamma}^{\prime\prime}|^{p}(\tilde{\gamma}^{ \prime}\cdot\eta^{\prime})+p|\tilde{\gamma}^{\prime\prime}|^{p-2}(\tilde{ \gamma}^{\prime\prime}\cdot\eta^{\prime\prime})+\lambda(\tilde{\gamma}^{ \prime}\cdot\eta^{\prime})\Big{)}ds=0\] for all \(\eta\in W^{2,p}(0,L;\mathbf{R}^{2})\) with (A.3) \[(\eta(L)-\eta(0))\cdot e_{1}=0,\quad\eta^{\prime}(L)\cdot e_{2}=0.\] Let \(k:[0,L]\to\mathbf{R}\) be the arclength parametrized signed curvature of \(\gamma\). By [34, Proposition 2.1] it directly follows that \(k\in L^{\infty}(0,L)\) as well as that (EL) holds for \(\varphi\in C^{\infty}_{\mathrm{c}}(0,L)\), or equivalently for \(\varphi\in W^{2,p}_{0}(0,L)\) up to approximation. In what follows we check that the boundary conditions for \(\varphi\) can be relaxed. Fix \(\varphi\in W^{2,p}(0,L)\) with \(\varphi(0)=\varphi^{\prime}(L)=0\) arbitrarily. Let \(\mathbf{t}\) and \(\mathbf{n}\) be the unit tangent vector and the unit normal vector of \(\tilde{\gamma}\), respectively, defined by \(\mathbf{t}(s):=\partial_{s}\tilde{\gamma}(s)\) and \(\mathbf{n}(s):=R_{\pi/2}\mathbf{t}(s)\), where \(R_{\theta}\) stands for the counterclockwise rotation matrix through angle \(\theta\in\mathbf{R}\). Recall that the Frenet-Serret formula yields \(\mathbf{t}^{\prime}(s)=k(s)\mathbf{n}(s)\) and \(\mathbf{n}^{\prime}(s)=-k(s)\mathbf{t}(s)\), and in particular \[\mathbf{n}(s)=\mathbf{n}(0)-\int_{0}^{s}k(\sigma)\mathbf{t}(\sigma)\,d\sigma, \quad s\in[0,L].\] Take any sequence \(\{k_{j}\}_{j\in\mathbf{N}}\subset C^{1}(0,L)\) such that \(k_{j}\to k\) in \(L^{p}(0,L)\) as \(j\to\infty\). For each \(j\in\mathbf{N}\), set \[\mathbf{n}_{j}(s):=\mathbf{n}(0)-\int_{0}^{s}k_{j}(\sigma)\mathbf{t}(\sigma)\,d\sigma.\] Then we see that \(\{\mathbf{n}_{j}\}_{j\in\mathbf{N}}\subset C^{2}(0,L)\cap C([0,L])\). Since \(k_{j}\to k\) in \(L^{p}(0,L)\), we have \(\mathbf{n}_{j}\to\mathbf{n}\) in \(C([0,L])\). Let \(f\in C^{2}([0,L])\) be a function satisfying \(f(0)=f^{\prime}(0)=f^{\prime}(L)=0\) and \(f(L)=1\). For each \(j\in\mathbf{N}\), set \(r_{j}(s):=f(s)\varphi(L)(\mathbf{n}(L)-\mathbf{n}_{j}(L))\) and \[\eta_{j}(s):=\varphi(s)\mathbf{n}_{j}(s)+r_{j}(s).\] Note that \(\eta_{j}(0)=\varphi(0)\mathbf{n}(0)\) and \(\eta_{j}(L)=\varphi(L)\mathbf{n}(L)\). Using again the Frenet-Serret formula, we have \[\eta_{j}^{\prime}(s) =\varphi^{\prime}(s)\mathbf{n}_{j}(s)-\varphi(s)k_{j}(s) \mathbf{t}(s)+r_{j}^{\prime}(s),\] \[\eta_{j}^{\prime\prime}(s) =\varphi^{\prime\prime}(s)\mathbf{n}_{j}(s)-2\varphi^{\prime}(s)k _{j}(s)\mathbf{t}(s)-\varphi(s)k_{j}^{\prime}(s)\mathbf{t}(s)-\varphi(s)k_{j} (s)k(s)\mathbf{n}(s)+r_{j}^{\prime\prime}(s),\] which implies that \(\{\eta_{j}\}_{j\in\mathbf{N}}\subset W^{2,p}(0,L;\mathbf{R}^{2})\). Since \(\gamma\in\mathcal{A}_{\mathrm{hook}}^{*}\), we obtain \(\mathbf{t}(L)=-e_{1}\), and this also means that \(\mathbf{n}(L)=-e_{2}\). Combining this with \(\varphi(0)=\varphi^{\prime}(L)=0\), we see that \(\eta_{j}\) satisfies (A.3) for each \(j\in\mathbf{N}\). 
Note that
\[(1-2p)|\tilde{\gamma}^{\prime\prime}|^{p}(\tilde{\gamma}^{\prime}\cdot\eta_{j}^{\prime})=(1-2p)|k|^{p}\Big((\tilde{\gamma}^{\prime}\cdot\varphi^{\prime}\mathbf{n}_{j})-k_{j}\varphi+\tilde{\gamma}^{\prime}\cdot r_{j}^{\prime}\Big),\]
\[p|\tilde{\gamma}^{\prime\prime}|^{p-2}(\tilde{\gamma}^{\prime\prime}\cdot\eta_{j}^{\prime\prime})=p|k|^{p-2}\Big((\tilde{\gamma}^{\prime\prime}\cdot\varphi^{\prime\prime}\mathbf{n}_{j})-|k|^{2}k_{j}\varphi+\tilde{\gamma}^{\prime\prime}\cdot r_{j}^{\prime\prime}\Big),\]
where \(|\tilde{\gamma}^{\prime}|\equiv|\mathbf{n}|\equiv 1\), \(|\tilde{\gamma}^{\prime\prime}(s)|=|k(s)|\), and \(\tilde{\gamma}^{\prime\prime}\cdot\mathbf{t}=0\) were used. Substituting \(\eta=\eta_{j}\) into (A.2), we obtain
\[\int_{0}^{L}\bigg((1-2p)|k|^{p}\Big(\varphi^{\prime}(\tilde{\gamma}^{\prime}\cdot\mathbf{n}_{j})-k_{j}\varphi+\tilde{\gamma}^{\prime}\cdot r_{j}^{\prime}\Big)+p|k|^{p-2}\Big(\varphi^{\prime\prime}(\tilde{\gamma}^{\prime\prime}\cdot\mathbf{n}_{j})-|k|^{2}k_{j}\varphi+\tilde{\gamma}^{\prime\prime}\cdot r_{j}^{\prime\prime}\Big)+\lambda\Big(\varphi^{\prime}(\tilde{\gamma}^{\prime}\cdot\mathbf{n}_{j})-k_{j}\varphi\Big)\bigg)ds=0.\]
By using \(k_{j}\to k\) in \(L^{p}(0,L)\), \(\mathbf{n}_{j}\to\mathbf{n}\) in \(C([0,L];\mathbf{R}^{2})\), \(r_{j}\to 0\) in \(C^{2}([0,L];\mathbf{R}^{2})\), and \(k\in L^{\infty}(0,L)\), we can obtain (EL) as the limit of the above equality.

From this we deduce the additional natural boundary condition:

**Lemma A.2** (Improved regularity and natural boundary condition).: _Let \(\gamma\in\mathcal{A}_{\mathrm{hook}}^{*}\) be a critical point of \(\mathcal{B}_{p}\) in \(\mathcal{A}_{\mathrm{hook}}^{*}\) and \(k:[0,L]\to\mathbf{R}\) be the arclength parametrized signed curvature of \(\gamma\). Then \(k\in C([0,L])\). In addition, the function \(w:=|k|^{p-2}k\) is of class \(C^{2}\) and satisfies_
\[w(0)=0,\quad w^{\prime}(L)=0.\]

Proof.: Let \(\gamma\in\mathcal{A}_{\mathrm{hook}}^{*}\) be a critical point of \(\mathcal{B}_{p}\) in \(\mathcal{A}_{\mathrm{hook}}^{*}\). By Lemma A.1, there exists \(\lambda\in\mathbf{R}\) such that the arclength parametrized signed curvature \(k\) of \(\gamma\) satisfies (EL), in particular for all \(\varphi\in C_{\mathrm{c}}^{\infty}(0,L)\), and hence \(\gamma\) is a \(p\)-elastica, so that \(k\in C([0,L])\) and \(w:=|k|^{p-2}k\in C^{2}([0,L])\) by Proposition 2.5. In addition, we deduce from Lemma A.1 that
\[\int_{0}^{L}\Big(pw\varphi^{\prime\prime}+(p-1)|w|^{\frac{2}{p-1}}w\varphi-\lambda|w|^{\frac{2-p}{p-1}}w\varphi\Big)ds=0\]
for \(\varphi\in W^{2,p}(0,L)\) with \(\varphi(0)=\varphi^{\prime}(L)=0\). Using the boundary condition for \(\varphi\) together with the regularity \(w\in C^{2}([0,L])\) and the fact that \(w\) solves (2.5) in the classical sense, we further deduce via integration by parts that
\[0=\big[pw(s)\varphi^{\prime}(s)-pw^{\prime}(s)\varphi(s)\big]_{s=0}^{s=L}+\int_{0}^{L}\Big(pw^{\prime\prime}+(p-1)|w|^{\frac{2}{p-1}}w-\lambda|w|^{\frac{2-p}{p-1}}w\Big)\varphi\,ds=-pw^{\prime}(L)\varphi(L)-pw(0)\varphi^{\prime}(0).\]
Then, with the choice of \(\varphi\) satisfying \(\varphi(L)=1\) and \(\varphi^{\prime}(0)=0\) (resp. \(\varphi(L)=0\) and \(\varphi^{\prime}(0)=1\)), we obtain \(w^{\prime}(L)=0\) (resp. \(w(0)=0\)).
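To make the closed-form quantities above concrete, the following numerical sketch (not part of the original arguments) evaluates the complete \(p\)-elliptic integrals, solves (3.5) for \(q\), and computes the explicit energy levels of Theorem 3.10 and (4.2). It assumes the integral representations \(\mathrm{K}_{1,p}(q)=\int_{0}^{\pi/2}|\cos t|^{1-2/p}(1-q^{2}\sin^{2}t)^{-1/2}\,dt\) and \(\mathrm{E}_{1,p}(q)=\int_{0}^{\pi/2}|\cos t|^{1-2/p}(1-q^{2}\sin^{2}t)^{1/2}\,dt\) for the integrals of Section 2; these representations are consistent with Lemma 3.9 and with the derivative formula used in Lemma 3.4, but since Section 2 is not reproduced here they are stated as an assumption, and all helper names are ours.

```python
# Numerical sketch (illustrative only): p-elliptic integrals, the root of (3.5),
# and the explicit minimal energies of Theorem 3.10(ii) / formula (4.2).
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def K1p(p, q):
    # assumed representation: K_{1,p}(q) = int_0^{pi/2} |cos t|^{1-2/p} (1-q^2 sin^2 t)^{-1/2} dt
    # (for q = 1 the endpoint singularity is integrable only for p > 2; quad may warn)
    return quad(lambda t: np.cos(t)**(1 - 2/p) / np.sqrt(1 - (q*np.sin(t))**2),
                0.0, np.pi/2, limit=200)[0]

def E1p(p, q):
    # assumed representation: E_{1,p}(q) = int_0^{pi/2} |cos t|^{1-2/p} (1-q^2 sin^2 t)^{1/2} dt
    return quad(lambda t: np.cos(t)**(1 - 2/p) * np.sqrt(1 - (q*np.sin(t))**2),
                0.0, np.pi/2, limit=200)[0]

def solve_q(p, ell, L):
    # unique root of 2 E_{1,p}(q)/K_{1,p}(q) - 1 = -ell/L, cf. (3.5); needs ell/L < 1/(p-1)
    f = lambda q: 2*E1p(p, q)/K1p(p, q) - 1 + ell/L
    return brentq(f, 1e-6, 1 - 1e-6)

def C_p(p):
    # the constant C_p of Lemma 4.2 (finite only for p > 2)
    return 2**p * K1p(p, 1.0)**(p-1) * E1p(p, 1.0) * ((p-2)/(p-1))**(p-1)

def hooked_min_energy(p, ell, L):
    # right-hand side of (3.21): minimal bending energy of a hooked p-elastica
    return C_p(p) / (L - ell)**(p-1)

def alternating_energy(p, ell, L, N):
    # formula (4.2): energy of an N-loop alternating flat-core p-elastica
    return C_p(p) * (2*N)**p / (L - ell)**(p-1)

if __name__ == "__main__":
    p, L = 3.0, 1.0
    print(solve_q(p, 0.3, L))                  # wavelike regime: 0.3 < 1/(p-1) = 0.5
    print(hooked_min_energy(p, 0.7, L))        # flat-core regime: 0.7 >= 0.5
    print(alternating_energy(p, 0.7, L, N=2))  # grows like (2N)^p as N increases
```

For instance, with \(p=3\), \(L=1\), and \(\ell=0.7\), the script reproduces the \((2N)^{p}\) growth of (4.2) that underlies the divergence of the energy used in the proof of Theorem 1.2.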
2302.08573
Virtual Therapy Exergame for Upper Extremity Rehabilitation Using Smart Wearable Sensors
Virtual Reality (VR) has been utilized for several applications and has shown great potential for rehabilitation, especially for home therapy. However, these systems solely rely on information from VR hand controllers, which do not fully capture the individual movement of the joints. In this paper, we propose a creative VR therapy exergame for upper extremity rehabilitation using multi-dimensional reaching tasks while simultaneously capturing hand movement from the VR controllers and elbow joint movement from a flexible carbon nanotube sleeve. We conducted a preliminary study with non-clinical participants (n = 12, 7 F). In a 2x2 within-subjects study (orientation (vertical, horizontal) x configuration (flat, curved)), we evaluated the effectiveness and enjoyment of the exergame in different study conditions. The results show that there was a statistically significant difference in terms of task completion time between the two orientations. However, no significant differences were found in the number of mistakes in both orientation and configuration of the virtual exergame. This can lead to customizing therapy while maintaining the same level of intensity. That is, if a patient has restricted lower limb mobility and requires to be seated, they can use the orientations interchangeably. The results of resistance change generated from the carbon nanotube sleeve revealed that the flat configuration in the vertical orientation induced more elbow stretches than the other conditions. Finally, we reported the subjective measures based on questionnaires for usability and user experience in different study conditions. In conclusion, the proposed VR exergame has the potential as a multimodal sensory tool for personalized upper extremity home-based therapy and telerehabilitation.
Lauren Baron, Vuthea Chheang, Amit Chaudhari, Arooj Liaqat, Aishwarya Chandrasekaran, Yufan Wang, Joshua Cashaback, Erik Thostenson, Roghayeh Leila Barmaki
2023-02-16T20:38:17Z
http://arxiv.org/abs/2302.08573v1
# Virtual Therapy Exergame for Upper Extremity Rehabilitation Using Smart Wearable Sensors

###### Abstract

Virtual Reality (VR) has been utilized for several applications and has shown great potential for rehabilitation, especially for home therapy. However, these systems solely rely on information from VR hand controllers, which do not fully capture the individual movement of the joints. In this paper, we propose a creative VR therapy exergame for upper extremity rehabilitation using multi-dimensional reaching tasks while simultaneously capturing hand movement from the VR controllers and elbow joint movement from a flexible carbon nanotube sleeve. We conducted a preliminary study with non-clinical participants (n = 12, 7 F). In a 2 x 2 within-subjects study (_orientation (vertical, horizontal) x configuration (flat, curved)_), we evaluated the effectiveness and enjoyment of the exergame in different study conditions. The results show that there was a statistically significant difference in terms of task completion time between the two orientations. However, no significant differences were found in the number of mistakes in both orientation and configuration of the virtual exergame. This can lead to customizing therapy while maintaining the same level of intensity. That is, if a patient has restricted lower limb mobility and requires to be seated, they can use the orientations interchangeably. The results of resistance change generated from the carbon nanotube sleeve revealed that the flat configuration in the vertical orientation induced more elbow stretches than the other conditions. Finally, we reported the subjective measures based on questionnaires for usability and user experience in different study conditions. In conclusion, the proposed VR exergame has the potential as a multimodal sensory tool for personalized upper extremity home-based therapy and telerehabilitation.

Virtual therapy, virtual reality, smart wearable sensors, upper extremity, telerehabilitation, human-computer interaction
## 1. Introduction

Physical therapy (PT) is a well-known treatment for effective rehabilitation. Post-stroke survivors, the potential target users of our research, often have musculoskeletal conditions, especially upper extremity functional limitations (Krishnan et al., 2017). Approximately 80% of people who suffered a stroke experience motor impairments, including in their upper limbs (Krishnan et al., 2017). Additionally, low adherence to PT exercises has been consistently reported, owing to factors such as lack of motivation, slow recovery progress, and absence of mental support (Krishnan et al., 2017; Krishnan et al., 2017; Krishnan et al., 2018). As a result, there is a pressing need to make conventional rehabilitation for upper limb mobility more interactive, engaging, flexible, and effective. Virtual reality (VR) has shown immense potential to provide an engaging, entertaining, and enjoyable experience of PT exercises. Many studies have shown that VR is beneficial and preferred for rehabilitation in various ways, e.g., portability for home therapy, engaging virtual environments, and independence from distractions (Krishnan et al., 2017; Krishnan et al., 2018; Krishnan et al., 2018). While VR has demonstrated a lot of potential for PT, research on creative virtual therapy and task-oriented therapeutic exercises (e.g., exergames), especially for upper extremity rehabilitation, is still underrepresented (Krishnan et al., 2017; Krishnan et al., 2018; Krishnan et al., 2018). In addition, a therapeutic assessment for PT in VR is needed. Most systems for virtual therapy rely only on information captured from VR controllers and hand tracking. VR trackers can provide the position of the hand/wrist joint; however, they do not fully capture the movement details of each individual joint. Our main contribution is the use of a smart wearable sensor to provide a more accurate and quantifiable assessment of movement during gameplay. In this work, a creative VR exergame is proposed for upper extremity therapy.
We developed multi-dimensional reaching tasks and used a smart fabric-based carbon nanotube sensor on the elbow to capture limb movement from the arm and hands. A preliminary study (_n = 12, 7 F_) was conducted to evaluate the effectiveness and enjoyment of the proposed VR exergame in different model orientations and configurations. The objective and subjective measures such as task completion time, number of mistakes, the resistance change of the elbow sleeve sensor, and subjective questionnaires were assessed. Our research questions include the following: * **RQ1**: How do model configuration and orientation influence the _therapeutic experience_ in the VR therapy exergame? * **RQ2**: How do model configuration and orientation influence the _electrical resistance changes_ from the smart wearable sensor for upper extremity rehabilitation? * **RQ3**: How is the subjective perception of the VR therapy exergame, such as _easiness, comfort, and enjoyment_, associated with different VR model conditions? ## 2. Related Work In this section, we report prior research related to the use of VR in rehabilitation and how the virtual content affects the overall user experience. We also describe wearable sensors used for VR therapy in the following sections. ### VR Applications for Physical Therapy VR can be used for training, decision-making, and rehabilitation in the physical therapy domain. For instance, Hartstein et al. (Hartstein et al., 2017) assessed the perceived ease of use and usefulness of VR learning experiences to promote the clinical decision-making of PT students. Other works have also shown the potential of VR on upper limb rehabilitation for patients with stroke or Parkinson's Disease (Krishnan et al., 2017; Krishnan et al., 2017; Krishnan et al., 2018; Krishnan et al., 2018). Despite the well-known effectiveness of PT interventions for rehabilitation, various limitations, including time commitment, the intensity of labor and resources, dependability on patient compliance, geographical availability of special facilities, and costs/insurance coverage, have been reported (Krishnan et al., 2017; Krishnan et al., 2018; Krishnan et al., 2018). Phenal et al. (Phelan et al., 2018) explored the use of VR for children with upper limb motor impairment undergoing painful therapeutic processes within a hospital environment. In this study, they found that VR has the potential to improve functional disabilities, alleviate perceived pain, reduce the perceived difficulty of rehabilitation exercises, increase exercise duration and produce positive emotions toward the therapy. In a systematic review of immersive VR for older adults, Campo Prieto et al. (Parto et al., 2019) provided preliminary evidence supporting immersive VR technologies' application in older adult populations. Multiple reviews on the efficacy of VR therapy conclude that the current evidence on the effectiveness of using VR in the rehabilitation of upper limb mobility in patients with stroke is limited and emphasised the need for more studies to support this effect while investigating different intervention types through rigorous studies (Krishnan et al., 2017; Krishnan et al., 2018; Krishnan et al., 2018; Krishnan et al., 2018). Xu et al. (Xu et al., 2019) developed a depth camera-based, task-specific VR game called _Stomp Joy_ with an aim to assess its feasibility and clinical efficacy for post-stroke rehabilitation of the lower extremities, which was tested in a recent study. 
In contrast, our study is a depth-based therapeutic exergame for upper extremity rehabilitation. ### Effects of Objective Content Adjustments on Virtual Experience Mason et al.(Mason et al., 2018) explore the intersection between reaching tasks and depth analysis. Their study looks at reaching both physical and virtual targets with VR. They measured task completion time and wrist movements to determine how haptic and visual feedback influence the reaching movements. They also studied depth analysis for reaching kinematics and found that participants took more time decelerating towards smaller targets with haptic feedback provided. However, when haptic feedback was absent, deceleration time was constant. These findings suggest that virtual visual feedback for the moving limb and haptic feedback about contacting objects are important for performance in a virtual environment (VE). Without feedback, Fitts's law (Fitt et al., 2018) to predict human movement does not always hold. They also supported that reaching tasks can be used for depth analysis in the VE. In another study (Krishnan et al., 2018), participants had to reach real-world objects but were given different visual depth cues and only some used a VR headset. Interestingly, participants using the VR headset performed better and visual depth cues only had a minor impact on reaching performance. This shows how an immersive VR environment can be useful for reaching and depth perception. It also indicates that no one visual depth cue weighs more than others and a combination of many depth cues does not necessarily correlate to accuracy. While we measure depth perception objectively via task-completion time and subjectively via questionnaires, our goal is to incorporate Fitt's Law into our future studies to precisely measure reaching task-completion time in simpler models. In another study, Gagnon et al. (Gagnon et al., 2019) aimed to assess whether feedback from reaching improves depth judgment and if re-calibration changed due to feedback across reaching behaviors. They tested judgments of action capabilities within a VE for two different reaching behaviors, reaching out and reaching up. Only some participants received feedback on whether they reached the target dot. They found that reach was initially overestimated, but over feedback blocks, perceptual estimates decreased and became more accurate. They also found that targets just beyond reach were more difficult to judge. Feedback on reaching activities on objects placed far away in an immersive VE is critical for improving depth perception. In a study about depth perception in an immersive VE, 3D objects and closer objects were found to significantly provide better and more accurate depth estimation than 2D objects and far away objects (Kraemer et al., 2017). When looking at hitting a distant target in a VE, participants had to use greater upper limb motor functions like muscular effort and torque of the shoulder to hit more distant objects (Kraemer et al., 2017). VEs can be used to assess estimates of action capabilities and improve those estimates through visual-motor feedback. Reaching tasks, especially for distant and flat objects in a VE, require more upper limb effort and are harder to judge distance/depth. When someone overextends their elbows, rigid joints, etc., their different formations and movements can affect depth cues and impressions. 
Our study is one of the kinds that attempts to take elbow movements via resistance changes from a flexible nanotube wearable sensor into account in VE reaching tasks. Kioumourtzoglou et al. (Kioumourtzoglou et al., 2017) studied how extending the forearm with the elbow at 80 degrees can be used to measure a sense of kinesthesis and how we perceive our body's movement. Palaniappan et al. (Kraemer et al., 2017) used inverse kinematics to analyze joint angle positions, joint reaction forces, and joint torque to show how VR therapy is more effective than conventional therapy for rehabilitation. The precision of depth and directional judgment is affected by the pendular (contracting and relaxing muscles) motion of the limb segments. ### Wearable Sensors for Virtual Therapy The usability of wearable sensors during VR-based PT is increasingly becoming popular due to their accessibility. As it allows the monitoring of the quantity and quality of body movement, the data collected from smart wearables can provide effective measurements, thus better treatments for patients in the process of rehabilitation. Brandao et al. (Brandao et al., 2019) explore the feasibility of unsupervised physical therapy for rehabilitation at patients' homes. This study was conducted on patients with hemiparesis due to stroke. The patients were trained with an inertial measurement unit (IMU) in VR. The participants were able to use the system in their homes without any supervision. It was found that the arm function of these patients improved significantly. Moreover, the participants were able to complete each session of rehabilitative therapy only in six weeks. VR-based techniques have been used in healthcare to provide more accessible rehabilitation options to patients with disabilities. However, VR combined with wearable devices has made this experience of recovery more pleasant and valuable for patients (Kioumourtzoglou et al., 2017; Kioumourtzoglou et al., 2017). VR provides patients with an immersive experience in a virtual world and gives them the capability to interact with virtual objects using motion sensors. This attribute has made VR-based rehabilitation a promising tool to promote the active participation of patients in their therapy process and produce better motor recovery (Kioumourtzoglou et al., 2017). The data collected from wearable sleeves while performing VR physical therapy can be visualized and interpreted by therapists, patients, and caregivers. Even if they do not have any technical knowledge, an overview of patient performance for each session was found to be effective for them (Kraemer et al., 2017). In another study, Lee et al. (Lee et al., 2017) examined the integration of wearable sleeves in VR-based physical therapy. In this study, the data was collected from wearable sensors in a VR-based goal-directed shoulder rehabilitation system to analyze task performance and improvement after each training session. The study was conducted on patients with frozen shoulders where they performed shoulder muscle strengthening, and core muscle strengthening exercises. While patients were performing exercises by interacting with the VR environment, the sensors were secured to their shoulders to measure the range of motion (ROM). The data collected from training sessions suggested that the usage of wearable sensors in VR therapy can provide better information to offer customized individual training programs. 
Hence, it is suggested that rehabilitative games for at-home VR therapy using wearable sensors are feasible and safe despite the lack of supervision (Kioumourtzoglou et al., 2017). ## 3. Materials and Methods In the following sections, we describe the participants, apparatus, study procedure, study design with dependent and independent variables, and hypotheses of the user study. ### Participants For the experiment, a priori power analysis was conducted to determine the sample size for interaction effects for ANOVA (repeated measures, within factors) F tests. We used G'Power for a large effect size \(\eta_{p}^{2}:0.14\) which gave the effect size f = 0.403 (Gagnon et al., 2019). With a power of 0.80, one group, and four measurements, the result was a total sample size of 10. To keep a balanced design of three participants for each of the four measurements while still being greater than _n = 10_, we recruited a participant pool of _12_ non-clinical volunteers (5 males and 7 females; age ranged from _20 - 29_, _M=22.67_, _SD=2.78_). There was no monetary compensation for participation. Eight out of _12_ participants (_66.66%_) had Asian, or Pacific Islander ethnicity, and four (_33.33%_) had a Caucasian or White ethnicity. Most participants had prior VR but lacked prior video game experience; ten (_83.33%_) had used VR headsets before, but only three (_25%_) reported playing video games daily or weekly. Two participants (16.67%) have previously experienced a severe upper-body injury, either due to sports or other incidents, and needed to participate in rehabilitation sessions to recover. Only one participant (8.33%) was visually impaired beyond their glasses/contact lenses. ### Apparatus An interactive VR drawing exergame was developed using the Unity game engine (version _2019.1.02_). In the drawing exergame, participants were presented with a welcoming screen in which they could choose drawing contents: the flat fish and curved fish. The models were developed using licensed Autodesk Maya and Blender. The virtual environment was designed with a simple, serene mountainscape, a blue sky, and a wood floor to identify the action space. The only objects in the user's personal space were the virtual representations of the hand controllers (a cube for the non-dominant hand that controls the model's position/rotation and a paintbrush for the user's dominant hand to perform the task). This virtual environment allowed users to focus on the task in a relaxing, distraction-free environment without any other cues that could disturb the user's depth perception. The primary task of users in this exergame was to connect the dots of a traceable outline of the fish model(s) using the paintbrush. When each dot was hit, it turned from red to green, and a positive audio feedback sound was played to the user. When all the dots were green, meaning the user successfully connected all the dots of the drawing, they were celebrated by visual firework animations with sound effects. This positive audio-visual feedback can offer a more guided, targeted reaching, and therapeutic experience to users as shown in preliminary research on the proper depth judgment via feedback cues (Bartos et al., 2018). A fabric-based carbon nanotube sensor (Bartos et al., 2018) was used on the elbow, and the electrical resistance change of the sensor was recorded for elbow flexion/extension throughout the drawing task. The participants were asked to use their dominant hand for drawing and wearing the elbow sleeve (see Figure 2). 
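As a side note on the sample-size estimate above, the effect-size conversion that G*Power applies can be checked in a couple of lines; the sketch below is only an illustration of that standard conversion using the values quoted in the Participants subsection, not part of the original analysis.

```python
import math

eta_p_squared = 0.14                                  # "large" partial eta squared used above
f = math.sqrt(eta_p_squared / (1 - eta_p_squared))    # Cohen's f, the effect size G*Power expects
print(round(f, 3))                                    # 0.403, matching the reported value
```

With this f, a power of 0.80, one group, and four measurements, G*Power returns the total sample size of 10 reported above.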
The controller in the non-dominant hand was used to adjust the dotted model to the height or position the user felt most comfortable with. ### Study Procedure Once participants arrived, we received verbal consent after describing the purpose of our study, their responsibilities, and their right to withdraw and take breaks when needed. After consent, the participants were randomly assigned the order in which to complete the four study conditions based on the \(2\times 2\) study design. Participants then filled out a pre-questionnaire about their demographics, prior experience with VR/video games, experience with physical therapy/exercise, and experience with visual impairments. After getting an explanation of the study directions, participants walked to the middle of the room to stand in a cleared area. We then adjusted the elbow sleeve sensor to sit right on top of their dominant hand's elbow, gave them the headset to put on, and placed their hand controllers so they could draw with their dominant hand. Their first session was one of the following conditions: flat fish vertical, flat fish horizontal, curved fish vertical, or curved fish horizontal. When starting the task, we triggered the data collection scripts to record the objective measures. Once the task was completed and the victory animation had finished playing, the participants were asked to take off the VR headset and controllers and complete the post-questionnaire for the first session they had just experienced. The participants were then asked to put on the VR headset again and use the VR hand controllers to do their next task in a new configuration and orientation (the order of sessions was based on the \(4\times 4\) balanced Latin square). Once again, participants filled out the post-questionnaire for each of the new configurations/orientations. The entire experimental session for each participant took 20-30 minutes. A unique ID was generated by each participant and was repeatedly used in the completion of the questionnaires and the saving of the data files to keep track of their data while preserving their anonymity. ### Study Design We conducted a \(2\times 2\) within-subjects study with a balanced design and two factors: (1) _orientation_: horizontal vs. vertical, and (2) _configuration_: flat vs. curved. Thus, the four study conditions were flat vertical, flat horizontal, curved vertical, and curved horizontal for the fish model shown in Figures 3 and 4. The order in which these four conditions were completed was based on a balanced Latin Square (\(4\times 4\)) (Bartos et al., 2018) to mitigate the learning effects of the within-subjects design. There were no repeated trials in the studies because there were no identical conditions. We collected both objective and subjective measures to assess the perceived difficulty of the drawing performance and other (depth) perceptions related to the VR therapy experience through quantitative data collection and questionnaires. In the following, the objective measures, including independent and dependent variables, and the subjective measures from the questionnaire are described. #### 3.4.1. Independent Variables The user study was planned as a within-subjects design with a two-factor test. The two factors were defined by two independent variables: _orientation_ and _configuration_. For _orientation_, the drawing contents were rotated around the x-axis for a vertical view of the model and a horizontal view of the model (see Figure 3). The difficulty levels of these orientations were empirically tested while preparing the experiment.
Figure 2. The VR Therapy study setup for participants. * **Horizontal**: Participants performed the drawing activity while looking at the model facedown, with their reaching motions moving primarily out and in. * **Vertical**: Participants performed the drawing activity while looking at the model head-on/straight up, with their reaching motions moving primarily up and down. For _configuration_, two virtual drawing contents were objectively adjusted with different numbers of drawing dots and with different dimensions and depths (see Figure 3 and Figure 4). The difficulty levels of these virtual contents were empirically tested while preparing the experiment. * **Flat**: The drawing content was the shape of an abstract outline fish image with 69 drawing dots, which could be easier to draw because they are all on the same z-plane. * **Curved**: The drawing content was the shape of an abstract outline fish image with 91 drawing dots, which could be harder to draw because some dots are closer/farther than others with respect to the z-axis and more dots were required to add this dimension. #### 3.4.2. Dependent Variables Three core measurements to objectively evaluate the drawing performance were defined as dependent variables. * **Normalized Task Completion Time (TCT)**: The completion time of the drawing task was calculated from the starting time when the participant hit the first dot to the ending time when the final dot of the model was hit. We normalized TCT over the number of drawing dots for a fair comparison between model configurations. Therefore, the measuring unit for TCT was _task completion time per drawing dot, still in seconds_. * **Normalized Number of Mistakes**: The number of mistakes, e.g., when the participants missed any drawing dots while performing the tasks, was logged during the study. We normalized the number of mistakes over the TCT. Thus, the measuring unit was the _number of mistakes per second_. * **Normalized Resistance Change**: The sensor's electrical resistance over the elbow changes during flexion and extension. Negative resistance change values were ignored and considered outliers. The percentage resistance change was calculated according to Equation 1, where \(R\) is the resistance at stretch and \(R_{m}\) is the minimum resistance value at no stretch (Cheng et al., 2017). We calculated the mean value of the percentage resistance change over identical experimental values. \[\mathit{ResistanceChange}(\%)=\frac{(R-R_{m})\times 100}{R_{m}}\] (1) In addition, the resistance change data was normalized over the TCT for a fair comparison among the participants. Hence, the measuring unit for resistance change is the _percentage of change per second_. #### 3.4.3. Questionnaire For the subjective measures, we collected data on usability and perception based on subjective questionnaires, using the Qualtrics survey platform. The questionnaire was adapted from a standardized questionnaire to evaluate the user experience in the immersive environment (Stein * _Willingness to recommend_: "I would recommend this creative therapy game to friends or family members as an upper-limb therapeutic exercise." We also collected the participants' general feedback through text entries, asking for thoughts on how to improve this activity for future use and their preferences for the study conditions.
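To make the dependent-variable definitions above concrete, the following sketch computes the three normalized measures for a single drawing trial. It is a minimal illustration: the function name, argument names, and the shape of the logged data are assumptions, while Equation 1 and the normalizations follow the definitions given in Section 3.4.2.

```python
import numpy as np

def normalized_measures(resistance, r_min, n_dots, n_mistakes, tct_seconds):
    """Normalized TCT, mistakes, and resistance change for one drawing trial.

    resistance  : array of resistance readings logged during the trial (R)
    r_min       : minimum resistance at no stretch (R_m in Equation 1)
    n_dots      : number of drawing dots in the model (69 flat, 91 curved)
    n_mistakes  : number of missed dots logged during the trial
    tct_seconds : raw task completion time in seconds
    """
    change_pct = (np.asarray(resistance, dtype=float) - r_min) * 100.0 / r_min  # Equation 1
    change_pct = change_pct[change_pct >= 0]          # negative changes treated as outliers
    return {
        "tct_per_dot": tct_seconds / n_dots,                    # seconds per drawing dot
        "mistakes_per_second": n_mistakes / tct_seconds,        # mistakes per second
        "resistance_change_per_second": change_pct.mean() / tct_seconds,  # mean % change per second
    }
```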
### Hypotheses The hypotheses for this user study arose from the specified tasks and the therapy experience, resulting in the following: * **H1**: Participants' objective performance on the VR therapy exergame will be influenced by the model _configurations_. * **H1-1**: The curved model configuration (more drawing dots and dimensions) will produce more _electrical resistance changes_, compared to the flat configuration. * **H1-2**: Participants' performance and experience will be improved with the flat configuration (the distance from the user to the dots is the same for each dot), compared to the curved configuration. * **H2**: Participants' objective performance on the VR therapy exergame will be influenced by model _orientation_. * **H2-1**: The horizontal orientation (reaching out) will produce more _electrical resistance changes_ compared to the vertical orientation (reaching up). * **H2-2**: Participants' performance and experience will be improved with the vertical orientation compared to the horizontal orientation because they can better see the content's full shape. ## 4. Results We used _RStudio_ with \(R\) for statistical computing. An analysis of variance (_ANOVA_) was used for data analysis with three dependent variables: _task completion time_, _number of mistakes_, and _resistance change_. In addition, we ran a normality test to check that the data were normally distributed. We further analyzed the data with pairwise _t-tests_ and the _Bonferroni_ adjustment method to identify the differences between the conditions. The questionnaire results were analyzed descriptively. In the following, we describe the results of the statistical analysis, the questionnaire results, and the general feedback. ### Statistical Analysis The summary of the descriptive results for the dependent variables is listed in Table 1 and shown in Figure 5. Moreover, the results of the statistical analyses with _ANOVAs_ are listed in Table 2. #### 4.1.1. Normalized Task Completion Time (TCT) A statistically significant difference was found in the interaction effect between _orientation_ and _configuration_ (\(p<0.03\)). However, there were no significant differences in the main effects of the conditions. We further analyzed the data with pairwise _t-tests_, and we found a difference in the flat configuration between the _horizontal_ and _vertical orientation_ (\(t=2.49\), \(df=11\), \(p<0.03\)) for TCT. The results show that the vertical orientation was performed faster than the horizontal one in the flat model configuration. However, this difference was not significant in the curved configuration. #### 4.1.2. Normalized Number of Mistakes For the number of mistakes for each virtual content/model, we found no statistically significant differences between the orientation and configuration conditions. The descriptive results show that the number of mistakes in the flat configuration was on average lower than in the curved model configuration. Furthermore, the number of mistakes for the flat model in the vertical orientation was relatively lower than in the horizontal orientation. However, the differences within the curved configuration were small. #### 4.1.3. Normalized Resistance Change The resistance change is indicated by how much the elbow sleeve stretches. We found statistically significant differences in the orientation (\(p<0.04\)), configuration (\(p<0.03\)), and their interaction effect (\(p<0.01\)) for _resistance change_. The results of the pairwise t-test show a significant difference between the _curved_ and _flat_ configurations (\(t=-3.51\), \(df=23\), \(p<0.002\)).
In addition, a significant difference was also found in the vertical orientation between the _curved_ and _flat_ configurations (\(t=-3.59\), \(df=11\), \(p<0.004\)). For orientation, there was a difference between the _horizontal_ and _vertical_ orientation in the _flat_ configuration (\(t=-2.83\), \(df=11\), \(p<0.01\)). The results indicate that the _flat_ model induced more _elbow stretches_ than the curved model in the _vertical_ orientation. Furthermore, the _vertical_ orientation induced more _elbow stretches_ compared to the _horizontal_ orientation during the drawing performance of the virtual content. ### Questionnaire Results The questionnaire results are shown in Figure 6. The questionnaire data was analyzed descriptively, and we describe the results for each of the subjective measures as follows: * **Easiness** The average easiness scores are \(M\) = 4.29, \(SD\) = 1.08 for the horizontal and \(M\) = 4.21, \(SD\) = 1.10 for the vertical orientation. The perceived easiness of the virtual content for the flat model was on average higher than for the curved model in both orientations. * **Comfort** The average comfort score for the horizontal (\(M\) = 4.83, \(SD\) = 0.48) is higher than for the vertical (\(M\) = 4.71, \(SD\) = 0.55) version. The perceived comfort of the virtual content for the flat model was on average higher than for the curved model in both orientations, but the difference was small. * **Enjoyment** The mean enjoyment score for the horizontal (\(M\) = 4.29, \(SD\) = 1.27) is on average lower than for the vertical (\(M\) = 4.46, \(SD\) = 0.83) orientation. * **Body Stretch** The average body stretch scores were \(M\) = 2.88, \(SD\) = 1.62 for the horizontal and \(M\) = 2.58, \(SD\) = 1.56 for the vertical orientation. The perceived body stretch while drawing the curved model was on average greater than for the flat model in both orientations. * **Depth Perception** The participants rated the depth perception at \(M\) = 4.08, \(SD\) = 1.18 for the horizontal and \(M\) = 4.25, \(SD\) = 1.15 for the vertical orientation. The perception of depth for the flat virtual content was on average better than for the curved content in both orientations. The perceived easiness to reach objects and judge the distance from objects differed significantly between versions (\(p<0.05\)). When comparing the versions in pairs for the depth perception scores, there was a significant difference between the flat vertical content and the curved horizontal content (\(p<0.02\)). There was also a significant difference between the flat vertical content and the curved vertical content (\(p<0.03\)). * **Visual Cues** To get an insight into how accurate the visual cues were for the virtual contents, we measured the perceived easiness and the perceived realism of seeing the objects in the virtual environment. The participants rated the horizontal (\(M\) = 4.48, \(SD\) = 0.59) lower than the vertical (\(M\) = 4.60, \(SD\) = 0.58) orientation. For both orientations, the perceived accuracy of visual cues was higher for the flat virtual content than for the curved content. When individually comparing the scores for the perceived realism of objects, there was a main effect of flat horizontal vs. curved horizontal (\(p<0.04\)). Flat models had the same mean, so there was also a main effect when comparing flat vertical vs. curved horizontal (\(p<0.04\)). * **Willingness to Recommend** The willingness to recommend scores were rated on a ten-point Likert scale. The mean scores were \(M\) = 8.98, \(SD\) = 1.38 for the horizontal and \(M\) = 8.98, \(SD\) = 1.48 for the vertical orientation.
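The pairwise comparisons reported in Section 4.1 can be reproduced with a short script once the per-participant condition means are available. The sketch below is a minimal illustration assuming a long-format table with hypothetical column and file names; it shows one of the paired t-tests together with a manual Bonferroni adjustment.

```python
import pandas as pd
from scipy import stats

# Assumed long format: one row per participant and condition, with columns
# "participant", "orientation" (horizontal/vertical),
# "configuration" (flat/curved), and "resistance_change".
df = pd.read_csv("condition_means.csv")

# Example: curved vs. flat within the vertical orientation (cf. Section 4.1.3)
vertical = df[df["orientation"] == "vertical"].sort_values("participant")
curved = vertical.loc[vertical["configuration"] == "curved", "resistance_change"].to_numpy()
flat = vertical.loc[vertical["configuration"] == "flat", "resistance_change"].to_numpy()

t, p = stats.ttest_rel(curved, flat)   # paired t-test, df = n_participants - 1
n_tests = 3                            # Bonferroni: scale p by the number of comparisons
print(t, min(p * n_tests, 1.0))
```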
### User Feedback When asked about their thoughts on improving the VR drawing activity for future use, participants responded with feedback on the difficulty, easiness of seeing, and the stretch required to reach the tasks. Participant #6 reported that for the curved configuration, in horizontal orientation, it was _"hard to reach the furthest part of the fish from standing in one spot"_ and in the vertical orientation, it was _"hard to tell what dots were further away/closer to me."_ This shows how orientation affects how much a user perceives body stretch while reaching and estimating distance from an object in a VE. When drawing the flat fish in a vertical orientation, they expressed that _"this level was too easy."_ Similarly, Participant #11 expressed that the flat model drawn vertically _"felt like 1 finished this model really fast so [it] wouldn't be that practical for a game."_ \begin{table} \begin{tabular}{l r r r r} \hline \hline Variable & Norm. Task Completion Time (s) & Norm. Number of Mistakes & Norm. Resistance Change (\%) \\ \hline Horizontal & 0.71 (0.43) [0.08] & 0.086 (0.09) [0.01] & 0.60 (0.47) [0.09] \\ Curved & 0.59 (0.32) [0.09] & 0.089 (0.07) [0.02] & 0.51 (0.41) [0.11] \\ Flat & 0.82 (0.51) [0.14] & 0.083 (0.11) [0.03] & 0.70 (0.52) [0.15] \\ Vertical & 0.61 (0.37) [0.07] & 0.066 (0.07) [0.01] & 0.89 (0.86) [0.17] \\ Curved & 0.72 (0.34) [0.10] & 0.088 (0.08) [0.02] & 0.45 (0.50) [0.14] \\ Flat & 0.50 (0.39) [0.11] & 0.044 (0.04) [0.01] & 1.34 (0.93) [0.27] \\ \hline \hline \end{tabular} _All entities are in the format: mean value (standard deviation) [standard error]. (Norm. Normalized)_ \end{table} Table 1. Summary of descriptive results for drawing performance. \begin{table} \begin{tabular}{l r r r r r} \hline \hline **Variable** & **df** & **F** & **p** & **Sig** & \(\eta^{2}\) \\ \hline **Norm. Task Completion Time** & & & & & \\ Orientation & 1 & 1.292 & 0.27 & & 0.015 (small) \\ Configuration & 1 & 0.001 & 0.96 & & 0.00003 (small) \\ Orientation \({}^{*}\) Configuration & 1 & 5.872 & \textless{}0.03 & * & 0.082 (medium) \\ **Norm. Number of Mistakes** & & & & & \\ Orientation & 1 & 1.681 & 0.22 & & 0.016 (small) \\ Configuration & 1 & 2.068 & 0.17 & & 0.023 (small) \\ Orientation \({}^{*}\) Configuration & 1 & 0.479 & 0.50 & & 0.013 (small) \\ **Norm. Resistance Change** & & & & & \\ Orientation & 1 & 4.990 & \textless{}0.04 & * & 0.054 (medium) \\ Configuration & 1 & 11.823 & \textless{}0.005 & * & 0.168 (large) \\ Orientation \({}^{*}\) Configuration & 1 & 8.528 & \textless{}0.01 & * & 0.076 (medium) \\ \hline \hline \end{tabular} \end{table} Table 2. Summary of statistical results with significance level and effect sizes (\(p<.05\)). Figure 5. Results of dependent variables: (left) task completion time, (middle) mistakes, and (right) resistance change. Starred brackets mark significantly different conditions. Such suggestions offer insight into how our task can be used as short repeatable exercises for patients that are not too strenuous in each iteration. Participant #12 said that they liked the flat model more in horizontal orientation than vertical orientation because they could "_look down on it as I draw_" and the horizontal task was "_realistic, easy, enjoyable._" However, Participant #9 expressed that with flat model in horizontal orientation, "_it was a little challenging to see the order of the dots in this setting, and I think I traced some of the dots out of order because of this._" ## 5. 
Discussion In the following section, we discuss the results from our preliminary user study with respect to the hypotheses described in Section 3.5. _Effects of Configuration._ The results show statistically significant differences in the variable _resistance change_ for model configuration. **H1-1** is not supported because the flat configuration induced more electrical resistance changes than the curved configuration, in particular in the vertical orientation. This reveals a benefit of the flat configuration, which induces more elbow stretches for rehabilitation. There was no significant difference in the main effect between the configurations in terms of task completion time. However, there was a significant interaction effect between configuration and orientation. The results show that the flat configuration was performed faster in the vertical orientation. For the number of mistakes, there was no significant difference in the main effect of configuration. However, according to the descriptive results, the average number of mistakes for the flat configuration was lower than for the curved configuration. Based on these results, we state that **H1-2** is supported. _Effects of Orientation._ The results of the objective measures do not support **H2-1**. On the contrary, the vertical condition induced more electrical resistance changes than the horizontal condition. It is also noteworthy that there was a significant difference in the flat configuration between the two orientations. The results indicate that the resistance change in the vertical orientation for the flat configuration was greater than in the horizontal orientation. For **H2-2**, the results of the normalized task completion time reveal that the vertical orientation outperformed the horizontal orientation in the flat configuration. There was no significant difference in terms of the normalized number of mistakes. However, the average number of mistakes in the vertical orientation was lower than in the horizontal orientation. Therefore, we consider that the results support **H2-2**. The results of the objective measures for elbow flexion reveal that the participants' performance was influenced by the orientations and configurations of the VR therapy exergame. However, there were no statistically significant differences in the main effects of task completion time and number of mistakes for either factor. This indicates that the VR therapy exergame can support customized therapy while maintaining the same intensity level. For example, if a patient has restricted lower-limb mobility and is required to be seated, they can use different orientations and configurations interchangeably. For the _subjective measures_, the participants reported relatively higher scores for _willingness to recommend_ for the multi-dimensional _curved_ content. This encourages researchers to look at how more curves and dimensions and the combination of reaching up/down/out/in movements can be used in future studies of enjoyable tele-rehabilitation. _Limitations and future work._ One of the major limitations was that the wearable sleeve was _one size fits all_. Thus, for some participants, it was too loose and needed to be secured with rubber bands. In addition, the signal from the sleeve sensor was sometimes disconnected because of the fragility of the hardware. Thus, more work on robust data collection with this sensor is needed, in addition to making different sizes of the elbow sleeve sensor (Baran and Chienang, 2017).
Figure 6. Questionnaire results of the average perceived scores for easiness, comfort, enjoyment, body stretch, depth perception, visual cues, and willingness to recommend. Also, though the creative drawing model was meant to be drawn in one continuous arm stroke, some participants had a difficult time following our intended path. Future work will include guided paths so that there is more consistency in the resistance change data from the same task. This is important because, in the future, we can look at distance misestimation for depth in terms of overestimation (the controller goes past the dots) or underestimation (the controller does not reach the dot), especially with respect to Fitts' Law (Lavand, 2019) in simpler virtual content. This will give us an idea of where depth perception falls short in our immersive virtual exergame. In addition, there are more dimensions that can be further developed for the VR therapy exergame. For example, implementing a multiplayer feature allows collaboration among users, thereby making the exergame more interesting and improving the user experience (Fan, 2019; Gagnon et al., 2020). Collecting multi-modal data during the experiment is also worth investigating. The collected data can be used to train machine learning models that predict the exercise intensity. Hence, future research can utilize the collected data and trained models to recognize the key features that have the biggest impact on the outcome, i.e., quantitative and qualitative assessments. Another limitation of our study was the small sample size of our participant pool and the volunteer-based convenience recruitment. From the perspective of improving the experimental settings, future studies should cooperate with medical institutions and invite the target population of upper extremity therapy, such as Parkinson's patients, post-stroke patients, or people who have experienced serious injuries. ## 6. Conclusion In this paper, we presented a VR therapy exergame for upper extremity rehabilitation that captured both hand and elbow joint movement with a smart wearable sensor. The results provide insights that the orientation and configuration of virtual content may be used for therapeutic applications. Such therapy does not decrease patient accuracy or depth perception and, importantly, may increase the movement of the involved joint. Our findings show that these study conditions may be appropriate for exercises that favor a seated position of the user and focus on upper extremity mobility. Further research is necessary to study how to improve visual cues, depth perception, and reaching capabilities for 3D and curved objects in virtual environments, especially for VR therapy where accuracy is important for a patient's healthcare. Our results provide insights and show potential research directions for at-home virtual therapy, especially by using multi-modal sensing data for further in-depth analysis of limb movement and range of motion assessment.
2302.07640
Detection and classification of vocal productions in large scale audio recordings
We propose an automatic data processing pipeline to extract vocal productions from large-scale natural audio recordings and classify these vocal productions. The pipeline is based on a deep neural network and addresses both issues simultaneously. Through a series of computational steps (windowing, creation of a noise class, data augmentation, re-sampling, transfer learning, Bayesian optimisation), it automatically trains a neural network without requiring a large sample of labeled data or extensive computing resources. Our end-to-end methodology can handle noisy recordings made under different recording conditions. We test it on two different natural audio data sets, one from a group of Guinea baboons recorded at a primate research center and one from human babies recorded at home. The pipeline trains a model on 72 and 77 minutes of labeled audio recordings, with an accuracy of 94.58% and 99.76%, respectively. It is then used to process 443 and 174 hours of natural continuous recordings, and it creates two new databases of 38.8 and 35.2 hours, respectively. We discuss the strengths and limitations of this approach, which can be applied to any massive audio recording.
Guillem Bonafos, Pierre Pudlo, Jean-Marc Freyermuth, Thierry Legou, Joël Fagot, Samuel Tronçon, Arnaud Rey
2023-02-14T14:07:09Z
http://arxiv.org/abs/2302.07640v2
# Detecting human and non-human vocal productions ###### Abstract We propose an automatic data processing pipeline to extract vocal productions from large-scale natural audio recordings. Through a series of computational steps (windowing, creation of a noise class, data augmentation, re-sampling, transfer learning, Bayesian optimisation), it automatically trains a neural network for detecting various types of natural vocal productions in a noisy data stream without requiring a large sample of labeled data. We test it on two different data sets, one from a group of Guinea baboons recorded from a primate research center and one from human babies recorded at home. The pipeline trains a model on 72 and 77 minutes of labeled audio recordings, with an accuracy of 94.58% and 99.76%. It is then used to process 443 and 174 hours of natural continuous recordings and it creates two new databases of 38.8 and 35.2 hours, respectively. We discuss the strengths and limitations of this approach that can be applied to any massive audio recording. keywords: detection, classification, neural network, automatic pipeline, natural environment, baboon, baby, vocalization + Footnote †: journal: ## 1 Introduction There is a growing number of massive continuous audio recordings made in natural environments that aim to study the vocal productions of different animal species. The need to collect such data is particularly important in comparative approaches. It is essential notably to further progress on the issue of language evolution [1]. In particular, studies on the vocal productions of different species allow us to question hypotheses that were no longer discussed for decades [2; 3]. Beyond wild ecosystems and the study of non-human animals, Gilkerson et al. [4] note the importance of studying the vocal productions of human children in their natural environment with their parents. By quantifying these vocalizations over time, one can estimate the relationship between certain covariates and child development. Cabon et al. [5] have shown, for example, the value of recording and retrieving the cries of newborns in neonatal wards to ensure the proper development of these children. Similarly, ter Haar et al. [6] showed that forms of babbling, an important phase in human language development, can be found in species other than humans. For all these questions, there is a need for new methods to efficiently and rapidly analyze massive audio data to further study these critical developmental phases in detail. To deal with this type of problem, _deep learning_ approaches have proven their efficiency in different areas to treat massive data, possibly noisy, with increasingly good results [7]. However, one problem with deep learning approaches is the need for huge amount of data and computational resources to learn the relevant information. Here we propose a complete pipeline based on deep learning neural networks that has been designed to quickly detect the target signal (i.e., vocalizations from a given species), to treat massive and noisy data, and to minimise the loss of information. The originality of this approach is multiple. The pipeline is complete and truly end-to-end, from learning to prediction. It has been tested on real data and its generalizability has been evaluated on two qualitatively distinct data sets from two animal species, Guinea baboons (_Papio papio_) and human infants (_Homo sapiens_). 
In each case, training of the model has been done on a restricted labeled data set while predictions were done on massive records. Computations were relatively fast and done on a laptop. Finally, the model provides supplementary information about the class of each detected vocalization. In the following sections, we first provide a review of the pattern recognition literature that address the issue of detecting and classifying vocalizations from large scale audio recordings. Second, we present the proposed pipeline. Third, we test it on two completely different data sets. Fourth, we show that the results reach state-of-the-art performances on comparable data sets and how the method can be easily applied to other data sets. ## 2 Related work ### Traditional methods Pattern recognition methods are traditionally based on the following 3-steps procedure: 1) filtration of the signal, 2) extraction of features to a vector that represents the signal, and 3) classification from this vector. Dietrich et al. [8], for example, propose a pipeline to detect cricket vocalizations. Since bioacoustic time series are generally noisy, the pipelines start by filtering the recorded acoustic signal. The signal is then normalized and segmented with a speech recognition algorithm. Once the relevant segments of vocalizations are found, several descriptors are extracted in order to characterize the signal, therefore creating a vector of representation. With this type of approach, a lot of prior information is necessary. For example, the filter is tuned to the range of frequencies corresponding to the cricket vocalizations but from one species of cricket to another, the range may change. The segmentation algorithm also needs to be tuned according to each cricket species. Finally, this approach is strongly dependent on the information collected in the vector of representation that characterizes each vocalization. Selection of the relevant information might be arbitrary or biased by anthropogenic choices. Following the same approach, several studies have reported better performances in the detection and classification of acoustic events by modifying one or several steps from this 3-step approach. For example, Xia et al. [9] improved performance by modifying the vector representing the signal and by coding contextual information in a different way. The contextual information coming before and after the target signal are taken into account in this approach to enrich the representation vector. However, by integrating both prior and posterior contextual information, it prevents this approach to do online classification. Similarly, Nguyen et al. [10; 11] have mainly proposed innovations regarding the third step of the traditional procedure, which allowed them to perform online classification. For this purpose, they have used a Bayesian classifier, estimated by variational inference. The outcome is efficient, notably when the data flows are massive. However, even if the results were encouraging, this strategy remains dependent on the preliminary step of feature extraction and the arbitrariness of the choices to generate vector representations. The choice of data representation, of how to vectorize the signal to do the classification, is central in this type of approach. Strisciuglio et al. [12] focus particularly on this question for the tasks we are interested in, when the Signal to Noise Ratio (SNR) is low. 
Relying on the local maxima of the signal gammatonegrams, weighted, smoothed and averaged, they insist on the problem of building hand-crafted features. This task is demanding, time consuming and requires important domain-specific knowledge. Representation learning, the solution generated by deep neural networks, certainly provides a method to overcome this problem. Compared to the traditional approach, neural networks seem to provide more robust and more generalizable representational solutions. As a pure end-to-end approach, it avoids multiple processing steps. It does not require hand-crafted features, which usually requires significant domain-specific engineering skills. Finally, neural networks can provide online processing. ### Motivations for using deep neural networks Deep learning, a possible solution through hierarchical representation learningLeCun et al. [13] explain how traditional machine learning methods are limited to process raw natural data. Deep learning, and more specifically _convolutional neural networks_ (CNN) automatically discover the useful representation for detection or classification, from the raw data. The representation learning done through the layers finds structures in data of high dimension and discovers hierarchical structures in natural signal. Farabet et al. [14] describe why a good representation of the data is hierarchical. Although they have applied this analysis to the problem of scene parsing, the same reasoning can be transposed to sound processing. For speech, we can indeed assume a hierarchical representation going from phonemes to sentences, passing through syllables and words. Similar hierarchical structures can be assumed in other species for different vocal communication systems. A CNN extracts these learned hierarchical features from the raw data and alleviates the risk of creating anthropocentric representations [15]. The deep learning approach is thus able to process a large amount of data and to provides data driven descriptors. In a recent study, Gu et al. [16] review the different successes of the deep learning approach, in different areas, with various large scale data sets. Good performances for object detectionDeep networks have already proven their efficiency for image classification and for detecting objects in a visual image. For instance, Girshick et al. [17] show that CNNs, which classify images efficiently, can also reach good performances on object detection. Through a selective search algorithm, regions from an image are selected and the network is able to provide a vector representation of each region. An SVM model then classifies the regions using the vector representation. Ren et al. [18] also use a deep learning approach to detect object. They describe a system which runs at near real time frame rates with a higher accuracy than the baseline models. These results suggest that a deep learning classifier can be used to detect a target signal in a stream of information. Ability to model temporal dimension: the case of videosAudio signal has a temporal dimension that is absent from images. Ji et al. [19] use a 3-dimensional CNN to recognise action. Discriminative hierarchical features are learnt automatically on the spatial and temporal dimensions. This strategy, which avoids pre-processing steps and provides online recognition due to the feed forward nature of the models, is very competitive with state-of-the-art models, or even better depending on the data set. Similarly, Tran et al. 
[20] use a 3-dimensional CNN to learn spatio-temporal descriptors. Here also, whereas handcrafted methods are not able to process large data sets, 3-dimensional CNN avoid preprocessing difficulties and generate rapidly vector representations and classifications. Therefore, CNNs are able to model the temporal dimension, to construct features from the raw data and look more suitable to process large data sets. Learning temporal signal with neural networksIn a recent review, Ismail Fawaz et al. [21] show how deep learning is able to deal with time series and to learn discriminative features from the data. Filters of convolutional layers can indeed be time invariant thanks to parameters sharing. Furthermore, end-to-end approaches are domain agnostic, which reduces the bias of handcrafted features. To model the temporal characteristics of the speech signal, Abdel-Hamid et al. [22] uses a Hidden Markov Model (HMM) associated with a CNN to perform automatic speech recognition in order to estimate the probability distribution of speech signals. More recently, Zhang et al. [23] show that it is possible to perform the same type of processing only with a CNN, without HMM. Moreover, van den Oord et al. [24] demonstrate with WaveNet the ability of CNNs to model audio data without recurrent connections, thus simplifying learning. Engel et al. [25] show that this model is also successful for audio synthesis (_e.g._, showing how a WaveNet autoencoder trained on the Nsynth data set produces efficient representations and generates new types of sounds of instruments). Similarly, Mehri et al. [26] introduce SampleRNN, another generative model which is also able to produce realistic audio samples for different tasks. It is composed of hierarchical modules, operating at different temporal resolution to capture the temporal dependence. Thus, the deep learning approach is adapted to process time series and model the temporal dependence of the audio signal. Whether it is for the classification of speech signals, music or environmental sounds, several works have proven its efficiency. Examples on speech processingZhu et al. [27] build a convolutional model which learns features directly from the raw data in order to recognize speech. It uses filters of different scales, ones with a large window for the low frequencies, others with a small window for the high frequencies, to cover all the frequency spectra of the signal. It obtains better results than with spectrograms, overcoming the time/frequency trade-off of Fourier representations. Schindler et al. [28] also use different temporal resolutions and conclude that several resolutions are better than one for a problem of acoustic scene classification. However, the input of their model is not the raw signal; a pre-processing based on short time Fourier and mel-transformation is required. Examples on musicIn classifying musical genres, Zhang et al. [29] point out the problem of choosing an input that may lack universality and be difficult to construct for a specific task. With a short time Fourier transform passing through a CNN, they find better results than with handcrafted features. Nonetheless, they note that it is not a truly end-to-end method, taking the spectrogram instead of the raw waveform as input. The model of Choi et al. [30] also does not take raw audio as input, but log amplitude mel-spectrograms to classify the Million Song data set by genre, mood, instrument and era. 
Dong [31] achieves human-level accuracy with a CNN on a problem of music information retrieval, but again using mel-spectrograms (_i.e._, making a mel-scale transformation, particularly adapted for the human auditory system). In contrast, Chen et al. [32] propose a model that starts by learning descriptors from a wave signal, using 1-dimensional convolutional layers, and then use spectrogram-like representations that are produced to extract the melody through an inception module. Lee et al. [33], also facing a problem of music information retrieval, present a model that learn hierarchical representations of the signal directly from the waveform. Examples in bioacousticsThe previous examples show how neural networks manage to model sound time series. Bergler et al. [34] illustrate why deep learning is useful in bio-acoustic tasks, when these time series have a low signal-to-noise ratio. On the Archive data set (records representing 19 000 hours on 23 years, in which only a small portion contains animal vocalization and the majority is environmental noise), a ResNet dealing with spectrograms provide a good solution. Using the Archive Annotation Catalog (in which 1.68% of Orchive has been labeled), the network is able to discriminate correctly between moments of signal (when orcas vocalize) and moments of noise (everything else). It is then able to detect vocalizations, emphasizing the importance of the variety of noises to be provided during training. Oikarinen et al. [35] also develop a strategy to detect and classify sounds of animals in a noisy environment. They use 38 hours of marmoset primates records for training and they use spectrograms as inputs. Not only the results in classification and attribution are superior to humans but the inter-human variability is avoided. Therefore, deep learning has real advantages for bioacoustics. The question of how to encode the input data of the network remains a critical issue. Stowell et al. [36] point out in the bird audio detection challenge that some sounds have similar characteristics on a spectrogram and encourage the use of waveform in the future instead of transformed. CNNs reached the best results during the challenge, despite the weather noise, the variability in the bird sounds and the mismatched training data. Ability to scale and robustness to noiseDeep neural networks can effectively process large scale and noisy data sets. Using a _deep learning_ approach to process and classify radio signal, Chen et al. [37] empirically shows that the model can process a quite large amount of data and has a good robustness against noise in a classification task. The architecture of their model is a mixture between Inception and ResNet modules. Furthermore, the model can be adapted to related but different tasks and more specifically, in situations where the labeled data is limited. In this case, it is possible to reuse a model that has been trained on a large data set and adapt it to the limited labeled data set. This method of transfer-learning is also particularly useful when computational resources are limited or if we want to tune the model as much as possible (it is simpler to multiply the iterations when those are "inexpensive" because the learning is done quickly). Examples on environment sound classificationThe ability of deep learning models to deal with noisy and heterogeneous data has also been proved in environmental sound classification. 
For instance, Piczak [38] shows how CNNs are effective on reference data sets, such as UrbanSound8K, ESC-10 and ESC-50, surpassing methods with manually engineered features [see also 39, 40, 41]. Note that none of these studies take raw data as input but start with a transformed representation, even if Li et al. [41] use multiple input streams, one of them being raw data. Conversely, Tokozume and Harada [42] propose EnvNet, a model which learns a representation of the raw data that might discover new characteristics of the input signal that humans could not distinguish. Using raw data may also allow the network to learn representations that would not be discovered with transformed data, like mel spectrograms, in which the information has been shaped for human auditory perception. All these studies indicate that deep neural networks are good candidates for modelling our task. They are fast for inference, they scale to massive data, they are robust to noise, and they have good generalization results. Moreover, this approach avoids the problem of handcrafted features, making the pipeline more accessible and generalizable. It avoids the manual configuration often used in traditional approaches. It is a purely end-to-end strategy. However, this approach also has some limits. Massive data is the keystone of successful representation learning, but we usually do not have a large labeled corpus on which training can be done. In addition, training a neural network is complex and computationally heavy. The main point of the present pipeline is to go beyond these limits. It is in line with the roadmap proposed by Stowell [7], who suggested detecting sound events, learning from small data sets, using integrated workflows, and preferring solutions with low impact (thanks to shorter learning and more efficient predictions). ## 3 Proposed Method ### Global workflow We propose a unified pipeline that can detect a vocalization produced by a given species in a massive continuous audio stream of several hours, days or months. For this, we resort to a CNN that learns to distinguish two classes: a vocalization class and a noise class. It is thus necessary to have samples representative of these two classes. Moreover, if we want to train the CNN to distinguish different classes of vocalizations, we also need a labeled database. Once trained on the labeled data, the model segments the massive audio database into small time segments for which it produces two outputs: 1) whether the segment contains a target signal (i.e., a vocal production) and 2) in the case of a vocalization, the class to which this vocalization belongs. The model is designed to be fast and easy to use. Here, the pipeline is tested on two problems. The first one is a bio-acoustic problem of detection of baboon vocalizations, in which the target signal is a baboon's vocalization and all other sounds are considered non-signal. The second problem aims at detecting the vocal productions of human babies during their first year of life in their home, from daylong audio recordings. The target signal is a baby vocalization and noise corresponds to all the other sounds that can be heard at home. For both problems, we have prior information on the repertoire of the vocal productions that allows us to predict the second output, i.e., the class of the vocalization. The procedure is automatic and can be applied to other problems of event detection. The pipeline is fully and freely available, using open-source libraries to process the data [43].
There is no preprocessing steps and the tuning is done during the learning. It is tested on data having two different signal-to-noise distributions, proving the robustness of the workflow together with its adaptation to small data sets, and it runs on a laptop. Figure 1 provides a schematic description of the global workflow. In the next sections, we present the main features of the pipeline. We start by a formal description of the problem. We then introduce: how the signal is processed; how the vocalizations and noise classes are constructed; how we perform data augmentation in order to improve learning; the resampling procedure; the transfer-learning strategy we adopted; the training of the back-end layers; and the tuning of hyper-parameters. Finally, we propose a practical description of the pipeline. ### Formalization of the problem Here we assume that we have labeled data, i.e., sound files that correspond to each of the classes to be learnt. These files will be used to train the model. After training, the model will be able to classify massive audio recordings and detect the learned classes. Let \(p_{data}(\mathbf{x},\mathbf{y})\) be the data generating distribution and \(\hat{p}_{data}(\mathbf{x},\mathbf{y})\) the empirical distribution, _i.e.,_ our training samples. \(\mathbf{y}=(y^{s},y^{v})\), where \(y^{s}\) and \(y^{v}\) are two categorical variables, respectively whether it is a signal (i.e., a vocalization) or another sound, and if it is a vocalization, to which class it belongs. \(\mathbf{x}\) corresponds to a 1-time frame. ### Windowing the signal The final objective of the model is to process continuous audio records and detect in the flow each moment containing a target signal. Note that each audio file, both for the training and prediction phases, was resampled to 16,000 Hz to reduce computation time. We use the same windowing procedure during the training and the prediction phases. Practically, the input to the model is a 1-second wave file, the complete audio recording being divided into time frames of 1-second with an 80% overlap. We consider each frame as independent. This choice has several advantages. First, in a situation where labeled data are lacking, it allows to artificially increase the data set. Second, we can avoid using recurrent neural networks which are much more difficult and slower to train. This also makes it easier to reuse the pipeline in another situation without needing great skills to adjust the model. Third, by using the same time frame during the training and prediction phases, we can be more confident in the obtained results. Fourth, it is a computationally efficient way to quickly discard noise segments from the stream. This is particularly useful whenever the overall vocalization duration is small compared to the total recording time. Finally, a time window of one second is consistent with the type of data we are dealing with, i.e., vocalizations produced by different species. One second thus seems to be a good compromise, sufficient to encompass most vocalizations but not too large to be easily processed. This choice implies specific prediction methodology which we describe in Section 3.11. ### Construction of the vocalization classes The way we implemented the detection task is to train a CNN model to discriminate the target signal (i.e., the vocalizations) from a noise class (all the other sounds in the environment). 
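Before turning to the construction of the classes in detail, the windowing scheme just described can be sketched as follows. It is a minimal illustration: the 16,000 Hz sampling rate, the 1-second frames and the 80% overlap come from the text, whereas the use of librosa and the function name are assumptions.

```python
import numpy as np
import librosa

def frame_recording(path, sr=16000, frame_seconds=1.0, overlap=0.8):
    """Split an audio file into overlapping 1-second frames.

    Assumes the recording lasts at least one second; returns an array of
    shape (n_frames, frame_len), each row being one input frame for the model.
    """
    y, _ = librosa.load(path, sr=sr)          # resample to 16,000 Hz while loading
    frame_len = int(frame_seconds * sr)       # 16,000 samples per frame
    hop = int(frame_len * (1 - overlap))      # 80% overlap -> hop of 0.2 s
    starts = range(0, max(len(y) - frame_len, 0) + 1, hop)
    return np.stack([y[s:s + frame_len] for s in starts])
```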
To train the model to discriminate signal from noise, we therefore need labeled sound files for each of the vocalization classes we want the model to learn. In the two situations tested here (i.e., baboon and human baby vocalizations), we had approximately 70 minutes of labeled audio data including respectively 6 and 5 classes of vocalizations. ### Construction of the noise class The soundscape is the acoustic expression of an ecosystem [44]. It is composed of the biophony (the sounds produced by the animals), the geophony (the physical environment) and the anthropophony (the human activities). The latter includes the technophony, the sounds produced by mechanical and electronic machines. \(p_{data}\) describes the whole soundscape, including the vocalizations. Because we want to detect the moments of signal in a noisy environment, we enrich \(\hat{p}_{data}\) by adding sound files representing the environmental sounds surrounding the vocalizations. In our first working example, we have 35 days of continuous audio recordings from a primate facility where a group of baboons lives. We extracted 7 hours, recorded on different days and at different moments of the day, in order to have a good representation of the sounds produced in that environment. Of these 7 hours, we removed the baboons' vocalizations, resulting in 355.62 minutes of signal-free sound. Since the full audio record contains approximately 443 hours (i.e., 35 days), we only needed to label 1.6% of all records. Our second working example is composed of daylong recordings done at several points within one year. To construct the noise class, we have extracted approximately 5 hours from records done at different months, days and hours of the day. Figure 1: Schematic representation of the global workflow. The front-end of the model is transferred from YamNet. The back-end is trained on the labeled data. The model has two outputs, the probability that there is a vocalization within the frame and the class of the vocalization. Once trained, the model can similarly process unlabeled audio data to extract segments of vocalization. From these records, we have removed the baby vocalizations. Since the total audio records represent 174.15 hours, we needed to label 3% of all records. Note that these steps of class construction (for both vocalizations and noise) are the only ones that require hand-labeling. ### Data Augmentation Due to the small size of the labeled data sets, we need to increase the amount of training data through data augmentation. We use the ready-to-use library of McFee et al. [45], which allows us to multiply by 15 the number of labeled records we have. From each original file, we change the pitch, the speed and the background noise five times each. In addition to creating new audio files, data augmentation is a good way to avoid over-fitting and to increase the ability of the network to distinguish different types of sounds in a noisy environment. This strategy is well suited to the case of environmental sound detection and will improve the ability of the model to generalize [46]. ### Resampling In addition to being small, the labeled vocalization data set is probably not balanced (some vocalizations being more frequent than others). Vocalization classes might also be underrepresented relative to the noise class. A non-uniform learning distribution between classes can be very detrimental to network learning. It may not learn to recognize underrepresented classes. To overcome this problem, we refine the training data generation process.
\(\hat{p}_{data}\) is built by sampling conditionally on each class. Let \[\mathbf{\pi}^{s}\sim p(\mathbf{\pi}^{s}),\] be the categorical distribution of signal and noise, where \(\mathbf{\pi}^{s}_{j}\) for \(j=1,2\) is the probability to draw an element of the class \(j\), and \[\mathbf{\pi}^{v}\sim p(\mathbf{\pi}^{v}),\] the categorical distribution of each vocalization class, where \(\mathbf{\pi}^{v}_{k}\) for \(k=1,...,K\) is the probability to draw an element of the class \(k\), \(K\) being the number of classes in the repertoire (_i.e._, the number of solutions for the second output produced by the model). For \(\mathbf{\pi}^{s}\), the probability of drawing a sample from the signal or noise class, we set \(\pi^{s}_{1}=\pi^{s}_{2}=\frac{1}{2}\). For \(\mathbf{\pi}^{v}\), the probability of drawing a vocalization from a specific class of the repertoire, we set \(\forall k,\pi^{v}_{k}=\frac{1}{K}\). At last, we draw the samples from the training data generating process : \[(\mathbf{x},\mathbf{y})\sim\hat{p}_{data}(\mathbf{x},\mathbf{y}|\mathbf{\pi}^{s},\mathbf{\pi}^{v}).\] To sum up, the training distribution \(\hat{p}_{data}\) is built by drawing equiprobably between the noise frames and the signal frames. The signal frames are themselves drawn equiprobably between the different class of vocalizations in the repertoire. Therefore, for each epoch of training, there are as many frames of noise as there are signal frames. Similarly, among signal frames, there are as many frames per class. Because the training set is built by sampling, the definition of an epoch changes. As it is generally the case with neural networks, the loss function of our model is optimized by stochastic gradient descent. It is an iterative optimization method : at each iteration, the gradient is computed on a batch of the entire data set. Once all the frames have been processed, there is no more batch to feed the network and no gradient to compute and it is the end of an epoch. An epoch is the number of training iterations over the data set [47]. The number of iterations to complete an epoch is the number of batches to see all the frames. Here, the data set is an infinite flow of frame drawn with replacement from the set of labeled frames. A number of training iterations has to be fixed. We heuristically choose the number of batches needed to see each frame of the most represented class once, _i.e._ \[training\ iterations=\frac{(K+1)\times N}{N_{batch}},\] where \(N\) is the cardinal of the set of the biggest class, \(N_{batch}\) is the size of the batch. We add one to \(K\), the number of classes in the repertoire, because of the noise class. There are as many examples of each class of vocalization in the set used for training, and as many examples of signal in a set as there are noise. ### Transfer-Learning One of the interests of using _deep-learning_ is to learn a hierarchical representation of the data that we want to classify, project the raw data onto a latent space \(\mathcal{X}\) adapted to the task. The advantage is to avoid feature extraction, which would require domain-specific engineering skills. The features are extracted from the data during learning. However, the amount of data required to reach good results must be much larger than the amount of labeled data we presently have. Moreover, learning has to be feasible with a realistic budget constraint. In this situation, transfer learning is a good solution. 
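Before detailing the transfer-learning strategy, the class-balanced sampling scheme formalized above can be sketched as a simple generator. This is an illustration with our own function and variable names; the actual implementation may differ.

```python
import numpy as np

rng = np.random.default_rng(0)

def balanced_batches(frames_by_class, noise_frames, batch_size=32):
    """Endless generator drawing noise and signal frames equiprobably (pi_s),
    and signal frames uniformly across the K repertoire classes (pi_v)."""
    classes = list(frames_by_class)                    # the K vocalization classes
    while True:
        xs, ys = [], []
        for _ in range(batch_size):
            if rng.random() < 0.5:                     # pi_s = (1/2, 1/2)
                xs.append(noise_frames[rng.integers(len(noise_frames))])
                ys.append((0, 0))                      # class label is a placeholder for noise
            else:
                k = rng.integers(len(classes))         # pi_v uniform over the K classes
                pool = frames_by_class[classes[k]]
                xs.append(pool[rng.integers(len(pool))])
                ys.append((1, k))
        yield np.stack(xs), np.array(ys)

def steps_per_epoch(frames_by_class, batch_size=32):
    """Number of training iterations per 'epoch': (K + 1) * N / N_batch."""
    n_max = max(len(v) for v in frames_by_class.values())   # N, size of the largest class
    k = len(frames_by_class)
    return (k + 1) * n_max // batch_size
```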
It consists in taking the front-end of a model that has been previously trained on a large data set and using it as the front-end of our own model. Through transfer-learning, the knowledge learned on a source domain \(\mathcal{D}_{S}\), for a specific task \(\mathcal{T}_{S}\), can be used for a target task \(\mathcal{T}_{T}\) or domain \(\mathcal{D}_{T}\) [48]. The source model can be learned from data coming from a different distribution [49], but the transfer will work all the better if the source data is linked to the target data. YamNet is a freely and easily accessible model1. Based on the architecture of MobileNet [50], it has been trained on more than 2 million 10-second audio clips from the AudioSet corpus [51] to detect 521 classes of events, among which we find human, animal, and environmental sounds. Its data generating distribution is certainly linked to ours, and its empirical distribution is rich enough to characterise the soundscape of the ecosystems we are dealing with. Consequently, the latent space \(\mathcal{X}\) of YamNet, the source, and that of the data we want to process with our pipeline, the target, are likely to be the same. Moreover, the label space \(\mathcal{Y}\) of the source (i.e., the 521 labels) and that of the target are also likely linked, because we are basically trying to detect acoustical events in a flow of sounds. Since the cardinality of \(\mathcal{Y}_{S}\) is large, \(\mathcal{Y}_{T}\) can be assumed to be a subset of it. Footnote 1: [https://tfhub.dev/google/yamnet/1](https://tfhub.dev/google/yamnet/1) Therefore, we have extracted the front-end of YamNet and we have used it as the front-end of our CNN. It corresponds to the first convolutional layers extracting the information from the audio file up to the latent space of dimension 1024. These layers are then connected to the back-end of the model, which will predict whether there is signal or not in the frame, and which kind of signal is present.

### Learning the back-end

The global objective of the pipeline is to learn \(p(\mathbf{y}|\mathbf{x};\mathbf{\theta})\), where \(\mathbf{y}=(y^{s},y^{v})\) and \(\mathbf{x}\) corresponds to one time frame. \(\mathbf{x}\) is mapped by the front-end of the model into its latent space \(\mathcal{X}\). The back-end takes this representation and has to predict \(\mathbf{y}_{model}=(y^{s}_{model},y^{v}_{model})\). The model does two tasks simultaneously: it detects whether there is signal or not, and classifies it. To this end, it produces two outputs, the probability that there is signal or not, and the most likely class within the frame. The front-end is fully transferred and consequently not learned, which allows us to successfully use _deep learning_ on a small data set. The back-end \(\mathbf{\theta}=(\mathbf{\theta}^{s},\mathbf{\theta}^{v})\) has to be learned. On the optimization side, both tasks must contribute to the loss function. Therefore, we minimise a global loss function \(\mathcal{L}\) which is the sum of the losses of the two tasks, \[\mathcal{L}(\mathbf{x},\mathbf{y},\mathbf{\theta})=\mathcal{L}(\mathbf{x},y^{s},\mathbf{\theta}^{s})+\mathbb{1}_{\{y^{s}=1\}}\mathcal{L}(\mathbf{x},y^{v},\mathbf{\theta}^{v}),\] where \(\mathbb{1}_{\{y^{s}=1\}}\) is the indicator function, equal to one if \(y^{s}=1\), _i.e._, the frame contains signal, and 0 otherwise. In that way, the classification part of the loss function cancels when there are noise samples.
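To make this two-headed design concrete, here is a minimal sketch assuming TensorFlow/Keras and the public YamNet model on TensorFlow Hub. The layer sizes and the use of per-output sample weights to realize the indicator term are our illustrative choices, not the authors' exact implementation.

```python
import tensorflow as tf
import tensorflow_hub as hub

yamnet = hub.load("https://tfhub.dev/google/yamnet/1")   # frozen, pre-trained front-end

def embed(frame_16k):
    """Map one 1-second waveform (float32, 16 kHz) to its mean 1024-d YamNet embedding."""
    _, embeddings, _ = yamnet(tf.constant(frame_16k, tf.float32))
    return tf.reduce_mean(embeddings, axis=0)

def build_backend(n_classes, units=256, dropout=0.3):
    """Two-output back-end on top of the 1024-d latent space: signal probability + class."""
    x_in = tf.keras.Input(shape=(1024,))
    h = tf.keras.layers.Dense(units, kernel_initializer="he_normal")(x_in)
    h = tf.keras.layers.PReLU()(h)
    h = tf.keras.layers.Dropout(dropout)(h)
    signal = tf.keras.layers.Dense(1, activation="sigmoid", name="signal")(h)
    voc = tf.keras.layers.Dense(n_classes, activation="softmax", name="voc_class")(h)
    model = tf.keras.Model(x_in, [signal, voc])
    model.compile(optimizer=tf.keras.optimizers.Nadam(1e-3),
                  loss={"signal": "binary_crossentropy",
                        "voc_class": "sparse_categorical_crossentropy"})
    return model

# One way to realize the indicator term: weight the classification loss by the
# binary label, so it vanishes on noise frames (which carry a placeholder class), e.g.
#   model.fit(E, {"signal": y_s, "voc_class": y_v},
#             sample_weight={"voc_class": y_s.astype("float32")})
```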
Both loss functions are cross-entropy losses, one distinguishing between noise and signal, the other between the different classes of the repertoire of the task. The loss is minimized using the NAdam algorithm [52], an extension of Adam developed by Kingma and Ba [53], which includes Nesterov momentum. \(p(\mathbf{y}|\mathbf{x};\mathbf{\theta})\) is learned by a composition of dense layers with the Parametric Rectified Linear Unit activation function [54]. The weights are initialized following the initialization scheme proposed by He et al. [54], which should facilitate the convergence of the model [55]. The back-end is split into two modules: one binary, predicting whether there is signal in the frame or not; another multi-categorical, predicting which kind of signal is present, with as many categories as the cardinality of the repertoire. To interpret the outcome of the multi-categorical module as a probability, the activation function of its final layer is softmax. Similarly, the activation function of the final layer of the binary module is the sigmoid function. Given our model, three choices of back-end architecture were possible: (1) a purely multi-class approach, where the possible outputs are all the classes of the repertoire plus a noise class; (2) a hierarchical approach, where the model first predicts whether the frame contains a signal or not and then, if it has predicted that the segment is a signal, predicts its class; (3) our approach, where we predict two outputs, namely whether the segment is signal or noise and to which class of the repertoire it belongs. Our choice seems to be the most appropriate given our constraints and objectives. First, with respect to the first approach, we emphasize the importance of not losing information, while having an efficient workflow to quickly process a massive recording stream. We make the conservative choice to produce two outputs to minimize the risk of wrongly rejecting a vocalization. We also have the class information available, but our primary focus is on finding the signal. At the same time, we want a pipeline that is easy and fast to learn. Producing both outputs simultaneously makes the modeling easier. Compared to the second approach, this modeling choice avoids using, for example, a nested loss function that would have been more complicated to optimize. The indicator function allows us to cancel the part of the loss related to the multi-class problem during the learning process. If the observation is not a signal, we are only interested in the binary problem for the loss calculation and gradient backpropagation. This is a simple modeling choice that allows for easy optimization. To avoid over-fitting, a regularization strategy is used. Batch normalization [56] is computed after each layer, as well as drop-out and a max constraint on the norm of the weights [57]. A schedule decreases the learning rate by a factor of 0.2 if the global validation loss has not decreased after 5 epochs. The early-stopping procedure stops the learning after 20 epochs without a decrease in validation error. The architecture of the back-end is determined by the number of layers and the number of nodes per layer, for each problem (the detection and the classification). The former is between 1 and 6, the latter between 32 and 1024. Both are considered as hyper-parameters for each module. Like the other hyper-parameters of the model, the most suitable architecture for each task is learned from the data.
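A compact sketch of one such configurable module, with the regularization and callbacks described above, is given below. The ordering of batch normalization and drop-out, the monitored quantity, and the helper names are assumptions for illustration; such a builder exposes exactly the quantities that the Bayesian optimization of the next section searches over.

```python
import tensorflow as tf
from tensorflow.keras import layers, constraints

def dense_module(n_layers, n_units, dropout, max_norm, n_out, activation, name):
    """One back-end module (detection or classification): He initialization, PReLU,
    batch normalization, drop-out and a max-norm constraint on the weights."""
    def apply(h):
        for _ in range(n_layers):                       # searched in [1, 6]
            h = layers.Dense(n_units,                   # searched in [32, 1024]
                             kernel_initializer="he_normal",
                             kernel_constraint=constraints.MaxNorm(max_norm))(h)
            h = layers.PReLU()(h)
            h = layers.BatchNormalization()(h)
            h = layers.Dropout(dropout)(h)
        return layers.Dense(n_out, activation=activation, name=name)(h)
    return apply

callbacks = [
    tf.keras.callbacks.ReduceLROnPlateau(factor=0.2, patience=5),       # LR schedule
    tf.keras.callbacks.EarlyStopping(patience=20, restore_best_weights=True),
]
```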
### Hyper-parameters tuning

As with the preprocessing, the choice of the hyper-parameters is difficult and conditions the convergence of the model. It usually requires skill and experience to choose them well and to ensure good results with _deep learning_. Given that the pipeline has to be used on ordinary computers, and not on GPU clusters, the selection procedure is automatic and as efficient as possible, avoiding a grid search over an ever-growing space of choices. The pipeline includes a nested learning procedure to learn the best hyper-parameters adapted to the task together with the best architecture for the back-end of the model. This is done through Bayesian optimization [58; 59; 60]. Thus, hyper-parameters are not chosen by the user but learned to best fit the task. They correspond to the probability of drop-out, the norm of the weights, the parameters of the NAdam algorithm (learning rate \(\alpha\), the decay rates of the moment estimates \(\beta_{1},\beta_{2}\) of the gradient algorithm), the number of layers, and the number of nodes per layer.

### Prediction

Once the model is trained, we use it to find, in a long-form audio recording \(\mathbb{X}\) of length \(T\) seconds, the target signal moments. For prediction, the procedure is the same as for learning: \(\mathbb{X}\) is windowed into \(B\) frames \(\{\mathbf{x}_{b}\}_{b=1}^{B}\). With an overlap of 80% and a 1-second window, the model advances by steps of 200 milliseconds via iteration over the \(\mathbf{x}_{b}\) frames, and predicts \(\hat{\mathbf{y}}_{b}=(\hat{y}_{b}^{*},\hat{y}_{b}^{v})\). If \(\hat{y}_{b}^{*}>0.5\), then \(\mathbf{x}_{b}\) contains signal. If the previous frame containing signal is more than one second away, a new vocalization starts. If it is less than one second away, the current frame is integrated into the current vocalization, as well as the intermediate frames (if they have not already been predicted as containing signal). Taking the frames between two positive frames less than one second apart avoids excessive partitioning of the vocalizations and smoothes the results. When more than 1 second has passed since the last frame predicted to contain signal, the end of the vocalization is marked at the last frame containing signal. This is done over the entire \(\mathbb{X}\) recording, producing \(N\) vocalizations, each of length \(D_{n}\). We describe the procedure in Algorithm 1. In addition to detecting the \(N\) vocalizations in \(\mathbb{X}\), the model predicts, for each frame constituting the vocalization, the corresponding class. To determine the class of the vocalization, we do a majority vote on the \(\{\hat{y}_{d}^{*},\hat{y}_{d}^{v}\}_{d=1}^{D_{n}}\) frames constituting the vocalization, for which \(\hat{y}_{d}^{*}=1\), _i.e._, the frames the model predicts to actually contain signal (not those we include in the vocalization because they are surrounded by positive frames). Algorithm 2 describes the majority-voting procedure used for the assignment of the class of the vocalization.
```
Require: \(\mathbb{X}\), a long-form audio recording of \(T\) seconds; \(\tau\), the length of the window; \(\delta\), the overlap parameter
Ensure: \(N\) vocalizations of length \(D_{n}\), \(voc_{n}=\{(\hat{y}_{d_{n}}^{s},\hat{y}_{d_{n}}^{v})\}_{d_{n}=j}^{j+D_{n}}\) with \(j\in\{1,...,B\}\)
Split \(\mathbb{X}\) into \(B\) \(\tau\)-frames with a \(\delta\%\) overlap, with padding if necessary.
for\(b\) from \(1\) to \(B\)do \(\mathbf{x}_{b}=\{x_{1+(b-1)\tau(1-\delta)},...,x_{1+(b-1)\tau(1-\delta)+\tau}\}\) \(\hat{\mathbf{y}}_{b}=p_{model}(\mathbf{y}|\mathbf{x}_{b})\) if\(\hat{y}_{b}^{s}=1\)then \(\mathbf{x}_{b}\) contains signal if\(\forall i\in\{b-\frac{2}{1-\delta},...,b-1\},\hat{y}_{i}^{s}=0\)then We start a new vocalization \(n\) at frame \(b\) \(d_{n}\gets b\) elseif\(\exists i\in\{b-\frac{2}{1-\delta},...,b-1\},\hat{y}_{i}^{s}=1\)then We merge frame \(b\) with the current vocalization (including intermediate frames if any) \(D_{n}\gets D_{n}+1+\#\{\text{intermediate frames}\}\) endif elseif\(\hat{y}_{b}^{s}=0\)then \(\mathbf{x}_{b}\) does not contain signal if\(\hat{y}_{b-\frac{2}{1-\delta}}^{s}=1\) and \(\forall i\in\{b-\frac{2}{1-\delta}-1,...,b-1\},\hat{y}_{i}^{s}=0\)then We end the current vocalization \(n\) at frame \(b-\frac{2}{1-\delta}\) \(D_{n}\gets 0\) elseif\(\forall i\in\{b-\frac{2}{1-\delta},...,b-1\},\hat{y}_{i}^{s}=0\)then There is no current vocalization endif endif endfor ``` **Algorithm 1** Detection of a vocalization ``` 0:A vocalization \(n\) of length \(D_{n}\),\(\{(\hat{y}_{d}^{s},\hat{y}_{d}^{v})\}_{d=1}^{D_{n}}\) The vocalization class, for \(K\) classes in the repertoire \(class_{n}=\arg\max\limits_{k\in K}\sum\limits_{d=1}^{D_{n}}\{\hat{y}_{d}^{v}= k|\hat{y}_{d}^{s}=1\}\) return\(class\) ``` **Algorithm 2** Determination of the vocalization class by majority vote ## 4 Experimental Validation The pipeline is used to detect and classify signal from two different problems (i.e., baboon vocalizations and human baby vocalizations), leading to the creation of two new large-scale data-bases. ### The data #### 4.1.1 Continuous data We first test the pipeline on a pure bio-acoustics problem. We recorded continuously during approximately one month a group of 25 Guinea baboons (_Papio papio_) from the CNRS primatology center of Rousset-sur-Arc (France). The group lives in semi-liberty in a large rectangular enclosure. Ethical agreement (# 02054.02) was obtained from the CEEA-14 for experimental animal research to conduct audio recordings of the baboons' vocalizations. Two microphones are placed at two corners of the enclosure. In addition to the baboons' vocalizations, the sound environment is composed of climatic events (wind, rain), the presence of other nearby animal species (sheep, birds), and human activities (people around the enclosure, cars on the nearby highway, planes, etc.). One month of recording leads to a tremendous amount of data : without including night recordings when baboons sleep inside a room (from 9 pm to 7 am), there is a total of 460 files representing 443 hours of recording (i.e., 1 595 018.24 seconds). We also test the pipeline on a developmental psycho-acoustics problem. We collected recordings from two human babies at home from birth to their first birthday, at a rate of three days per month. An ethical agreement (# 2019-12-12-005) was obtained from the ethics committee of Aix-Marseille University as well as a declaration of conformity from the CNIL (# 2222631 v 0) for experimental research on humans in order to make audio recordings of human baby vocalizations. The records were done by the parents at different moments of the days and nights. The parents were instructed to start and stop themselves the recordings. Although less noisy than the baboon environment, the recordings are composed of a lot of heterogeneous sources of sounds: TV, radio, domestic works, parents, other children. 
In total, the records represent 174.15 hours (626 940 seconds) for the two children. In both cases, the pipeline is fast and efficient enough to be used on a laptop computer, to detect and classify each relevant sound segment (i.e., a baboon or human baby vocalization). Before being used on this massive, unlabeled continuous recording, the model must first learn to distinguish relevant sounds from noise. A small amount of labeled data is sufficient to learn the parameters of the model. #### 4.1.2 Labeled data Only 72.49 minutes of labeled records for the baboons and 77.03 minutes for the human babies are enough to successfully train the model and discriminate between the signal and the other types of sounds included in the records. The pipeline is the same in each situation, but it is adapted to each data set. The adaptation is done through the learning of the model's back-end (the front-end being transferred). For both problems, the model learns to distinguish the signal from any other sound. It also predicts the class of the signal from the repertoire of each species, the number of possible outcomes depending on the repertoire of each species. For the baboons, we have 1410 labelled records of labeled data, divided into 6 classes (bark, copulation grunt, grunt, scream, wahoo, and yak), which come from a previous study [2]. Table 1a shows the number of records for each class of the baboon repertoire. The data set is initially divided into two sets: a training and a testing set, of 80% and 20 % of the total records, respectively. The training set is divided into a training subset (Table 1b) and a validation subset (Table 1c) representing respectively 75 % and 25 % of the training set. The validation subset is used to calculate the results of the generalization of the model during the hyperparameters tuning. The Bayesian optimization algorithm selects the best model hyperparameters from the accuracy measures on the validation set. Because we do not have large computational resources, we do not use cross-validation, which is not the most appropriate strategy for _deep learning_. This partition of labeled data is done only once, so the same training and validation sets are used throughout the training. Table 1d shows the distribution of the test set. Figure 2 provides a schematic description of the successive partition and transformations done on the labeled data set. For the human babies, we have 13,748 records of labeled data. These records come from Cychosz et al. [61], a database including daylong audio recordings of 49 children (1-36 months) from five different lan \begin{table} \end{table} Table 1: Total and per partition distribution of the baboon labelled data set. guage/cultural backgrounds that were annotated by citizen scientists. The database is freely accessible2. The repertoire is composed of 5 classes (canonical, crying, junk, laughing, non-Canonical). The number of records for human babies is larger than for baboons but they are on average shorter and their duration is more uniform within classes. Consequently, _a contrario_ to the baboons' data set, which is increased by the windowing, babies' vocalizations do not overlap on several windows. Table 2a shows the distribution of the total data set for each class of the repertoire. Likewise, the data set is partitioned between a training subset (table 2b), a validation subset (table 2c) and a testing set (table 2d). 
Footnote 2: [https://osf.io/rz4tx/](https://osf.io/rz4tx/) For both problems, a noise class is constructed using the massive continuous recordings. We manually label 7h and 5h of these recordings (for baboons and human babies respectively) from which we remove vocalizations. These labeled files are distributed according to the same partition as the vocalization files. ### Learning #### 4.2.1 Learning procedure The same steps are followed for both data sets. The records go through a data augmentation (see 3.6. Data Augmentation) and windowing procedure (see 3.7. Resampling). Each frame is considered an independent observation with two labels (signal or noise, class from the repertoire). During training of the model, each sample is drawn from \(\hat{p}_{data}(\mathbf{x},\mathbf{y}|\mathbf{\pi}^{s},\mathbf{\pi}^{v})\). For both data sets, the front-end of the CNN model, which maps the wave file to the latent space, is the same. Back-propagation concerns only the two sides of the model's back-end, i.e., the prediction of the probability of having a signal or not, and the prediction of the probability for each class of the repertoire (the prediction being the class with the maximal probability). Through the bayesian optimization algorithm, a surrogate model of the network is estimated by gaussian processes. It estimates the score of the true model for each combination of hyper-parameters and the uncertainty associated to this prediction. The acquisition function is based on a trade-off between exploration and exploitation to optimally select the next set of points in the hyper-parameter space. Once the hyper-parameters are chosen, the model is trained with these values and evaluated on the validation set. The Figure 2: Partitions and successive transformations of the labeled data. \begin{table} \end{table} Table 2: Total and per partition distribution of the human baby labelled data set surrogate model is updated with the new information and the next set of selected points. The model is evaluated 20 times, meaning that we train the model 20 times for 20 combinations of each set of hyper-parameters. We keep the model with the best results on the validation set. #### 4.2.2 Metrics of the model on training, validation and testing sets For both data sets, the metrics of the model on the training, validation and testing sets are reported in Table 3. The precision, \(\frac{TP}{TP+FP}\), measures the proportion of true vocalizations among the frames predicted as signal. It gives an information about the quantity of non-informative records in the new data set, extracted from the continuous records. The recall, \(\frac{TP}{TP+FN}\), measures the proportion of signal that we are able to detect. The AUC, the area under the ROC curve, gives an estimate of the model's ability to separate signal from noise. It can take values between 0 and 1, 1 being the value for which the model produces the best result. Note that the multi-categorical loss, i.e., the part of the loss computed for the estimation of the vocalization classes, is computed without the noise samples. The results are quite similar for both data sets, suggesting that this pipeline can be used for different types of problems. First, the results suggest that there does not appear to be any over-fitting. The total loss does not increase from one partition to the next. While the loss may increase a little between the training set and the validation set, we can see that it decreases between the training set and the test set for both problems. 
The model seems to have acquired a good generalization capacity. The regularization strategies have avoided over-fitting. We also do not see any discrepancy between the different partitions for binary and multi-categorical losses, another evidence of the absence of over-fitting. Furthermore, the generalization error is always small and equivalent to the training error. This pattern is the same for all metrics and there is no difference between the three partitions. Second, we can distinguish between the two outputs generated by the model. Indeed, it predicts the probability that there is a vocalization in the 1-second frame provided to the network and simultaneously, it indicates to which class in the repertoire the vocalization belongs. This is illustrated in the confusion matrices of Figures 3 and 4 computed on the test set for each output. Most of the total loss is explained \begin{table} \begin{tabular}{l c c c} \hline \hline & \multicolumn{3}{c}{Partition} \\ \cline{2-4} & Train & Validation & Test \\ \hline Loss & 0.13 & 0.13 & 0.12 \\ Binary Loss & 0.09 & 0.09 & 0.04 \\ Multi-categorical & 0.23 & 0.23 & 0.23 \\ Loss & & & \\ Binary Accuracy & 93.56 & 94.50 & 94.58 \\ Multi-categorical & 52.08 & 49.81 & 48.92 \\ Accuracy & & & \\ AUC & 0.93 & 0.94 & 0.94 \\ False Positives & 4451 & 854 & 901 \\ False Negatives & 882 & 312 & 463 \\ True Positives & 12505 & 4000 & 4300 \\ True Negatives & 65012 & 16049 & 19479 \\ Precision & 73.75 & 82.41 & 82.68 \\ Recall & 93.41 & 92.76 & 90.28 \\ \hline \hline \end{tabular} \end{table} Table 3: Metrics on each action. by the multi-category problem, with the loss from the binary problem converging to small values3. For both data sets, the results are better for the binary problem (i.e., the detection of the vocalization) than for the multi-categorical problem (i.e., the classification of the vocalization). While the binary accuracy is above 93% for all partitions on all data sets, the multi-category accuracy does not reach this level. More specifically, 90% of the baboon vocalizations are detected as signal and 96% of the frames without baboon vocalization are correctly classified as noise (Figure 2(a)) indicating that the model clearly distinguishes frames with signal from those without signal. These numbers are even larger for the _human baby_ data for which the binary problem is almost completely solved (Figure 3(a)). However, if the results for the multi-categorical problem are not as good as for the binary problem, the diagonal of the confusion matrix contains a majority of the predicted responses. Footnote 3: Note that the multi-categorical loss is corrected to exclude samples with no signal. The value of the loss with these samples included is the difference between the total loss and the loss for the binary problem. Third, we can notice higher score for the binary problem for the _human baby_ data set on most binary metrics. For the test set, the precision metric indicate that we can expect that more than 82% of the new data set being effectively composed of baboon vocalizations. For human babies, this metric is above 99%. More importantly, since our objective is to minimize the loss of information, the recall metric indicates that we achieve to find 90% of the baboon vocalizations and 99% of the human baby vocalizations. Similarly, the AUC is close to its maximum value (i.e., 1) reaching 0.93 for the baboon data and 0.99 for the human baby data. 
The better results in detection for the _human baby_ data can be explained by the quality of the recordings. Babies are recorded at home, in a much quieter place, unlike the baboons, which live in a noisier environment. Fourth, the multi-categorical results are better for the _baboon_ data. On the test set, 48% of the detected vocalizations are correctly classified for the baboons, while we obtain 40% for human babies. The better classification results obtained for the baboon data can be explained by more pronounced differences between the vocalizations of the baboon repertoire than between those of the human baby repertoire. Furthermore, looking at the confusion matrix of the baboon data (Figure 2(b)), we notice that there is more confusion when the vocalizations are acoustically close. For example, the "copulation grunt" is a special type of "grunt" and the "bark" is very similar to a "wahoo". For the other classes, the proportions of correctly classified vocalizations are higher because the difference between vocalizations is more obvious. To sum up, the pipeline produces a model with good generalization results, especially regarding the detection problem. The quality of the training data and the definition of the repertoire play an important role, and despite the often very noisy data, we obtain satisfactory results indicating that the model is able to adapt to different situations.

Figure 3: Confusion matrices for the _baboon_ data.

### Segmentation and classification

Once the model has been trained on the labeled data, we can use it on the massive continuous data to extract the moments of vocalization and create two new large-scale data sets. We can measure the amount of data extracted and the time to do it.

#### 4.3.1 Duration of prediction

The prediction is done on a laptop. TensorFlow [62] is used throughout the pipeline, both for training and to process the continuous data [43]. The 460 files representing 443 hours of continuous recording of the baboon enclosure during one month have been loaded, segmented, and classified in 9 hours 28 minutes. The 261 files representing 174 hours of continuous recording for the two human babies have been loaded, segmented, and classified in 9 hours 44 minutes. For both data sets, the reported time is the total time for processing the raw data without any filtering. The resampling to 16 kHz is done automatically, as is the windowing. Each frame is fed to the network, which provides the probability that there is a signal as well as the class of this signal. The frame lasts one second and then moves by 200 milliseconds. The new database is composed of all the frames predicted as signal. If there is less than one second between two frames predicted as signal, they are merged into one vocalization.

#### 4.3.2 New large-scale databases

Two new databases are constructed from the continuous recordings processed by the model through our pipeline. The new human baby database represents 35.20 hours of recordings. Table 4 summarizes the distribution for each class. The new baboon database represents 38.75 hours of recordings. Table 5 summarizes the distribution for each class. \begin{table} \begin{tabular}{l l l l l} \hline \hline \multicolumn{5}{c}{Classes} \\ \hline Canonical & Crying & junk & Laughing & Non-canonical \\ \hline 3 & 248 & 11 893 & 966 & 113 627 \\ \hline \hline \end{tabular} \end{table} Table 4: Seconds of vocalization for human babies, for each class, over the year

Figure 4: Confusion matrices for the _human baby_ data.
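For concreteness, the segmentation and classification step just described (Algorithms 1 and 2) can be rendered as a short NumPy routine. This is a sketch: the variable names, the handling of the frame exactly one second away, and the bookkeeping are ours and may differ from the original implementation.

```python
import numpy as np

HOP_S = 0.2            # 1-second window advanced by 200 ms (80 % overlap)
GAP_FRAMES = 5         # 1 second = 5 hops: shorter gaps between positives are merged

def segment(p_signal, y_class, threshold=0.5):
    """Group positive frames into vocalizations and label each by majority vote.

    p_signal : (B,) predicted probability that each frame contains signal
    y_class  : (B,) predicted repertoire class index of each frame
    Returns a list of (start_s, end_s, class) tuples."""
    positive = np.flatnonzero(p_signal > threshold)
    if len(positive) == 0:
        return []
    runs, run = [], [positive[0]]
    for b in positive[1:]:
        if b - run[-1] <= GAP_FRAMES:      # less than one second since the last positive frame
            run.append(b)
        else:
            runs.append(run)
            run = [b]
    runs.append(run)
    out = []
    for run in runs:
        votes = y_class[run]               # vote only on frames actually predicted as signal
        label = int(np.bincount(votes).argmax())
        start, end = run[0] * HOP_S, run[-1] * HOP_S + 1.0
        out.append((start, end, label))
    return out
```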
## 5 Discussion

### Results in relation to the objectives

The goals of the pipeline were to quickly process and classify hundreds of hours of audio, with as few errors as possible, minimizing information loss, through an end-to-end pipeline with no engineering steps, so that it can be reused in different situations. In addition, the pipeline had to adapt to various environmental sound classification problems, with little labeled data for learning. First, the computation time to detect and classify the signal in all continuous recordings is relatively short. The model detects all interesting segments of the signal over more than a month of recordings in less than 10 hours. These results are not obtained with powerful computing machines or GPU clusters. These polluting and expensive tools are not available to everyone. Here the predictions were performed on a personal laptop. Thus, the pipeline can be used in practice by any user. Furthermore, the results on the test set show that the trained CNN model developed an excellent signal detection capability. It is difficult to know the exact level of error that humans make when performing this manual labeling task in continuous recordings, but it is an error-prone task. Using the pipeline, we can accurately estimate the level of detection achieved. Moreover, since the labeling is performed by the same agent (i.e., the CNN model), the errors made are a priori predictable because they should always follow the same pattern. Thus, this automatic procedure generates fewer errors than if the labeling had been done by hand, and these errors are more predictable and identifiable. Second, our goal is to not lose information. The high recall scores illustrate how the CNN model minimizes information loss. Thus, it is reasonable to think that we do not have to go back to the continuous recordings to check that we have not lost too many vocalizations, and that we can focus on the newly constructed databases. Moreover, this result is not achieved at the cost of an inflation of false positives. The new databases include almost all the recorded vocalizations without being invaded by noise. Third, the integrated approach of the pipeline makes it usable by anyone, even by people who are not experts in signal processing. It is usable because it does not require expensive computational resources and because it makes neural network learning accessible through all the successive steps of the pipeline. Fourth, the pipeline is generic enough to be transferable to a variety of problems. The pipeline is generalizable to multiple vocalizations and different species. Fifth, an initial labeled database of about one hour is sufficient to use it. All the techniques used in the pipeline allow us to overcome the limitations of this relatively small labeled database (i.e., data augmentation, windowing, oversampling, transfer learning).

### Importance of the noise class to represent the soundscape

An important step in the pipeline is the construction of the noise class. This step, the only time in the workflow that requires users to label the audio themselves, is important for several reasons. The problem we want to solve is to distinguish the vocalizations of the species we are looking for from all other sounds in its environment. First, since we have sounds from other sources, we can treat this problem as a supervised classification problem.
Therefore, it is easier to pose it, to define the loss function to be optimized, and to use the different steps of the pipeline leading to its good results. Moreover, thanks to its supervised mode, there are metrics to evaluate the accuracy of the results, to have an idea of the errors made and to have information on the uncertainty related to the predictions. In this study, we hypothesized that a neural network was a good modeling choice for our problem, especially because it is scalable. Indeed, the scalability of deep learning is useful for capturing all the information about the soundscape. Pijanowski et al. [44] define the soundscape as "the collection of biological, geophysical and anthropogenic sounds that emanate from a landscape and which vary over space and time \begin{table} \begin{tabular}{l l l l l l} \hline \hline & \multicolumn{4}{c}{Classes} \\ \hline Bark & Copulation & Grunt & Scream & Wahoo & Yak \\ \hline 3 197 & 8 455 & 50 679 & 35 070 & 23 026 & 19 080 \\ \hline \hline \end{tabular} \end{table} Table 5: Seconds of vocalization for baboons, for each class, over the month. reflecting important ecosystem processes and human activities". By processing this amount of data, a CNN model is able to represent the soundscape of the relevant ecosystem. In this way, it is easier to distinguish between different sources. By defining the problem as we have, we learn to represent the soundscape. Once this is done, by processing the recordings from the ecosystem that produced the soundscape, we are able to select the sound events of interest. Pijanowski et al. [44] explain that the tools must be "soundscape-ready". The pipeline is soundscape-ready, thanks to the use of _deep learning_, which can process massive amounts of data. Second, as pointed out by Bergler et al. [34], increasing the noise class and its heterogeneity improves the classification results. This can be explained by the concept of soundscape. The addition of examples allows to integrate the temporal differences of the soundscape, their variations, and thus to better solve the problem. The soundscape is not exactly the same at a time \(t\) of the day and at a time \(t+n\). By adding examples from different moments of the day, the model captures this dynamic that we will encounter when dealing with continuous recordings. If there is variability in the vocalizations produced, given the different classes of repertoire and the different individuals that may produce them, there is also variability in the other sounds that make up the soundscape of the recorded natural environment. There are many sources of sound with much variability in each source. By adding examples of this variability, the model more easily captures the structure of the soundscape and can distinguish vocalizations from anything else. The theoretical distribution can be thought of as the soundscape. The labeled data set we have is sampled from it, but it is only one source among others of the soundscape. The empirical distributions we learn from are biased, there are only vocalizations. By creating the noise class, we de-bias the empirical distribution, which becomes more representative of the theoretical distribution, i.e., the soundscape. Then, the model that learned on \(\hat{p}_{data}\) shows better results. ### Improvement of the classification We note a difference between the outputs of the binary detection problem and the multi-categorical classification problem. 
For both data sets (i.e., baboons and human babies), the results are excellent for detection, that is, for distinguishing the moments when the species produce sounds from the other sounds that may occur in the ecosystem. In contrast, the classification part does not reach this high level of performance. The number of examples per class probably plays a critical role here. Indeed, the initial labeled data set is limited: there are not so many different examples and variations among the training examples. While the labeled data are grouped together for the binary problem (they are all examples of vocalization), they are split for the multi-categorical problem. One possible way to improve the results would be to consider a rule other than the majority vote to determine the class of a segment. Another would be to reintroduce some form of temporal dependency. We discarded it for computational reasons and to facilitate learning. Since the prediction is done per frame, it is probably possible to smooth and improve the classification prediction results. This could be done, for example, by using HMMs. After skimming the amount of data by selecting the relevant moments, it should be less expensive to model the time dependence.

### New insights from the new databases: information and variability gain

Manual labeling of continuous audio recordings is a complex, tedious and error-prone task. The databases we have are the result of a large and time-consuming effort. With the proposed pipeline, we can quickly and cheaply build new massive databases. The time saved allows researchers to focus on more interesting and relevant tasks, instead of tedious and repetitive listening/labeling. It becomes all the more relevant to make continuous recordings, without the presence of a person, as the tools will be able to detect the signal _a posteriori_ in an efficient and rapid way. We can thus avoid an effect linked to the experimenter and perhaps discover new vocalizations, depending on the species. In any case, with this possibility in mind, we now know that it is possible to extract information from autonomous microphones placed in nature. Following this strategy, soundscapes can be recorded for weeks. From these recordings, vocalizations can be extracted in a few hours. The amount of extracted vocalizations is much larger than the initial amount of examples. For instance, in the two situations we presented, we multiplied the time of the vocalizations by 32 and 28, respectively. As a result, the new databases created will be much richer. Because they are larger, there are more occurrences of vocalizations, more variability is expected to be found, and thus repertoire definitions can be refined or even challenged. These new massive databases provide a clear representation of the extent of the species repertoire and the variability of each repertoire class. Having a larger number of examples of each class, a larger base with greater variability, would provide domain experts with much more important and relevant information. Nevertheless, we make the assumption that the learned representation space is general enough to detect species vocalizations against any other, even for vocalization types for which we had no examples. It is possible that the training set we use contains no examples of some of the vocalizations of the species under study. It is possible that these unknown classes, for which the model had no training examples, are missed when predicting on continuous recordings. This may be a blind spot in our model.
The previous point is a possible limitation of the proposed workflow. Nevertheless, it is not the preferred hypothesis. On the contrary, the strategy followed should avoid it. The model has learned to distinguish the different types of vocalizations from all other sources in the soundscape from which the vocalizations originate. We expect a never-seen class of vocalizations to be closer to other classes in the species' repertoire than to other sources of sound. We will gain in finesse of analysis thanks to the new variability obtained in these new large-scale databases. The multiplicity of examples that we will draw should allow us to refine the taxonomy. But above all, this new variability and this massive amount of data also allow us to apply new classification techniques and different approaches (unsupervised, learning representations). These approaches, questioning the learned representations and the choice of categorization, can now be further developed. ## 6 Conclusion The objective of this paper was to propose a strategy for detecting and classifying vocalizations in a natural environment. The workflow had to be general, adaptable to find vocalizations of different species, produced in different ecosystems. It had to be fast and cheap, not requiring prohibitive computational resources or massive labeled data. Furthermore, it had to be user-accessible. To this end, we built a pipeline based on a CNN model. To make this learning possible in a context where labeled data is scarce, the front-end of the model is transferred from another model. Data augmentation and resampling are performed. We create a noise class to learn to discriminate between the vocal productions and other sound sources in the environment where it originates. The whole learning procedure is automated to make this task accessible. The model processes the data in the same way during learning and prediction, taking 1-second windows. Thus, during prediction, we retrieve the moments of the signal as well as the name of the class to which it belongs. We test the pipeline for two different species and ecosystems, baboons and human babies. For each problem, the learning sets are about 70 minutes. Despite this small set of labeled data, through successive pipeline steps, we manage to learn a model with high results: 94.58% and 99.76% accuracy for the detection problem in baboons and babies, respectively; 48.92% and 39.96% accuracy for the classification of the vocalization in each repertoire. We apply the model to continuous recordings in their respective natural environments. The model manages to process 443 and 174 hours of recordings in less than 10 hours each. The objectives are met, within the constraints set. The workflow produced two new data sets of vocal productions. These data can be used to improve the classification results. These new massive and natural databases are an asset to do unsupervised classification and propose finer and more continuous repertoires. By using this pipeline for an increasing number of species and ecosystems, we should achieve a better description of the vocal productions of different species, leading to a better understanding of vocal production and language development. ## Acknowledgement This work was carried out in a collaboration between the CNRS, Aix-Marseille University and Resurgences R&D around the CIFRE PhD no215582 with the support of the ANRT, within the Labex BLRI (ANR-11-LABX-0036) and the Institut Convergence ILCB (ANR-16-CONV-0002). 
It also benefited from support from the French government, managed by the French National Agency for Research (ANR) and the Excellence Initiative of Aix-Marseille University (A*MIDEX). This work was also supported by the CHUNKED ANR project (ANR-17-CE28-0013-02). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. We are grateful to Rosic Ferry-Huiban and Myriam Sabatier for their help in labeling the baboon database. For the purpose of Open Access, a CC-BY4 public copyright licence has been applied by the authors to the present document and will be applied to all subsequent versions up to the Author Accepted Manuscript arising from this submission. Footnote 4: [https://creativecommons.org/licenses/by/4.0/](https://creativecommons.org/licenses/by/4.0/)
2303.10015
Where and What do Software Architects blog? An Exploratory Study on Architectural Knowledge in Blogs, and their Relevance to Design Steps
Software engineers share their architectural knowledge (AK) in different places on the Web. Recent studies show that architectural blogs contain the most relevant AK, which can help software engineers to make design steps. Nevertheless, we know little about blogs, and specifically architectural blogs, where software engineers share their AK. In this paper, we conduct an exploratory study on architectural blogs to explore their types, topics, and their AK. Moreover, we determine the relevance of architectural blogs to make design steps. Our results support researchers and practitioners to find and re-use AK from blogs.
Mohamed Soliman, Kirsten Gericke, Paris Avgeriou
2023-03-17T14:44:13Z
http://arxiv.org/abs/2303.10015v1
# Where and What do Software Architects blog? ###### Abstract Software engineers share their architectural knowledge (AK) in different places on the Web. Recent studies show that architectural blogs contain the most relevant AK, which can help software engineers to make design steps. Nevertheless, we know little about blogs, and specifically architectural blogs, where software engineers share their AK. In this paper, we conduct an exploratory study on architectural blogs to explore their types, topics, and their AK. Moreover, we determine the relevance of architectural blogs to make design steps. Our results support researchers and practitioners to find and re-use AK from blogs. Architecture knowledge, Architecture design decisions, blog articles ## I Introduction Software engineers need _architectural knowledge_ (AK) [1] to make reasonable _architectural design decisions_[2]. For example, when software engineers work on component design (i.e. the structure and behavior of the system components), they often rely on their AK from previous projects, or from discussions with experienced architects. To understand the nature of this AK, researchers have explored various _AK concepts_[3], such as architectural solutions (e.g. patterns [4] and technologies [5]), as well as constraints, benefits and drawbacks of solutions [6]. Software engineers make design decisions in consecutive steps, and for each step, they need different AK concepts [7]. To exemplify this, let us consider an example of such a stepwise process, the Attribute Driven Design (ADD) [8], the most well-established architecture design process (see Section II). In the _identify design concept_ step of ADD, AK concepts such as architectural solutions and quality attributes (e.g. performance and security) are considered. In the _select design concept_ step, AK concepts such as benefits and drawbacks between alternative solutions are considered. In the _instantiate architecture elements_ step, AK concepts such as components and connectors are considered. While AK is critical for those design steps, it is rather challenging to find AK. For instance, software engineers struggle to find design decisions from architectural documents, because design decisions are not systematically documented and formalized [9]. To address this problem, researchers have recently explored AK in software repositories [10] such as issue tracking systems [11], and mailing lists [12], as well as in certain web resources such as Stack Overflow [13] and technology documentations [14]. These contributions provide concrete datasets, AK ontologies [15], and search approaches [7] that can help researchers and practitioners to find AK. Moreover, there is evidence that software engineers share their AK in technical articles within blogs (i.e. _architectural blogs_) [3]. Such blogs showed to be the most relevant source on the Web to perform architectural tasks, and the richest source of AK concepts, compared to forums (e.g. Stack Overflow), source code repositories and technology documentations [3]. Despite the value of architectural blogs in finding AK, software engineering researchers rarely explored blogs (e.g. [16]), and did not specifically explore architectural blogs and their contained AK. Moreover, architectural blogs are quite unstructured and disorganized [17]: they are scattered on the Web among thousands of websites and their _type_ varies from personal blogs to blogs hosted by software companies and communities. 
This makes it hard to understand which blogs one should look for AK in. Furthermore, architectural blogs discuss multiple _topics_ (e.g. comparing solutions), each involving different AK concepts. However, blog articles are not classified (e.g. using tags) [16], which makes it challenging to search for and subsequently reuse their contained AK concepts. Therefore, in this paper, we aim to _explore the types and topics of architectural blogs, as retrieved by Web search engines, and their relevance for software engineers during certain design steps_. This allows us to provide an overview of architectural blogs, which can be used by researchers and practitioners to find and re-use AK from architectural blogs. To this end, we empirically analyzed the dataset of architectural blogs from Soliman et al. [3], and applied grounded theory to explore the types and topics of architectural blogs. Moreover, we applied the most popular topic modeling algorithm, namely Latent Dirichlet Allocation (LDA) [18] to explore architectural topics based on their contained AK concepts. Using two research methods (i.e. grounded theory and LDA) to identify topics of architectural blogs supports exploring topics from different perspectives, as well as data triangulation. Finally, we applied statistical analysis to identify significant co-occurrences between types and topics of architectural blogs, and the relevance of topics for certain ADD steps. In summary, we achieved the following contributions: * The _types of blogs_ that contain AK as retrieved by the most popular Web search engine Google. * The _topics of blogs_, their contained AK concepts, and their significant co-occurrences with blog types. * The _relevance_ of architectural blog articles from certain topics in following ADD steps. * A _dataset_ of 718 classified architectural blog articles, based on their types and topics. * A semi-automated _approach to identify architectural topics_ using LDA and AK ontology. Section II provides a background on ADD. Section III explains our study design, Sections IV, V, and VI present our results. Sections VII, VIII, IX discuss our results, threats to validity, and related work. Finally, Section X concludes the paper. ## II Background on the Attribute Driven Design In this paper, we evaluate the relevance of architectural blogs to follow the ADD steps [8] (see Section VI). We have selected the ADD process, as it is the most well-established architecture design process, it provides concrete steps, and it was previously applied by Soliman et al. [3]. In this section, we provide a brief summary of three out of the seven ADD steps, namely those that involve making design decisions, and thus require AK: _Identify design concepts_: A list of alternative architectural solutions is identified. For example, a software engineer might identify a list of technology solutions (eg. Spark, Storm, Flink, etc.) when designing a stream processing engine. _Select design concepts_: One design concept from the list of alternative solutions is selected. Each solution is compared and evaluated against requirements, constraints, quality attributes (e.g. security, performance, etc.), and considering the benefits and drawbacks of solutions. For example, a software engineer might require high performance of stream processing, she would therefore rather select Spark or Flink over Storm. _Instantiate architecture elements_: Elements of the selected architectural solution are modified to achieve system requirements. 
For example, a software engineer might configure a reference architecture to meet a quality attribute such as add a custom layer, or specify relationships and responsibilities of elements of an architectural pattern [8]. ## III Study design ### _Research questions_ To achieve our aim (see Section I), we ask the following research questions (RQs): _(RQ1) What types of architectural blogs do Web search engines retrieve?_ Software engineers write architectural articles in blogs (e.g. personal blogs) with different characteristics. For instance, some blogs are hosted by software companies, and focus on specific solutions (e.g. Big Data technologies), while other blogs are hosted by software development communities and discuss multiple topics on different solutions. We ask this RQ to determine types of blogs where AK is shared, and their characteristics as retrieved by Web search engines (e.g. Google). By answering this RQ, researchers can develop approaches that classify and index architectural blogs to find and re-use AK. Moreover, practitioners can make informed decisions on the types of blogs to share and search for AK. _(RQ2) What kind of topics are discussed in the different types of architectural blogs?_ Blog articles discuss different topics (e.g. comparing solutions). Each topic involves different AK concepts. Thus, we ask this RQ to determine topics discussed in architectural blogs, their involved AK concepts, and the types of blogs (e.g. personal blog) that discuss each topic. These topics can guide practitioners to search for and re-use certain AK concepts. They can also help researchers to develop approaches that automatically classify architectural blogs based on their topics. _(RQ3) How relevant are the topics of architectural blogs as retrieved by Web search engines to support practitioners follow the ADD steps?_ Some topics of architectural blogs could be more useful for practitioners to conduct certain design steps (such as the ADD steps in Section II). However, Web search engines (e.g. Google) retrieve blog articles based on keyword-searches, and thus might not retrieve relevant results for each design step [3]. Therefore, we ask this RQ to determine topics of architectural blogs that Web search engines do retrieve, as well as which topics are relevant for conducting the ADD steps, according to practitioners. By answering this RQ, practitioners could target their search towards more relevant architectural blogs to achieve their design steps effectively. Moreover, researchers can adapt their AK finding approaches by promoting architectural blogs with the highest relevance. Figure 1 shows an overview on the research process to answer the RQs. ### _Dataset of architectural blog articles_ We re-used the dataset of architectural blog articles from the study of Soliman et al. [3]. Following, we provide a Fig. 1: High level view on the research process to answer RQs brief description of that study, while further details are in [3]. 53 software engineers used Google searches to perform six architectural tasks (see Table I) that correspond to the three ADD steps (see Section II). A complete description of each task is available online [19]. To perform the six tasks, software engineers evaluated each web-page according to its relevance to an architectural task in a five-level Likert scale: \(\bullet\)_Very High Relevance (5)_: fulfills more than one requirement. \(\bullet\)_High Relevance (4)_: fulfills one requirement of the task. 
\(\bullet\)_Medium Relevance (3)_: provides information to the task. \(\bullet\)_Low Relevance (2)_: is remotely relevant to the task. \(\bullet\)_No Relevance (1)_: has no relevance information. The 53 software engineers in Soliman et al. [3] evaluated 2623 unique web-pages according to their relevance. Subsequently, 945 web-pages were classified as blogs and tutorials. These 945 web-pages provide the base dataset of this study. We first verified whether these 945 web-pages are indeed architectural blogs, and excluded the following web-pages: \(\bullet\)Web-pages with _no relevance (level 1 above)_ to an architectural task as specified by practitioners. This is to ensure that all blog articles in the dataset are actually architectural. \(\bullet\)Blog articles in languages other than English. \(\bullet\)Web-pages that cannot be accessed anymore. \(\bullet\)Web-pages that include tutorials rather than blogs. \(\bullet\)Duplicated blog articles with different URLs. As a result of this filtering step, a total of 718 unique architectural blog articles were used to answer the RQs. ### _Apply grounded theory to identify types and topics of architectural blogs_ To identify types and topics of architectural blogs, we manually analyzed all 718 of the architectural blog articles by following steps and best practices from grounded theory [20, 21]. In detail, we performed the following three steps: #### Iii-C1 Open coding To identify initial types and topics of blogs, the first and the second authors randomly selected a significant sample of 300 architectural blog articles from the 718 articles, and independently inspected each blog article as well as the web-sites, in which the blog articles are hosted. Both researchers inspected the 300 architectural blog articles through 3 iterations. Within each iteration, both researchers independently assigned a type and a topic for each blog article, by focusing on the content of the blog articles and avoiding being biased from literature; subsequently they _wrote memos_ to explain why a type or a topic of blog has been determined for a certain article. For example, if all articles in a web-site are written by the same software engineer, then it is a personal blog, and if an article provides benefits and drawbacks of alternative solutions, then it is about comparing solutions. We share the memos for all blog articles online [19]. After each iteration, both researchers discussed the assigned types and topics of architectural blog articles, and performed _constant comparisons_ to determine differences between the different types and topics of blogs. For instance, we determined the following attributes to differentiate the types of blogs: \(\bullet\)_Number of authors who write blog articles in a web-site_. For example, personal blogs tend to have a single author. \(\bullet\)_Host of the web-site_. For example, a blog article could be hosted by technology vendors or IT service companies. \(\bullet\)_Relationship between authors of articles and the hosting web-site_. For example, authors of articles could be independent or employees of the company that hosts a web-site. For the topics, we determined the following attributes: \(\bullet\)_The purpose of an article_. Some articles discuss steps to design a system, while others provide steps to implement. \(\bullet\)_The shared AK concepts in an article_. Some articles provide benefits and drawbacks of solutions, while other articles provide a list of alternative solutions to solve a problem. 
\(\bullet\)_The number of discussed architectural solutions in an article_. Some articles discuss and compare multiple solutions, while other articles elaborate and evaluate a single solution. As a result of this step, we defined initial types and topics of architectural blogs. During the third iteration, neither of the two researchers could determine new types or topics; consequently, this indicated _theoretical saturation_. #### Iii-C2 Axial coding Based on the initial types and topics from the open coding step, the first two authors of this paper classified the rest of the 718 architectural blog articles: the first author identified the topics for the rest of the 718 articles, while the second author identified the types for the rest of the 718 articles. For some uncertain cases of articles, both researchers discussed the article to agree upon a single type or topic. During discussions, both researchers noticed that some types of blogs caused disagreements between them. Thus, we merged these types of blogs to ensure good agreement between researchers. For example, some community blogs have sub-types (e.g. general versus technology-specific) which could be confusing to differentiate; therefore, we merged community blogs in a single type. By the end of this step, we reached the final set of _types_ and _qualitative topics_ of architectural blogs (see Sections IV and V-A). #### Iii-C3 Measure agreement To ensure the reliability of our qualitative analysis, we conducted a final agreement test: we randomly selected per author, a significant sample of 300 blog articles, which were not previously classified by that author. The first author independently classified the test sample regarding blog types, while the second author classified the test sample regarding topics. Based on this test sample, we calculated the kappa coefficient among the two authors, which are 0.81 regarding its type and 0.71 regarding the topics. These kappa values indicate good agreement beyond chance. \begin{table} \begin{tabular}{p{42.7pt} p{341.4pt}} \hline \hline **ADD step** & \multicolumn{2}{c}{**Task description**} \\ \hline Identify design concepts & \(\bullet\) For a realtime dashboard, identify middleware technologies which scale to \(>\)100k users \\ • Identify high performance Java JSON parsers for mobile app communication. & \\ \hline Select design concepts & \(\bullet\) Compare interoperability and latency of RabbitMQ, Kafka, and ActiveMQ. • Compare data collectors, message brokers, and ETL engines for big data systems. & \\ \hline Instantiate architecture elements & \(\bullet\) Search for technology features and component design to select deployment topology and routing. & \\ \hline \multirow{3}{*}{elements} & \(\bullet\) Search for best practices regarding service decomposition to achieve high cohesion and low coupling. & \\ \cline{1-1} & & \\ \hline \hline \end{tabular} \end{table} TABLE I: Architectual tasks ### _Apply LDA to identify topics of architectural blogs_ In the previous section, we applied grounded theory to identify qualitative topics of architectural blogs. In this Section, we applied LDA to identify topics (i.e. _LDA topics_) in architectural blogs. Using LDA, we consider frequencies of terms and AK concepts within an article; these provide a different perspective on topics than qualitative analysis. We also note that we determined co-occurrences between the qualitative topics and LDA topics in Section III-E. 
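As a side note, the kappa computation described in Section III-C3 can be illustrated with a minimal sketch; the rater labels below are hypothetical placeholders for the per-article type assignments, and scikit-learn is assumed to be available.

```python
# Sketch: inter-rater agreement on blog-type labels (hypothetical data, not the study's labels).
from sklearn.metrics import cohen_kappa_score

# Hypothetical labels assigned independently by two researchers to the same test sample.
rater_1 = ["community", "vendor", "personal", "community", "it_service"]
rater_2 = ["community", "vendor", "personal", "vendor", "it_service"]

kappa = cohen_kappa_score(rater_1, rater_2)
print(f"Cohen's kappa: {kappa:.2f}")  # values around 0.7-0.8 indicate good agreement beyond chance
```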
For the LDA analysis, we followed steps and best practices for applying LDA in software engineering [18]. In summary, we applied 5 steps. We performed the first three steps iteratively; subsequently, we performed steps 4 and 5. These five steps are elaborated in the following. #### Iii-D1 Pre-process blogs Blog articles contain textual contents, which can mislead the LDA algorithm, resulting in identifying irrelevant and inconsistent topics. Thus, this step aims to remove and process textual contents in blog articles to guide LDA towards architecturally-relevant topics. We removed the following terms from blog web-pages: \(\bullet\) HTML tags using the Beautiful Soup library1, stop words using the NLTK library2, and special characters (e.g. #,-). \(\bullet\) Too frequent words that appear in more than 95% of the blogs, and less frequent words that appear in less than 5% of the blogs, because these terms bias LDA to form separate topics. Other researchers have also followed this step [18]. Footnote 1: [https://www.crummy.com/software/BeautifulSoup](https://www.crummy.com/software/BeautifulSoup) Footnote 2: [https://www.nltk.org](https://www.nltk.org) \(\bullet\) Generic keywords that do not refer to an AK concept but could mislead the LDA algorithm in creating separate topics for them, such as "microsoft", "medium" and "software". We expanded the list of generic keywords iteratively (see Step 3). \(\bullet\) Common terms related to source code implementation such as "method", "procedure", and "array". The list of source code terms has been expanded iteratively (see Step 3). We removed source code terms, because we found that they bias LDA to create dedicated topics for blogs that contain source code elements; such topics are irrelevant to the actually discussed architectural topic. After filtering out non-useful terms, we lematized terms to a single form. For example, "deciding" and "decided" are reduced to a single form "decide". We used NLTK to lematize English terms. Moreover, we extended the NLTK dictionary to lematize architectural terms, which cannot be lematized by the default NLTK wordnet dictionary (e.g. "scalability" and "scalable" are reduced to a single form "scale"). Subsequently, we found out that LDA was biased to identify topics based on their domain and technologies (e.g., Big Data and Middleware topics). Thus, to identify architecturally-relevant topics, independent of their domain, we replaced certain terms with tags that corresponds to their AK concepts. We considered specifically AK concepts from the ontology of AK in Stack Overflow [15]. We have decided on this ontology due to the possible similarity of AK between Stack Overflow and blogs. Table II presents the list of selected AK concepts from the AK ontology [15], and examples of terms associated with each AK concept. In addition to replacing terms with AK concepts, we associated the frequency of unique terms in a blog article with their respective tag of AK concept. For example, if a blog post discusses three technologies: Kafka, RabbitMQ and ActiveMQ, these will be replaced with \(<\)Technology_Solution_1\(>\), \(<\)Technology_Solution_2\(>\), and \(<\)Technology_Solution_3\(>\). In this way, LDA can differentiate between blog articles that contain different terms from specific AK concepts. 
For example, blog articles which compare different technology solutions would have \(<\)Technology_Solution\(>\) tags with higher frequency, while blog articles that discuss different components will have the \(<\)Component\(>\) tag with higher frequency. We note that we extended the list of terms associated with each AK concept iteratively (see Step 3). We provide online [19] the lists of terms that correspond to each AK concept, as well as scripts to replicate this pre-processing step. \begin{table} \begin{tabular}{l l} \hline **AK concept** & **Examples of terms** \\ \hline \(<\)Technology\(>\) & ActiveMQ, SQS, JSON, Kafka, Maven, Flume \\ \hline \(<\)Pattern\(>\) & layer, publish subscribe, Microservice, ESB, SOA \\ \hline \(<\)Quality\(\_\)attribute\(>\) & performance, accuracy, usability, security, throughput \\ \hline \(<\)Requirement\(>\) & business, dashboard, chart, customer, market, financial \\ \hline \(<\)Component\(>\) & application, server, client, back end, front end \\ \hline \(<\)Connector\(>\) & retrieve, connect, consume, call, save, route, depend \\ \hline \(<\)Connector\(\_\)data\(>\) & socket, payload, message, dump, token, call, update \\ \hline \end{tabular} \end{table} TABLE II: Examples of terms for each AK concept #### Iii-D2 Apply LDA After pre-processing blog articles, we experimented with different numbers of topics (\(k\)), between 5 and 20, and applied the most common LDA parameters (\(\alpha=50/k\) and \(\beta=0.01\)) [18]. We note that we further tuned the LDA parameters in Step 4. After executing LDA on our dataset, each blog article is assigned to a single topic (the one with the highest coherence of terms and AK concepts). However, LDA does not describe or define each topic. Therefore, to understand each topic, we identified the most frequently occurring terms and AK concepts in each topic. We used this list of terms and AK concepts in Step 3 to enrich the pre-processing step. Moreover, we used this list of terms and AK concepts in Step 5 to define the _LDA topics_ of architectural blogs. #### Iii-D3 Update pre-processing In this step, we checked the list of terms from the LDA topics (see Step 2), and determined general terms and source code terms that could be removed. Moreover, we determined new architectural terms that refer to certain AK concepts, which could be replaced with their respective AK concepts. Accordingly, we updated the pre-process blogs step with these new terms to refine the LDA topics, and repeated the first three steps (i.e. pre-process blogs, apply LDA, and update pre-processing) to ensure that we identified all frequent terms from blog articles. We stopped repeating the three steps once no new terms appeared in the list of the most frequently occurring terms of each topic which needed to be either removed or replaced with an AK concept. #### Iii-D4 Tune LDA parameters After establishing the pre-processing of blogs, we experimented with different numbers of topics up to 20 topics, and calculated the coherence. We noticed that we got higher coherence for numbers of topics between three and six, while the coherence decreased for more than six topics. Therefore, we decided to explore six topics of architectural blogs. Moreover, we experimented with values of \(\alpha\) from \(50/k\) to \(1/k\), and calculated the density of topics among architectural blogs. Our experiments showed that the standard parameter of \(\alpha=50/k\) provides a balanced density of topics among architectural blog articles. 
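To summarize Steps 1–4, the following is a minimal, illustrative sketch of how the AK tag replacement and the LDA run could be wired together using gensim; the term lists, documents, topic number, and coherence measure are toy assumptions and do not reproduce the actual scripts, which are available online [19].

```python
# Sketch of the pre-processing + LDA steps (illustrative only; real term lists and scripts are in [19]).
from gensim.corpora import Dictionary
from gensim.models import LdaModel
from gensim.models.coherencemodel import CoherenceModel

# Hypothetical AK ontology: maps lemmatized terms to an AK concept (cf. Table II).
AK_TERMS = {"kafka": "Technology", "rabbitmq": "Technology", "activemq": "Technology",
            "layer": "Pattern", "microservice": "Pattern",
            "performance": "Quality_attribute", "scalability": "Quality_attribute"}

def tag_ak_concepts(tokens):
    """Replace each distinct AK term with a numbered concept tag (Step 1),
    e.g. kafka -> <Technology_1>, rabbitmq -> <Technology_2>."""
    seen = {}  # (concept, term) -> numbered tag
    tagged = []
    for tok in tokens:
        concept = AK_TERMS.get(tok)
        if concept is None:
            tagged.append(tok)
            continue
        key = (concept, tok)
        if key not in seen:
            n = sum(1 for c, _ in seen if c == concept) + 1
            seen[key] = f"<{concept}_{n}>"
        tagged.append(seen[key])
    return tagged

# Hypothetical, already cleaned and lemmatized blog articles as token lists.
docs = [tag_ak_concepts(d) for d in [
    ["kafka", "rabbitmq", "compare", "performance", "broker"],
    ["microservice", "layer", "design", "scalability", "component"],
    ["kafka", "deploy", "example", "configure", "cluster"],
]]

dictionary = Dictionary(docs)          # in practice, also filter very rare/frequent terms here
corpus = [dictionary.doc2bow(d) for d in docs]

k = 3  # toy value; the study settles on six topics after coherence tuning (Step 4)
lda = LdaModel(corpus=corpus, id2word=dictionary, num_topics=k,
               alpha=[50.0 / k] * k, eta=0.01, random_state=0)  # common LDA parameters [18]
coherence = CoherenceModel(model=lda, corpus=corpus, dictionary=dictionary,
                           coherence="u_mass").get_coherence()  # coherence measure is an assumption
print(lda.print_topics(num_words=4))
print("coherence:", coherence)
```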
#### Iii-D5 Define LDA topics LDA represents topics as a set of occurring terms and AK concepts. Using these terms and AK concepts, we provided a definition of each LDA topic (see Section V-B). ### _Apply statistical analysis to answer RQ2 and RQ3_ As part of answering RQ2, we determine significant co-occurrences between qualitative topics, LDA topics and types of architectural blogs using the \(\tilde{\chi}^{2}\) significant test [22]. For each co-occurrence between a type or a topic, we calculated a 2x2 contingency table. For example between one type of blogs (e.g. personal blogs), and one topic of blogs (e.g. compare solutions), we calculate: number of co-occurring articles between the specified type and topic, number of articles with the specified type but with other topics, number of articles with the specified topic but with other types, number of articles with different types and topics. Among this 2x2 contingency table, we calculate the \(\tilde{\chi}^{2}\) value, and determine significant co-occurrences with \(\tilde{\chi}^{2}>10\) at p-value \(<\)0.05. We present significant co-occurrences in Section V-C and Figure 2. To answer RQ3, we used the relevance indicated by the 53 software engineers in Soliman et al. [3] to determine relevant articles for each ADD step. Specifically, we used descriptive statistics to calculate the number of relevant blog articles and their relevance for each topic when following the ADD steps. Moreover, we conducted a Kruskal-Wallis H test [23] to determine significant relevance between the different topics. We decided on the Kruskal-Wallis H test over the Anova test [24], because it can test abnormally distributed data as in our dataset. Further details about the significance test are shared online [19]. We present the results in Section VI. ## IV **RQ1**: Types of Architectural Blogs We analyzed the architectural blogs in our dataset using grounded theory (see Section III-C), and identified the following types of architectural blogs (in parentheses their respective percentages in the dataset), as retrieved by Google: \(\bullet\)**Community blogs (43%)**: hosted by open publishing platforms (e.g. \(dzone.com\)), and authored by many different software engineers, who volunteer to share their experience in blog articles. The majority of community blogs host a wide range of architectural topics such as those in \(dzone.com\) and \(medium.com\). But some community blogs focus on specific topics such as data integration as in \(dataintegrationinfo.com\). \(\bullet\)**Technology vendor blogs (25%)**: hosted by specific technology vendors, such as commercial companies (e.g. SAP) or open source foundations (e.g. Apache). Blog articles focus on topics related to the products or technologies produced by the hosting technology vendor such as \(blogs.sap.com\) and \(blog.rabbitq.com\). The authors of articles are experts and developers of these technologies. \(\bullet\)**Personal blogs (15%)**: hosted and authored by individual software experts, and reflecting their personal experience such as articles in \(marinfowler.com\) and \(alexanderdevelopment.net\). \(\bullet\)**IT service blogs (11%)**: hosted by IT service companies that provide services such as software development or consulting (e.g. \(openlogic.com/blog\)). Articles are posted by employees of these companies. \(\bullet\)**Magazines and newspapers blogs (3%)**: hosted by specialized magazines or newspapers (e.g. _opensourceforu.com_), where specific authors, possibly hired, write articles. 
\(\bullet\)**Educational blogs (3%)**: hosted by training organizations or universities such as _edureka.co/blog_. Articles are posted by students or educators. **RQ1 key takeaways**: \(\bullet\) Software engineers share their AK in different types of blogs with different hosting and authorship policies. \(\bullet\) Google search results contain architectural articles mostly from community blogs, followed by technology vendor blogs, and then personal blogs. ## V **RQ2**: Topics of architectural blogs We analyzed architectural blog articles using grounded theory (see Section III-C) and LDA (see Section III-D). Each approach produced separate but related topics. Subsequently, we determined significant co-occurrences between the topics from the two approaches, as well as the types of blogs (see Section III-E). Figure 2 shows an overview on the qualitative and LDA topics, and their significant co-occurrences with each other and with blog types according to the \(\tilde{\chi}^{2}\) significance test. In sub-sections V-A and V-B, we respectively explain the qualitative and LDA topics, and their contained AK concepts. Moreover, we support our explanations with examples of articles (see Section "References of blog articles"), referenced as [B\(\#\)] (e.g. [B1]). Finally, we discuss the co-occurrences between the topics and types of blogs in sub-section V-C. ### _Qualitative topics of architectural blogs_ We explain the qualitative topics (in parentheses their respective percentages in the dataset), as retrieved by Google (see Section III-B), as well as their contained _AK concepts_ (underlined in the text). \(\bullet\)**Elaborate and evaluate a solution (28%)**: Articles in this topic elaborate and evaluate a single architectural solution. such as a technology and its features [B1], or a pattern and its elements [B2], or an architecture and its components [B3], or a design principle and its application [B4]. For example, in [B1], the author first elaborates the main features of Amazon SQS technology such as "_SQS is a distributed, queue-based, messaging service...It supports building (loosely coupled) integrations between two applications_", then the author discussed abilities of Amazon SQS to achieve _quality attributes_ such as performance and availability. Another example is [B2], where the author elaborates the Microservices architecture, and discusses its benefits and drawbacks. \(\bullet\)**List of related solutions (20%)**: Articles in this topic provide a list of related architectural solutions such as lists of technologies [3], patterns [1], tactics [2], design principles [1], architectural components [1], and best practices [1]. Solutions could be either alternative solutions to solve a design issue, or solutions that complement each other to design an architecture of a system. Each solution could be associated with a description of its characteristics. In these articles, the solutions are not compared to each other. For example, [1] provides a numbered list of 10 Microservices Best Practices, giving a short description of each item. \(\bullet\)**Compare solutions (20%)**: Articles in this topic compare two or more _architectural solutions_ such as technologies [1] or patterns [2], list their benefits and drawbacks, and explain in which cases each could be applied. For example, [2] defines two solutions: Synchronous and Asynchronous request handling, states factors to decide on them (i.e. _requirements_ and _constraints_), and states their benefits and drawbacks. 
Another example is [1], which compares several JSON libraries (i.e. _architectural solutions_) based on their performance (i.e. _quality attribute_). \(\bullet\)**How to design (18%)**: Articles in this topic discuss different steps or issues to design a system within a certain domain (e.g. Web applications [1]), or to design a system based on specific _architectural solutions_ such as technologies [1] or patterns [2]. The explanation of steps or issues are commonly supported with _use-cases_, and alternatives of architectural solutions. For example, [1] presents a _"Seven Step API Design Methodology"_, within the domain of the Web, using the HTTP protocol. Another example is [1], which explains how to design a routing topology using RabbitMQ, and discusses design issues, use cases and evaluations against quality attributes such as performance and scalability. \(\bullet\)**How to implement (14%)**: Articles in this topic provide guidelines to implement a conceptual _architectural solution_ (e.g. component design [1] or pattern [1]) through certain technologies, or provide guidelines to integrate different technologies together [1]. For example, [1] explains how to implement a RESTful web service, specifically in Java using JSON-B and JSON-P. Another example is [1], which explains how to integrate Apache Camel with the CData JDBC Driver to copy Dynamics CRM data to a JSON file on disk. ### _LDA topics of architectural blogs_ We define LDA topics based on the most commonly occurring terms and AK concepts in each topic, as identified by the LDA algorithm. Table III provides top terms, AK concepts and their frequencies (columns freq.) among articles for each LDA topic. Also, number of unique terms that belong to each AK concept per blog article (column terms/article). \(\bullet\)**Design using patterns and components (24%)**: Articles in this topic contain terms that refer to designing an architecture, as well as names of architectural patterns, components and quality attributes. For example, [1] discusses different patterns and configurations (e.g. number of layers) to design web applications. Another example is [1], which discusses over 12 design patterns to design microservice systems. \(\bullet\)**Achieve quality attributes using component design (22%)**: Articles in this topic contain terms that refer to quality attributes, as well as terms that refer to components, connectors and patterns. For example [1] explains how to achieve performance requirements through different component design using Apache Kafka. Another example is [1], which compares RabbitMQ and Apache Kafka regarding their abilities to realize a component design that achieves certain quality attributes (e.g. scalability and performance). \(\bullet\)**Implement technologies and component design (20%)**: Articles in this topic contain names of technologies, and terms that refer to implementation details (e.g. "example"), as well as terms that refer to components and connectors. For example, [1] explains implementation details for Microservice architecture using Spring technologies. Another example is [1], which explains how to implement event-driven integration between components using Apache Camel and Kubernetes. \(\bullet\)**Analyze decision factors (16%)**: Articles in this topic contain terms that refer to factors that influence design decisions. These involve business requirements, constraints (e.g. "time"), and benefits of solutions (e.g. using terms "free" and "easy"). 
For example, [1] discusses loading data to the cloud. Several decision factors are discussed, such as productivity, infrastructure complexity and performance. Another example is [1], which discusses the integration between two technologies, SAP and Power Apps, and explains the business benefits from such an integration in terms of real-time visibility and efficiency. \(\bullet\)**Compare and evaluate technologies (13%)**: Articles in this topic contain terms that refer to comparing technologies, such as "vs", "feature", and names of different technologies, as well as terms that refer to quality attributes and requirements. For example, in [1], the two technologies ActiveMQ and RabbitMQ are compared regarding their features and quality attributes, such as scalability and interoperability. \(\bullet\)**Design using multiple technologies (5%)**: Articles in this topic contain terms that refer to designing an architecture, as well as terms that refer to requirements and technologies. For example, [1] explains the architecture of web applications using Microsoft technologies. Fig. 2: Significant co-occurrences between LDA topics and qualitative topics of architectural blogs, and between topics and types of blogs. The line thickness corresponds to the significance of co-occurrence as measured by the \(\tilde{\chi}^{2}\) significance test. ### _Co-occurrences of types and topics of architectural blogs_ Figure 2 shows significant co-occurrences between the LDA topics (see Section V-B), the qualitative topics (see Section V-A), and the types of blogs (see Section IV). From the co-occurrences, we can observe the following: \(\bullet\)_Some LDA topics logically co-occur with their corresponding qualitative topics_. For instance, the LDA topic _Implement technologies and component design_ significantly co-occurs with the _How to implement_ qualitative topic. Both topics focus on the implementation of an architecture. Similarly, the LDA topic _Compare and evaluate technologies_ significantly co-occurs with the _Compare solutions_ qualitative topic. Both topics focus on comparing solutions. This confirms that our proposed approach, which uses LDA and the AK ontology (see Section III-D), succeeded in identifying some topics that correspond to their qualitative counterparts. \(\bullet\)_Some LDA and qualitative topics complement each other_. On the one hand, LDA topics focus on certain types of architectural solutions (e.g. either technologies or patterns). On the other hand, qualitative topics focus on the purpose of an architectural blog. For example, _Design using patterns and components_ significantly co-occurs with _Compare solutions_. This indicates that in these articles software engineers discuss the comparison of patterns and components to design the architecture of a system. Another example is _Implement technology and architecture_, which significantly co-occurs with _Elaborate and evaluate solution_. This indicates that software engineers provide implementation details (e.g. source code) to elaborate a solution, and to evaluate a solution (e.g. provide source code to benchmark a technology). \(\bullet\)_Most blog types significantly co-occur with a single topic_. With the exception of IT service blogs, four blog types significantly co-occur with one topic, and one blog type (_Personal blogs_) co-occurs with two topics. For instance, articles in _Community blogs_ significantly co-occur with the _Design using patterns and components_ LDA topic. 
While articles in _Personal blogs_ significantly co-occur with the _Implement technology and architecture topic_ and _How to implement_ topics. This finding can help software engineers to find AK. For example, to find AK about patterns and components, software engineers should better search in community blogs, while to find AK related to the implementation of an architecture, software engineers should better search in personal blogs. \(\bullet\)_Few LDA and qualitative topics do not significantly co-occur with other topics_. The LDA topics _Achieve quality attributes using component design_ and _Design using multiple technologies_, as well as the qualitative topic _How to design_ do not significantly co-occur with other topics. Thus, articles in these topics either rarely co-occur with each other or equally co-occur with several other topics, but not one to a significant degree. For example, the _How to design_ qualitative topic involves long articles that discuss patterns, components, technologies, and decisions factors. These articles equally co-occur with the LDA topics, because they do not focus on a specific architectural solution. **RQ2 key takeaways**: \(\bullet\)_Listing, elaborating, evaluating, and comparing architectural solutions_ present the majority of AK (68%) in architectural blogs (see Section V-A). \(\bullet\)_Architectural patterns, component design and principles_ are discussed in the majority (46%) of architectural blogs, followed by discussions on _technologies_ and their implementations (38%) (see Section V-B). \(\bullet\)_Using LDA in combination with AK ontology_ identifies reasonable architectural topics that correspond and complement the qualitative topics (see Section V-C). \(\bullet\)Each _type of blog_ is specialized in one or two topics. \begin{table} \begin{tabular}{l c c|c c c} \hline **LDA Topic** & **Terms** & **freq.** & **AK concepts** & **freq. 
terms/article** \\ \hline \multirow{4}{*}{_Design using patterns and components_} & architecture & 2253 & \(<\)Pattern\(>\) & 4514 & 2 to 5 \\ & design & 1617 & \(<\)Component\(>\) & 4509 & 6 to 16 \\ & need & 1250 & \(<\)Quality\(\_\)attribute\(>\) & 1226 & 2 to 4 \\ & web & 1103 & \(<\)Connector\(\_\)data\(>\) & 1010 & 5 to 6 \\ & approach & 814 & \(<\)Connector\(>\) & 302 & 7 \\ \hline \multirow{4}{*}{_Achieve quality attributes using component design_} & time & 1213 & \(<\)Component\(>\) & 7430 & 6 to 16 \\ & exchange & 1104 & \(<\)Connector\(\_\)data\(>\) & 2031 & 4 to 10 \\ & need & 1021 & \(<\)Pattern\(>\) & 2041 & 2 to 4 \\ & topic & 1010 & \(<\)Connector\(>\) & 907 & 7 to 10 \\ & support & 716 & \(<\)Quality\(\_\)attribute\(>\) & 805 & 2 to 3 \\ \hline \multirow{4}{*}{_Implement technologies_} & create & 1103 & \(<\)Technology\(>\) & 7411 & 6 to 31 \\ & example & 1121 & \(<\)Connector\(\_\)data\(>\) & 2212 & 4 to 7 \\ & version & 801 & \(<\)Connector\(>\) & 1607 & 6 to 13 \\ & type & 721 & \(<\)Component\(>\) & 713 & 7 to 9 \\ & test & 451 & \(<\)Requirement\(>\) & 301 & 4 \\ \hline \multirow{4}{*}{_Analyze decision_} & time & 1132 & \(<\)Requirement\(>\) & 8741 & 4 to 14 \\ & real & 901 & \(<\)Component\(>\) & 2028 & 6 to 18 \\ & report & 851 & \(<\)Connector\(\_\)data\(>\) & 402 & 4 \\ & analysis & 801 & & & \\ & free & 703 & & & \\ \hline \multirow{4}{*}{_Compare and evaluate technologies_} & vs & 1803 & \(<\)Technology\(>\) & 4721 & 6 to 23 \\ & support & 1103 & \(<\)Requirement\(>\) & 1328 & 5 to 11 \\ & feature & 946 & \(<\)Component\(>\) & 1316 & 6 to 13 \\ & time & 810 & \(<\)Quality\(\_\)attribute\(>\) & 953 & 2 to 4 \\ & offer & 510 & \(<\)Pattern\(>\) & 407 & 4 \\ \hline \multirow{4}{*}{_Design using multiple technologies_} & question & 601 & \(<\)Technology\(>\) & 1553 & 6 to 27 \\ & developer & 506 & \(<\)Requirement\(>\) & 608 & 4 to 12 \\ \cline{1-1} & development & 450 & \(<\)Component\(>\) & 206 & 10 \\ \cline{1-1} & design & 406 & & & \\ \cline{1-1} & architecture & 403 & & & \\ \hline \end{tabular} \end{table} TABLE III: Top terms, AK concepts, their frequencies (freq.), and number of unique terms per article (terms/article) for LDA topics. ## VI **RQ3**: Relevance of Architectural Blog Topics to Attribute Driven Design Steps In the study of Soliman et al. [3], 53 software engineers evaluated the relevance of blog articles to perform tasks that apply the ADD steps (see Section III-B). We use this data to determine what topics of architectural blogs (see Section V) are retrieved by the Google search engine, and what topics are most relevant for software engineers to apply the ADD steps (see Section III-E). In the following sub-sections, we present the number of relevant blog articles (as retrieved by Google), and their relevance to each of the ADD steps, across the different topics (see Sections VI-A and VI-B). ### _Relevance of qualitative topics_ Figure 3 shows the number of relevant blog articles for each qualitative topic as retrieved by Google, and their relevance as specified by practitioners. From Figure 3, we can observe that Google search results are different among the three ADD steps, and practitioners evaluate articles from different topics differently for each ADD step: \(\bullet\) For _Identify design concepts_, Google tends to retrieve the five topics of blogs similarly (see Figure 2(a)). However, our significance test shows that _Compare solutions_ is significantly more relevant than _Elaborate and evaluate a solution_ (see Figure 2(b)). 
Thus, Google tends to retrieve blog articles with low relevance from the _Elaborate and evaluate a solution_ topic, which make it challenging for practitioners to find relevant AK for this step. \(\bullet\) For _Select concepts_, Google tends to retrieve articles from the _Compare solutions_ topic significantly more than other topics (see Figure 2(a)). The Google search results are reasonable for this step, and align with our significance test, which shows that articles from the _Compare solutions_ topic are significantly more relevant to this step than other topics (see Figure 2(b)). \(\bullet\) For _Instantiate architecture element_, Google retrieves mostly articles from the _Elaborate and evaluate a solution_ topic, followed by the _How to design_ topic (see Figure 2(a)). Both topics have the highest relevance according to our significance test (see Figure 2(b)). However, Google retrieves some blog articles of low relevance from the _How to implement_ and the _Compare solutions_ topics, which negatively affect the effectiveness of Google to find AK for this step. ### _Relevance of LDA topics_ Figure 4 shows the number of relevant blog articles for each LDA topic as retrieved by Google, and their relevance as specified by practitioners. From Figure 3(a), we can observe that Google tends to predominantly retrieve blog articles from a specific LDA topic to support an ADD step. Similarly to the qualitative topics, practitioners evaluate articles from different LDA topics differently for each ADD step (see Figure 3(b)): \(\bullet\) For _Identify design concepts_, Google retrieves most blog articles from the _Implement technologies and component design_ topic. However, our significance test shows that the topic _Achieve quality attributes using component design_ is significantly relevant to this step as well. Moreover, some Fig. 4: Number and relevance of architectural blog articles for each LDA topic, and for each Attribute Driven Design step Fig. 3: Number and relevance of qualitative topics per ADD step articles that belong to the _Analyze decision factors_ topic have significantly low relevance for this ADD step. Thus, search approaches should consider to filter out blog articles from the _Analyze decision factors_ topic to make this step effectively. \(\bullet\) For _Select design concepts_ Google retrieves the majority of blog articles from the _Achieve quality attributes using component design_ topic, followed by articles from the _Analyze decision factors_ and _Compare and evaluate technologies_. However, our significance test shows that the topic _Achieve quality attributes using component design_ is significantly more relevant than both topics _Analyze decision factors_ and _Compare and evaluate technologies_. Thus, search approaches can filter out irrelevant articles from these two topics to ensure effective search for this ADD step. \(\bullet\) For _Instantiate architecture elements_ Google tends to retrieve predominantly articles from the _Design using patterns and components_ topic. However, our significance test shows that articles from the _Achieve quality attributes using component design_ topic are also significant to this design step. In contrast, articles from _Implement technologies and component design_ have significantly low relevance to this step, which could be filtered out by approaches to ensure effective search for AK. 
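The significance results reported in Sections V-C, VI-A, and VI-B follow the procedure of Section III-E; as an illustration, a 2x2 \(\tilde{\chi}^{2}\) test and a Kruskal-Wallis comparison could be computed as in the following sketch, where all counts and relevance ratings are made up and only stand in for the study's data.

```python
# Sketch of the significance tests from Section III-E (made-up numbers, not the study's data).
from scipy.stats import chi2_contingency, kruskal

# 2x2 contingency table for one (blog type, topic) pair, e.g. personal blogs
# vs. the "How to implement" topic:
#              topic   other topics
# this type  [   a,        b ]
# other type [   c,        d ]
table = [[40, 70],
         [60, 548]]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.1f}, p={p:.4f}")  # a co-occurrence is counted as significant if chi2 > 10 and p < 0.05

# Kruskal-Wallis H test: do relevance ratings (1-5 Likert) differ across topics for one ADD step?
# Each list holds hypothetical ratings given by practitioners to articles of one topic.
compare_solutions = [5, 4, 4, 5, 3, 4]
elaborate_solution = [2, 3, 2, 1, 3, 2]
how_to_design = [3, 4, 2, 3, 3, 4]
h, p_kw = kruskal(compare_solutions, elaborate_solution, how_to_design)
print(f"H={h:.2f}, p={p_kw:.4f}")
```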
\begin{tabular}{p{34.1pt}} \hline **RQ3 key takeaways**: \\ \(\bullet\) Practitioners evaluate the relevance of topics differently for each design step (see Figures 2(b) and 3(b)), and Google results are different among the ADD steps: \\ \(\bullet\) For _Identify design concepts_ and _Select design concepts_, topics on comparing solutions, and achieving quality attributes are most relevant. But Google cannot distinguish highly relevant topics, and retrieves low relevant topics. \\ \(\bullet\) For _Instantiate architecture elements_, topics on elaborating and evaluating solutions such as patterns, and how to design are most relevant; Google retrieves articles on patterns, but misses other highly relevant topics. \\ \hline \end{tabular} ## VII Discussion ### _Implications for researchers_ _The results of RQ1_ split architectural blogs into types as retrieved by Google. Researchers could use these types and dataset to develop specialized Web search approaches, which classify architectural blogs based on their types. For instance, an approach can allow practitioners to filter personal blogs from other types of architectural blogs, to facilitate searching for AK, because some types of blogs are specialized in certain architectural topics (see Figure 2). Furthermore, the types of blogs and our dataset facilitate future studies on AK. For instance, researchers recently analyze grey literature (including blog articles) to extract design decisions (e.g. on Microservices APIs [25]). Using the types of blogs and our dataset, researchers can collect a sample of architectural blog articles from specific types of blogs to explore certain topics. _The results of RQ2_ help to understand the similarities and differences between the AK in architectural blogs and AK in Stack Overflow. In Table IV, we compare the AK in architectural blogs based on our results (see Section V), and the AK in Stack Overflow based on Soliman et al. [26]. We could infer the following similarities and differences: \(\bullet\) The majority of articles in architectural blogs discuss patterns and components, while the majority of architectural posts in Stack Overflow discuss technologies and features. \(\bullet\) Most of the articles in architectural blogs and architectural posts in Stack Overflow evaluate and compare solutions. \(\bullet\) Architectural blogs involve articles that guide software engineers to design a system (i.e. How to design). These are complex articles with multiple steps and issues. On the other hand, posts in Stack Overflow focus on a single issue, and do not involve multiple steps or issues to design a system. Based on the comparison between architectural blogs and Stack Overflow, researchers could benefit from both sources. For example, an approach can find AK on component design from blogs, and AK on technologies from Stack Overflow. The results of co-occurrences between LDA topics and qualitative topics (see Section V-C) show that using LDA in combination with AK ontology (see Section III-D) identifies reasonable architectural topics that correspond and complement the qualitative topics. Thus, researchers could re-use this approach (LDA + AK ontology) to explore architectural topics in other sources (e.g. issue trackers [11] and mailing lists [27]). _The results of RQ3_ can guide researchers to develop heuristics, which improve the effectiveness of AK search approaches. For instance, Soliman et al. [7] developed a heuristic-based specialized search approach to find relevant AK in Stack Overflow. 
Similarly, using the results of RQ3, researchers can develop heuristics to improve the search for AK in blogs by promoting highly relevant topics, and filtering low relevant topics. For example, when practitioners search for AK that pertains to the _identify design concepts_ an approach can promote topics _Compare solutions_ and _Achieve quality attributes using component design_, and filter out topics _Analyze decision factors_ and _Elaborate and evaluate a solution_. ### _Implications for practitioners_ _The results of RQ1_ inform practitioners about the types of architectural blogs, which could help them to share and find AK on the Web. On the one hand, practitioners could decide on suitable types of blogs to share their AK. For instance, practitioners should better share AK in community blogs, because Web search engines find them better than other types. On the other hand, the types of blogs and our dataset can guide practitioners to search for AK on the Web. For instance, practitioners should go directly to specific websites for certain types of architectural blogs in our dataset, and search in these \begin{table} \begin{tabular}{p{34.1pt} p{34.1pt}} \hline **Architectural blogs** & **Stack Overflow** \\ \hline **Types of architectural solutions** & 46\% components and pat- 78\% technologies and their features, 38\% technologies, 22\% architectural components (incl. 16\% decision factors combinations with other solutions) \\ \hline **Purpose of article or post** & 48\% elaborate, evaluate, 50.7\% evaluate solutions (incl. complete solutions, 20\% parison), 40.7\% synthesize solutions list solutions, 32\% How to (incl. list solutions), 8.6\% combine design or implement evaluation and synthesis \\ \hline \end{tabular} \end{table} TABLE IV: Comparison of AK in architectural blogs (see Section V) and Stack Overflow architectural posts [26]. websites directly, because some types of blogs are not well retrieved by Web search engines (e.g. personal blogs). _The results of RQ2_ inform practitioners about the topics in architectural blogs. This can help practitioners to search for relevant articles in architectural blogs. For instance, practitioners could use terms and AK concepts of the LDA topics (see Table III) to search for specific architectural topics using keywords searches. Furthermore, the co-occurrences between blog types and topics (see Section V-C) guide practitioners to find certain architectural topics. For example, practitioners could search directly in community blogs (e.g. dzone.com) to find articles about architectural patterns and component design, because community blogs significantly co-occur with the LDA topic _Design using patterns and components_ (see Figure 2). _The results of RQ3_ guide practitioners on topics of architectural blogs, which are relevant to each of the ADD steps. Thus, if a practitioner requires AK in order to perform a specific ADD step, she can search specifically for the topics that are mostly relevant to this step. For example, regarding the _instantiate architecture elements_ ADD step, practitioners can search for blog articles with topics _design using patterns and components_ and _achieve quality attributes using component design_, because they have the highest relevance to this step. ## VIII Threats to validity #### Vi-1 Construct validity One threat to construct validity is regarding the use of AK ontology [15], which map AK concepts into specific terms. 
The terms associated with each AK concept might not be complete, and thus might miss AK concepts in blog articles. Nevertheless, we followed an iterative process (see Section III-D) to enrich the AK ontology with new terms from architectural blogs to mitigate this threat. #### Vi-2 Reliability To identify types and topics of architectural blogs, we manually analyzed articles using grounded theory. This might involve a threat to the reliability of the study. Nevertheless, we carefully followed the steps of grounded theory, and discussed articles to ensure agreement among researchers. Finally, we measured the agreement among researchers (see Section III-C3), which indicated good agreement beyond chance. Regarding the LDA analysis, we provide the scripts and AK ontology online [19] to facilitate replicating the study. #### Vi-3 External validity One threat to the external validity is the limited number of analyzed architectural blog articles (i.e. 718 articles), which might not generalize to all architectural blog articles on the Web. Nevertheless, our sample is significant [28] with 95% confidence level and 3.58% error margin. Moreover, other studies that explored AK analyzed samples with comparable sizes: 858 Stack Overflow posts [26, 781 issues from issue trackers [11], and 980 decisions from mailing list [12]. Thus, our results provide a first hypothesis of AK in blogs, which can be well compared to related work. ## IX Related Work We are not aware of any dedicated studies on architectural blogs and their contained AK. Thus, our study is the first study focusing on architectural blogs. In this section, we discuss related work in the AK field, architectural grey-literature reviews, and studies on blogs in software engineering. #### Vi-1 Architectural knowledge Previous studies identified and modeled AK concepts on design decisions [2], their kinds [1], their reasoning [6], and solutions alternatives [29]. Although these studies identified AK concepts, they do not explore concrete sources of AK. Recent efforts explored AK in different software repositories and on the Web. For example, Gorton et al. [14] found AK regarding architectural tactics in technology documentation, and Bhat et al. [11] found AK regarding types of decisions in issue tracking systems. Furthermore, Bi et al. [13] explored quality attributes and architectural tactics in Stack Overflow. Recently, Fu et al. [27] developed an approach to identify decisions in mailing list. Moreover, Mahadi et al. [30] developed an approach to identify design discussions in pull requests. However, all previously mentioned approaches do not explore architectural blogs and their contained AK. #### Vi-2 Architectural grey-literature reviews Grey-literature may involve blog articles but also other sources (e.g. forums). Researchers recently utilized grey literature to answer research questions. For example, Soldani et al. [31] analyzed grey-literature to explore the benefits and drawbacks of microservices. While, Singjai et al. [25] analyzed grey-literature to determine relationships between Microservice APIs and Domain-Driven Design. However, these studies extract certain information (e.g. decision rules [25]) from blogs, but do not explore architectural blogs and its AK as one source of AK. #### Vi-3 Blogs in software engineering Blogs did not have much attention from software engineering researchers. Pagano et al. 
[16] was the first to analyze blog articles, and determined topics discussed by software developers such as features and domain concepts. Similarly, Parin et al. [32] analyzed topics of software blogs (e.g. documentation, technology discussion), and surveyed practitioners to determine motivations and challenges to write blog articles. Recently, Williams et al. [33, 34] developed an approach to assess the credibility of blog articles. However, these studies analyzed blogs with different focus, and did not explore architectural blogs and AK. ## X Conclusion and Future Work We aimed to explore architectural blogs, their types, topics and relevance to design steps. To this end, we analyzed architectural blog articles using qualitative and quantitative research methods. Our results show that architectural blog articles are shared in different types with different hosting and authorship policies, and discuss different topics that involve listing, elaborating, and evaluating solutions such as patterns, components and technologies. Furthermore, we found that some topics are more relevant for practitioners to make certain design steps than others. However, Web search engines (e.g. Google) do not always retrieve the most relevant topics for design steps. Our future work aims to develop approaches that automatically identify and classify blogs to improve the search for architectural knowledge in blogs.
2301.01761
Lyman-alpha Scattering Models Trace Accretion and Outflow Kinematics in T Tauri Systems
T Tauri stars produce broad Lyman-alpha emission lines that contribute $\sim$88% of the total UV flux incident on the inner circumstellar disks. Lyman-alpha photons are generated at the accretion shocks and in the protostellar chromospheres and must travel through accretion flows, winds and jets, the protoplanetary disks, and the interstellar medium before reaching the observer. This trajectory produces asymmetric, double-peaked features that carry kinematic and opacity signatures of the disk environments. To understand the link between the evolution of Lyman-alpha emission lines and the disks themselves, we model HST-COS spectra from targets included in Data Release 3 of the Hubble UV Legacy Library of Young Stars as Essential Standards (ULLYSES) program. We find that resonant scattering in a simple spherical expanding shell is able to reproduce the high velocity emission line wings, providing estimates of the average velocities within the bulk intervening H I. The model velocities are significantly correlated with the K band veiling, indicating a turnover from Lyman-alpha profiles absorbed by outflowing winds to emission lines suppressed by accretion flows as the hot inner disk is depleted. Just 30% of targets in our sample have profiles with red-shifted absorption from accretion flows, many of which have resolved dust gaps. At this stage, Lyman-alpha photons may no longer intersect with disk winds along the path to the observer. Our results point to a significant evolution of Lyman-alpha irradiation within the gas disks over time, which may lead to chemical differences that are observable with ALMA and JWST.
Nicole Arulanantham, Max Gronke, Eleonora Fiorellino, Jorge Filipe Gameiro, Antonio Frasca, Joel Green, Seok-Jun Chang, Rik A. B. Claes, Catherine C. Espaillat, Kevin France, Gregory J. Herczeg, Carlo F. Manara, Laura Venuti, Péter Ábrahám, Richard Alexander, Jerome Bouvier, Justyn Campbell-White, Jochen Eislöffel, William J. Fischer, Ágnes Kóspál, Miguel Vioque
2023-01-04T18:57:15Z
http://arxiv.org/abs/2301.01761v1
# Ly\(\alpha\) Scattering Models Trace Accretion and Outflow Kinematics in T Tauri Systems ###### Abstract T Tauri stars produce broad Ly\(\alpha\) emission lines that contribute \(\sim\)88% of the total UV flux incident on the inner circumstellar disks. Ly\(\alpha\) photons are generated at the accretion shocks and in the protostellar chromospheres and must travel through accretion flows, winds and jets, the protoplanetary disks, and the interstellar medium before reaching the observer. This trajectory produces asymmetric, double-peaked features that carry kinematic and opacity signatures of the disk environments. To understand the link between the evolution of Ly\(\alpha\) emission lines and the disks themselves, we model _HST_-COS spectra from targets included in Data Release 3 of the _Hubble_ UV Legacy Library of Young Stars as Essential Standards (ULLYSES) program. We find that resonant scattering in a simple spherical expanding shell is able to reproduce the high velocity emission line wings, providing estimates of the average velocities within the bulk intervening H I. The model velocities are significantly correlated with the \(K\) band veiling, indicating a turnover from Ly\(\alpha\) profiles absorbed by outflowing winds to emission lines suppressed by accretion flows as the hot inner disk is depleted. Just 30% of targets in our sample have profiles with red-shifted absorption from accretion flows, many of which have resolved dust gaps. At this stage, Ly\(\alpha\) photons may no longer intersect with disk winds along the path to the observer. Our results point to a significant evolution of Ly\(\alpha\) irradiation within the gas disks over time, which may lead to chemical differences that are observable with ALMA and _JWST_.
2310.06997
Similarity of Triangles and Intercept Theorem in Elamite Mathematics
In this article, we study similarity of triangles in the Susa Mathematical Texts (\textbf{SMT}). We also suggest that the Susa scribes were aware of intercept theory because they used this theorem in solving a complicated system of equations.
Nasser Heydari, Kazuo Muroi
2023-06-12T19:24:14Z
http://arxiv.org/abs/2310.06997v1
# Similarity of Triangles and Intercept Theorem in Elamite Mathematics ###### Abstract In this article, we study similarity of triangles in the Susa Mathematical Texts (**SMT**). We also suggest that the Susa scribes were aware of intercept theory because they used this theorem in solving a complicated system of equations. ## 1 Introduction Applications of similarity of triangles occur in some texts of the **SMT** such as **SMT No. 18**, **SMT No. 23**, and **SMT No. 25**. We have already mentioned this technique in **SMT No. 23** and **SMT No. 25** whose subject is the bisection of a trapezoid by a transversal line. For a full discussion about the mathematical interpretations of these two texts, see [12, 2]. Here, we carefully examine **SMT No. 18**, which solves a complicated system of equations, and give our mathematical interpretation. This text was inscribed by an Elamite scribe between 1894-1595 BC on one of 26 clay tablets excavated from Susa in southwest Iran by French archaeologists in 1933. The texts of all the tablets, along with their interpretations, were first published in 1961 (see [1]). **SMT No. 181** contains only a single problem which shows one of the characteristics of Babylonian mathematics known as indifference to dimensions2. In fact, the product of an area multiplied by another area, which would be meaningless to the ancient Greeks, occurs in one of the three equations given in this text. Moreover, the scribe of this tablet has great skill, introducing new variables in order to solve a system of complex simultaneous equations. Similarity of Triangles Two figures in the plane are said to be _similar_ if they can be obtained from one another by applying a combination of the following transformations: scaling, translating, rotating or reflecting. For example, all circles and regular \(n\)-gons for \(n\geq 3\) are similar to each other, but this is not true for general isosceles triangles. We usually use the symbol "\(\sim\)" for similarity. In particular, if two figures are similar and the scaling coefficient is \(1\), they are called _congruent_ (see Figure **1**). Among all polygons, the similarity of triangles has been of great interest to mathematicians, which has led to different theorems giving the conditions under which two triangles are similar. Since similarity definitions are equivalent, no definition takes priority over the others. For example, one theorem says that two triangles \(\triangle ABC\) and \(\triangle A^{\prime}B^{\prime}C^{\prime}\) are similar if the length of their corresponding side are proportional. This means there exists a positive number \(k>0\) such that \[k=\frac{\overline{AB}}{A^{\prime}B^{\prime}}=\frac{\overline{AC}}{A^{\prime}C ^{\prime}}=\frac{\overline{BC}}{B^{\prime}C^{\prime}}.\] In such a case, the number \(k\) is usually called the _ratio of similarity_ (see Figure **2**). Figure 1: Similar and congruent figures Figure 2: Similar triangles Note that proportionality of corresponding sides in similar triangles implies that the corresponding angles are equal and vice versa (see Figure 2). 
So, some authors use the equality of angles to define the similarity of triangles: two triangles \(\triangle ABC\) and \(\triangle A^{\prime}B^{\prime}C^{\prime}\) are similar if \[\angle A=\angle A^{\prime},\ \angle B=\angle B^{\prime},\ \angle C=\angle C^{\prime}.\] Another definition for similarity of triangles says that two triangles \(\triangle ABC\) and \(\triangle A^{\prime}B^{\prime}C^{\prime}\) are similar if two sides are proportional and the angles between these two sides are equal. It can be shown that the above-mentioned definitions of similarity are equivalent (see [20]). ## 3 Intercept Theorem Another concept, which has a close relation with the similarity of triangles, is the _intercept theorem_. This theorem is usually attributed to the Greek philosopher Thales of Miletus (circa 624-545 BC) and sometimes called the _Thales's intercept theorem_. Let us explain this elementary theorem in the following example. Consider two straight lines \(L_{1}\) and \(L_{2}\) intersecting in a point \(O\) and assume that two parallel lines \(L_{1}^{\prime}\) and \(L_{2}^{\prime}\) intersect \(L_{1}\) and \(L_{2}\) such that they do not pass through \(O\). There are only two cases (see Figure 3): (1) \(O\) is in the region bounded by \(L_{1}^{\prime}\) and \(L_{2}^{\prime}\), or (2) \(O\) is not in the region bounded by \(L_{1}^{\prime}\) and \(L_{2}^{\prime}\). If \(L_{1}\cap L_{1}^{\prime}=\{A\}\), \(L_{1}\cap L_{2}^{\prime}=\{B\}\), \(L_{2}\cap L_{1}^{\prime}=\{C\}\), and \(L_{2}\cap L_{2}^{\prime}=\{D\}\), then the theorem says that \[\overline{\frac{OA}{OB}}=\overline{\frac{OC}{OD}}=\overline{\frac{AC}{BD}}.\] Note that in Figure 3 the two triangles \(\triangle AOC\) and \(\triangle BOD\) formed by the four lines \(L_{1},L_{2},L_{1}^{\prime},L_{2}^{\prime}\) are similar. In fact, it can be shown that this theorem is somehow Figure 3: Intercept theorem equivalent to the similarity of triangles. In other words, by assuming one assertion, one can prove the other. The first proof of this theorem seems to be provided by Euclid in his famous book _Elements_ (see [1]). ## 4 Transversals In elementary geometry, a _transversal line_ or a _transversal_ is a line intersecting two different lines in the plane in two distinct points. For example, in Figure **3** line \(L^{\prime}_{1}\) is a transversal with respect to intersecting lines \(L_{1}\) and \(L_{2}\). For two dimensional figures such as polygons, a transversal is usually a line dividing the figure into two parts whose areas are positive. So it has to intersect at least two sides of the figure. Figure **4** shows three lines but one of which is transversal. Of all possible transversals for a polygon, the one parallel to a third side of the polygon is of special interest. Some authors consider these special intersecting lines as transversals of polygons. For example, a transversal of a triangle is a line intersecting its two sides and parallel to the third one (see Figure **5**, left). In such cases, one can use the intercept theorem to compute the length of a part of the triangle with respect to the other parts. For a trapezoid, a transversal is usually a line parallel to the two bases which intersects the two legs (Figure **5**, right). Figure 4: Transversal and non-transversal lines Figure 5: Transversals of polygons Applications of Intercept Theorem Although intercept theorem seems elementary, it has many practical applications. 
This theorem has been used for a long time by mathematicians and surveyors to measure distances that were not capable of measurement using standard methods. For example, the theorem can be used to compute the width of rivers or the height of tall trees or structures. These two situations are depicted in Figure 6. In the case of the tree, \(c\) can be chosen as the length of shadow of the tree and \(a\) and \(b\) can be the lengths of a stick and its shadow respectively. In both cases, one can easily determine the values of reachable quantities \(a,b,c\) and then use the intercept theorem to compute the unreachable value \(x\) by \(x=\frac{ac}{b}\). It has been said that Thales applied the intercept theorem in a similar manner to measure the height of the _pyramid of Cheops3_ (see [1]). Footnote 3: The pyramid of Cheops (also known as the pyramid of Khufu or the great pyramid of Giza) is the oldest and largest of the pyramids in the Giza pyramid complex bordering present-day Giza in the Greater Cairo, Egypt. ## 6 Similarity in the SMT As we said before, the similarity of triangles and transversals were used in some texts of the **SMT**. We have discussed elsewhere the transversal bisectors of trapezoids, which are the subjects of **SMT No. 23** and **SMT No. 25** (see [11, 12, 13]). The main idea of the problems in those texts was to use a transversal parallel to the bases in order to divide the trapezoid into two subtrapezoids with equal areas (see Figure 5). The key property of the transversal of a trapezoid is that its length depends only on the length of the two bases. In fact, if \(a,b\) are the lengths of the two bases, then the length \(d\) of the transversal is obtained by \[d=\sqrt{\frac{a^{2}+b^{2}}{2}}. \tag{1}\] Figure 6: Applications of intercept theorem The key point to prove formula (1) is to consider the height of the trapezoid and use the similarity of triangles (see [11, Section 3], Section 3). ### SMT No. 18 #### Transliteration Obverse: Lines 1-12 (L1) us ki-ta _a-na_ us an-ta nignin-_ma_\(<\)10\(>\) (L2) a-sa an-ta _a-na_ a-sa [ki]-ta nignin 36 (L3) [sag an]-ta nignin dal nignin ul-gal 20,24 (L4) za-e 36 _sa_ a-sa ki a-sa nignin (L5) _a-na_ 4 _a-li-ik-ma_ 2,24 _ta-mar_ (L6) igi-10 _sa mu_ ki _mu_ nignin _pu-ta-<ir> 6 ta-mar_ (L7) 2,24 _a-na_ 6 _i-si-ma_ 14,24 _ta-mar_ (L8) 14,24 nignin 3,27,21,[36] _ta-mar a-na_[2] _i-si_[6,54],43,12 _ta-mar_ (L9) [_tu-i_]_r_ 14,24 _a-na_ 2 _i-si_[28,48 _ta-mar_...]... Reverse: Lines 1-3 (L1) \(\cdots\) [\(\cdots\) \(\cdots\) \(\cdots\) ] (L2) 30 _ta-[mar_...]... (L3) _i-st-ma_ 20 _ta-ma_[\(r\cdots\)...] #### Translation Obverse: Lines 1-9 (L1) I multiplied the lower length by the upper length, and (the result is) 10,0. (L2) I multiplied the upper area by the lower area, (and the result is) 36,0,0. (L3) I added the squared upper width (and) the squared transversal, (and the result is) 20,24. (L4) You, 36,0,0 that is the (result of) multiplication of the area by (another) area, (L5) multiply (it) by 4, and you see 2,24,0,0. (L6) Make the reciprocal of 10,0 that is (the result of) multiplication of the perpendicular (that is, the lower length) by the perpendicular (that is, the upper length), (and) you see 0;0,6. (L7) Multiply 2,24,0,0 by 0;0,6, and you see 14,24. (L8) Square 14,24, (and) you see 3,27,21,36. Multiply (it) by 2, (and) you see 6,54,43,12. (L9) Return. Multiply 14,24 by 2, (and) you see 28,48....... Reverse: Lines 1-3 (L1) \(\cdots\) \(\cdots\) \(\cdots\). (L2) you see 30....... (L3) Multiply (30) by (0;40), and you see 20. 
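The numbers in the transliteration and translation above are written in sexagesimal (base-60) place-value notation: a comma separates the places and, following the usual modern convention, a semicolon separates the integer part from the fractional part, so that 10,0 denotes 600 and 0;0,6 denotes 6/3600 = 1/600. Before turning to the mathematical interpretation, here is a small Python helper (a sketch; only this notational convention is assumed) that converts such numbers and reproduces the arithmetic of lines 4-7 of the obverse.

```python
from fractions import Fraction

def sexagesimal(text):
    """Convert '10,0' or '0;0,6' (base 60, ';' = radix point) to a Fraction."""
    int_part, _, frac_part = text.partition(';')
    value = Fraction(0)
    for digit in int_part.split(','):
        value = 60 * value + int(digit)
    if frac_part:
        for i, digit in enumerate(frac_part.split(','), start=1):
            value += Fraction(int(digit), 60 ** i)
    return value

# Lines 4-7: 36,0,0 times 4 is 2,24,0,0; the reciprocal of 10,0 is 0;0,6;
# and 2,24,0,0 times 0;0,6 is 14,24.
assert 4 * sexagesimal('36,0,0') == sexagesimal('2,24,0,0')
assert 1 / sexagesimal('10,0') == sexagesimal('0;0,6')
assert sexagesimal('2,24,0,0') * sexagesimal('0;0,6') == sexagesimal('14,24')
print("checks of obverse lines 4-7 passed")
```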
### Mathematical Interpretation The Susa scribe is considering the dimensions and areas of a right triangle and a trapezoid as shown in the following figure. We have a right triangle with a transversal line which is parallel to the base of the right triangle and divides it into two figures: a smaller right triangle and a right trapezoid. According to Figure 7 and the translation, there are four variables in this problem: the upper length, the lower length, the width and the transversal. We use the following symbols for these variables in our discussion4 : Footnote 4: Following Babylonian tradition, for determining the lower and the upper lengths, we consider right-to-left direction, while for the lower and upper width we take the down-to-up direction. \[\begin{cases}x=\text{the upper length},\\ y=\text{the lower length},\\ z=\text{the width},\\ w=\text{the transversal}.\end{cases}\] (Note that the scribe has implicitly assumed that \(z>w\).) In Figure 8, we consider the perpendicular lines from \(C\) onto \(AB\) and \(AD\) and denote the intersection points by \(F\) and \(E\). As shown in Figure 8, we have labeled the vertices and parts of the figure as follows: \[\overline{AE}=x,\ \overline{ED}=y,\ \overline{AB}=z,\ \overline{EC}=w.\] In this case, we have \(\overline{FB}=z-w\). Figure 7: Transversal of a right triangle simultaneous equations: \[\begin{cases}xy=10,0\\ \left(\frac{1}{2}x(z+w)\right)\times\left(\frac{1}{2}yw\right)=36,0,0\\ z^{2}+w^{2}=20,24.\end{cases} \tag{2}\] Note that the expression \(\frac{1}{2}x(z+w)\) is the very area of the right trapezoid \(ABCE\) and the other expression \(\frac{1}{2}yw\) is that of the right triangle \(\triangle CDE\). As we can see, there are four variables and three equations which makes it impossible to solve the system (2). Because of this difficulty, the scribe solved this system of equations in two steps. In the first step, he eliminates \(x\) and \(y\) to obtain a system of equations with respect to only \(w\) and \(z\) in order to find their values. Then, in the second step, he tries to find the values of \(x\) and \(y\) by using the first equation in (2) and a property of the figure which provides him with one more equation with respect to \(x\) and \(y\) (this is where he is using the intercept theorem). We describe each step in details. **Step 1.** According to lines 4-7, the scribe uses the first and the second equations in (2) to get the value of \(w(z+w)\) as follows: Figure 8: Dimensions of a right triangle with transversal \[\left(\frac{1}{2}x(z+w)\right)\times\left(\frac{1}{2}yw\right)=36,0,0\] \[\implies 4\times\left(\frac{1}{2}x(z+w)\right)\times\left(\frac{1}{2}yw \right)=4\times(36,0,0)\] \[\implies xy\times w(z+w)=2,24,0,0\] \[\implies (10,0)\times w(z+w)=2,24,0,0\] \[\implies w(z+w)=\frac{1}{(10,0)}\times(2,24,0,0)\] \[\implies w(z+w)=(0;0,6)\times(2,24,0,0)\] thus \[w(z+w)=14,24. \tag{3}\] According to line 8, by squaring and then doubling both sides of (3), we get \[w(z+w)=14,24\] \[\implies \left(w(z+w)\right)^{2}=(14,24)^{2}\] \[\implies w^{2}(z+w)^{2}=3,27,21,36\] \[\implies 2w^{2}(z+w)^{2}=2\times(3,27,21,36)\] \[\implies 2w^{2}(z+w)^{2}=6,54,43,12.\] So \[2w^{2}(z+w)^{2}=6,54,43,12. \tag{4}\] Next, according to line 9, we multiply both sides of (3) by 2 to obtain \[2w(z+w)=28,48. \tag{5}\] It seems that at this point of the text the scribe has used new variables to proceed. We may recover the next calculations as follows. 
First introduce new variables \(X\) and \(Y\) by \[\begin{cases}X=(z+w)^{2}\\ Y=2w^{2}.\end{cases} \tag{6}\] It follows from (4) and (6) that \[XY=6,54,43,12. \tag{7}\] On the other hand, from (2), (5) and (6) we can write \[X+Y =(z+w)^{2}+2w^{2}\] \[=z^{2}+w^{2}+2zw+2w^{2}\] \[=z^{2}+w^{2}+2w(z+w)\] \[=20,24+28,48\] \[=49,12\] thus \[X+Y=49,12. \tag{8}\] Now, we can apply the usual Babylonian method, i.e., completing the square to find the values of \(X\) and \(Y\) satisfying simultaneous equations (7) and (8). Since \[\frac{X+Y}{2}=\frac{49,12}{2}=24,36\] we can write \[\frac{X-Y}{2} =\sqrt{\left(\frac{X+2}{2}\right)^{2}-XY}\] \[=\sqrt{\left(\frac{49,12}{2}\right)^{2}-6,54,43,12}\] \[=\sqrt{(24,36)^{2}-6,54,43,12}\] \[=\sqrt{10,5,9,36-6,54,43,12}\] \[=\sqrt{3,10,26,24}\] \[=\sqrt{(13,48)^{2}}\] \[=13,48\] thus we obtain5 Footnote 5: Note that the scribe might have computed \(\sqrt{3,10,26,24}=\sqrt{2^{4}\times 3^{4}\times 23^{2}}=2^{2}\times 3^{2}\times 23^{ 1}=13,48\). \[\frac{X-Y}{2}=13,48. \tag{9}\] It follows from the (8) and (9) that \[X=\frac{X+Y}{2}+\frac{X-Y}{2}=\frac{49,12}{2}+13,48=24,36+13,48=38,24\] and \[Y=\frac{X+Y}{2}-\frac{X-Y}{2}=\frac{49,12}{2}-13,48=24,36-13,48=10,48.\] Therefore, we obtain \[X=38,24\quad\text{and}\quad Y=10,48. \tag{10}\] Now, we can use (6) and (10) to compute the values of \(z\) and \(w\) as follows: \[2w^{2}=10,48\] \[\Longrightarrow w^{2}=\frac{10,48}{2}\] \[\Longrightarrow w^{2}=5,24\] \[\Longrightarrow w=\sqrt{5,24}\] \[\Longrightarrow w=\sqrt{(18)^{2}}\] \[\Longrightarrow w=18\] \[(z+w)^{2}=38,24\] \[\implies \quad z+w=\sqrt{38,24}\] \[\implies \quad z+w=\sqrt{(48)^{2}}\] \[\implies \quad z+18=48\] \[\implies \quad z=48-18\] \[\implies \quad z=30.\] Therefore, we get \[w=18\quad\text{and}\quad z=30 \tag{11}\] which completes the first step. **Step 2.** In the second step, the scribe needs to use another condition in order to find an equation involving only \(x\) and \(y\) other than the first equation \(xy=10,0\). In fact, if we substitute the values of \(w=18\) and \(z=30\) into the second equation of (2) and simplify, it is clear that we get the first equation \(xy=10,0\). A second equation can be obtained from the properties of the transversal line in Figure 8. Since \(AB\parallel EC\) and \(AD\parallel CF\), the intercept theorem implies that two triangles \(\triangle CDE\) and \(\triangle BCF\) are similar and thus \[\frac{\overline{CF}}{\overline{BF}}=\frac{\overline{DE}}{\overline{CE}}\] or equivalently \[\frac{x}{z-w}=\frac{y}{w}. \tag{12}\] It follows from (11) and (12) that \[\frac{x}{12}=\frac{y}{18}\] or \[x=\frac{2}{3}y. \tag{13}\] Equation (13) is the very condition that the scribe has used to finish the solution. Thus, we have obtained the following system of equations with respect to \(x\) and \(y\) only: \[\begin{cases}xy=10,0\\ x=\frac{2}{3}y.\end{cases} \tag{14}\] Let us solve this system of equations. By substituting the value of \(x\) with respect to \(y\) given by the second equation of (14) into the first equation, we can write \[xy=10,0\] \[\implies \left(\frac{2}{3}y\right)y=10,0\] \[\implies \frac{2}{3}y^{2}=10,0\] \[\implies y^{2}=\frac{3}{2}\times(10,0)\] \[\implies y^{2}=15,0\] \[\implies y=\sqrt{15,0}\] \[\implies y=30.\] So, according to line 2 on the reverse, we get \(y=30\). Finally, according to line 3 on the reverse, we have \[x=\frac{2}{3}y=\frac{2}{3}\times 30=20.\] Thus the solutions of the system of equations (14) are \(x=20\) and \(y=30\). 
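As a sanity check on Steps 1 and 2, the following Python sketch recomputes the intermediate quantities (using the half-sum and half-difference identities \(\frac{X-Y}{2}=\sqrt{\left(\frac{X+Y}{2}\right)^{2}-XY}\)) and verifies that the recovered values satisfy the original system (2); the sexagesimal values are written as ordinary integers in the comments.

```python
from fractions import Fraction
from math import isqrt

# Given data, converted from sexagesimal:
#   x*y            = 10,0    = 600
#   (area)*(area)  = 36,0,0  = 129600   (trapezoid area times triangle area)
#   z^2 + w^2      = 20,24   = 1224
P, A, S = 600, 129600, 1224

# Step 1: eliminate x and y, then complete the square in X = (z+w)^2, Y = 2w^2.
t = Fraction(4 * A, P)              # w(z+w)     = 14,24      = 864
X_plus_Y = S + 2 * t                # X + Y      = 49,12      = 2952
X_times_Y = 2 * t * t               # X * Y      = 6,54,43,12 = 1492992
half_sum = X_plus_Y / 2             #              24,36      = 1476
half_diff = isqrt(int(half_sum ** 2 - X_times_Y))    # 13,48  = 828
X, Y = half_sum + half_diff, half_sum - half_diff    # 38,24 = 2304 and 10,48 = 648
w = isqrt(int(Y) // 2)              # 18
z = isqrt(int(X)) - w               # 30

# Step 2: the intercept theorem gives x/(z - w) = y/w, together with x*y = 600.
y = isqrt(int(Fraction(w, z - w) * P))   # y^2 = 600*w/(z-w) = 900, so y = 30
x = P // y                               # 20

print(x, y, z, w)                        # 20 30 30 18
# The recovered values satisfy the original system (2).
assert x * y == P
assert Fraction(x * (z + w), 2) * Fraction(y * w, 2) == A
assert z * z + w * w == S
```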
Ultimately, the solutions of the main system of equations (2) are given by \[x=20,\quad y=30,\quad z=30,\quad w=18.\] **Remark 1**.: A mathematical interpretation of this text has been given by Friberg in [10, 11, 12]. ### Conclusion From our mathematical interpretation of **SMT No. 18** it is apparent that the Susa scribes were familiar with the idea of two similar triangles and the relation between their sides. This observation is of great importance to the history of mathematics in that it confirms that the origins of similarity and the intercept theorem date to approximately a millennium before the Greeks. **SMT No. 18** also deals with one of the most complicated systems of equations in Babylonian mathematics, as there are four unknown variables involved in the system. Although there are only three equations in the four unknowns at the beginning of the problem, the Susa scribe used his geometrical knowledge to obtain a further equation by employing similarity of triangles, and was thus able to reach the solution.
2306.14532
The length of mixed identities for finite groups
We prove that there exists a constant $c>0$ such that any finite group having no non-trivial mixed identity of length $\leq c$ is an almost simple group with a simple group of Lie type as its socle. Starting the study of mixed identities for almost simple groups, we obtain results for groups with socle ${\rm PSL}_n(q)$, ${\rm PSp}_{2m}(q)$, ${\rm P \Omega}_{2m-1}^\circ(q)$, and ${\rm PSU}_n(q)$ for a prime power $q$. For such groups, we will prove rank-independent bounds for the length of a shortest non-trivial mixed identity, depending only on the field size $q$.
Henry Bradford, Jakob Schneider, Andreas Thom
2023-06-26T09:09:01Z
http://arxiv.org/abs/2306.14532v1
# The length of mixed identities for finite groups ###### Abstract. We prove that there exists a constant \(c>0\) such that any finite group having no non-trivial mixed identity of length \(\leq c\) is an almost simple group with a simple group of Lie type as its socle. Starting the study of mixed identities for almost simple groups, we obtain results for groups with socle \(\mathrm{PSL}_{n}(q)\), \(\mathrm{PSp}_{2m}(q)\), \(\mathrm{P}\Omega^{\circ}_{2m-1}(q)\), and \(\mathrm{PSU}_{n}(q)\) for a prime power \(q\). For such groups, we will prove rank-independent bounds for the length of a shortest non-trivial mixed identity, depending only on the field size \(q\). ###### Contents * 1 Introduction * 2 Basic observations * 3 Reduction to almost simple groups of Lie type * 4 The projective special linear groups \(\mathrm{PSL}_{n}(q)\) * 5 An alternative approach to \(\mathrm{PSL}_{2}(q)\) * 6 The projective symplectic groups \(\mathrm{PSp}_{2m}(q)\) * 7 The odd-degree projective orthogonal groups \(\mathrm{P}\Omega^{\circ}_{2m-1}(q)\) * 8 The projective special unitary groups \(\mathrm{PSU}_{n}(q)\) * 9 Outlook and further comments ## 1. Introduction In this article we study _identities with constants_ (also called _mixed identities_) for finite groups. A word with constants in a finite group \(G\) is an element of the free product \(w\in G\ast\mathbf{F}_{r}\). Note that \(w\) induces a map \(w\colon G^{r}\to G\) by evaluation. A non-trivial word with constants \(w\) is called an _identity with constants_ or a _mixed identity_ for \(G\) if and only if \(w(g_{1},\dots,g_{r})=1_{G}\) for all choices of the \(g_{i}\in G\) (\(i=1,\dots,r\)). Without loss of generality, we will restrict our attention almost only to the case \(r=1\), see Lemma 2.2. The study of word maps with and without constants on finite and algebraic groups has seen a lot of progress in the past decades.
_In the latter case, there is an absolute constant \(c>0\), so that, if \(G\) has no mixed identity of length \(\leq c\), then the socle of \(G\) is a simple group of Lie type, different from \(\mathrm{PSp}_{2m}(q)\), for \(m\geq 2\), and \(\mathrm{P\Omega}_{2m-1}^{\circ}(q)\), for \(m\geq 3\) odd, or \(m\geq 3\) arbitrary and \(q\equiv 1\) mod \(4\)._ The characterization of almost simple groups that admit mixed identities of bounded length proceeds family by family, where we only have partial results so far. First of all, note the following, which is a consequence of Lemma 3.2 below. **Lemma 1.1**.: _Let \(G\leq H\) be an inclusion of almost simple groups with socle \(S\). If \(G\) has a mixed identity of length \(l\), then \(H\) has a mixed identity of length at most \(2l\)._ This applies to groups with socle \(A_{n}\) by direct inspection (for a \(3\)-cycle \(\sigma\), \(w(x)=[x,\sigma]^{30}\in A_{n}*\langle x\rangle\) is a mixed identity for \(A_{n}\)) and to groups with socle \(\mathrm{PSp}_{2m}(q)\), for \(m\geq 2\), or \(\mathrm{P\Omega}_{2m-1}^{\circ}(q)\), for \(m\geq 3\) odd or \(q\equiv 1\) mod \(4\), as a consequence of results of Tomanov [26]. For convenience, we reproduce his results with short and self-contained proofs. The first interesting case is the case of almost simple groups with socle \(\mathrm{PSL}_{2}(q).\) In this case, we get a complete answer as follows. Let \(F\) denote the Frobenius automorphism \(x\mapsto x^{p}\) of the finite field of order \(q=p^{e}\), and also the induced automorphism of \(\mathrm{PGL}_{n}(q)\). **Theorem 2**.: _Let \(G\) be an almost simple group with socle \(\mathrm{PSL}_{2}(q)\) and \(q=p^{e}\) for a prime number \(p\). Let \(f\mid e\) be the smallest natural number, such that \(F^{f}\in G.\) Then the length of a shortest mixed identity of \(G\) is \(\Theta(\frac{e}{f}p^{f})\)._ In the case of almost simple groups with socle \(\mathrm{PSL}_{n}(q)\) for \(n\geq 3\), we only have partial results. Note however that the implied constants in the next theorem are independent of the rank. **Theorem 3**.: _Let \(G\) be an almost simple group with socle \(\mathrm{PSL}_{n}(q)\) and \(q=p^{e}\) for a prime number \(p\). Then, \(G\) has a mixed identity of length \(O(q)\). 
Moreover, if \(G\leq\mathrm{PGL}_{n}(q)\rtimes\mathrm{Aut}(\mathbb{F}_{q})\), \(F\) is the Frobenius automorphism as above, and \(f\mid e\) is the smallest natural number such that \(F^{f}\in G\), then any mixed identity of \(G\) is of length \(\Omega(\frac{e}{f}p^{f})\)._ Note that this is contrast to the minimal length of identities without constants for \(\mathrm{PSL}_{n}(q)\) which are known to be bounded from below by \(q^{\lfloor n/2\rfloor}\) and bounded from above by \(O(q^{\lfloor n/2\rfloor}\log(q)^{O_{n}(1)})\), by results of the first and the third author [5]. In case \(n\geq 3\), we do not know yet what effect the transpose-inverse has on the length of shortest mixed identities. Our result for the family \(\mathrm{PSU}_{n}(q)\) is less refined and reads as follows: **Theorem 4**.: _Let \(G\) be an almost simple group with socle \(\mathrm{PSU}_{n}(q)\). Then, \(G\) has a mixed identity of length \(O(q^{2})\). Moreover, any mixed identity for \(\mathrm{PSU}_{n}(q)\), even with constants from \(\mathrm{PGL}_{n}(q^{2})\), is of length \(\Omega(q)\)._ Even though there exist mixed identities of bounded length for \(\mathrm{PSp}_{2m}(q)\), our methods allow for some more refined understanding of the structure of the mixed identities that can occur. A constant appearing in a word with constants is called _critical_ if its removal leads to cancellation of the variables. **Theorem 5**.: _Let \(q\) be a prime power and \(m\geq 2\). A shortest mixed identity for \(\mathrm{PSp}_{2m}(q)\) without critical constants which lift to involutions in \(\mathrm{Sp}_{2m}(q)\) is of length \(\Theta(q)\) for \(q\) odd. For \(q\) even it lies in \(\Omega(q)\)._ These results resemble analogous results of Tomanov [26] and Gordeev [9], for algebraic groups over infinite fields. One may also use these results for algebraic groups, combined with the Schwartz-Zippel Lemma, to prove lower bounds on the lengths of mixed identities for finite groups of Lie type. Indeed, we shall exploit these methods in a forthcoming paper. However, the results obtained by such methods would not be uniform in the rank, as our bounds here are. The article is organized as follows. After the introduction we have a section covering basic observations. After that we have one section for each family of simple groups that is covered, i.e. \(\mathrm{PSL}_{n}(q)\), \(\mathrm{PSp}_{2m}(q)\), \(\mathrm{P}\Omega_{2m-1}^{\circ}(q)\), and \(\mathrm{PSU}_{n}(q)\). Various arguments for \(\mathrm{PSp}_{2m}(q)\) and \(\mathrm{PSU}_{n}(q)\) will follow the same lines as the prototypical argument for \(\mathrm{PSL}_{n}(q)\) and we recommend the reader to read this case first. We end the paper with a section on further remarks and goals for the future. We apply the main results of this paper in [3] and answer a question from [1] on the length of non-solutions to equations with constants in linear groups. ## 2. Basic observations Let \(G\) be a finite group and \(C\geq G\) be the overgroup of possible constants. Recall that a mixed identity \(w\in C*\mathbf{F}_{r}\) is called a _shortest_ mixed identity for \(G\) with constants from \(C\) if there is no shorter one, i.e. for \(v\in C*\mathbf{F}_{r}\) another mixed identity, we have \(|w|\leq|v|\). Here \(|w|=l\) measures the length of the fixed word \[w=c_{0}x_{i(1)}^{\varepsilon(1)}c_{1}\cdots c_{l-1}x_{i(l)}^{\varepsilon(l)}c _{l}\in C*\mathbf{F}_{r},\] where \(\varepsilon(j)=\pm 1\) (\(j=1,\ldots,l\)) and \(c_{j}\in C\) (\(j=0,\ldots,l\)). 
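Concretely, a one-variable word with constants (to which Lemma 2.2 below reduces us) can be modelled as a list of letters, each of which is either a power of the variable or a constant, and checking whether it is a mixed identity for a small group is a finite computation. The following Python sketch is an illustration of ours, not part of the paper: it evaluates such words in \(A_{5}\), realized as even permutations of \(\{0,\ldots,4\}\), and confirms the mixed identity \(w(x)=[x,\sigma]^{30}\) for a \(3\)-cycle \(\sigma\) mentioned in the introduction.

```python
from itertools import permutations

n = 5
identity = tuple(range(n))

def compose(p, q):            # (p*q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(n))

def inverse(p):
    inv = [0] * n
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

def parity(p):                # +1 for even permutations
    sign = 1
    for i in range(n):
        for j in range(i + 1, n):
            if p[i] > p[j]:
                sign = -sign
    return sign

A5 = [p for p in permutations(range(n)) if parity(p) == 1]   # 60 elements

def evaluate(word, g):
    """word is a list of tokens: ('x', +1 or -1) or ('c', constant)."""
    result = identity
    for kind, value in word:
        factor = value if kind == 'c' else (g if value == 1 else inverse(g))
        result = compose(result, factor)
    return result

sigma = (1, 2, 0, 3, 4)                        # the 3-cycle (0 1 2)
commutator = [('x', -1), ('c', inverse(sigma)), ('x', 1), ('c', sigma)]
w = commutator * 30                            # w(x) = [x, sigma]^30, length 60

assert all(evaluate(w, g) == identity for g in A5)           # a mixed identity
assert any(evaluate(commutator, g) != identity for g in A5)  # but [x, sigma] is not
print("w(x) = [x, sigma]^30 vanishes on all", len(A5), "elements of A_5")
```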
We always assume that \[x_{i(j)}^{\varepsilon(j)}=x_{i(j+1)}^{-\varepsilon(j+1)}\] for \(j=1,\ldots,l-1\) implies \(c_{j}\neq 1_{C}\); i.e. \(w\) is _reduced_. The word \(w\) is called _cyclically reduced_ if \(x_{i(l)}^{\varepsilon(l)}=x_{i(1)}^{-\varepsilon(1)}\) implies \(c_{l}c_{0}\neq 1_{C}\). The first basic observation is that a shortest mixed identity for \(G\) with constants from some given group \(C\) is always cyclically reduced: **Lemma 2.1**.: _Let \(w\in C*{\bf F}_{r}\) be a shortest mixed identity for \(G\). Then \(w\) is cyclically reduced._ Proof.: We can write \(w\) as \(w=u^{-1}vu\), where \(u,v\in C*{\bf F}_{r}\) and \(v\) is cyclically reduced. If \(w\) is a mixed identity for \(G\), then \(w(g_{1},\ldots,g_{r})=1_{C}=v(g_{1},\ldots,g_{r})^{u(g_{1},\ldots,g_{r})}\) for all \(g_{1},\ldots,g_{r}\in G\). Thus \(v\) is also a mixed identity for \(G\) whose length is at most \(|w|\). But we cannot have \(v=c\) for a \(c\in C\), since then if \(c\neq 1_{C}\), we have \(w(1_{G},\ldots,1_{G})=v(1_{G},\ldots,1_{G})^{u(1_{G},\ldots,1_{G})}=c^{u(1_{G},\ldots,1_{G})}\neq 1_{C}\). If \(c=1_{C}\), then \(w\) would be trivial. Hence \(v\in C*{\bf F}_{r}\setminus C\) is a shortest mixed identity and \(u\in C\). The proof is complete. Fix a reduced word \(w=c_{0}x_{i(1)}^{\varepsilon(1)}c_{1}\cdots c_{l-1}x_{i(l)}^{\varepsilon(l)}c _{l}\in C*{\bf F}_{r}\). Define the sets of indices \(J_{0}(w),J_{+}(w),J_{-}(w)\subseteq\{1,\ldots,l-1\}\) by \(J_{0}(w)\coloneqq\{j\,|\,i(j)\neq i(j+1)\}\), \(J_{+}(w)\coloneqq\{j\,|\,i(j)=i(j+1)\) and \(\varepsilon(j)=\varepsilon(j+1)\}\), and \(J_{-}(w)\coloneqq\{j\,|\,i(j)=i(j+1)\) and \(\varepsilon(j)=-\varepsilon(j+1)\}\), which partition the set \(\{1,\ldots,l-1\}\). The constants \(c_{1},\ldots,c_{l-1}\in C\) are called _intermediate constants_. The constants \(c_{j}\) with \(j\in J_{-}(w)\) are called _critical constants_. We have the following second observation which guarantees that we need to consider only words with one variable \(x\): **Lemma 2.2**.: _Let \(w=c_{0}x_{i(1)}^{\varepsilon(1)}c_{1}\cdots c_{l-1}x_{i(l)}^{\varepsilon(l)}c _{l}\in C*{\bf F}_{r}\) be reduced. Then, assuming \(l=|w|\leq|G|\), there is a substitution \(s\colon x_{i}\mapsto g_{-i}xg_{i}\) for \(g_{\pm i}\in G\) (\(i=1,\ldots,r\)) such that in_ \[w^{\prime}\coloneqq w(s(x_{1}),\ldots,s(x_{r}))=c_{0}^{\prime}x^{\varepsilon( 1)}c_{1}^{\prime}\cdots c_{l-1}^{\prime}x^{\varepsilon(l)}c_{l}^{\prime}\in C* \langle x\rangle\] _we have \(c_{j}^{\prime}\neq 1_{C}\) for \(j=1,\ldots,l-1\)._ Proof.: We have that \(c_{j}^{\prime}=g_{\varepsilon(j)i(j)}^{\varepsilon(j)}c_{j}g_{-\varepsilon(j+ 1)i(j+1)}^{\varepsilon(j+1)}\) (\(j=1,\ldots,l-1\)). So among all the possible \(|G|^{2r}\) choices for the constants \(g_{\pm i}\) (\(i=1,\ldots,r\)), each condition \(c_{j}^{\prime}\neq 1_{C}\) for \(j\in J_{0}(w)\cup J_{+}(w)\) rules out at most \(|G|^{2r-1}\) tuples. If \(j\in J_{-}(w)\), then we must have \[c_{j}^{\prime}=c_{j}^{g_{-\varepsilon(j)}^{-\varepsilon(j)}}\neq 1_{C},\] since \(c_{j}\neq 1_{C}\) by assumption. Hence, if \(l-1<|G|\), i.e. \(|G|^{2r-1}\,(l-1)<|G|^{2r}\), by counting, there must be one tuple \((g_{i})_{i=\pm 1}^{\pm r}\) such that \(c_{j}^{\prime}\neq 1_{C}\) (\(j=1,\ldots,l-1\)). **Remark 2.3**.: If \(w\) is cyclically reduced and \(l<|G|\), we can also guarantee \(w^{\prime}\) to be cyclically reduced. 
We have \[c_{l}^{\prime}c_{0}^{\prime}=g_{\varepsilon(l)i(l)}^{\varepsilon(l)}c_{l}c_{0}g_{-\varepsilon(1)i(1)}^{\varepsilon(1)}.\] If \(i(l)=i(1)\) and \(\varepsilon(1)=-\varepsilon(l)\), then as \(w\) is cyclically reduced, we must have \(c_{l}c_{0}\neq 1_{C}\) and hence \[c_{l}^{\prime}c_{0}^{\prime}=(c_{l}c_{0})^{g_{\varepsilon(l)i(l)}^{-\varepsilon(l)}}\neq 1_{C}.\] In the opposite case, we rule out at most \(|G|^{2r-1}\) further tuples. But \(|G|^{2r-1}\,l<|G|^{2r}\), so there is a legal choice for \((g_{i})_{i=\pm 1}^{\pm r}\). From this we get the following immediate non-optimal corollary with a short proof: **Corollary 2.4**.: _There is a shortest mixed identity \(w\in C*\langle x\rangle\) for \(G\) with only one variable and all intermediate constants non-trivial. It is cyclically reduced and of length \(\leq|G|\)._ Proof.: Since \(x^{|G|}\) is a mixed identity for \(G\), a shortest mixed identity \(w\in C*{\bf F}_{r}\) for \(G\) has length at most \(|G|\). Clearly, we may assume \(G\neq{\bf 1}\), since otherwise \(w=x\) is a shortest mixed identity. Hence either \(w=(xc)^{|G|}\) (for \(c\in G\setminus{\bf 1}\subseteq C\)) is a shortest mixed identity with all intermediate constants non-trivial, or there is a shortest mixed identity \(w\in C*{\bf F}_{r}\) of length \(<|G|\), which by Lemma 2.1 is cyclically reduced. Applying Lemma 2.2 and Remark 2.3 gives a shortest mixed identity of length \(<|G|\) with only one variable and all intermediate constants non-trivial; it is cyclically reduced. The next lemma proves that there are no short identities of length less than four if the groups \(G\) and \(C\) fulfill some mild assumptions. **Lemma 2.5**.: _Let \(w\in C*\langle x\rangle\) be of length \(|w|=l\). Let \(G\neq{\bf 1}\) be non-abelian and \({\bf C}_{C}(G)={\bf 1}\) if \(l\leq 2\), and let \(C=G\) and \(|G|\) be even when \(l=3\). Then \(w\) is not a mixed identity for \(G\) with constants from \(C\)._ Proof.: Clearly, any word of length one induces an injective map, so it cannot be constant if \(G\neq{\bf 1}\). If \(l=2\), then, up to rotation and replacing \(x\) by \(x^{-1}\), either (a) \(w=xcxc^{-1}\) or (b) \(w=xcx^{-1}c^{-1}\) (for \(c\in C\)). In Case (a), if \(w\) induces the trivial map, we must have \(g^{c^{-1}}=g^{-1}\) for all \(g\in G\); in particular, \(g\mapsto g^{-1}\) would be an automorphism of \(G\), so \(G\) would be abelian, which is not the case. In Case (b), if \(w\) is trivial on \(G\), then \(c\in{\bf C}_{C}(G)={\bf 1}\), so that \(c=1_{G}\) and \(w\) would not be reduced. If \(l=3\), up to rotation and replacing \(x\) by \(x^{-1}\), we have that (a) \(w=xaxbx^{-1}c\); or (b) \(w=xaxbxc\) with \(abc=1_{G}\). Hence, if \(w\) is a mixed identity, then \(xaxb=xaxbx^{\varepsilon}c\,c^{-1}x^{-\varepsilon}=c^{-1}x^{-\varepsilon}\) as maps on \(G\). Thus we get \(xaxa=c^{-1}x^{-\varepsilon}b^{-1}a\). This cannot hold when \(|G|\) is even, since then there is an element \(g\in G\) of order two, so that \(ga^{-1}.a.ga^{-1}.a=1_{G}=a^{-1}.a.a^{-1}.a\), but \(a^{-1}\neq ga^{-1}\) and both are from \(G\), since by assumption \(C=G\). However, the map \(x\mapsto c^{-1}x^{-\varepsilon}b^{-1}a\) is injective, which gives a contradiction. ## 3. Reduction to almost simple groups of Lie type In this section, we prove Theorem 1, modulo the statements about symplectic and orthogonal groups. The proof is based on the following two lemmas. 
**Lemma 3.1**.: _If \(G\) has a non-trivial center, then it satisfies the mixed identity \([x,c]\in G*\langle x\rangle\) for \(c\in\mathbf{C}(G)\setminus\mathbf{1}\) of length \(2\). Similarly, if \(G\) is a non-trivial direct product \(G=A\times B\), then it satisfies the mixed identity \([a^{x},b]\in G*\langle x\rangle\), for non-trivial \(a\in A\times\mathbf{1}\), \(b\in\mathbf{1}\times B\), which is of length \(4\)._ Proof.: A trivial computation. **Lemma 3.2**.: _Let \(G\) be a finite group and let \(\mathbf{1}\neq N\trianglelefteq G\) be a normal subgroup. Suppose \(N\) has a mixed identity with constants in \(G\) of length \(l\). Then \(G\) has a mixed identity of length at most \(2l\)._ Proof.: Let \(w\in G*\mathbf{F}_{r}\) be a mixed identity for \(N\) of length \(l\). By Corollary 2.4, we may assume that \(w\) is of length \(l\leq|N|\), has only one variable, and all intermediate constants of \(w\) are non-trivial, i.e. \[w=c_{0}x^{\varepsilon(1)}c_{1}\cdots c_{l-1}x^{\varepsilon(l)}c_{l}\] with \(c_{j}\neq 1_{G}\) for \(1\leq j\leq l-1\). For \(n\in N\setminus\mathbf{1}\) set \(v\coloneqq w(n^{x})\). Clearly, \(\operatorname{im}(v)\subseteq\operatorname{im}(w)=\mathbf{1}\), so it suffices to check that \(v\neq 1_{G}\) is non-trivial in \(G*\langle x\rangle\). But there is no cancellation in \(v\), since \(x^{\varepsilon(j)}c_{j}x^{\varepsilon(j+1)}\) becomes \(x^{-1}n^{\varepsilon(j)}xc_{j}x^{-1}n^{\varepsilon(j+1)}x\), so \(v\) is non-trivial of length \(2l\). **Remark 3.3**.: It is clear that if \(w\in\mathbf{F}_{r}\) is an identity for the group \(G\), then \(w\) is also an identity for every subgroup \(H\) and every quotient \(Q\) of \(G\). In particular, the length of the shortest identities (without constants) for \(H\) and \(Q\) are at most the length of a shortest identity for \(G\). The analogous statements for mixed identities are false: Let \(H=\operatorname{PSL}_{2}(q)\) and \(G=\operatorname{PSL}_{2}(q)\times C_{2}\). Then, \(H\) is a subgroup and a quotient of \(G\); by Lemma 3.1, \(G\) has a mixed identity of length \(2\), whereas by Theorem 3 the length of a shortest mixed identity for \(H\) is \(\Theta(q)\). Let's recall Fitting's structure theorem. **Theorem 6** (Fitting).: _Let \(G\) be a non-trivial finite group and suppose \(G\) has no non-trivial abelian normal subgroup. Then there exist positive integers \(k\) and \(l_{1},\ldots,l_{k}\) and distinct non-abelian finite simple groups \(H_{1},\ldots,H_{k}\) such that:_ \[S=\prod_{i=1}^{k}H_{i}^{l_{i}}\trianglelefteq G\leq\prod_{i=1}^{k}\operatorname {Aut}(H_{i})^{l_{i}}\wr S_{l_{i}}=\operatorname{Aut}(S).\] _Here the socle \(S=\operatorname{soc}(G)\) of \(G\) is the product \(\prod_{i=1}^{k}H_{i}^{l_{i}}\)._ Now we are ready to prove Theorem 1: Proof of Theorem 1.: If \(G\) has a non-trivial abelian normal subgroup, then, by Lemma 3.1 and Lemma 3.2, it has a mixed identity of length at most \(4\). Otherwise, \(G\) is as in Theorem 6. Again by Lemma 3.1, if \(k\geq 2\) or some \(l_{i}\geq 2\) (\(i\in\{1,\ldots,k\}\)), then the socle of \(G\) satisfies a mixed identity of length \(4\), as it is a non-trivial direct product. By Lemma 3.2, the group \(G\) satisfies a mixed identity of length at most \(8\). Otherwise \(G\) is almost simple, with socle \(S\) (a non-abelian finite simple group). If \(S\) is alternating, then, by the argument on page 3 and Lemma 3.2, the group \(G\) has a mixed identity of length at most \(120\). 
If \(S\) is sporadic, then, by Lemma 3.2, the group \(G\) has a mixed identity of bounded length. The cases of the symplectic and odd-dimensional orthogonal groups are deferred, to Sections 6 and 7, respectively. Theorem 1 gives now the following optimal improvement of Corollary 2.4: **Corollary 3.4**.: _Every finite group \(G\) has a mixed identity of length \(O(|G|^{1/3})\)._ Proof.: This is a consequence of Theorem 1, the reduction to simple groups by Lemma 1.1, and the bounds obtained for simple groups in [5, Theorem 1.1]. ## 4. The projective special linear groups \(\operatorname{PSL}_{n}(q)\) In this section, we prove Theorem 3. We start with the construction of a mixed identity for \(\operatorname{PSL}_{n}(q)\). **Lemma 4.1**.: _There is a mixed identity of length \(O(q)\) for \(\operatorname{PSL}_{n}(q)\)._ Proof.: For \(h\) a rank-one matrix that squares to \(0_{V}\), set \(k\coloneqq 1_{V}+h\in\operatorname{SL}_{n}(q)\), where \(V\cong\mathbb{F}_{q}^{n}\) is the natural module of \(\operatorname{SL}_{n}(q)\). Then \(k\) fixes the hyperplane \(H\coloneqq\ker(h)\) pointwise. Now note that \(v(x,y)=[[[x,y^{p}],y^{-(q-1)}],y^{q+1}]\in\mathbf{F}_{2}=\langle x,y\rangle\) is a law of length \(O(q)\) in the variables \(x,y\) for \(\operatorname{SL}_{2}(q)\). This is, since any element \(g\in\operatorname{SL}_{2}(q)\) satisfies either \(g^{q-1}=\operatorname{id}\) if it is diagonalizable, \(g^{q+1}=\operatorname{id}\) if it has no eigenvectors, or \(g^{p}=\pm\operatorname{id}\) if it is a plus or minus a unipotent. Let \(g\in\operatorname{SL}_{n}(q)\) be arbitrary and consider the elements \(k\) and \(k^{g}\). Now, \(k\) and \(k^{g}\) fix the codimension-two subspace \(U:=H\cap H.g\) and can be written in the form \[k\text{ resp. }k^{g}=\begin{pmatrix}*&*\\ 0&1_{U}\end{pmatrix}.\] Hence if we consider the matrices \(v(k,k^{g})\) and \(v(k^{g},k)\), we get \[v(k,k^{g})\text{ resp. }v(k^{g},k)=\begin{pmatrix}1_{W}&*\\ 0&1_{U}\end{pmatrix}.\] where \(W\) is a complement of \(U\) in \(V\). Hence both lie in an abelian subgroup of unipotent elements and \([v(k,k^{g}),v(k^{g},k)]\) is trivial for all choices of \(g\). Thus \(w=[v(k,k^{x}),v(k^{x},k)]\in\operatorname{SL}_{n}(q)*\langle x\rangle\) is a mixed identity for \(\operatorname{SL}_{n}(q)\) and hence descends to a mixed identity for \(\operatorname{PSL}_{n}(q)\) of length \(O(q)\). This shows that any almost simple group with socle \(\operatorname{PSL}_{n}(q)\) has a shortest mixed identity of length \(O(q)\) by Lemma 3.2. Now we prove the lower bound in Theorem 3. The idea of the proof which we present stems from [8]. For the sake of clarity, let us first focus on the case where \(n=2\) and no field automorphisms are involved, i.e. \(f=e\). Write \(\overline{\bullet}\colon\operatorname{GL}_{2}(q)\to\operatorname{PGL}_{2}(q)\) for the natural map. Let \[w=c_{0}x^{\varepsilon(1)}c_{1}\cdots c_{l-1}x^{\varepsilon(l)}c_{l}\in \operatorname{GL}_{2}(q)*\langle x\rangle\] have only one variable \(x\), as we may assume by Lemma 2.2. For \(c\in\operatorname{GL}_{2}(q)\) write \(\operatorname{fix}(\overline{c})\) for the set of fixed points \(p\in\mathbf{P}(V)\) of \(\overline{c}\). Here \(\mathbf{P}(V)\) denotes the projective line obtained from \(V\cong\mathbb{F}_{q}^{2}\). We have the following: **Lemma 4.2**.: _The following are equivalent:_ 1. \(\bigcup_{j=1}^{l-1}\operatorname{fix}(\overline{c}_{j})\neq\mathbf{P}(V)\)_;_ 2. 
_There exists a linear operator_ \(h\colon V\to V\) _of rank one such that (a)_ \(h^{2}=0_{V}\)_; and (b)_ \(hc_{j}h\neq 0_{V}\) _for all_ \(j=1,\ldots,l-1\)_._ 3. _There exists a linear operator_ \(h\colon V\to V\) _of rank one such that (a)_ \(h^{2}=0_{V}\)_; and (b')_ \(hc_{1}h\cdots hc_{l-1}h\neq 0_{V}\)_._ Proof.: (i)\(\Rightarrow\)(iii): Given \(v\in V\) such that \(\langle v\rangle\) is not a fixed point of any of the \(\overline{c}_{j}\) (\(j=1,\ldots,l-1\)) extend it to a basis \(B\) of \(V\) and define \(h\) by \(b\mapsto v\) for \(b\in B\setminus\{v\}\) and \(v\mapsto 0\). (iii)\(\Rightarrow\)(ii) is obvious. Now, conversely, that is (ii)\(\Rightarrow\)(i), if (a) and (b) is satisfied, then \(\operatorname{im}(h)=\ker(h)\) is a one-dimensional subspace of \(V\) which cannot be fixed by any \(\overline{c}_{j}\) (\(j\in\{1,\ldots,l-1\}\)), otherwise \(hc_{j}h=0_{V}\). Note that, with respect to the basis \(e_{1}=v,e_{2}=b\), the linear map \(h\) has the matrix \[h=\begin{pmatrix}0&0\\ 1&0\end{pmatrix}.\] **Lemma 4.3**.: _Assume the condition in Lemma 4.2 is satisfied. If \(0<l<q\), then \(\overline{w}\) is non-constant on \(\mathrm{PSL}_{2}(q)\)._ Before we prove Lemma 4.3, we need an auxiliary fact: **Lemma 4.4**.: _Let \(V\) be a finite-dimensional \(\mathbb{F}_{q}\)-vector space. Assume \(0<l<q\) and consider the map \(v\colon\mathbb{F}_{q}\to V\) given by \(v(\lambda)=v_{0}+\lambda v_{1}+\cdots+\lambda^{l}v_{l}\) for \(v_{0},\ldots,v_{l}\in V\), \(v_{l}\neq 0\) and one of \(v_{0},\ldots,v_{l-1}\) linearly independent from \(v_{l}\). Then \(\mathrm{im}(v)\) is not contained in a one-dimensional subspace of \(V\)._ Proof.: Let \(v_{0}^{\prime},\ldots,v_{n}^{\prime}\) be a basis of \(V\). Rewrite \(v\) as \(v(\lambda)=p_{0}(\lambda)v_{0}^{\prime}+p_{1}(\lambda)v_{1}^{\prime}+\cdots+p _{n}(\lambda)v_{n}^{\prime}\) for polynomials \(p_{i}\in\mathbb{F}_{q}[X]\) (\(i=0,\ldots,n\)), where \(v_{0}^{\prime}\coloneqq v_{l}\) and \(v_{1}^{\prime}\coloneqq v_{j}\), where \(j\) is chosen such that \(v_{l}\) and \(v_{j}\) are linearly independent (\(j\in\{0,\ldots,l-1\}\)). Then \(p_{0}\) has degree \(l\) and its coefficient of \(\lambda^{l}\) is one and the coefficient of \(\lambda^{j}\) is zero. Similarly, the coefficient in \(p_{1}\) of \(\lambda^{j}\) is one and the coefficient of \(\lambda^{l}\) is zero. Choose \(\mu\in\mathbb{F}_{q}\) such that \(p_{1}(\mu)\neq 0\), which is possible, since \(p_{1}\) is of degree less than \(q\) and non-zero. Then \(p_{0}(\lambda)p_{1}(\mu)=p_{0}(\mu)p_{1}(\lambda)\) cannot hold for all \(\lambda\), since \(p_{0}(\lambda)p_{1}(\mu)-p_{0}(\mu)p_{1}(\lambda)\) is a non-zero polynomial of degree \(l<q\) in \(\lambda\). Hence \(v(\mu)\) cannot be a multiple of \(v(\lambda)\). The proof is complete. Proof of Lemma 4.3.: The word \(\overline{w}\) is constant if and only if \(\overline{w}^{\prime}\) is constant, where \(w^{\prime}=x^{\varepsilon(1)}c_{1}\cdots c_{l-1}x^{\varepsilon(l)}\). Let \(h\) be as in Lemma 4.2, then we can plug in \(k(\lambda)=1_{V}+\lambda h\) into \(w^{\prime}\). Note that \(k(\lambda)^{-1}=(1_{V}+\lambda h)^{-1}=1_{V}-\lambda h=k(-\lambda)\) as \(h^{2}=0\). Thus one obtains \[w^{\prime}(k(\lambda))=w^{\prime}(1_{V}+\lambda h)=p_{0}+\lambda p_{1}+\cdots +\lambda^{l}p_{l},\] where \(p_{0},\ldots,p_{l}\in\mathrm{End}(V)\), \(p_{0}=c_{1}\cdots c_{l-1}\), and \(p_{l}=\pm hc_{1}h\cdots hc_{l-1}h=\beta h\) for some \(\beta\in\mathbb{F}_{q}^{\times}\). 
Thus \(p_{0}\) and \(p_{l}\) are linearly independent as the former is invertible and the latter has rank one. Hence we may apply Lemma 4.4 to deduce that the image of \(\lambda\mapsto w^{\prime}(k(\lambda))=w^{\prime}(1_{V}+\lambda h)\) is not contained in a one-dimensional subspace of \(\mathrm{End}(V)=\mathbf{M}_{2}(q)\), so \(\overline{w}^{\prime}\) is non-constant. The proof is finished. We finish the proof of the lower bound for \(\mathrm{PSL}_{2}(q)\) in Theorem 3 by proving the following lemma. **Lemma 4.5**.: _Let \(w\in\mathrm{GL}_{2}(q)*\langle x\rangle\) be of length \(0<l\leq\frac{q}{2}+1\) such that \(\overline{w}\in\mathrm{PGL}_{2}(q)*\langle x\rangle\) is of positive length. Then \(\overline{w}\) is non-constant on \(\mathrm{PSL}_{2}(q)\)._ Proof.: As before, we apply Lemma 2.2 to get that all intermediate constants \(c_{j}\) (\(j=1,\ldots,l-1\)) are non-central, i.e. not equal to \(\lambda 1_{V}\) for some \(\lambda\in\mathbb{F}_{q}^{\times}\). Then we can pass from \[w=c_{0}x^{\varepsilon(1)}c_{1}\cdots c_{l-1}x^{\varepsilon(l)}c_{l}\quad\text{ to}\quad w^{\prime}=x^{\varepsilon(1)}c_{1}\cdots c_{l-1}x^{\varepsilon(l)}.\] The condition in Lemma 4.2 is satisfied since each \(c_{j}\) (\(j=1,\ldots,l-1\)) is non-central, so \(\overline{c}_{j}\) has at most two fixed points. Thus \[\left|\bigcup_{j=1}^{l-1}\operatorname{fix}(\overline{c}_{j})\right|\leq 2(l-1 )\leq q<|\mathbf{P}(V)|=q+1.\] Also \(0<l<q\) for \(q>2\), so the condition in Lemma 4.3 is satisfied and \(\overline{w}\) is non-constant. For \(q=2\) we can apply Lemma 2.5. The proof is complete. Finally, note that the above proof for \(\operatorname{PSL}_{2}(q)\) can be adapted for \(\operatorname{PSL}_{n}(q)\) for \(n\geq 3\): **Lemma 4.6**.: _Let \(n\geq 3\) and \(w\in\operatorname{GL}_{n}(q)*\langle x\rangle\) be of length \(0<l\leq q-1\) such that \(\overline{w}\in\operatorname{PGL}_{n}(q)*\langle x\rangle\) is of positive length. Then \(\overline{w}\) is non-constant on \(\operatorname{PSL}_{n}(q)\)._ Proof.: Let \(w^{\prime}=x^{\varepsilon(1)}c_{1}\cdots c_{l-1}x^{\varepsilon(l)}\in \operatorname{GL}_{n}(q)*\langle x\rangle\), all \(c_{j}\) non-central (\(j=1,\ldots,l-1\)), \(n\geq 3\), and \(l\leq q-1\). Moreover, \(V\cong\mathbb{F}_{q}^{n}\). Then we need to find \(h\in\operatorname{End}(V)\) such that \(h^{2}=0\) and there is \(H\leq V\) a hyperplane such that \(\ker(h)=H\) and \(\operatorname{im}(h)=\langle v\rangle\leq H\) and \(hc_{j}h\neq 0\) for all \(j=1,\ldots,l-1\). This means \(v.c_{j}\notin H\). Then \(\langle v\rangle\notin\operatorname{fix}(\overline{c}_{j})\) for all \(j=1,\ldots,l-1\). As each \(c_{j}\) is non-central, \(\overline{c}_{j}\) has at most \(f=\frac{q^{n-1}-1}{q-1}+1\) fixed points in \(\mathbf{P}(V)\) (coming from the eigenspaces of \(c_{j}\)). These points are excluded for the choice of \(\langle v\rangle\), but \[f(l-1)<f(q-1)=q^{n-1}-1+q-1<|\mathbf{P}(V)|=\frac{q^{n}-1}{q-1}=q^{n-1}+\cdots +q+1\] (as \(n\geq 3\)), so we can choose \(\langle v\rangle\) to be a non-fixed point of all \(\overline{c}_{j}\) (\(j=1,\ldots,l-1\)). Now we have to choose \(H\leq V\) a hyperplane such that \(v.c_{j}\notin H\) for all \(j=1,\ldots,l-1\) and \(v\in H\). This condition excludes \(g=\frac{q^{n-2}-1}{q-1}\) hyperplanes containing \(v\) and \(v.c_{j}\). But \[g(l-1)<g(q-1)=q^{n-2}-1<\frac{q^{n-1}-1}{q-1}=q^{n-2}+q^{n-3}+\cdots q+1\] and there are that many hyperplanes containing \(v\). So we can choose a suitable hyperplane \(H\). Now we have defined \(h\) up to a scalar factor. 
We can proceed as in the proof of Lemma 4.3 to see that \(\lambda\mapsto\overline{w}^{\prime}(\overline{k(\lambda)})=\overline{w}^{ \prime}(\overline{1_{V}+\lambda\hbar})\) is non-constant. This shows that, when \(\overline{w}\) is a mixed identity for \(\mathrm{PSL}_{n}(q)\) with constants in \(\mathrm{PGL}_{n}(q)\) and \(n\geq 3\), then we cannot have \(|w|<q\). This finishes the proof. We will now start to take field automorphisms into account and study the groups \(\mathrm{PSL}_{n}(q)\rtimes\langle\alpha\mapsto{\alpha^{p^{f}}}\rangle\). Again, for simplicity, we start with the case \(n=2\). **Lemma 4.7**.: _The group \(\mathrm{PSL}_{2}(q)\rtimes\langle\alpha\mapsto{\alpha^{p^{f}}}\rangle\) has a mixed identity of length \(O(\frac{e}{f}p^{f})\)._ Proof.: Let \(q=p^{e}\) and \(r=p^{f}\) be powers of the prime \(p\) for \(f\mid e\). Write \(F\) for the \(r\)-Frobenius map on \(\mathrm{SL}_{2}(q)\); \(\alpha\mapsto\alpha^{r}\) entry-wise. For \(g\in\mathrm{SL}_{2}(q)\) we have that \(h\coloneqq gg^{F}\cdots g^{F^{e/f-1}}\) is mapped to \(h^{F}=g^{F}\cdots g^{F^{e/f}}=h^{g}\) under \(F\). But the eigenvalues \(\lambda_{1},\lambda_{2}\) of \(h\) are mapped to \(\lambda_{1}^{F},\lambda_{2}^{F}\) (where \(F\) is extended in the obvious way to \(\overline{\mathbb{F}}_{q}\)), so that we must have \(\{\lambda_{1},\lambda_{2}\}=\{\lambda_{1}^{F},\lambda_{2}^{F}\}\) for any choice of a continuation of \(F\). Thus either \(\lambda_{i}^{F}=\lambda_{i}^{r}=\lambda_{i}\) for \(i=1,2\), i.e. \(\lambda_{i}\in\mathbb{F}_{r}^{\times}\), or \(\lambda_{1}^{F}=\lambda_{1}^{r}=\lambda_{2}\) and \(\lambda_{1}\lambda_{2}=\det(h)=1\), so that \(\lambda_{1}\lambda_{1}^{F}=\lambda_{1}^{r+1}=1\). Hence, one of \(h^{p}\), \(h^{r-1}\), \(h^{r+1}\) is central in \(\mathrm{SL}_{2}(q)\), so that \(w(xx^{F}\cdots x^{F^{e/f-1}})\), where \(w(x)=[[[c,x^{p}],x^{r-1}],x^{r+1}]\) (\(c\) non-central in \(\mathrm{SL}_{2}(q)\)), is a mixed identity for \(\mathrm{SL}_{2}(q)\) (with constants in \(\mathrm{SL}_{2}(q)\rtimes\langle\alpha\mapsto\alpha^{r}\rangle\)) of length \(e|w|/\mathrm{f}\) and \(|w|\leq 2(2(2p+r-1)+r+1)\leq 14r\). Thus we have a mixed identity of length at most \(14\frac{e}{f}p^{f}\) for \(\mathrm{PSL}_{2}(q)\) with constants in \(\mathrm{PSL}_{2}(q)\rtimes\langle\alpha\mapsto\alpha^{r}\rangle\) as desired. The lower bound for \(\mathrm{PSL}_{2}(q)\rtimes\langle\alpha\mapsto{\alpha^{p^{f}}}\rangle\) needs some additional arguments. At first we need an auxiliary lemma about the fixed points of semi-linear maps: **Lemma 4.8**.: _Let \(p\) be a prime and \(q=p^{e}\). Let \(c\) be an invertible \((x\mapsto x^{p^{m}})\)-semi-linear map (\(1\leq m\leq e\)) on \(V\cong\mathbb{F}_{q}^{n}\). Then \(\overline{c}\colon\mathbf{P}(V)\to\mathbf{P}(V)\) has at most \(\frac{p^{m^{\prime}}n-1}{p^{m^{\prime}}-1}\) fixed points, where \(m^{\prime}\coloneqq\gcd(e,m)\)._ Proof.: Any proper power of \(\overline{c}\) has at least the fixed points of \(\overline{c}\) on \(\mathbf{P}(V)\) as its fixed points. Hence we can pass to a proper power of \(c\) which is \((x\mapsto x^{p^{m^{\prime}}})\)-semi-linear (recall that \(m^{\prime}=\gcd(e,m)\)). Take this as our new \(c\). Let \(U\) be the subspace spanned by all eigenvectors of \(c\). Let \(e_{1},\ldots,e_{l}\) be a basis of \(U\) consisting of eigenvectors of \(c\) (by assumption \(l\leq n\)). 
Then \[u.c=\left(u_{1}^{p^{m^{\prime}}}\quad\cdots\quad u_{l}^{p^{m^{\prime}}}\right) \mathrm{diag}(\lambda_{1},\ldots,\lambda_{l})\] when it is written in the basis \(e_{1},\ldots,e_{l}\) for suitable \(\lambda_{i}\) (\(i=1,\ldots,l\)). Assume there is an eigenvector \(u\in U\) that has exactly \(j\) coordinates unequal to zero. W.l.o.g. assume that these are the first \(j\) coordinates. Then \[u.c=\left(\lambda_{1}u_{1}^{p^{m^{\prime}}}\quad\cdots\quad\lambda_{j}u_{j}^{ p^{m^{\prime}}}\quad 0\quad\cdots\quad 0\right)=\left(\lambda u_{1}\quad\cdots \quad\lambda u_{j}\quad 0\quad\cdots\quad 0\right)\] So \(u_{i}^{p^{m^{\prime}}-1}=\lambda/\lambda_{i}\) (\(i=1\ldots,j\)). Then any other eigenvector with exactly these coordinates unequal to zero is of the form \[\left(\alpha_{1}u_{1}\quad\cdots\quad\alpha_{j}u_{j}\quad 0\quad\cdots\quad 0\right)\] for \(\alpha_{i}\in\mathbb{F}_{q}^{\times}\) such that \(\alpha_{i}^{p^{m^{\prime}}-1}=\mu\) for some \(\mu\in\mathbb{F}_{q}^{\times}\). The new eigenvector we get is a scalar multiple of \[\left(u_{1}\quad\tfrac{\alpha_{2}}{\alpha_{1}}u_{2}\quad\cdots\quad\tfrac{ \alpha_{j}}{\alpha_{1}}u_{j}\quad 0\quad\cdots\quad 0\right)\] and we can choose the ratios \(\alpha_{i}/\alpha_{1}\) arbitrarily such that \((\alpha_{i}/\alpha_{1})^{p^{m^{\prime}}-1}=\mu/\mu=1\), or written differently, \(\alpha_{i}/\alpha_{1}\in\mathbb{F}_{p^{m^{\prime}}}^{\times}\) (\(i=2,\ldots,j\)). Hence there are at most \((p^{m^{\prime}}-1)^{j-1}\) fixed points of \(\overline{c}\) with the first \(j\) coordinates non-zero. Thus counting all fixed points by choosing any \(1\leq j\leq l\) coordinates to be non-zero, we get at most \[\sum_{j=1}^{l}\binom{l}{j}\left(p^{m^{\prime}}-1\right)^{j-1} =\frac{1}{p^{m^{\prime}}-1}\left(\sum_{j=0}^{l}\binom{l}{j}\left( p^{m^{\prime}}-1\right)^{j}-1\right)\] \[=\frac{p^{m^{\prime}l}-1}{p^{m^{\prime}}-1}\leq\frac{p^{m^{ \prime}n}-1}{p^{m^{\prime}}-1}\] as desired. The proof is complete. Now we prove the lower bound in Theorem 3 for \(\mathrm{PSL}_{2}(q)\rtimes\langle\alpha\mapsto\alpha^{p^{\prime}}\rangle\): **Lemma 4.9**.: _Let \(p\) be a prime and \(q=p^{e}\). Let \(w\in(\mathrm{GL}_{2}(q)\rtimes\langle\alpha\mapsto\alpha^{p^{f}}\rangle)* \langle x\rangle\) be of length \(0<l\leq\frac{e}{2f}(p^{f}-1)+1\) (with \(f\mid e\) and \(1\leq f\leq e/2\)) such that \(\overline{w}\in(\mathrm{PGL}_{2}(q)\rtimes\langle\alpha\mapsto\alpha^{p^{f}} \rangle)*\langle x\rangle\) has positive length. Then \(\overline{w}\) is not a mixed identity for \(\mathrm{PSL}_{2}(q)\) with constants in \(\mathrm{PGL}_{2}(q)\rtimes\langle\alpha\mapsto\alpha^{p^{f}}\rangle\)._ Proof.: Set \(F\) to be the Frobenius automorphism \(\mathbb{F}_{q}\to\mathbb{F}_{q}\); \(\alpha\mapsto\alpha^{p^{f}}\) and let \(F\) act coordinate-wise on \(V\cong\mathbb{F}_{q}^{2}\) and on the matrices \(\mathbf{M}_{2}(q)\cong\mathrm{End}(V)\). Assume \[w =c_{0}x^{\varepsilon(1)}c_{1}\cdots c_{l-1}x^{\varepsilon(l)}c_{l}\] \[=b_{0}.F^{m(0)}x^{\varepsilon(1)}b_{1}.F^{m(1)}\cdots b_{l-1}.F^{ m(l-1)}x^{\varepsilon(l)}b_{l}.F^{m(l)}\] is such that \(\overline{w}\) is a mixed identity with \(b_{j}\in\mathrm{GL}_{2}(q)\), integers \(0\leq m(j)\leq e/f-1\) (\(j=0,\ldots,l\)), and maps \(c_{j}\in\mathrm{GL}_{2}(q)\rtimes\langle\alpha\mapsto\alpha^{p^{f}}\rangle \setminus\mathbb{F}_{q}^{\times}1_{V}\) (\(j=1,\ldots,l-1\)), as we may assume by Lemma 2.2. 
Then we can shift the \(F\)'s so that \(w=a_{0}(x^{\varepsilon(1)})^{F^{n(1)}}a_{1}\cdots a_{l-1}(x^{\varepsilon(l)})^ {F^{n(l)}}a_{l}\) for linear maps \(a_{j}\in\mathrm{GL}_{2}(q)\) (\(j=0,\ldots,l\)) and integers \(0\leq n(j)\leq e/f-1\), since \(\overline{w}\) is a mixed identity. Again, we may concentrate on the word \[w^{\prime}=(x^{\varepsilon(1)})^{F^{n(1)}}a_{1}\cdots a_{l-1}(x^{\varepsilon(l )})^{F^{n(l)}}\] instead of \(w\), which shall be such that \(\overline{w}^{\prime}\in(\operatorname{PGL}_{2}(q)\rtimes\langle\alpha\mapsto \alpha^{p^{\prime}}\rangle)*\langle x\rangle\) is not a constant. We want to pursue the same strategy as in the proof of the lower bound for \(\operatorname{PSL}_{2}(q)\). When plugging in \(k(\lambda)=1_{V}+\lambda h\) for \(x\) into \(w^{\prime}\), we must evaluate \(k(\lambda)^{F^{n(j)}}=(1_{V}+\lambda h)^{F^{n(j)}}=1_{V}+\lambda^{p^{fn(j)}}h ^{F^{n(j)}}\). We want to have that the leading coefficient \(h^{F^{n(1)}}a_{1}h^{F^{n(2)}}\cdots h^{F^{n(l-1)}}a_{l-1}h^{F^{n(l)}}\) of the polynomial in \(\lambda\), which is obtained by evaluating \(w^{\prime}(k(\lambda))\), is non-zero. This means that \(h^{F^{n(j)}}a_{j}h^{F^{n(j+1)}}\) is non-zero (\(j=1,\ldots,l-1\)). Recall from the proof of the lower bound for \(\operatorname{PSL}_{2}(q)\) that \(h\) is a linear map such that \(h\colon b\mapsto v;v\mapsto 0\) for suitable \(0\neq v\in V\) and \(v\neq b\in B\), for \(B\) a basis of \(V\) containing \(v\). The condition that \(h^{F^{n(j)}}a_{j}h^{F^{n(j+1)}}\neq 0_{V}\) then means that \(v.F^{n(j)}.a_{j}\neq\lambda v.F^{n(j+1)}\), so \[v.F^{n(j)}.a_{j}.F^{-n(j+1)}=v.F^{n(j)-n(j+1)}.a_{j}^{F^{-n(j+1)}}\neq\mu v,\] where \(\mu=\lambda.F^{-n(j+1)}\). If \(n(j)=n(j+1)\), \(F^{n(j)-n(j+1)}.a_{j}^{F^{-n(j+1)}}=a_{j}^{F^{-n(j+1)}}\) is a non-trivial linear map and so has at most two fixed points. Else \(F^{n(j)-n(j+1)}.a_{j}^{F^{-n(j+1)}}\) is a (\(x\mapsto x^{p^{fm(j)}}\))-semi-linear map and hence has at most \[p^{\gcd(e,fm(j))}+1\leq p^{\gcd(e,f(e/f-1))}+1\leq p^{e/2}+1\] fixed points by Lemma 4.8. But \(2<p^{e/2}+1\) (as \(e\geq 1\), \(p\geq 2\)) and \[(p^{e/2}+1)(l-1)\leq(p^{e/2}+1)\frac{e}{2f}(p^{f}-1)\leq q-1<q+1=|\mathbf{P}( V)|\] as \(1\leq f\leq e/2\), where we use the assumption \(l\leq\frac{e}{2f}(p^{f}-1)+1\) in the first inequality. The second inequality holds since \(\frac{p^{f}-1}{p^{e/2}-1}\leq\frac{2f}{e}\) as both sides evaluate to \(0\) for \(f=0\) and to \(1\) for \(f=e/2\) and the left hand side is convex as a function of \(f\). Hence there is a solution \(v\) to the inequalities \(v.F^{n(j)}.a_{j}\neq\lambda v.F^{n(j+1)}\) (\(j=1,\ldots,l-1\)). By evaluating \(w^{\prime}(k(\lambda))\) we obtain a polynomial of degree \(\sum_{j=1}^{l}p^{fn(j)}\) in \(\lambda\). Now, if we consider all the words \(w^{\prime},w^{\prime F},\ldots,w^{\prime F^{e/f-1}}\) we note that the sum of the degrees of the corresponding polynomials is \[l\sum_{i=0}^{e/f-1}p^{fi}.\] Hence there is a word \(w^{\prime F^{i}}\) for \(0\leq i\leq e/f-1\) that gives a polynomial of degree at most \[l\frac{f}{e}\sum_{i=0}^{e/f-1}p^{fi}=l\frac{f}{e}\frac{p^{fe/f}-1}{p^{f}-1}=l \frac{f}{e}\frac{q-1}{p^{f}-1}.\] However, this is less than or equal to \(q-1\) when \(l\leq\frac{e}{f}(p^{f}-1)\). But by assumption \(l\leq\frac{e}{2f}(p^{f}-1)+1\leq\frac{e}{f}(p^{f}-1)\) since \(f\leq e/2\). Hence the image of the polynomial \(\lambda\mapsto w^{\prime}(k(\lambda))\) is not contained in a one-dimensional subspace by Lemma 4.4. This completes the proof. 
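To make the substitution \(k(\lambda)=1_{V}+\lambda h\) used in the preceding proofs concrete, here is a small numerical illustration of ours for the untwisted situation of Lemma 4.3, with \(q=7\): for a rank-one \(h\) chosen as in Lemma 4.2 and a single non-central constant \(c_{1}\), the values \(w^{\prime}(k(\lambda))\) of the word \(w^{\prime}=xc_{1}x\) form a quadratic matrix polynomial in \(\lambda\) whose values are not all proportional, so the induced map into \(\mathrm{PGL}_{2}(7)\) is non-constant.

```python
p = 7                                     # work with 2x2 matrices over F_7

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) % p for j in range(2)]
            for i in range(2)]

def proportional(A, B):
    """True if A = lam * B in M_2(F_p) for some nonzero scalar lam."""
    return any(all(A[i][j] == (lam * B[i][j]) % p for i in range(2) for j in range(2))
               for lam in range(1, p))

I2 = [[1, 0], [0, 1]]
c1 = [[1, 1], [0, 1]]                     # a non-central intermediate constant
h  = [[0, 0], [1, 0]]                     # rank one, h^2 = 0, and h*c1*h != 0
assert matmul(h, h) == [[0, 0], [0, 0]]
assert matmul(matmul(h, c1), h) != [[0, 0], [0, 0]]

def k(lam):                               # k(lambda) = 1 + lambda * h, lies in SL_2
    return [[(I2[i][j] + lam * h[i][j]) % p for j in range(2)] for i in range(2)]

# w'(x) = x c1 x; its values on the k(lambda) form a quadratic matrix polynomial.
values = [matmul(matmul(k(lam), c1), k(lam)) for lam in range(p)]
assert not all(proportional(v, values[0]) for v in values)
print("w'(k(lambda)) is non-constant as a map into PGL_2(7)")
```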
Now we turn to the proof for \(\operatorname{PSL}_{n}(q)\): **Lemma 4.10**.: _Let \(p\) be a prime and \(q=p^{e}\). Let \(w\in(\operatorname{GL}_{n}(q)\rtimes\langle\alpha\mapsto\alpha^{p^{f}}\rangle) *\langle x\rangle\) be of length \(0<l\leq\frac{e}{2f}(p^{f}-1)+1\) when \(n=2\), and \(0<l\leq\frac{e}{f}(p^{f}-1)\) for \(n\geq 3\) (where \(f\mid e\) and \(1\leq f\leq e/2\)) such that \(\overline{w}\in(\operatorname{PGL}_{n}(q)\rtimes\langle\alpha\mapsto\alpha^{p^{ f}}\rangle)*\langle x\rangle\) is of positive length. Then \(\overline{w}\) is not a mixed identity for \(\operatorname{PSL}_{n}(q)\) with constants in \(\operatorname{PGL}_{n}(q)\rtimes\langle\alpha\mapsto\alpha^{p^{f}}\rangle\)._ Proof.: As we did above, we want to plug in \(k(\lambda)=1_{V}+\lambda h\) into \(w^{\prime}=x^{\varepsilon(1)}c_{1}\cdots c_{l-1}x^{\varepsilon(l)}\in( \operatorname{GL}_{n}(q)\rtimes\langle\alpha\mapsto\alpha^{p^{f}}\rangle)* \langle x\rangle\) for a suitable rank-one operator \(h\in\operatorname{End}(V)\cong\mathbf{M}_{n}(q)\) and a scalar \(\lambda\in\mathbb{F}_{q}\) to get a non-constant polynomial in \(\lambda\) of degree less than \(q\) forcing \(\overline{w}^{\prime}\) to be non-constant. For this purpose we want to find a vector \(v\) and a hyperplane \(H\) such that \(h\colon b\mapsto v\) for some \(b\notin H\), \(v\in H=\ker(h)\), and \(hc_{j}h\neq 0_{V}\) (\(j=1,\ldots,l-1\)). Then \(h^{2}=0\). By Lemma 4.8 applied to the \((x\mapsto x^{p^{f(j)}})\)-semi-linear map \(c=c_{j}\) (\(j=1\ldots,l\)) we obtain that \(\overline{c}_{j}\) has at most \[\frac{p^{n\gcd(e,fm(j))}-1}{p^{\gcd(e,fm(j))}-1}\leq\frac{p^{\frac{en}{2}}-1}{ p^{e/2}-1}=\frac{q^{n/2}-1}{q^{1/2}-1}\] fixed points, unless \(m(j)=0\) and \(c_{j}\) is linear and so \(\overline{c}_{j}\) has at most \[\frac{q^{n-1}-1}{q-1}+1\] fixed points in \(\mathbf{P}(V)=\mathbf{P}(\mathbb{F}_{q}^{n})\). Hence \(\overline{c}=\overline{c}_{j}\) can have at most \[\max\left(\frac{q^{n/2}-1}{q^{1/2}-1},\frac{q^{n-1}-1}{q-1}+1\right)\] fixed points. But multiplying both terms by \(q-1\) and subtracting the left from the right, we obtain \[q^{n-1}-1+q-1-(q^{n/2}-1)(q^{1/2}+1)=q^{n-1}-q^{\frac{n+1}{2}}-q^{n/2}+q+q^{1/ 2}-1.\] For \(n=2\), this is \[q-q^{3/2}-q+q+q^{1/2}-1=-q^{3/2}+q+q^{1/2}-1=-(q^{1/2}-1)^{2}(q^{1/2}+1)<0\] since \(q\geq 2\). If \(n=3\), we obtain \[q^{2}-q^{2}-q^{3/2}+q+q^{1/2}-1=-(q^{1/2}-1)^{2}(q^{1/2}+1)<0\] as well (\(q\geq 2\)). In these both cases, the above maximum is \(q^{1/2}+1\) resp. \(\frac{q^{3/2}-1}{q^{1/2}-1}=q+q^{1/2}+1\). So, noting that \(\frac{e}{2f}(p^{f}-1)\leq p^{e/2}-1=q^{1/2}-1\) as in the proof for \(\operatorname{PSL}_{2}(q)\) above, since by assumption \(l-1\leq\frac{e}{2f}(p^{f}-1)\), for \(n=2\) we get \[(q^{1/2}+1)(l-1) \leq(q^{1/2}+1)\frac{e}{2f}(p^{f}-1)\] \[\leq(q^{1/2}+1)(q^{1/2}-1)=q-1\] \[<q+1=\left|\mathbf{P}(V)\right|,\] so we find a suitable \(v\) in this case. For \(n=3\), we get by the assumption \(l\leq\frac{e}{f}(p^{f}-1)\) that \[(q+q^{1/2}+1)(l-1) <(q+q^{1/2}+1)\frac{e}{f}(p^{f}-1)\] \[\leq(q+q^{1/2}+1)2(q^{1/2}-1)\] \[=2(q^{3/2}+q+q^{1/2}-q-q^{1/2}-1)=2(q^{3/2}-1)\] \[<q^{2}+q+1=\left|\mathbf{P}(V)\right|,\] so there is a good choice for \(v\) as well in this case. For \(n\geq 4\), we have \[q^{n-1}-q^{\frac{n+1}{2}}-q^{n/2}+q+q^{1/2}-1>0\] and hence \(\frac{q^{n-1}-1}{q-1}+1\) is the above maximum. 
Note now that \(q-1>\frac{e}{f}(p^{f}-1)\) (as \(f\leq e/2\)) and by assumption \(l\leq\frac{e}{f}(p^{f}-1)\) we have \[\left(\frac{q^{n-1}-1}{q-1}+1\right)(l-1) <\left(\frac{q^{n-1}-1}{q-1}+1\right)\frac{e}{f}(p^{f}-1)\] \[<\left(\frac{q^{n-1}-1}{q-1}+1\right)(q-1)=q^{n-1}-1+q-1\] \[<q^{n-1}+q^{n-2}+\cdots+1=\frac{q^{n}-1}{q-1}\] Hence in all cases we find a suitable \(v\) such that \(\langle v\rangle\) is not a fixed point of any of the \(\overline{c}_{j}\) (\(j=1,\ldots,l-1\)). The vectors \(v_{j}\coloneqq v.c_{j}\) (\(j=1,\ldots,l-1\)) do not lie in \(\langle v\rangle\). For \(H\), any hyperplane is allowed such that \(v\in H\) and \(v_{j}\notin H\) (\(j=1,\ldots,l-1\)). Counting the hyperplanes that contain \(v\) and \(v_{j}\) for some \(1\leq j\leq l-1\), we get \[\frac{q^{n-2}-1}{q-1}(l-1) <\frac{q^{n-2}-1}{q-1}\frac{e}{f}(p^{f}-1)\] \[<\frac{q^{n-2}-1}{q-1}(q-1)=q^{n-2}-1\] \[<q^{n-2}+q^{n-3}+\cdots+q+1=\frac{q^{n-1}-1}{q-1},\] hence we can choose \(H\) containing \(v\) but none of the \(v_{j}\). Now we run the argument as above for \(\operatorname{PSL}_{2}(q)\). For this we need that \(l\leq\frac{e}{f}(p^{f}-1)\), which is guaranteed by the assumptions in the cases \(n=2\) and \(n\geq 3\). This ends the proof. It is still beyond our current understanding how to incorporate the inverse-transpose automorphism in such an argument. Indeed, the above proof cannot work if we allow the inverse-transpose automorphism. Namely, then \(((1_{V}+\lambda h)^{-1})^{\top}=(1_{V}-\lambda h)^{\top}=1_{V}-\lambda h^{\top}\). Let \(h=a^{\top}b\) for vectors \(a,b\in\mathbb{F}_{q}^{n}\) with \(ba^{\top}=0\), i.e. \(h^{2}=0\). Now suppose that \(c_{j}\in\operatorname{GL}_{n}(q)\) is a critical constant with \(vc_{j}v^{\top}=0\) for all \(v\in\mathbb{F}_{q}^{n}\), i.e. \(c_{j}\) is alternating. In that case \(h^{\top}c_{j}h=b^{\top}ac_{j}a^{\top}b=0\) occurs in the product \(c_{0}h^{*}c_{1}\cdots c_{l-1}h^{*}c_{l}\), and the leading coefficient of \(\lambda^{\sum_{j=1}^{l}p^{fn(j)}}\) in \(w^{\prime}(1_{V}+\lambda h)\) would be zero. Hence the above argument breaks down. ## 5. An alternative approach to \(\operatorname{PSL}_{2}(q)\) **Lemma 5.1**.: _The shortest mixed identity \(\overline{w}\) for \(\operatorname{PSL}_{2}(q)\) is of length at least \(q/8\)._ Let \(A*_{C}B\) be the amalgamated free product of the groups \(A\) and \(B\) over the common subgroup \(C\). Recall that a _reduced expression_ in \(A*_{C}B\) is a tuple \((c;a_{0},b_{0},\ldots,a_{l},b_{l})\), where \(l\geq 0\); \(c\in C\); \(a_{j}\in A\setminus C\) for \(j\geq 1\) and \(b_{j}\in B\setminus C\) for \(j\leq l-1\), while \(a_{0}\in(A\setminus C)\cup\{1_{C}\}\) and \(b_{l}\in(B\setminus C)\cup\{1_{C}\}\). The key fact we need is that if \((c;a_{0},b_{0},\ldots,a_{l},b_{l})\) is a reduced expression, then \(ca_{0}b_{0}\cdots a_{l}b_{l}\) is a non-trivial element of \(A*_{C}B\), unless \(l=0\) and \(c=a_{0}=b_{0}=1_{C}\). In particular, these observations apply to the free product \(A*B\). In this case we shall also require the converse observation, namely that if \(G\) is a group, generated by the two subgroups \(A\) and \(B\), such that for all \(a_{0},\ldots,a_{l}\in A\) and \(b_{0},\ldots,b_{l}\in B\), with all except possibly \(a_{0}\) and \(b_{l}\) non-trivial, the product \(a_{0}b_{0}\cdots a_{l}b_{l}\) is non-trivial unless \(l=0\) and \(a_{0}=b_{0}=1_{G}\), then the natural map \(A*B\to G\) is an isomorphism. 
**Theorem 7** ([23, Chapter II, Theorem 6]).: _Let_ \[B(\mathbb{F}_{q}),B(\mathbb{F}_{q}[t])\leq\operatorname{GL}_{2}(\mathbb{F}_{ q}[t])\] _be, respectively, the subgroups of invertible upper triangular matrices over \(\mathbb{F}_{q}\) and \(\mathbb{F}_{q}[t]\). Then \(\operatorname{GL}_{2}(\mathbb{F}_{q}[t])\) is the amalgamated free product of \(\operatorname{GL}_{2}(\mathbb{F}_{q})\) and \(B(\mathbb{F}_{q}[t])\) over \(B(\mathbb{F}_{q})=\operatorname{GL}_{2}(\mathbb{F}_{q})\cap B(\mathbb{F}_{q} [t])\)._ **Lemma 5.2**.: _Let_ \[g =-\begin{pmatrix}1-t^{3}&t+t^{2}-t^{4}\\ -t&1-t^{2}\end{pmatrix}\] \[=\begin{pmatrix}1&t^{2}\\ 0&1\end{pmatrix}\begin{pmatrix}0&1\\ -1&0\end{pmatrix}\begin{pmatrix}1&t\\ 0&1\end{pmatrix}\begin{pmatrix}0&1\\ -1&0\end{pmatrix}\begin{pmatrix}1&t\\ 0&1\end{pmatrix}\in\operatorname{SL}_{2}(\mathbb{F}_{q}[t])\] _and let \(\overline{g}\) be the image of \(g\) in \(\operatorname{PSL}_{2}(\mathbb{F}_{q}[t])\). Then \(\langle\overline{g}\rangle\cong\mathbb{Z}\), and \(\operatorname{PSL}_{2}(\mathbb{F}_{q}),\langle\overline{g}\rangle\leq \operatorname{PSL}_{2}(\mathbb{F}_{q}[t])\) generate their free product._ Proof.: For the first claim it suffices to check that \(g^{n}\) is non-central in \(\operatorname{SL}_{2}(\mathbb{F}_{q}[t])\). Let: \[u(f)=\begin{pmatrix}1&f\\ 0&1\end{pmatrix}\text{ and }r=\begin{pmatrix}0&1\\ -1&0\end{pmatrix}\] for \(f\in\mathbb{F}_{q}[t]\), so that \(g=u(t^{2})ru(t)ru(t)\). Then for \(n\geq 1\), \[g^{n}=u(t^{2})\bigl{(}ru(t)ru(t+t^{2})\bigr{)}^{n-1}ru(t)ru(t) \tag{1}\] is a reduced element of the amalgam from Theorem 7, so is non-central in \(\operatorname{SL}_{2}(\mathbb{F}_{q}[t])\). Now let \(n(j)\in\mathbb{Z}\), \(h_{j}\in\operatorname{PSL}_{2}(\mathbb{F}_{q})\) (\(0\leq j\leq l\)) be such that \(\overline{w}=\overline{g}^{n(0)}h_{0}\cdots\overline{g}^{n(l)}h_{l}\in \langle\overline{g}\rangle*\operatorname{PSL}_{2}(\mathbb{F}_{q})\) is a non-trivial reduced word, and let \(\tilde{h}_{j}\in\operatorname{SL}_{2}(\mathbb{F}_{q})\) be a lift of \(h_{j}\) (so that \(n(j)\neq 0\) for \(j\geq 1\), and \(\tilde{h}_{j}\neq\pm 1_{2}\) for \(j\leq l-1\)). We claim that \(\overline{w}\) is also a non-trivial element of \(\operatorname{PSL}_{2}(\mathbb{F}_{q}[t])\). We have that \(\overline{w}\) lifts to: \[w=\pm g^{n(0)}\tilde{h}_{0}\cdots g^{n(l)}\tilde{h}_{l}\] in \(\operatorname{SL}_{2}(\mathbb{F}_{q}[t])\). By Equation (1), an elementary contraction to this expression for \(w\), as an element of the amalgam from Theorem 7, corresponds to an index \(j\) such that \(\tilde{h}_{j}\in B(\mathbb{F}_{q})\cap\operatorname{SL}_{2}(\mathbb{F}_{q})\). Therefore let \(a_{j}\in\mathbb{F}_{q}^{\times}\), \(b_{j}\in\mathbb{F}_{q}\) be such that: \[\tilde{h}_{j}=\begin{pmatrix}a_{j}&b_{j}\\ 0&a_{j}^{-1}\end{pmatrix}.\] We claim that for such \(j\), and for \(x(t)\in\{t,-t^{2}\}\), \(y(t)\in\{-t,t^{2}\}\) we have: \[\pm ru(x)h_{j}u(y)r=k_{1}u(f)k_{2}\text{ or }k_{1}, \tag{2}\] for some \(k_{i}\in\operatorname{SL}_{2}(\mathbb{F}_{q})\setminus B(\mathbb{F}_{q})\) and \(f\in\mathbb{F}_{q}[t]\) non-constant. Applying all transformations (2) to \(w\) at the indices \(j\) for which \(\tilde{h}_{j}\in B(\mathbb{F}_{q})\cap\operatorname{SL}_{2}(\mathbb{F}_{q})\), we obtain a non-trivial reduced form for \(w\) in the amalgamated free product. Thus, as an element of \(\operatorname{GL}_{2}(\mathbb{F}_{q}[t])\), \(w\neq\pm 1_{2}\), and \(\overline{w}\in\operatorname{PSL}_{2}(\mathbb{F}_{q}[t])\) is non-trivial, as desired. 
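(As a quick aside before verifying the claim: the displayed factorization of \(g\) in Lemma 5.2, and the fact that \(g\) has determinant one, can be checked mechanically. A minimal SymPy sketch, carried out over \(\mathbb{Z}[t]\) and hence valid over \(\mathbb{F}_{q}[t]\), is the following; the variable names are ours and the check is illustrative only.)

```python
# Illustrative check (not needed for the proof) of the factorization of g in Lemma 5.2.
import sympy as sp

t = sp.symbols('t')
u = lambda f: sp.Matrix([[1, f], [0, 1]])      # the elementary matrix u(f)
r = sp.Matrix([[0, 1], [-1, 0]])               # the matrix r

g = (u(t**2) * r * u(t) * r * u(t)).applyfunc(sp.expand)
claimed = (-sp.Matrix([[1 - t**3, t + t**2 - t**4],
                       [-t,       1 - t**2]])).applyfunc(sp.expand)
assert g == claimed                            # the stated factorization holds
assert sp.expand(g.det()) == 1                 # g lies in SL_2(F_q[t])
```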
We now prove the claim: _Case 1:_\(a_{j}\neq\pm 1\): \[ru(x)h_{j}u(y)r=\begin{pmatrix}0&a_{j}^{-1}\\ -a_{j}&0\end{pmatrix}u(f)r\] where \(f(t)=a_{j}^{-1}b_{j}+a_{j}^{-2}x(t)+y(t)\) is non-constant, since \(a_{j}^{2}\neq 1\), and either \(x(t)=-y(t)\) or \(x(t)\) and \(y(t)\) are of different degrees. _Case 2: \(a_{j}=\pm 1\), \(b_{j}\neq 0\)_: \[ru(x)h_{j}u(y)r=\pm ru(x+y\pm b_{j})r\] which is of the required form, as either \(x(t)+y(t)\) is non-constant, or \(x(t)=-y(t)\), in which case we have: \[ru(x)h_{j}u(y)r=\begin{pmatrix}\mp 1&0\\ b_{j}&\mp 1\end{pmatrix}\] which is also of the desired form. This verifies the two cases. Proof of Lemma 5.1.: Let \(\overline{w}\in\operatorname{PSL}_{2}(\mathbb{F}_{q})*\langle x\rangle\) be a mixed identity for \(\operatorname{PSL}_{2}(\mathbb{F}_{q})\). By Lemma 5.2, there is a monomorphism \(\iota\colon\operatorname{PSL}_{2}(\mathbb{F}_{q})*\langle x\rangle\to \operatorname{PSL}_{2}(\mathbb{F}_{q}[t])\) restricting to the identity on \(\operatorname{PSL}_{2}(\mathbb{F}_{q})\), with \(\deg(\iota(x))\leq 4\). For \(\alpha\in\mathbb{F}_{q}\), let \(\pi_{\alpha}\colon\operatorname{PSL}_{2}(\mathbb{F}_{q}[t])\to\operatorname{PSL }_{2}(\mathbb{F}_{q})\) be the epimorphism induced by evaluation of \(t\) at \(\alpha\) (equivalently, the congruence homomorphism modulo \(t-\alpha\)). Then \((\pi_{\alpha}\circ\iota)(\overline{w})=\overline{1}_{2}\) for all \(\alpha\in\mathbb{F}_{q}\). Let \(W\in\operatorname{SL}_{2}(\mathbb{F}_{q}[t])\) be a lift of \(\iota(\overline{w})\). At least one of the polynomials: \(W_{11}(t),W_{12}(t),W_{21}(t),W_{22}(t)\in\mathbb{F}_{q}[t]\) is non-constant, and every \(\alpha\in\mathbb{F}_{q}\) is a solution to one of the two systems of equations: \[\begin{pmatrix}W_{11}(t)&W_{12}(t)\\ W_{21}(t)&W_{22}(t)\end{pmatrix}=\begin{pmatrix}1&0\\ 0&1\end{pmatrix}\text{ or }\begin{pmatrix}-1&0\\ 0&-1\end{pmatrix}.\] Meanwhile, the \(W_{ij}(t)\) have degree at most \(4l\). Hence \[\begin{pmatrix}W_{11}(t)^{2}&W_{12}(t)^{2}\\ W_{21}(t)^{2}&W_{22}(t)^{2}\end{pmatrix}=\begin{pmatrix}1&0\\ 0&1\end{pmatrix}.\] Thus \(8l\geq q\), as the \(W_{ij}(t)^{2}\) have degree at most \(8l\). We end this Section with a conjecture. **Conjecture 5.3**.: _There exists an absolute constant \(C>0\) such that for any field \(\mathbb{F}\) and every \(n\geq 2\), there exists \(g\in\operatorname{SL}_{n}(\mathbb{F}[t])\), the entries of which are polynomials of degree at most \(C\), such that the image \(\overline{g}\) of \(g\) in \(\operatorname{PSL}_{n}(\mathbb{F}[t])\) has infinite order and \(\langle\overline{g}\rangle,\operatorname{PSL}_{n}(\mathbb{F})\leq\operatorname {PSL}_{n}(\mathbb{F}[t])\) generate their free product._ If Conjecture 5.3 is true, then the lower bound in Theorem 3 for \(\operatorname{PSL}_{n}(q)\) would follow by precisely the same argument as we have given for \(\operatorname{PSL}_{2}(q)\) above. By the results of Stepanov [24], there _does_ exist an element \(g\) as above for every \(\mathbb{F}\) and \(n\geq 2\), but without the uniform bound on the degrees of the elements. ## 6. The projective symplectic groups \(\mathrm{PSp}_{2m}(q)\) Surprisingly, in contrast to the projective general linear case, there are mixed identities of bounded length for the symplectic groups \(\mathrm{PSp}_{2m}(q)\) for \(m\geq 2\). (Note that for \(m=1\) we have \(\mathrm{PSp}_{2}(q)\cong\mathrm{PSL}_{2}(q)\) so there are no short identities by Theorem 3.) This is a theorem due to Tomanov [26] in odd characteristic. 
For the sake of clarity, we reprove it here briefly and also establish the case when \(q\) is even, i.e. \(\mathbb{F}_{q}\) is of characteristic two. ### Tomanov's result for \(\mathrm{PSp}_{2m}(q)\) for \(m\geq 2\) **Theorem 8** (Tomanov).: _The group \(\mathrm{PSp}_{2m}(q)\) for \(m\geq 2\) satisfies a mixed identity of length \(8\)._ Let \(R\) be a commutative ring of characteristic \(\neq 2\) and \(m\geq 2\). Consider the symplectic group \(\mathrm{Sp}_{2m}(R)\) consisting of those matrices in \(\mathbf{M}_{2m}(R)\) that preserve the standard non-degenerate alternating bilinear form \(f\colon R^{2m}\times R^{2m}\to R\) given by \(f(u,v)=u\Omega v^{\top}\), where \(\Omega\coloneqq\left(\begin{smallmatrix}0_{m}&1_{m}\\ -1_{m}&0_{m}\end{smallmatrix}\right)\). Then, \(\mathrm{Sp}_{2m}(R)\) can be described concretely and a matrix \(\left(\begin{smallmatrix}a&b\\ c&d\end{smallmatrix}\right)\in\mathbf{M}_{2m}(R)\) with \(a,b,c,d\in\mathbf{M}_{m}(R)\) lies in \(\mathrm{Sp}_{2m}(R)\) if and only if \[ab^{\top}-ba^{\top}=0_{m},\quad-bc^{\top}+ad^{\top}=1_{m}\quad\text{and}\quad cd ^{\top}-dc^{\top}=0_{m}.\] Now let \(g\) be an arbitrary element of \(\mathrm{Sp}_{2m}(R)\) that satisfies \(g^{2}=1_{2m}\). Then it follows that \[f(v.g,v)=f(v.g^{2},v.g)=f(v,v.g)=-f(v.g,v).\] Hence \(f(v.g,v)=0\) for all \(v\in R^{2m}\) since \(R\) is of characteristic \(\neq 2\). In particular, it follows that the \((m+1,1)\)-entry of \(g\) must vanish as \(g_{m+1,1}=-f(e_{1}.g,e_{1})=f(e_{1},e_{1}.g)=0\). Let's fix \[g_{0}\coloneqq\mathrm{diag}\left(\left(\begin{smallmatrix}0&1\\ 1&0\end{smallmatrix}\right),1_{m-2},\left(\begin{smallmatrix}0&1\\ 1&0\end{smallmatrix}\right),1_{m-2}\right)\in\mathrm{Sp}_{2m}(R),\] which satisfies \(g_{0}^{2}=1_{2m}\) and is a non-scalar element of \(\mathrm{Sp}_{2m}(R)\). It is well-defined since by assumption \(m\geq 2\). We conclude that for every \(x\in\mathrm{Sp}_{2m}(R)\) the \((m+1,1)\)-matrix entry of the matrix \(g=g_{0}^{x}\) vanishes and hence \(e_{1,m+1}g_{0}^{x}e_{1,m+1}=0_{2m}\in\mathbf{M}_{2m}(R)\). Consider now the matrix \(k\coloneqq 1_{2m}+e_{1,m+1}\in\mathrm{Sp}_{2m}(R)\), which is a symplectic transvection. We claim that \(g_{0}^{x}kg_{0}^{x}\) and \(k\) commute. Indeed, \[g_{0}^{x}kg_{0}^{x}k=(1_{2m}+g_{0}^{x}e_{1,m+1}g_{0}^{x})(1_{2m}+e_{1,m+1})=1_ {2m}+g_{0}^{x}e_{1,m+1}g_{0}^{x}+e_{1,m+1}\] and similarly for \(kg_{0}^{x}kg_{0}^{x}\). Hence, we conclude that for all \(x\in\mathrm{Sp}_{2m}(R)\), we have \[w(x)=[g_{0}^{x}kg_{0}^{x},k]=1_{2m}.\] Now if \(q\) is odd, we can directly set \(R\coloneqq\mathbb{F}_{q}\) and \(w\) becomes a mixed identity of \(\operatorname{Sp}_{2m}(q)\) of length \(8\), which descends to a mixed identity \(\overline{w}\) of \(\operatorname{PSp}_{2m}(q)\) since \(g_{0}\) and \(k\) are non-central. When \(\mathbb{F}_{q}\) is of characteristic two (i.e. \(q\) is even) assume that there is a surjective homomorphism \(\overline{\bullet}\colon\operatorname{Sp}_{2m}(R)\twoheadrightarrow\operatorname {Sp}_{2m}(q)\) induced by a homomorphism \(\varphi\colon R\twoheadrightarrow\mathbb{F}_{q}\). Then \(\overline{w}\) clearly is a mixed identity of \(\operatorname{Sp}_{2m}(q)\) which again descends to a mixed identity for \(\operatorname{PSp}_{2m}(q)\). It remains to define the homomorphism \(\varphi\) properly. For this purpose set \(R\coloneqq\mathbb{Z}[X]\) and let \(\varphi\colon\mathbb{Z}[X]\twoheadrightarrow\mathbb{F}_{q}\) be a surjective homomorphism. 
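(Before completing the argument, here is a quick numerical illustration, not part of the proof, of the identity established above over a small odd field: for \(m=2\) and \(q=7\), the word \(w(x)=[g_{0}^{x}kg_{0}^{x},k]\) evaluates to the identity for a randomly chosen \(x\in\operatorname{Sp}_{4}(7)\). The NumPy sketch below uses our own helper names, and samples \(x\) as a product of random symplectic transvections; this sampling choice is ours, not taken from the text.)

```python
# Numerical sanity check of Tomanov's length-8 mixed identity for Sp_4(F_7).
import numpy as np

p, m = 7, 2
n = 2 * m
I = np.eye(n, dtype=int)
Omega = np.block([[np.zeros((m, m), dtype=int), np.eye(m, dtype=int)],
                  [(-np.eye(m, dtype=int)) % p, np.zeros((m, m), dtype=int)]])

def transvection(v, lam):
    # row-vector convention: x -> x + lam * f(x, v) * v, with f(x, v) = x Omega v^T
    return (I + lam * np.outer(Omega @ v % p, v)) % p

rng = np.random.default_rng(1)
x = I.copy()
for _ in range(10):                              # a "random" element of Sp_4(F_7)
    x = x @ transvection(rng.integers(0, p, size=n), int(rng.integers(1, p))) % p
x_inv = (-Omega @ x.T @ Omega) % p               # x^{-1} = -Omega x^T Omega for symplectic x
assert np.array_equal(x @ x_inv % p, I)

g0 = np.zeros((n, n), dtype=int)
g0[0, 1] = g0[1, 0] = g0[2, 3] = g0[3, 2] = 1    # g_0 = diag(swap, swap) for m = 2
g = x_inv @ g0 @ x % p                           # g_0^x, an involution in Sp_4(F_7)
assert np.array_equal(g @ g % p, I)
assert g[m, 0] == 0                              # the (m+1,1)-entry vanishes, as claimed

k = I.copy()
k[0, m] = 1                                      # k = 1 + e_{1,m+1}, a symplectic transvection
assert np.array_equal(g @ k @ g @ k % p, k @ g @ k @ g % p)   # so [g_0^x k g_0^x, k] = 1
```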
Then \(\overline{\bullet}\) is surjective, since the symplectic transvections are elements of the symplectic groups \(\operatorname{Sp}_{2m}(\mathbb{Z}[X])\) and \(\operatorname{Sp}_{2m}(q)\) which are mapped onto each other by \(\overline{\bullet}\) and they even generate \(\operatorname{Sp}_{2m}(q)\). This finishes the proof. ### Proof of Theorem 5 In this subsection, we will prove that any mixed identity \(\overline{w}\in\operatorname{PSp}_{2m}*\mathbb{F}_{r}\) for \(\operatorname{PSp}_{2m}(q)\) of length \(\leq q/2+1\) has a critical constant which lifts to an involution in \(\operatorname{Sp}_{2m}(q)\). First we need a lemma. **Lemma 6.1**.: _Let \(q\) be odd. For an element \(c\in\operatorname{Sp}_{2m}(q)\) the following are equivalent:_ 1. \(c^{2}=1_{2m}\)_, i.e._ \(c\) _squares to the identity._ 2. _The form_ \(g(u,v)\coloneqq f(u.c,v)\) _is alternating, where_ \(f\) _is the non-degenerate alternating form associated to_ \(\operatorname{Sp}_{2m}(q)\)_._ _The implication (ii)\(\Rightarrow\)(i) also holds for \(q\) even. In this case, it suffices that \(f\) and \(g\) are (skew) symmetric._ Proof.: (i)\(\Rightarrow\)(ii): Let \(v\in V\cong\mathbb{F}_{q}^{2m}\) be arbitrary. Then \(g(v,v)=f(v.c,v)=-f(v,v.c)=-f(v.c,v.c^{2})=-f(v.c,v)=-g(v,v)=0\) as \(f\) is skew-symmetric, \(c\) preserves \(f\), and \(\mathbb{F}_{q}\) has odd characteristic. Thus \(g\) is alternating. (ii)\(\Rightarrow\)(i): Assume \(g\) is alternating. Then \(g(u,v)=f(u.c,v)=-g(v,u)=-f(v.c,u)=-f(v,u.c^{-1})=f(u.c^{-1},v)\), holds for all \(u,v\in V\), since \(g\) and \(f\) are skew-symmetric and \(c\) preserves \(f\). Hence, as \(f\) is non-degenerate, \(u.c=u.c^{-1}\) for all \(u\), so that \(c=c^{-1}\) and thus \(c^{2}=1_{2m}\). This argument also works when \(q\) is even and \(f\) and \(g\) are (skew) symmetric. To prove the lower bound in Theorem 5, we define \(k(\lambda)\) for \(\lambda\in\mathbb{F}_{q}\) by \(x.k(\lambda)\coloneqq x+\lambda f(x,v)v=x.(1_{V}+\lambda h)\) for a vector \(v\) which we still have to choose and consider the expression \(w^{\prime}(k(\lambda))\), where \[w=c_{0}x^{\varepsilon(1)}c_{1}\cdots c_{l-1}x^{\varepsilon(l)}c_{l}\quad \text{and}\quad w^{\prime}=x^{\varepsilon(1)}c_{1}\cdots c_{l-1}x^{\varepsilon (l)}\] are of length \(\leq q/2+1\) such that \(\overline{w}\in\operatorname{PSp}_{2m}*\langle x\rangle\) is of positive length. This \(k(\lambda)\) is a symplectic transvection for all \(v\in V\setminus\{0\}\). Again, if we can choose \(v\) in such a way that \(hc_{j}h\neq 0\) for all intermediate constants \((j=1,\ldots,l-1)\) as in Lemma 4.2 and if \(l<q\) (which holds for all \(q>2\)), then we can apply the proof of Lemma 4.3 to get that \(\overline{w}\) is not a mixed identity for \(\mathrm{PSp}_{2m}(q)\). (For \(q=2\) there is Lemma 2.5.) We rewrite the former condition as \(x.hc_{j}h=f(f(x,v)v.c_{j},v)v=f(x,v)f(v.c_{j},v)v\neq 0\). This means that \(f(v.c_{j},v)\neq 0\) for all \(j=1,\ldots,l-1\). We claim that we can find a suitable \(v\in V\) whenever all \(g_{j}\coloneqq f(\bullet.c_{j},\bullet)\)\((j=1,\ldots,l-1)\) are non-alternating. To establish this claim, we need the following lemma. **Lemma 6.2**.: _Let \(g\colon V\times V\cong\mathbb{F}_{q}^{2m}\times\mathbb{F}_{q}^{2m}\to \mathbb{F}_{q}\) be a non-alternating form. Set \(V(g)\coloneqq\{v\in V\setminus\{0\}\,|\,g(v,v)=0\}\). 
Then \(|V(g)|\leq 2q^{2m-1}-1\)._ Proof.: As \(g\) is not alternating, we have that the polynomial \(p(v)\coloneqq g(v,v)=\sum_{i\leq j\leq 2m}g_{ij}v_{i}v_{j}\neq 0\) as there exists a \(v\) such that \(p(v)=g(v,v)\neq 0\). This expression \(p(v)\) is then a non-zero polynomial in the variables \(v_{1},\ldots,v_{2m}\) of degree two. Hence by the Schwartz-Zippel lemma it has at most \(2q^{2m-1}\) solutions, i.e. \(V(g)\leq 2q^{2m-1}-1\) as the zero vector is not included in \(V(g)\) but \(p(0)=0\). This completes the proof. Now we can prove the following lemma. **Lemma 6.3**.: _Let \(w=c_{0}x^{\varepsilon(1)}c_{1}\cdots c_{l-1}x^{\varepsilon(l)}c_{l}\in\mathrm{ GL}_{2m}(q)*\langle x\rangle\) be of length \(0<l\leq q/2+1\) such that all \(g_{j}=f(\bullet.c_{j},\bullet)\) (\(j=1,\ldots,l-1\)) are non-alternating. Then \(\overline{w}\in\mathrm{PGL}_{2m}(q)*\langle x\rangle\) is non-constant on \(\mathrm{PSp}_{2m}(q)\)._ Proof.: We just have to find \(v\) such that \(g_{j}(v,v)=f(v.c_{j},v)\neq 0\) for all \(j=1,\ldots,l-1\). But since \(l\leq q/2+1\) and \(|V(g_{j})|\leq 2q^{2m-1}-1\) by Lemma 6.2 we get that \[\left|V\setminus\{0\}\setminus\bigcup_{j=1}^{l-1}V(g_{j})\right|\geq|V|-1- \sum_{j=1}^{l-1}|V(g_{j})|\geq q^{2m}-1-(q/2)\cdot(2q^{2m-1}-1)>0\] for \(q>2\). So there is a legal choice for \(v\). Also, then \(0<l\leq q/2+1<q\), so the proof of Lemma 4.3 applies. For \(q=2\) we apply Lemma 2.5. The proof is complete. Hence by Lemma 6.1 and 6.3, we immediately obtain the following corollary. **Corollary 6.4**.: _Let \(w=c_{0}x^{\varepsilon(1)}c_{1}\cdots c_{l-1}x^{\varepsilon(l)}c_{l}\in\mathrm{ Sp}_{2m}*\langle x\rangle\) be of length \(0<l\leq q/2+1\) such that \(c_{j}^{2}\neq 1_{2m}\) for all \(j=1,\ldots,l-1\). Then \(\overline{w}\in\mathrm{PSp}_{2m}(q)*\langle x\rangle\) is non-constant on \(\mathrm{PSp}_{2m}(q)\)._ Proof of the lower bound in Theorem 5.: We have to show that, if \(\overline{w}\in\mathrm{PSp}_{2m}(q)*\mathbf{F}_{r}\) (which now has the free variables \(x_{1},\ldots,x_{r}\)) has no _critical_ constants that lift to involutions in \(\mathrm{Sp}_{2m}(q)\), then still, if it is a mixed identity for \(\mathrm{PSp}_{2m}(q)\), it must have length \(>q/2+1\). Indeed, non-critical constants may lift to involutions and still the mixed identity \(\overline{w}\) for \(\mathrm{PSp}_{2m}(q)\) must have length bigger than \(q/2+1\). More concretely, write \[w=c_{0}x_{i(1)}^{\varepsilon(1)}c_{1}\cdots c_{l-1}x_{i(l)}^{\varepsilon(l)}c_ {l}\] and assume that \(\overline{w}\in\mathrm{PSp}_{2m}(q)*\mathbf{F}_{r}\) is constant on \(\mathrm{PSp}_{2m}(q)\); \(c_{j}\in\mathrm{Sp}_{2m}(q)\) (\(j=0,\ldots,l\)). We proceed as in the proof of Lemma 2.2. Let \(s\colon x_{i}\mapsto g_{-i}xg_{i}\) for some tuple \((g_{\pm i})_{i=1}^{r}\in\mathrm{Sp}_{2m}^{2r}(q)\) and consider the word \[w^{\prime}\coloneqq w(s(x_{1}),\ldots,s(x_{r}))=c_{0}^{\prime}x^{\varepsilon( 1)}c_{1}^{\prime}\cdots c_{l-1}^{\prime}x^{\varepsilon(l)}c_{l}^{\prime}\in \mathrm{Sp}_{2m}(q)*\langle x\rangle.\] Then we have that \(c_{j}^{\prime}=g_{\varepsilon(j)i(j)}^{\varepsilon(j)}c_{j}g_{-\varepsilon(j+ 1)i(j+1)}^{\varepsilon(j+1)}\) (\(j=1,\ldots,l-1\)). By assumption, \(c_{j}\) and hence \(c_{j}^{\prime}\) does not square to one when \(j\in J_{-}(w)\) (since the latter is conjugate to the former). We have to make sure that \(c_{j}^{\prime}\) does not square to one for \(j\in J_{0}(w)\cup J_{+}(w)\). 
This means \[g_{\varepsilon(j)i(j)}^{\varepsilon(j)}c_{j}g_{-\varepsilon(j+1)i(j+1)}^{ \varepsilon(j+1)}\neq c \tag{3}\] where \(c^{2}=1_{2m}\). At first, we assume that \(q\) is odd. Then by [7, page 889], there are precisely \[f=\left|\mathrm{Sp}_{2m}(q)\right|\sum_{i=0}^{m}\frac{1}{\left|\mathrm{Sp}_{2 i}(q)\right|\left|\mathrm{Sp}_{2(m-i)}(q)\right|}\] solutions \(c\in\mathrm{Sp}_{2m}(q)\) to \(c^{2}=1_{2m}\). So there are \(f\left|\mathrm{Sp}_{2m}(q)\right|^{2r-1}\) solutions to the negation of the Inequalities (3). We can weakly estimate \(f\) for our purposes. Indeed, \(\left|\mathrm{Sp}_{2m}(q)\right|=q^{m^{2}}\prod_{i=1}^{m}\left(q^{2i}-1\right) \geq q^{m^{2}}\prod_{i=1}^{m}q^{2i-1}\geq q^{2m^{2}}\) for \(m\geq 0\), so we have \[f\leq\left|\mathrm{Sp}_{2m}(q)\right|\sum_{i=0}^{m}\frac{1}{q^{2i^{2}}\cdot q ^{2(m-i)^{2}}}\leq\left|\mathrm{Sp}_{2m}(q)\right|\sum_{i=0}^{m}\frac{1}{q^{( i+m-i)^{2}}}=\left|\mathrm{Sp}_{2m}(q)\right|\cdot\frac{m+1}{q^{m^{2}}},\] where we use the arithmetic-geometric mean inequality \(\left(\frac{a+b}{2}\right)^{2}\leq\frac{a^{2}+b^{2}}{2}\). So if \((l-1)f\left|\mathrm{Sp}_{2m}(q)\right|^{2r-1}<\left|\mathrm{Sp}_{2m}(q)\right| ^{2r}\) by counting we are done, as then \(w^{\prime}\in\mathrm{Sp}_{2m}(q)\) has no intermediate constants that square to one. This is equivalent to \(l-1<\left|\mathrm{Sp}_{2m}(q)\right|/f\). But we have that \(l-1<q\) (as \(l\leq q/2+1\) by assumption) and from the above that \(\left|\mathrm{Sp}_{2m}(q)\right|/f\geq\frac{q^{m^{2}}}{m+1}\), so it suffices to show that \(q\leq\frac{q^{m^{2}}}{m+1}\), i.e. \(m+1\leq q^{m^{2}-1}\) which holds for all \(q\) and \(m\geq 2\) as desired. So \(w^{\prime}\in\mathrm{Sp}_{2m}(q)*\langle x\rangle\) has no intermediate constants that lift to involutions, since \(w\in\mathrm{Sp}_{2m}(q)*\mathbf{F}_{r}\) had no critical constants that lift to involutions (both \(w\) and \(w^{\prime}\) are of the same length). Hence, by Corollary 6.4, this finishes the proof for \(q\) odd. For \(q\) even, the number of involutions in \(\operatorname{Sp}_{2m}(q)\) according to [7, page 891] is given by \[f=\left|\operatorname{Sp}_{2m}(q)\right|\left(\sum_{\begin{subarray}{c}i=0\\ i\text{ even}\end{subarray}}^{m}1/A_{i}+\sum_{\begin{subarray}{c}i=2\\ i\text{ even}\end{subarray}}^{m}1/B_{i}+\sum_{\begin{subarray}{c}i=1\\ i\text{ odd}\end{subarray}}^{m}1/C_{i}\right) \tag{4}\] where \[A_{i} =q^{i(i+1)/2+i(2m-2i)}\left|\operatorname{Sp}_{i}(q)\right| \left|\operatorname{Sp}_{2m-2i}(q)\right|\] \[B_{i} =q^{i(i+1)/2+i(2m-2i)}q^{i-1}\left|\operatorname{Sp}_{i-2}\right| \left|\operatorname{Sp}_{2m-2i}(q)\right|\] \[C_{i} =q^{i(i+1)/2+i(2m-2i)}\left|\operatorname{Sp}_{i-1}(q)\right| \left|\operatorname{Sp}_{2m-2i}(q)\right|.\] So since \(\left|\operatorname{Sp}_{i-2}(q)\right|q^{i/2-1}\leq\left|\operatorname{Sp}_ {i-1}(q)\right|\leq\left|\operatorname{Sp}_{i}(q)\right|\) we obtain \[A_{i},B_{i},C_{i} \geq q^{i(i+1)/2+i(2m-2i)}q^{i/2-1}\left|\operatorname{Sp}_{i-2}( q)\right|\left|\operatorname{Sp}_{2m-2i}(q)\right|\] \[\geq q^{i(i+1)/2+i(2m-2i)+i/2-1+2(\frac{i-2}{2})^{2}+2(m-i)^{2}}\] \[=q^{\frac{1}{2}i^{2}+\frac{1}{2}i+2mi-2i^{2}+\frac{1}{2}i-1+\frac {1}{2}i^{2}-2i+2+2m^{2}-4mi+2i^{2}}.\] The exponent of \(q\) is here \(i^{2}-i+1-2mi+2m^{2}.\) For fixed \(m\), this expression gets minimal when \(i=m+1/2\). But in Equation (4) we have \(i\leq m\), so plugging in \(i=m\) gives the lower bound \(m^{2}-m+1\). 
Again, we have to show that \(l-1<\left|\operatorname{Sp}_{2m}(q)\right|/f\), but \(l-1<q\) and, by Equation (4) and the bound we obtained for the exponent of \(q\), it holds that \(\left|\operatorname{Sp}_{2m}(q)\right|/f\geq\frac{q^{m^{2}-m+1}}{3(m/2+1)}.\) Here the the expression \(3(m/2+1)\) comes from the fact that Equation (4) has at most that many summands. Hence we have to show that \(q\leq\frac{q^{m^{2}-m+1}}{3(m/2+1)}\) which means \(q^{m^{2}-m}\geq 3(m/2+1)\). This holds for \(q>2\) and \(m\geq 2\). For \(q=2\) we apply Lemma 2.5. Thus we are done for \(q\) even as well. This finishes the first half of the proof and we are left to prove the upper bound \(O(q)\) for \(q\) even. Proof of the upper bound in Theorem 5.: The proof is essentially the same as the one for Lemma 4.1. Let \[k\coloneqq 1_{V}+h\in\operatorname{Sp}_{2m}(q),\] where \(x.h=f(x,v)v\) with \(v\neq 0\), be a symplectic transvection. Here \(V\cong\mathbb{F}_{q}^{2m}\) is the natural module of \(\operatorname{Sp}_{2m}(q)\). Proceed as in the proof of Lemma 4.1 to get a mixed identity \(w\in\operatorname{SL}_{2m}(q)*\langle x\rangle\) for \(\operatorname{SL}_{2m}(q)\) which descends to a mixed identity \(\overline{w}\) of \(\operatorname{PSL}_{2m}(q)\). But the only constants involved in \(w\) are powers of \(k\) which belong to \(\operatorname{Sp}_{2m}(q)\), so that \(w\) is also a mixed identity for \(\operatorname{Sp}_{2m}(q)\) (with constants in \(\operatorname{Sp}_{2m}(q)\)) which descends to a mixed identity of \(\operatorname{PSp}_{2m}(q)\). The problem with characteristic two is just that the map \(k\) is then an involution, which was excluded by the assumptions. Thus the proof is complete, since \(\overline{w}\) is of length \(O(q)\) as in Lemma 4.1. ## 7. The odd-degree projective orthogonal groups \(\mathrm{P}\Omega^{\circ}_{2m-1}(q)\) Similarly to the symplectic groups, the orthogonal groups \(\mathrm{P}\Omega^{\circ}_{2m-1}(q)\) (\(m\geq 3\) odd, or \(q\equiv 1\) mod \(4\)) have a short mixed identity. This is also a result of Tomanov [26]. We reprove it here: **Theorem 9** (Tomanov).: _There exists a mixed identity for \(\mathrm{P}\Omega^{\circ}_{2m-1}(q)\) for \(m\geq 3\) odd, or \(q\equiv 1\) mod \(4\) of length \(16\)._ Proof.: Consider \(\mathrm{P}\Omega^{\circ}_{2m-1}(q)\), \(m\geq 3\), and assume \(q\equiv 1\) mod \(4\) or that \(m\) is odd. Let \[\Omega=\begin{pmatrix}0&\cdots&0&1\\ \vdots&\iddots&\iddots&0\\ 0&\iddots&\iddots&\vdots\\ 1&0&\cdots&0\end{pmatrix}\] be the matrix of the symmetric bi-linear form \(f\) which is stabilized by \(\mathrm{GO}^{\circ}_{2m-1}(q)\). Define \[g_{0}\coloneqq\mathrm{diag}(-1_{m-1},1,-1_{m-1})=-1_{2m-1}+2e_{m,m}.\] We show that \(g_{0}\) lies in \(\Omega^{\circ}_{2m-1}(q)\) when \(m\) is odd or \(q\equiv 1\) modulo \(4\) (i.e. \(-1\) is a square in \(\mathbb{F}_{q}\)). When \(m\) is odd, we have that \(g_{0}\) is the product of the elements \[x\coloneqq\mathrm{diag}(-1_{\frac{m-1}{2}},1_{m},-1_{\frac{m-1}{2}})\] and \[y\coloneqq\mathrm{diag}(1_{\frac{m-1}{2}},-1_{\frac{m-1}{2}},1,-1_{\frac{m-1}{ 2}},1_{\frac{m-1}{2}}).\] However, \(x\) and \(y\) are conjugate and so \(xy\) is of spinor norm one. If \(q\equiv 1\) modulo \(4\), let \(\alpha\) be a square root of \(-1\) and observe that \(g_{0}=x^{2}\) where \(x=\mathrm{diag}(\alpha 1_{m-1},1,-\alpha 1_{m-1})\in\mathrm{SO}^{\circ}_{2m-1}(q)\). Hence \(g_{0}\) again has spinor norm one. 
Set now \(k(\lambda)\) to be the Eichler transformation \[k(\lambda)\coloneqq\begin{pmatrix}1&\cdots&\lambda&0\\ 0&\ddots&0&-\lambda\\ \vdots&\ddots&\ddots&\vdots\\ 0&\cdots&0&1\end{pmatrix}=1_{2m-1}+\lambda h\] with \(h=e_{1,2m-2}-e_{2,2m-1}.\) This element from \(\mathrm{SO}^{\circ}_{2m-1}(q)\) again is a square of an element from \(\mathrm{SO}^{\circ}_{2m-1}(q)\) for \(q\) odd, namely of \(k(\lambda/2)\), so has spinor norm 1. For \(x=(x_{i,j})_{i,j=1}^{2m-1}\) we compute \[x^{-1}=\Omega x^{\top}\Omega=(x_{2m-j,2m-i})_{i,j=1}^{2m-1}\] as \(\Omega=\Omega^{-1}\). We obtain \[g_{0}^{x}=x^{-1}g_{0}x=(-\delta_{i,j}+2x_{m,2m-i}x_{m,j})_{i,j=1}^{2m-1},\] since \[x^{-1}e_{m,m}x=\sum_{i,k}x_{2m-k,2m-i}e_{i,k}e_{m,m}\cdot\sum_{l,j}x_{l,j}e_{l, j}=\sum_{i,j}x_{m,2m-i}x_{m,j}e_{i,j}.\] Then, according to [26, pages 41 and 42], we have the matrix identity \[r(\lambda,x)r(\mu,x)=r(\mu,x)r(\lambda,x),\] where \(r(\lambda,x)=g_{0}^{x}k(\lambda)g_{0}^{x}k(-\lambda)\). Let's compute: we see that \[r(\lambda,x)=g_{0}^{x}(1+\lambda h)g_{0}^{x}(1-\lambda h)=1+\lambda g_{0}^{x}hg _{0}^{x}-\lambda h-\lambda^{2}g_{0}^{x}hg_{0}^{x}h.\] Now, using \(h^{2}=0_{2m-1}\) repeatedly, we get: \[r(\lambda,x)r(\mu,x)\] \[=(1_{2m-1}+\lambda g_{0}^{x}hg_{0}^{x}-\lambda h-\lambda^{2}g_{0} ^{x}hg_{0}^{x}h)\cdot(1_{2m-1}+\mu g_{0}^{x}hg_{0}^{x}-\mu h-\mu^{2}g_{0}^{x}hg _{0}^{x}h)\] \[=1_{2m-1}+\mu g_{0}^{x}hg_{0}^{x}-\mu h-\mu^{2}g_{0}^{x}hg_{0}^{x}h\] \[\quad+\lambda g_{0}^{x}hg_{0}^{x}(1_{2m-1}+\mu g_{0}^{x}hg_{0}^{x }-\mu h-\mu^{2}g_{0}^{x}hg_{0}^{x}h)\] \[\quad-\lambda h(1_{2m-1}+\mu g_{0}^{x}hg_{0}^{x}-\mu h-\mu^{2}g_{ 0}^{x}hg_{0}^{x}h)\] \[\quad-\lambda^{2}g_{0}^{x}hg_{0}^{x}h(1_{2m-1}+\mu g_{0}^{x}hg_{ 0}^{x}-\mu h-\mu^{2}g_{0}^{x}hg_{0}^{x}h)\] \[=1_{2m-1}+\mu g_{0}^{x}hg_{0}^{x}-\mu h-\mu^{2}g_{0}^{x}hg_{0}^{x}h\] \[\quad+\lambda g_{0}^{x}hg_{0}^{x}+\lambda\mu g_{0}^{x}hg_{0}^{x}g _{0}^{x}hg_{0}^{x}-\lambda\mu g_{0}^{x}hg_{0}^{x}h-\lambda\mu^{2}\lambda g_{0} ^{x}hg_{0}^{x}g_{0}^{x}hg_{0}^{x}h\] \[\quad-\lambda h-\lambda\mu hg_{0}^{x}hg_{0}^{x}+\lambda\mu h^{2} +\lambda\mu^{2}hg_{0}^{x}hg_{0}^{x}h\] \[\quad-\lambda^{2}g_{0}^{x}hg_{0}^{x}h-\lambda^{2}\mu g_{0}^{x}hg_ {0}^{x}hg_{0}^{x}+\lambda^{2}\mu g_{0}^{x}hg_{0}^{x}hh+\lambda^{2}\mu^{2}g_{0 }^{x}hg_{0}^{x}hg_{0}^{x}hg_{0}^{x}h\] \[=1_{2m-1}+(\lambda+\mu)(g_{0}^{x}hg_{0}^{x}-h)-(\lambda^{2}+\mu^ {2})g_{0}^{x}hg_{0}^{x}h-\lambda\mu g_{0}^{x}hg_{0}^{x}h\] \[\quad-\lambda\mu hg_{0}^{x}hg_{0}^{x}+\lambda\mu^{2}hg_{0}^{x}hg_ {0}^{x}h-\lambda\mu g_{0}^{x}hg_{0}^{x}hg_{0}^{x}+\lambda^{2}\mu^{2}g_{0}^{x} hg_{0}^{x}hg_{0}^{x}hg_{0}^{x}h\] Thus, we get \(r(\lambda,x)r(\mu,x)=r(\mu,x)r(\lambda,x)\) if and only if \(hg_{0}^{x}hg_{0}^{x}h=0_{2m-1}\). Using our formula for \(g_{0}^{x}\), we get: \[hg_{0}^{x}h\] \[=(e_{1,2m-2}-e_{2,2m-1})g_{0}^{x}(e_{1,2m-2}-e_{2,2m-1})\] \[=2(e_{1,2m-2}-e_{2,2m-1})\] \[\quad\cdot(x_{m,2}x_{m,1}e_{2m-2,1}+x_{m,2}^{2}e_{2m-2,2}+x_{m,1}^ {2}e_{2m-1,1}+x_{m,1}x_{m,2}e_{2m-1,2})\] \[\quad\cdot(e_{1,2m-2}-e_{2,2m-1})\] \[=2(x_{m,2}x_{m,1}e_{1,2m-2}-x_{m,2}^{2}e_{1,2m-1}+x_{m,1}x_{m,2}e_ {2,2m-1}-x_{m,1}^{2}e_{2,2m-2})\] Here we use \(m\geq 3\). 
And hence: \[hg_{0}^{x}hg_{0}^{x}h\] \[=2(x_{m,2}x_{m,1}e_{1,2m-2}-x_{m,2}^{2}e_{1,2m-1}+x_{m,1}x_{m,2}e_ {2,2m-1}-x_{m,1}^{2}e_{2,2m-2})\] \[\quad\cdot g_{0}^{x}(e_{1,2m-2}-e_{2,2m-1})\] \[=4(x_{m,2}x_{m,1}e_{1,2m-2}-x_{m,2}^{2}e_{1,2m-1}+x_{m,1}x_{m,2}e_ {2,2m-1}-x_{m,1}^{2}e_{2,2m-2})\] \[\quad\cdot(x_{m,2}x_{m,1}e_{2m-2,1}+x_{m,2}^{2}e_{2m-2,2}+x_{m,1}^ {2}e_{2m-1,1}+x_{m,1}x_{m,2}e_{2m-1,2})\] \[\quad\cdot(e_{1,2m-2}-e_{2,2m-1})\] \[=4(x_{m,2}x_{m,1}e_{1,2m-2}-x_{m,2}^{2}e_{1,2m-1}+x_{m,1}x_{m,2}e_ {2,2m-1}-x_{m,1}^{2}e_{2,2m-2})\] \[\quad\cdot(x_{m,2}x_{m,1}e_{2m-2,2m-2}-x_{m,2}^{2}e_{2m-2,2m-1}\] \[\quad+x_{m,1}^{2}e_{2m-1,2m-2}-x_{m,1}x_{m,2}e_{2m-1,2m-1})\] \[=0_{2m-1}\] This shows that there is also a mixed identity \(w(x)=[r(\lambda,x),r(\mu,x)]\) of constant length in the orthogonal groups \(\mathrm{P}\Omega_{2m-1}^{\circ}(q)\) (for \(m\geq 3\)) of odd degree for \(m\) odd or \(q\equiv 1\bmod 4\). The above proof does not work for \(m=2\), i.e. for \(\mathrm{P}\Omega_{3}^{\circ}(q)\cong\mathrm{PSL}_{2}(q)\). In this case, we have \(2m-2=2\), so that the computations of the matrix products above are different. Basically, the two \(2\times 2\)-blocks overlap. **Remark 7.1**.: The element \(g_{0}\) defined above lies in \(\mathrm{PSO}_{2m-1}^{\circ}(q)\), irrespective of the value of \(m\) or \(q\). The preceding argument therefore yields a mixed identity of bounded length for \(\mathrm{PSO}_{2m-1}^{\circ}(q)\), for all \(m\geq 3\) and \(q\) odd. It is as yet unclear whether \(\mathrm{P}\Omega_{2m-1}^{\circ}(q)\) has a mixed identity of bounded length in the case of \(m\) even and \(q\equiv 3\bmod 4\). ## 8. The projective special unitary groups \(\mathrm{PSU}_{n}(q)\) ### Proof of the upper bound in Theorem 4 Here we proceed as in the proof of Lemma 4.1: **Lemma 8.1**.: _There is a mixed identity of length \(O(q^{2})\) for \(\mathrm{PSU}_{n}(q)\)._ Proof.: Choose a unitary transvection \(k\in\mathrm{SU}_{n}(q)\) (see [27], page 67) and proceed as in the proof of Lemma 4. Again, \(k\) fixes a hyperplane \(H\) and \(k^{g}\) for \(g\in\mathrm{SU}_{n}(q)\) fixes the hyperplane \(H.g\) pointwise, so that both fix the codimension-two subspace \(U=H\cap H.g\leq V\cong\mathbb{F}_{q^{2}}^{n}\) pointwise. The rest is the same argument as in the proof of Lemma 4, noting that we are in \(\mathrm{SL}_{n}(q^{2})\). ### Proof of the lower bound in Theorem 4 Again, we start by just considering \(\mathrm{PSU}_{2}(q)\) to get an idea of how the proof for \(\mathrm{PSU}_{n}(q)\) (\(n\geq 3\)) might work. In the proof of the following lemma, we use the ideas from the proof of Lemma 4. Actually, since \(\mathrm{PSL}_{2}(q)\cong\mathrm{PSU}_{2}(q)\), the two lemmas nearly have the same content, apart from the different groups of constants. **Lemma 8**: _Assume \(w\in\mathrm{GL}_{2}(q^{2})*\langle x\rangle\) is of length \(0<l\leq q/2+1\) such that \(\overline{w}\in\mathrm{PGL}_{2}(q^{2})*\langle x\rangle\) is of positive length. Then \(\overline{w}\) is non-constant on \(\mathrm{PSU}_{2}(q)\)._ Proof.: Let \(f\) be the standard non-singular hermitian form on \(V\cong\mathbb{F}_{q^{2}}^{2}\) with respect to the Frobenius \(\mathbb{F}_{q^{2}}\to\mathbb{F}_{q^{2}}\); \(\alpha\mapsto\alpha^{q}\). Then \(x\mapsto k(\lambda,x)=x+\lambda f(x,v)v=(1_{V}+\lambda h)(x)\) defines an element of the general unitary group \(\mathrm{GU}_{2}(q)\) when \(\mathrm{tr}(\lambda)=0\) and \(f(v,v)=0\) (\(\lambda\in\mathbb{F}_{q^{2}}\), \(v\in V\)). 
Indeed, it is a _unitary transvection_: \[f(k(\lambda,x),k(\lambda,y)) =f(x+\lambda f(x,v)v,y+\lambda f(y,v)v)\] \[=f(x,y)+\lambda f(x,v)f(v,y)+\lambda^{q}f(y,v)^{q}f(x,v)\] \[\quad+\lambda^{q+1}f(x,v)f(y,v)^{q}f(v,v)\] \[=f(x,y)+\mathrm{tr}(\lambda)f(y,v)^{q}f(x,v)+0=f(x,y).\] Here \(f\) is semi-linear in the second entry. Indeed, \(x\mapsto k(\lambda,x)\) is an element of \(\mathrm{SU}_{2}(q)\) as it has determinant one. Proceed as in the proof of the lower bound for \(\mathrm{PSL}_{2}(q)\). Choose \(\alpha\in\ker(\mathrm{tr})\backslash\{0\}\) and set \(\lambda\coloneqq\alpha\mu\) for \(\mu\in\mathbb{F}_{q}\) arbitrary. Note that this parametrizes the kernel of the trace map \(\mathrm{tr}\colon\mathbb{F}_{q^{2}}\to\mathbb{F}_{q}\). Consider the word \[w=c_{0}x^{\varepsilon(1)}c_{1}\cdots c_{l-1}x^{\varepsilon(l)}c_{l}\in \mathrm{GL}_{2}(q^{2})*\langle x\rangle\] and replace it by \[w^{\prime}=x^{\varepsilon(1)}c_{1}\cdots c_{l-1}x^{\varepsilon(l)}\] which becomes constant at the same time. Again, by Lemma 2, we may assume that all \(c_{j}\) are non-central (\(j=1,\ldots,l-1\)). We are looking for a non-trivial isotropic vector \(v\in V\cong\mathbb{F}_{q^{2}}^{2}\) such that \(hc_{j}h\neq 0\) for all \(j=1,\ldots,l-1\). This means, according to the above definition of \(h\), \(f(f(x,v)v.c_{j},v)v=f(x,v)f(v.c_{j},v)v\neq 0\), i.e. \(f(v.c_{j},v)\neq 0\). But since \(v\) is isotropic, this holds precisely, when \(v\) is not an eigenvector of \(c_{j}\) (\(j=1,\ldots,l-1\)). However, the \(c_{j}\) altogether have at most \(2(l-1)\) eigenspaces of dimension one, since each of them is non-central. Moreover, there are precisely \(q+1\) one-dimensional isotropic subspaces. Indeed, \(x^{q+1}+y^{q+1}=0\) has exactly \(q+1\) solutions, as it is equivalent to \((x/y)^{q+1}=-1\), since \(x,y\neq 0\) and the norm \(\mathrm{N}\colon\mathbb{F}_{q^{2}}^{\times}\to\mathbb{F}_{q}^{\times};\alpha \mapsto\alpha^{q+1}\) is \(q+1:1\) and surjective. But by assumption \(2(l-1)<q+1\), so that there is a legal choice for \(v\). Then we plug in \(x\mapsto k(\alpha\mu,x)\) into \(w^{\prime}\) and get a polynomial of degree \(l\) with \(q>l>0\) in \(\mu\in\mathbb{F}_{q}\) for \(q>2\). Applying Lemma 4.4 for \(q>2\) and Lemma 2.5 for \(q=2\), we conclude that \(\overline{w}^{\prime}\) and hence \(\overline{w}\) cannot be constant. Note here that the same proof applies to \(\mathrm{PSp}_{2}(q)\cong\mathrm{PSL}_{2}(q)\) with a slight variation. But also \(\mathrm{PSU}_{2}(q)\cong\mathrm{PSL}_{2}(q)\), so this is just another proof of Lemma 4.5. For the proof of the lower bound for \(\mathrm{PSU}_{n}(q)\), we need the following auxiliary lemma on the number of isotropic vectors that a space \(V\cong\mathbb{F}_{q^{2}}^{n}\) with non-zero hermitian form on it can admit. Its proof is standard and can be found in [27], page 65. **Lemma 8.3**.: _The number of non-zero isotropic vectors of a space \(V\cong\mathbb{F}_{q^{2}}^{n}=\mathbb{F}_{q^{2}}^{k+l}\), with the non-zero hermitian form \(f\) on it, is equal to_ \[N_{k,l,q}=(q^{k}-(-1)^{k})(q^{k-1}-(-1)^{k-1})q^{2l}+q^{2l}-1,\] _where \(\dim(\mathrm{rad}(f))=l<n\) and \(n=\dim(V)=k+l\). Set \(N_{n,q}\coloneqq N_{n,0,q}\). The expression \(N_{k,l,q}\) is equal to \(q^{2n-1}+O(q^{2(n-1)})\) for \(k\geq 2\). For \(k=1\), it is \(q^{2(n-1)}+O(1)\)._ The key to the proof of the lower bound for \(\mathrm{PSU}_{n}(q)\) is the following general observation concerning the vanishing sets of sesquilinear forms. 
**Lemma 8.4**.: _Let \(V\cong\mathbb{F}_{q^{2}}^{n}\) and \(f\colon V\times V\to\mathbb{F}_{q^{2}}\) be the standard unitary form \(f(u,v)=\sum_{i=1}^{n}u_{i}v_{i}^{q}\) on \(V\). Moreover, let \(g\colon V\times V\to\mathbb{F}_{q^{2}}\) be a non-degenerate sesquilinear form such that \(g(u,v)=\sum_{i,j=1}^{n}c_{ij}u_{i}v_{j}^{q}\) so that \((c_{ij})_{i,j=1}^{n}\neq\lambda 1_{V}\) (for all \(\lambda\in\mathbb{F}_{q^{2}}^{\times}\)) is non-scalar. Set \(V(f)\coloneqq\{v\in V\setminus\{0\}\,|\,f(v,v)=0\}\). Then:_ \[\frac{|V(f)\cap V(g)|}{|V(f)|}\leq\frac{2}{q}+O(1/q^{2}).\] In other words, \(V(f)\) and \(V(g)\) have few points in common. Proof.: Assume w.l.o.g. that \(c_{21}\neq 0\). Indeed, if there is no \(c_{ij}\neq 0\) for \(i\neq j\) (in which case we could permute the coordinates so that \((i,j)=(2,1)\) and hence \(c_{21}\neq 0\)), then \((c_{ij})_{i,j=1}^{n}\) is a diagonal matrix with not all diagonal entries equal to each other. Again, by permuting the coordinates, we may assume that \(c_{11}=\lambda\neq\mu=c_{22}\). Choose two non-zero elements \(a,b\in\mathbb{F}_{q^{2}}\) such that \(a^{q+1}+b^{q+1}=1\). This is possible, since the norm \(\mathrm{N}\colon\mathbb{F}_{q^{2}}^{\times}\to\mathbb{F}_{q}^{\times};\alpha\mapsto \alpha^{q+1}\) is surjective. Then \(u=\left(\begin{smallmatrix}a&b\\ -b^{q}&a^{q}\end{smallmatrix}\right)\) is an element of \(\mathrm{SU}_{2}(q)\): \[uu^{*}=\begin{pmatrix}a&b\\ -b^{q}&a^{q}\end{pmatrix}\begin{pmatrix}a^{q}&-b\\ b^{q}&a\end{pmatrix}=\begin{pmatrix}a^{q+1}+b^{q+1}&0\\ 0&a^{q+1}+b^{q+1}\end{pmatrix}=1_{2}.\] Now we compute \[u\begin{pmatrix}\lambda&0\\ 0&\mu\end{pmatrix}u^{*} =\begin{pmatrix}a&b\\ -b^{q}&a^{q}\end{pmatrix}\begin{pmatrix}\lambda&0\\ 0&\mu\end{pmatrix}\begin{pmatrix}a^{q}&-b\\ b^{q}&a\end{pmatrix}\] \[=\begin{pmatrix}\lambda a&\mu b\\ -\lambda b^{q}&\mu a^{q}\end{pmatrix}\begin{pmatrix}a^{q}&-b\\ b^{q}&a\end{pmatrix}\] \[=\begin{pmatrix}\lambda a^{q+1}+\mu b^{q+1}&ab(\mu-\lambda)\\ a^{q}b^{q}(\mu-\lambda)&\mu a^{q+1}+\lambda b^{q+1}\end{pmatrix}.\] Since \(\lambda\neq\mu\), the two off-diagonal matrix entries are non-zero and we can conjugate \((c_{ij})_{i,j=1}^{n}\) by \(u\oplus 1_{n-2}\) to get \(c_{21}\neq 0\), while we preserve the form \(f\). Let \(v\in V\) be isotropic with respect to \(f\) and \(v_{1}\neq 0\). There are exactly \(N_{n,q}-N_{n-1,q}\) such vectors. Assume \(v\) is isotropic with respect to \(g\) as well. Then \(v.\lambda=(\lambda v_{1},v_{2},\ldots,v_{n})\) for \(\lambda\in\mathbb{F}_{q^{2}}\), \(\lambda^{q+1}=1\), is isotropic for \(f\), too. This defines an action of the cyclic group \(C=\ker(\mathrm{N}\colon\mathbb{F}_{q^{2}}^{\times}\to\mathbb{F}_{q}^{\times})= \{\alpha\in\mathbb{F}_{q^{2}}\,|\,\alpha^{q+1}=1\}\) on the points of \(V(f)\). In order that \(v.\lambda\) is isotropic for \(g\) as well, we must have: \[0 =g(v.\lambda,v.\lambda)-g(v,v)\] \[=\lambda^{q+1}c_{11}v_{1}^{q+1}-c_{11}v_{1}^{q+1}+(\lambda-1) \sum_{i=2}^{n}c_{1i}v_{1}v_{i}^{q}+(\lambda^{q}-1)\sum_{i=2}^{n}c_{i1}v_{i}v_{ 1}^{q}\] \[=0+\lambda^{q}\sum_{i=2}^{n}c_{i1}v_{i}v_{1}^{q}+\lambda\sum_{i=2 }^{n}c_{1i}v_{1}v_{i}^{q}-\sum_{i=2}^{n}\left(c_{i1}v_{i}v_{1}^{q}+c_{1i}v_{1} v_{i}^{q}\right)\] \[=a\lambda^{q}+b\lambda-a-b\] \[=a\lambda^{-1}+b\lambda-a-b.\] This equation has at most two solutions in \(\lambda\) when \(a\) and \(b\) are not both zero (indeed, these are \(1\), and \(a/b\) when \(a,b\neq 0\)). 
In the opposite case, \(a=0\), so \(v\) lies in the kernel \(U=\ker(\varphi)\) of the non-zero (since \(c_{21}\neq 0\)) linear functional \(\varphi\colon v\mapsto\sum_{i=2}^{n}c_{i1}v_{i}\). The space \(U\) cannot be totally isotropic with respect to \(f\), since \(\dim(U)=n-1\) and \(n\geq 3\). Set \(k\coloneqq n-1-\dim(\operatorname{rad}(f|_{U}))\geq 1\) According to Lemma 8.3, there are \[N_{k,n-1-k,q}=\begin{cases}q^{2(n-2)}+O(1)&\text{for $k=1$}\\ q^{2(n-1)-1}+O(q^{2(n-2)})&\text{for $k\geq 2$}\end{cases}\] such non-zero vectors \(v\). Hence we can estimate the cardinality of \(V(f)\cap V(g)\) as follows: \[|V(f)\cap V(g)|\leq N_{n-1,q}+N_{k,n-1-k,q}+\frac{2}{q+1}(N_{n,q}-N_{n-1,q}-N_{ k,n-1-k,q}).\] If \(k=1\), applying Lemma 8.3, we obtain \[|V(f)\cap V(g)| \leq q^{2(n-1)-1}+O(q^{2(n-2)})+q^{2(n-2)}+O(1)\] \[\quad+\frac{2}{q+1}(q^{2n-1}+O(q^{2(n-1)})-q^{2(n-1)-1}\] \[\quad+O(q^{2(n-2)})-q^{2(n-2)}+O(1))\] \[=2q^{2(n-1)}+O(q^{2(n-1)-1}).\] Similarly, for \(k\geq 2\), we get \[|V(f)\cap V(g)| \leq 2q^{2(n-1)-1}+O(q^{2(n-2)})\] \[\quad+\frac{2}{q+1}(q^{2n-1}+O(q^{2(n-1)})-2q^{2(n-1)-1}+O(q^{2(n -2)}))\] \[=2q^{2(n-1)}+O(q^{2(n-1)-1})\] as well. Thus, \[\frac{|V(f)\cap V(g)|}{|V(f)|}\leq\frac{2q^{2(n-1)}+O(q^{2(n-1)-1})}{q^{2n-1}+ O(q^{2(n-1)})}=\frac{2}{q}+O(1/q^{2}).\] The proof is complete. **Lemma 8.5**.: _Assume \(w\in\operatorname{GL}_{n}(q^{2})*\langle x\rangle\) (\(n\geq 3\)) is of length \(0<l\leq q/2+O(1)\) such that \(\overline{w}\in\operatorname{PGL}_{n}(q^{2})*\langle x\rangle\) is of positive length. Then \(\overline{w}\) is non-constant on \(\operatorname{PSU}_{n}(q)\)._ Proof.: We proceed as in the proof of Lemma 8.2. We have to make sure that there is a vector \(v\in V\) such that \(f(v,v)=0\) and \(g_{j}(v,v)\coloneqq f(v.c_{j},v)\neq 0\) for \(j=1,\ldots,l-1\). But by the previous lemma for \(g=g_{j}\) we have \[\frac{|V(f)\cap V(g)|}{|V(f)|}\leq\frac{2}{q}+O(1/q^{2}),\] and \(1/(2/q+O(1/q^{2}))=q/2+O(1)\), so that \(V(f)\setminus\bigcup_{j=1}^{l-1}V(g_{j})\neq\emptyset\). The proof is complete. ## 9. Outlook and further comments We note that the mixed identities \(w\in G*\mathbf{F}_{r}\) considered herein for \(G=S_{n}\) and \(A_{n}\), and most other groups covered in this article, are _singular_, i.e. they lie in the kernel of the augmentation map \(\varepsilon\colon G*\mathbf{F}_{r}\to\mathbf{F}_{r}\) which fixes \(\mathbf{F}_{r}\) element-wise and maps \(G\ni g\mapsto 1_{\mathbf{F}_{r}}\), following the terminology introduced in [14], their _content_ is trivial. By Theorem 1 in [22] the former is necessary for \(S_{n}\), as, by this theorem, there are no non-singular identities of bounded length. We will address this question in forthcoming work for quasi-simple groups of Lie type, [2]. Let us come back to the case \(\mathrm{P}\Omega^{\circ}_{2m-1}(q)\), which we cover for even \(m\) only when \(q\equiv 1\ \mathrm{mod}\ 4\). The case \(q\equiv 3\ \mathrm{mod}\ 4\) is rather peculiar. It seems plausible and likely that there is no mixed identity of bounded length in this case, even though the almost simple group \(\mathrm{PSO}^{\circ}_{2m-1}(q)\) including also the elements of non-trivial spinor norm does satisfy a mixed identity of bounded length, see Remark 7.1. This shows even more drastically then for \(\mathrm{PSL}_{n}(q)\) that passage to an almost simple group might change the asymptotics of the length of shortest mixed identities. In a forthcoming work, we plan to address the remaining families of simple groups of Lie type of bounded rank.
2307.04408
TIM: Teaching Large Language Models to Translate with Comparison
Open-sourced large language models (LLMs) have demonstrated remarkable efficacy in various tasks with instruction tuning. However, these models can sometimes struggle with tasks that require more specialized knowledge such as translation. One possible reason for such a deficiency is that instruction tuning aims to generate fluent and coherent text that continues from a given instruction without being constrained by any task-specific requirements. Moreover, it can be more challenging to tune smaller LLMs with lower-quality training data. To address this issue, we propose a novel framework using examples in comparison to teach LLMs to learn translation. Our approach involves presenting the model with examples of correct and incorrect translations and using a preference loss to guide the model's learning. We evaluate our method on WMT2022 test sets and show that it outperforms existing methods. Our findings offer a new perspective on fine-tuning LLMs for translation tasks and provide a promising solution for generating high-quality translations. Please refer to Github for more details: https://github.com/lemon0830/TIM.
Jiali Zeng, Fandong Meng, Yongjing Yin, Jie Zhou
2023-07-10T08:15:40Z
http://arxiv.org/abs/2307.04408v3
# TIM: Teaching Large Language Models to Translate with Comparison ###### Abstract Open-sourced large language models (LLMs) have demonstrated remarkable efficacy in various tasks with instruction tuning. However, these models can sometimes struggle with tasks that require more specialized knowledge such as translation. One possible reason for such deficiency is that instruction tuning aims to generate fluent and coherent text that continues from a given instruction without being constrained by any task-specific requirements. Moreover, it can be more challenging to tune smaller LLMs with lower-quality training data. To address this issue, we propose a novel framework using examples in comparison to teach LLMs to learn translation. Our approach involves output comparison and preference comparison, presenting the model with carefully designed examples of correct and incorrect translations and an additional preference loss for better regularization. Empirical evaluation on four language directions of WMT2022 and FLORES-200 benchmarks shows the superiority of our proposed method over existing methods. Our findings offer a new perspective on fine-tuning LLMs for translation tasks and provide a promising solution for generating high-quality translations. Please refer to Github for more details: [https://github.com/lemon0830/TIM](https://github.com/lemon0830/TIM). ## 1 Introduction Generative large language models have shown remarkable performance in various NLP tasks Brown et al. (2020); Ouyang et al. (2022). For machine translation, the GPT models achieve very competitive translation quality, especially for high-resource languages Hendy et al. (2023); Zhu et al. (2023), which opens up new possibilities for building more effective translation systems. It is impractical to deploy such large models for the translation task only, and using or tuning open-sourced generative language models has become an attractive research direction. In this regard, researchers have explored strategies for example selection and instruction design through In-Context Learning (ICL) Lin et al. (2022); Agrawal et al. (2022). However, evaluations of open-sourced LLMs show that they do not perform as well as strong multilingual supervised baselines in most translation directions Zhu et al. (2023). Additionally, ICL can increase decoding latency due to longer context. Based on these observations, researchers suggest tuning relatively small LLMs for translation with a few high-quality supervised instructions Zhu et al. (2023); Hendy et al. (2023); Jiao et al. (2023). Instruction tuning has been shown to be an efficient method for making LLMs better aligned to the task descriptions preferred by humans Stiennon et al. (2020); Ouyang et al. (2022); Chung et al. (2022); Wang et al. (2023). The only requirement is to collect task-specific data, on which LLMs will be fine-tuned with the language modeling loss. However, optimizing for simple next-token prediction loss will cause models to overlook context information, especially for low-capacity models. It is serious for the tasks in which the specialized knowledge in context is necessary for task completion (e.g., translation), and ignoring such knowledge on translation can lead to inadequacy and hallucination. Therefore, there is a need to investigate the limitations of LLMs and explore methods for improving their performance in specialized tasks. 
In this paper, we propose to teach the language models to learn translation with examples in comparison, named **TIM**, aiming to make full use of a small amount of high-quality translation data. Based on the training data, we further construct two kinds of comparisons: output comparison and preference comparison. Output comparison is used to learn responses to different instructions for the same input. Preference comparison is used to maximize the gap between correct and incorrect translations. Specifically, to help identify specific areas where the model may be making errors, we introduce an additional preference loss, which is originally used to learn reward models (Stiennon et al., 2020), as regularization to penalize unexpected outputs. We evaluate our proposed method on WMT22 and FLORES-200 test sets (EN\(\Leftrightarrow\)DE, EN\(\Leftrightarrow\)ZH), and the improvement over the baselines shows the effectiveness of our method. Our model shows better zero-shot translation performance and stability in prompt choice. Moreover, the performance of the models tuned by our TIM increases as the model size increases, with the improvement being more pronounced in the case of smaller models. In particular, the tuned LLaMa-2-13B (Touvron et al., 2023) achieves top 1 on quality estimation without references in the EN\(\Leftrightarrow\)DE, outperforming the dedicated models for quality estimation. ## 2 Method In brief, we tune generative language models to learn translation with output comparison and preference comparison in the instruction tuning framework. First, we will give a formal introduction to instruction tuning. Then, we present the detail of two kinds of comparisons of our method consisting of output comparison and preference comparison, and an additional preference learning loss. Finally, we explore different approaches to parameter tuning. ### Background: Instruction Tuning Instruction tuning aims to enhance the capacity of language models in handling instructions in natural languages. The concept is that the models can be trained to execute tasks specified in instructions, which would enable them to comprehend the tasks and even process tasks not encountered before. Generally, each instance of instruction-following data starts with "instructions" \(c\) describing a task, and a corresponding output \(y\) indicating the answer to the instruction. The "input" \(x\), the optional context or input for the task, is not necessary but is required for the machine translation task. Given the instruction data, the language models are optimized by minimizing the negative log-likelihood of the output \(y\): \[L_{lm}=-\frac{1}{|y|}\sum_{i}^{|y|}\text{logp}(y_{i}|c,x). \tag{1}\] Notably, the objective is the same as that used in pretraining. ### Output Comparison An important ingredient of our method is the construction of samples used to provide comparison signals for model learning. In addition to regular translation data, we construct data used for comparison by introducing sequence ordering, dictionary information, or translation errors, which are shown in Figure 1. Order-guided data.We introduce a variation of the translation process, and we reverse the translations of certain examples and provide an accompanying note indicating the reverse generation order (Order-guided Data in Figure 1). By training on these reverse sentences, the model gains the ability to capture dependencies that may not be evident in the original sentence order. 
This helps improve the model's comprehension of instructions and enhances its capability to generate coherent and contextually appropriate translations. Dictionary-guided Data.To make the model aware of the underlying reasons for different translations, we inform the model of different correct outputs with the help of bilingual dictionaries1. Instead of synthesizing the comparison data, we utilize an existing multi-reference corpus. By looking up the bilingual dictionary, we establish word alignments between a single source sentence and multiple references. The word alignments serve as annotations appended to the input. Illustrated in Figure 1, the notes contain distinct word alignments, and the outputs of **Example 1** and **Example 2** differ despite the same input sentences. Footnote 1: [https://github.com/facebookresearch/MUSE](https://github.com/facebookresearch/MUSE) Error-guided Data.We introduce translations with error annotations inspired by Jiao et al. (2023). For correct input-output pairs, the added notes indicate no mistakes in the references, while the notes of incorrect input-output pairs indicate detailed translation errors. As shown in the left part of Figure 1, the translation of **Example 1** is correct while the translation of **Example 2** has a major locale convention format mistake, corresponding to the added note. ### Preference Comparison In preference comparison, we assign contrastive outputs for each type of data, denoted as _Bad Out put_, and train the model with an extra preference loss. As illustrated in Figure 3, we propose two types of the _Bad Output_: 1) **Noisy-based**, in which we intentionally introduce noise into the original output by randomly deleting words or swapping the positions of two words; 2) **LM-based**, in which we fine-tune a relatively small LM (e.g., BLOOM-1b7) and generate output using a simple sampling strategy for each instance. With examples of correct and incorrect translations, the model are optimized to distinguish higher-quality translations, which can reduce the resource requirement for training. One way to utilize the contrastive outputs is to train a reward model and further fine-tune language models with the reward model using reinforcement learning, i.e., RLHF (Stiennon et al., 2020; Ouyang et al., 2022). Instead of using such a complex two-stage training process, we directly tune the language model using a token-level preference loss: \[L_{pl}=-\frac{1}{N-I}\sum_{i=I}^{N}max(0,-r_{\theta}(h_{i}^{(0)})+r_{\theta}(h_ {i}^{(1)})+1.0), \tag{2}\] where \(y_{0}\) and \(y_{1}\) denote the preferred output and comparison output, \(I\) is the index starting from the segments different between \(y_{0}\) and \(y_{1}\), \(N\) is the maximum length of two sequences, and \(h_{i}\) is the hidden state of the \(i\)-th token, respectively. Specifically, \(r_{\theta}\) is a linear head that takes the hidden state of the top layer and returns a scalar. The overall loss function for tuning the model is \[L=L_{lm}+\lambda L_{pl}, \tag{3}\] where \(\lambda\) is a coefficient of the preference learning loss. We simply set \(\lambda\) as 1.0 in this paper. Figure 1: **Illustration of three types of output comparison. The text in blue highlights the difference between the added notes and the resulting difference due to these specific notes.** Figure 3: **An example of contrastive outputs for preference Comparison. 
The “Bad Output” denotes the noisy translation used to be compared with the “Output”.** Figure 2: **Overall framework of our proposed TIM. Given the contrastive outputs of each instance, we optimize the LLMs with the general language modelling loss and the token-level preference loss.** ### Tuning Strategies In this paper, we adopt three different strategies for fine-tuning, listed in descending order from the number of trainable parameters. LoRA: Tuning with Low-rank Matrices.LoRA Hu et al. (2022) is a technique that reduces the number of trainable parameters by introducing new low-rank matrices to any module in the model while keeping the original weights frozen. This results in a significant reduction in storage requirements and efficient task-switching during deployment without impacting inference latency. FixEmb: Tuning with Embedding Fixed.LoRA-based tuning has a limitation where the limited number of trainable parameters may restrict its expressiveness. A simple solution to overcome this is to fine tune the parameters of the model layers while keep the embeddings fixed. This allows the model to gain more flexibility in adjusting its performance without compromising the semantic information captured by the embeddings. Full: Tuning Full Parameters.Full parameter tuning has recently been demonstrated more effective than LORA. The major limitation of full parameter fine-tuning is the memory footprint, but it is not serious for 7B models and little data. ## 3 Experiments In this section, we begin by conducting preliminary experiments to investigate the impact of inference strategies and the resilience of our TIM under varying instructions. Subsequently, we evaluate TIM on the WMT and FLORES-200 dev-test tasks in four language directions. ### Settings To avoid data leakage Garcia et al. (2023), we use the latest WMT22 test set and FLORES-200 dev-test. * WMT22 test sets. We use the test sets from WMT22 competition2, which consist of more recent content from diverse domains such as news, social, e-commerce, and conversational domains. The test sets comprise 1984, 2037, 1875, and 2037 samples for the German-to-English (De\(\Rightarrow\)En), English-to-German (En\(\Rightarrow\)De), Chinese-to-English (Zh\(\Rightarrow\)En), and English-to-Chinese (En\(\Rightarrow\)Zh) language pairs, respectively. Footnote 2: [https://www.statmt.org/wmt22/translation-task.html](https://www.statmt.org/wmt22/translation-task.html) * FLORES-200 dev-test. We use the dev-test split from the FLORES-200 benchmarks3. This dataset includes 1,012 sentences extracted from English Wikipedia, covering a broad range of topics and domains. These sentences have been carefully checked by professional translators into approximately 200 languages. Footnote 3: [https://github.com/facebookresearch/flores/blob/main/flores200](https://github.com/facebookresearch/flores/blob/main/flores200) Footnote 4: [https://github.com/mipost/saccrbelu](https://github.com/mipost/saccrbelu) To ensure a fair and consistent evaluation, we fine-tuned all models for 1 epoch with a batch size of 128, while imposing a maximum text length of 512. The learning rates are 2e-5 for FixEmb and Full, and 3e-4 for LoRA, respectively. The weight decay parameter is set to 0.0. We conducted fine-tuning on eight NVIDIA A100 GPUs, utilizing the Deep-Speed ZeRO stage3 for model parallelism. The results of the final checkpoints are reported. For automatic evaluations, we utilize two widely adopted metrics: BLEU Papineni et al. 
(2002) implemented in SacreBLEU4, and COMET5 with _Unbabel/wmt22-comet-da_. BLEU is driven by n-gram similarity, while COMET relies on cross-lingual pre-trained models. Footnote 5: [https://github.com/Unbabel/COMET](https://github.com/Unbabel/COMET) Footnote 6: [https://huggingface.co/fugiscence/bloomz-7b1-mt](https://huggingface.co/fugiscence/bloomz-7b1-mt) ### Baselines We leverage **BLOOMZ-7b-mt6** and **LLaMA-2-7b7**Touvron et al. (2023b) as the backbones and evaluate the following baselines: Footnote 6: [https://huggingface.co/dataetsets/tatsu-lab/alpaca](https://huggingface.co/dataetsets/tatsu-lab/alpaca) Alpaca-(*)is a reproduction of the Alpaca model fine-tuned solely on the alpaca multi-task dataset8. Footnote 6: The results in Zhang et al. (2023) are directly reported. MT-(*)is fine-tuned on the human-written validation data from previous WMT competitions, i.e., the newstest2017-2021 of Chinese\(\Leftrightarrow\)English and German\(\Leftrightarrow\)English, which consist of 45,433 sentence pairs for all four directions. Besides, we report the results of WMT22 winners, and NLLB-3.3B Costa-jussa et al. (2022). The latter is a multilingual translation model trained on a massive parallel corpus of over 200 languages9. We use the notation **TIM-(*)** to refer to LLMs fine-tuned using our proposed TIM approach. The training data for TIM-(*) includes the alpaca dataset, the WMT translation data as well as data described in Section 2.2. In practice, to construct the order-guided data, we utilize the WMT translation data. Besides, we rely on the annotated data of newstest2020 Zh\(\Rightarrow\)En and En\(\Rightarrow\)De in the Multidimensional Quality Metrics (MQM) datasets10. For a source sentence and 10 submissions, annotators provided labels indicating whether each submission contains an "error" or "no error". We extract the "no error" submissions for each source sentence to create a multi-reference corpus, which serves as the basis for the dictionary-guided data. We incorporate both the "no error" and "error" submissions for error-guided data. Footnote 10: [https://github.com/google/wmt-mqm-human-evaluation](https://github.com/google/wmt-mqm-human-evaluation) ### Pre-Experiments Here, we investigate the effect of inference strategies and instructions. We fine-tune the **BLOOMZ-7b-mt** with our TIM and conduct evaluations on the WMT22 test sets. Effect of Inference Strategies.We compare the performance of sampling and beam search, and the two search algorithms are combined with the notes in our dictionary-guided and error-guided data. Table 1 presents the experimental results. First, we observe that instructing the model to generate translations without errors does not result in a significant performance gain. We speculate that the preference loss function implicitly allows the LLMs to learn to generate error-free translations, making the additional instructions unnecessary. Secondly, previous studies have shown that introducing alignment information from dictionaries can improve translation performance Lu et al. (2023); Zheng et al. (2021); Zhang and Zong (2016). Surprisingly, adding alignment notes harms the performance, and this may be due to that most of the words in the dictionaries we use are common words, or that the wording styles of the dictionaries differ greatly from the reference. How to better collect and use dictionary for machine translation is left for future work. 
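For reference, the token-level preference loss of Eq. (2) and the combined objective of Eq. (3) in Section 2.3 can be sketched in PyTorch as below. This is a minimal illustration rather than the released implementation: the tensor names (`h_good`, `h_bad`), the way the start index \(I\) is obtained, and the `reward_head` module are assumptions, and the loss is written as a standard positive hinge over the differing token positions.

```python
import torch.nn.functional as F

def token_preference_loss(h_good, h_bad, start_idx, reward_head, margin=1.0):
    """Token-level preference loss over the positions where the preferred
    output and the comparison output differ (Eq. (2)).

    h_good, h_bad: (seq_len, hidden_dim) top-layer hidden states of the
        preferred and comparison outputs, padded to the same length N.
    start_idx: index I of the first segment that differs between the two outputs.
    reward_head: a linear head (hidden_dim -> 1) returning a scalar r_theta per token.
    """
    r_good = reward_head(h_good[start_idx:]).squeeze(-1)  # r_theta(h_i^(0))
    r_bad = reward_head(h_bad[start_idx:]).squeeze(-1)    # r_theta(h_i^(1))
    # Hinge with margin 1.0: penalize whenever the comparison output scores
    # within the margin of (or above) the preferred output.
    return F.relu(r_bad - r_good + margin).mean()

# Combined objective of Eq. (3), with the coefficient lambda set to 1.0:
# loss = lm_loss + 1.0 * token_preference_loss(h_good, h_bad, start_idx, reward_head)
```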
Effect of Instructions.In human interaction scenarios, instructions provided by users may vary in styles and forms, and thus it is essential to evaluate the robustness of TIM under different instructions. We use ten distinct instructions and the result in Figure 4 indicates that our TIM achieves consistent performance across all the tested instructions. ### Main Results Based on the observation in Section 3.3, we use a simple instruction "Translate from {src} to {tgt}.n{input}" and beam search with a beam size of 4 for all models during inference. Table 2 presents the translation performance on the WMT22 test sets and FLORES-200 dev-test. We have the following observations: First, we observe significant performance fluctuations across different language models, training data, and language pairs for (*)-_LoRA_ and (*)-_Full_. For example, with **BLOOMZ-7b-mt** as the backbone, _Alpaca-LoRA_ outperforms _Alpaca-Full_ in most language pairs, while _MT-LoRA_ underperforms _MT-Full_. Our speculation is that LoRA can prevent LLMs from overfitting but is limited in the number of trainable parameters. In contrast, the experiment result of (*)-_FixEmb_ indicates that fine-tuning with fixed embedding parameters can better leverage the generalization of LLMs and prevent overfitting. Second, training LLMs with comparison can \begin{table} \begin{tabular}{l c c c c} \hline \hline **Method** & **Zh\(\Rightarrow\)En** & **En\(\Rightarrow\)Zh** & **De\(\Rightarrow\)En** & **En\(\Rightarrow\)De** \\ \hline Sample & 22.75 & 34.98 & 24.72 & 19.09 \\ w/ _No Err._ & 23.10 & 36.37 & 25.20 & 19.34 \\ w/ _Dict._ & 21.28 & 34.55 & 24.37 & 18.19 \\ Beam-4 & 24.51 & 37.83 & 26.12 & 20.90 \\ w/ _No Err._ & 24.26 & **38.17** & **26.24** & **21.10** \\ w/ _Dict._ & **24.55** & 36.32 & 26.16 & 20.19 \\ \hline \hline \end{tabular} \end{table} Table 1: **Effect of inference strategies.** We fine-tune BLOOMZ-7b-mt with our TIM and report BLEU scores on four language pairs. Figure 4: **Effect of instructions.** We fine-tune BLOOMZ-7b-mt with our TIM and report BLEU scores of 10 different instructions on four language pairs. further enhance the understanding of the translation task. Compared to _Alpaca_-(*), _MT_-(*) models, _TIM_-(*) exhibits notably better results on both the WMT22 test sets and FLORES-200 dev-test. ## 4 Analysis ### Effect of Model Sizes In this section, we present a comparison between TIM and instruction tuning across different model sizes on the WMT22 test set. Figure 5 illustrates the consistent improvements achieved by TIM, indicating its generalizability. Besides, as the foundation LLM's size increases, the translation performance of the LLMs fine-tuned with TIM improves. In particular, the improvement is more significant when the model size is smaller. This observation supports our hypothesis that the smaller model has weaker ability to comprehend instructions, and it may not effectively learn task patterns with simple instruction tuning especially using a small amount of training data, By contrast, training LLMs with comparison help them to better identify the task's requirements and better leverage internal cross-lingual knowledge. 
### Zero-shot Translation To evaluate TIM's performance in translation directions never seen previously, i.e., zero-shot multilingual capability, we conduct experiments on the WMT22 multilingual-to-English translation benchmark which encompasses 4 translation directions: Czech-to-English (cs\(\Rightarrow\)en), Japanese-to-English (ja\(\Rightarrow\)en), Russian-to-English (ru\(\Rightarrow\)en), and Ukrainian-to-English (uk\(\Rightarrow\)en). We compare our method with the following open-sourced models: Alpaca-7b11, Vicuna-13b12, BayLing-7b, -13b (Zhang et al., 2023), NLLB-3.3b (Costa-jussa et al., 2022), ChatGPT, and GPT4 (OpenAI, 2023). We report the results of the above models in Zhang et al. (2023). Due to the better performance of LLaMA-2 in multilingual-to-English, we report \begin{table} \begin{tabular}{l c c c c c c c c} \hline \hline \multirow{2}{*}{**Model**} & \multicolumn{2}{c}{**Zh\(\Rightarrow\)En**} & \multicolumn{2}{c}{**En\(\Rightarrow\)Zh**} & \multicolumn{2}{c}{**De\(\Rightarrow\)En**} & \multicolumn{2}{c}{**En\(\Rightarrow\)De**} \\ & BLEU & COMET & BLEU & COMET & BLEU & COMET & BLEU & COMET \\ \hline \multicolumn{8}{c}{**Test:**_WMT22 Test Sets_**} & **Backbone:**_BLOOMZ-7b-mt_ \\ WMT22 Winners\({}^{*}\) & 33.5 & 81.0 & 54.3 & 86.8 & 33.7 & 85.0 & 38.4 & 87.4 \\ NLLB-3.3b\({}^{*}\) & 21.07 & 76.92 & 32.52 & 81.56 & 29.54 & 83.42 & 33.98 & 86.23 \\ \hline Alpaca-LoRA & 12.61 & 76.36 & 24.30 & 81.18 & 16.04 & 71.17 & 8.05 & 57.54 \\ Alpaca-Full & 13.01 & 75.95 & 20.65 & 78.69 & 16.98 & 72.46 & 2.28 & 36.91 \\ MT-LoRA & 21.47 & 79.20 & 35.22 & 85.00 & 23.59 & 76.91 & 15.74 & 66.42 \\ MT-FixEmb & 23.08 & 78.95 & 37.09 & 85.02 & 24.99 & 78.19 & 19.05 & 71.89 \\ MT-Full & 22.81 & 79.15 & 34.49 & 84.26 & 24.72 & 77.84 & 18.79 & 71.65 \\ \hline \multicolumn{8}{c}{_w/ Noisy-based Bad Output_} \\ TIM-LoRA & 22.11 & 78.89 & 35.70 & 84.90 & 23.55 & 76.70 & 16.46 & 66.80 \\ TIM-FixEmb & 24.11 & 79.70 & 37.46 & **85.29** & **26.20** & 78.79 & **20.97** & 74.63 \\ TIM-Full & 23.49 & 79.17 & 34.70 & 84.26 & 25.11 & 78.40 & 20.99 & 74.12 \\ \multicolumn{8}{c}{_w/ LM-based Bad Output_} \\ TIM-LoRA & 22.22 & 78.81 & 35.71 & 84.67 & 23.82 & 76.57 & 16.62 & 66.67 \\ TIM-FixEmb & **24.51** & **79.71** & **37.83** & 85.10 & 26.12 & **78.94** & 20.90 & **74.91** \\ TIM-Full & 23.81 & 79.33 & 35.57 & 84.75 & 25.43 & 78.19 & 20.74 & 74.24 \\ \hline \hline \multicolumn{8}{c}{**Test:**_FLORES-200_**Backbone:**_LLaMA-2-7b_} \\ MT-LoRA & 23.55 & 78.85 & 30.18 & 81.02 & 30.02 & 83.77 & 26.89 & 83.23 \\ MT-FixEmb & 24.36 & 79.00 & 33.34 & 83.27 & 30.47 & 83.98 & 27.85 & 83.62 \\ MT-Full & 24.04 & 78.85 & 32.86 & 83.17 & 29.97 & 83.77 & 27.20 & 83.23 \\ \hline \multicolumn{8}{c}{_w/ Noisy-based Bad Output_} \\ TIM-LoRA & 26.00 & 85.75 & 32.90 & 84.65 & 41.77 & 88.69 & 32.33 & 85.91 \\ TIM-FixEmb & **26.47** & 85.64 & 34.84 & **85.47** & 42.24 & **88.95** & 33.01 & **86.32** \\ TIM-Full & 26.30 & 85.71 & 34.46 & 85.23 & 42.01 & 88.68 & 32.28 & 86.05 \\ \multicolumn{8}{c}{_w/ LM-based Bad Output_} \\ TIM-LoRA & 25.92 & 85.80 & 32.75 & 84.18 & 41.90 & 88.77 & 32.17 & 86.05 \\ TIM-FixEmb & 26.13 & 85.61 & **35.15** & 85.27 & **42.91** & 88.84 & **33.32** & 86.20 \\ TIM-Full & 26.25 & **85.81** & 34.53 & 85.18 & 41.96 & 88.82 & 32.79 & 86.05 \\ \hline \hline \end{tabular} \end{table} Table 2: **Evaluation results of different LLMs on 4 language pairs from WMT22 test sets and Flores devsets. 
Methods with * denote that we directly report the scores from the corresponding paper, and others are from our implementation.** the performance of fine-tuned LLaMA-2-7b and LLaMA-2-13b with our TIM, respectively. As depicted in Figure 6, _TIM-(*)_ (i.e., TIM-FixEmb-7b, TIM-LoRA-13b, and TIM-FixEmb-13b) exhibit good zero-shot multilingual capability on these translation directions. Compared to _Alpaca-7b_, _Vicuna-13B_, _BayLing-7b_, and _BayLing-13b_, _TIM-(*)_ exhibits superior translation ability, highlighting that aligning training languages strengthens the alignment of other languages as a by-product. Additionally, _TIM-(*)_ obtains comparative performance with _NLLB-3.3B_ in most language pairs, and significantly better on Ja\(\Rightarrow\)En. These results demonstrate that adding carefully constructed translation data, combined with an effective training strategy such as our proposed TIM, can enhance the overall task capability of LLMs. ### Ablation Study To analyze the impact of different components of TIM, we investigate variants of _TIM-FixEmb_ taking **BLOOMZ-7b-mt** as the backbone: _MT w/_ (*), where we add the (*)-guided comparisons in training data; _TIM[*]_, where we use _noisy-based_ or _LM-based_ bad output for preference comparison; _TIM w/o \(L_{pl}\)_, where we remove \(\mathcal{L}_{pl}\)_; and _TIM w/o OutCom_, where we remove output comparison. As a supplement to BLEU, we analyze the phenomenon of hallucination on the Zh\(\Rightarrow\)En test set using the hallucination detector provided by Zhou et al. (2021). The BLEU scores, sentence-level and token-level hallucination scores are reported in Table 3, respectively. The experimental results of 1, 2, 3, and 4 indicate a noteworthy reduction in translation hallucination when output comparison is incorporated into language models. Particularly, the inclusion Figure 5: **Effect of model sizes.** We present a comparison between TIM and instruction tuning across LLMs with different model sizes including BLOOM-1b7, BLOOM-3b, BLOOMZ-7b-mt, LLaMA-2-7b, and LLaMA-2-13b. \begin{table} \begin{tabular}{l|l r r r r} \hline \hline **Id** & **Method** & **BLEU\(\uparrow\)** & **S-Hal\(\downarrow\)** & **T-Hal\(\downarrow\)** & \(\Delta\%\)**T-Hal.** \\ \hline 0 & Alpaca & 10.96 & 73.87 & 20.36 & - \\ 1 & MT & 23.08 & 68.21 & 10.58 & -9.78\% \\ 2 & _w/ Rev_ & 23.41 & 67.36 & 9.62 & -10.74\% \\ 3 & _w/ Dict_ & 23.73 & 66.77 & 8.93 & -11.43\% \\ 4 & _w/ Error_ & 23.94 & 66.61 & 9.59 & -10.77\% \\ 5 & TIM[Noisy] & 24.11 & 67.31 & 9.39 & -10.97\% \\ 6 & TIM[LM] & **24.51** & **66.03** & **8.83** & **-11.53\%** \\ 7 & _w/o L\({}_{pl}\)_ & 23.76 & 68.00 & 9.53 & -10.83\% \\ 8 & _w/o OutCom_ & 23.21 & 67.46 & 9.69 & -10.67\% \\ \hline \hline \end{tabular} \end{table} Table 3: **Ablation study.** We fine-tune BLOOMZ-7b-mt with our TIM and report BLEU and hallucination scores on Zh\(\Rightarrow\)En. Figure 6: **Zero-shot translation.** We fine-tune LLaMA2 and compare our TIM-FixEmb-7b, TIM-LoRA-13b, and TIM-FixEmb-13b with the open-sourced models on WMT22 multilingual-to-English translation benchmark. of dictionary-guided data is crucial among various data types. This suggests that providing translation-related information and instructing the model to generate corresponding translations during training can promote the model to produce more faithful translations. Furthermore, the results of 1 and 8 indicate that LLMs can learn better translation output through preference comparison, even without the requirement of any output comparison data. 
Finally, although the performance of _TIM[Noisy]_ proved to be competitive with _TIM[LM]_ in terms of BLEU and COMET scores (Table 2), the results of 5 and 6 in Table 3 indicate that incorporating bad examples based on actual LM errors can provide more meaningful training signals compared to artificial noisy data.

### MT Metrics Evaluation

In principle, the preference scores reflect the quality of the model output. To examine how well they reflect quality assessment, we use MTME13 to evaluate the performance of our preference scores on standard test sets from the WMT22 Metrics Shared Tasks in De\(\Rightarrow\)En and En\(\Rightarrow\)De. We compare ours with several reference-free metrics (i.e., COMET-QE (Rei et al., 2021), COMETKiwi (Rei et al., 2022), UniTE-src (Wan et al., 2022), and HWTSC-Teacher-SIM (Liu et al., 2022)) and reference-based metrics (i.e., metricx_xxl_MQM_2020 (Freitag et al., 2022), BLEURT-20 (Sellam et al., 2020), COMET-22 (Rei et al., 2022), BLEU (Papineni et al., 2002), and chrF (Popovic, 2015)).

Footnote 13: [https://github.com/google-research/mt-metrics-eval](https://github.com/google-research/mt-metrics-eval)

For each pair consisting of a source sentence and the corresponding hypothesis, we wrap them with our **Training Prompt** and use the score of the last token in the hypothesis as the final score. Table 4 shows the system-level accuracy (**Acc**) and Pearson correlations (**PCCs**). In particular, our _TIM-LLaMA-13b_ and _TIM-BLOOMZ-7b_ outperform all the reference-free metrics and achieve a better Pearson correlation on De\(\Rightarrow\)En than the others. This demonstrates that the LLM is implicitly a reward model which can be jointly optimized during instruction tuning (Rafailov et al., 2023).

## 5 Related Work

Research on machine translation based on Large Language Models (LLMs) can be divided into two categories: LLMs as interface and instruction tuning. Studies using LLMs as an interface focus on empirical analysis. For example, Hendy et al. (2023) evaluate ChatGPT, GPT3.5 (text-davinci-003), and text-davinci-002 in eighteen different translation directions involving high and low resource languages. Zhu et al. (2023) further evaluate four popular LLMs (XGLM, BLOOMZ, OPT and ChatGPT) on 202 directions and 102 languages, and compare them with strong supervised baselines, which provides a more comprehensive benchmark result. Much effort has also been put into investigating exemplar selection strategies for in-context learning (Lin et al., 2022; Agrawal et al., 2022). Another line of work introduces knowledge, such as word alignments extracted from a dictionary, to LLMs for better translation (Lu et al., 2023). Tuning smaller LLMs (e.g., 7B) for translation tasks is a promising direction since they are better at English than supervised translation models. However, even for directions from other languages to English, the gap between language models fine-tuned with translation data and supervised systems is still evident (Jiao et al., 2023; Zhang et al., 2023). Different from these works, we introduce output comparison and preference comparison data and present a preference regularization to alleviate hallucination and help LLMs learn translation better.
\begin{table} \begin{tabular}{l c c c} \hline \hline \multirow{2}{*}{**Method**} & \multirow{2}{*}{**Acc.**} & \multicolumn{2}{c}{**PCCs.**} \\ & & **De\(\Rightarrow\)En** & **En\(\Rightarrow\)De** \\ \hline metricx\_xxl\_MQM\_2020** & **74.56** & 48.98 & **84.69** \\ BLEURT-20 & 73.68 & 45.84 & 71.89 \\ **TIM-LLMa-13b\({}^{*}\)** & 72.81 & 50.37 & 62.67 \\ \hline COMET-22 & 72.81 & 44.63 & 77.06 \\ BERTScore & 71.05 & 43.96 & 42.82 \\ **TIM-BLOOMZ-7b\({}^{*}\)** & 69.30 & 62.14 & 42.59 \\ \hline COMET-QE\({}^{*}\) & 69.30 & 44.32 & 50.21 \\ COMETKiwi\({}^{*}\) & 68.42 & 40.95 & 67.35 \\ MS-COMET-QE-22\({}^{*}\) & 68.42 & 39.49 & 53.92 \\ BLEU & 67.54 & 35.24 & 17.88 \\ chrF & 65.79 & 35.45 & 34.63 \\ UniTE-src\({}^{*}\) & 64.91 & 40.20 & 50.91 \\ HWTSC-Teacher-Sim\({}^{*}\) & 60.52 & 32.17 & 38.53 \\ \hline \hline \end{tabular} \end{table} Table 4: **Pearson correlation of all metrics with system-level MQM scores for De\(\Leftrightarrow\)En**. Rows are sorted by the system-level pairwise accuracy across the two language pairs. The best results are indicated in bold. Reference-free metrics are indicated using an asterisk. Conclusion We propose TIM, a training method that fine-tunes open-source large language models for the translation task with the comparison of translations. Experiments and analyses validate the effectiveness of TIM in terms of translation quality and zero-shot translation ability. For the reference-free MT metrics evaluation, _TIM-LLaMA-13b_ even outperforms representative metrics like COMET and BLEURT in De\(\Rightarrow\)En, showing that our method can well learn the translation and evaluation jointly. Future work can explore the use of more diverse references for output comparison, and more advanced preference learning objectives.
2304.05246
OpenAL: Evaluation and Interpretation of Active Learning Strategies
Despite the vast body of literature on Active Learning (AL), there is no comprehensive and open benchmark allowing for efficient and simple comparison of proposed samplers. Additionally, the variability in experimental settings across the literature makes it difficult to choose a sampling strategy, which is critical due to the one-off nature of AL experiments. To address those limitations, we introduce OpenAL, a flexible and open-source framework to easily run and compare sampling AL strategies on a collection of realistic tasks. The proposed benchmark is augmented with interpretability metrics and statistical analysis methods to understand when and why some samplers outperform others. Last but not least, practitioners can easily extend the benchmark by submitting their own AL samplers.
W. Jonas, A. Abraham, L. Dreyfus-Schmidt
2023-04-11T14:35:14Z
http://arxiv.org/abs/2304.05246v1
# OpenAL: Evaluation and Interpretation of Active Learning Strategies ###### Abstract Despite the vast body of literature on Active Learning (AL), there is no comprehensive and open benchmark allowing for efficient and simple comparison of proposed samplers. Additionally, the variability in experimental settings across the literature makes it difficult to choose a sampling strategy, which is critical due to the one-off nature of AL experiments. To address those limitations, we introduce OpenAL, a flexible and open-source framework to easily run and compare sampling AL strategies on a collection of realistic tasks. The proposed benchmark is augmented with interpretability metrics and statistical analysis methods to understand when and why some samplers outperform others. Last but not least, practitioners can easily extend the benchmark by submitting their own AL samplers. ## 1 Introduction Active Learning (AL) has proved its worth in practice to optimize labeling tasks [15]. However, it remains challenging to apply in practice as its benefit can vary significantly depending on the task [12]. The optimal sampler may depend on several experimental hyperparameters, such as the initial labeled set size, the batch size, the ML model used, or the number of iterations, among others. Those hyperparameters values vary substantially between studies, even for similar tasks, as shown in Table 1. This diversity in experimental settings impairs reproducibility and makes methods comparisons arduous. Existing AL benchmarks have tackled this variability by fixing some parameters arbitrarily or targeting specific AL problems, such as using only Logistic Regression as a base learner [22], outlier detection [19], or structural reliability [13]. But comparing sampling strategies reliably requires to repeat the experiments several times [9] using various tasks and models [14]. OpenAL follows those best practices and encompasses various realistic tasks, models, and use cases. We designed them as close as possible to real tasks. We address the following caveats: **Initialization induced variability.** It has been proven that the variance in performance induced by the initial set of selected samples can be greater than the difference between sampling strategies [9]. We propose to use a 10-fold stratified shuffle split to get enough significance when comparing methods [5]. **Plausibility of the experimental setting.** Research task settings must be well representative of real-life ones to be helpful. Experiments on CIFAR-10 in the literature often vary from 6 batches of 5% of the whole dataset to 3 batches of 10% [16]. According to earlier work on realistic applications of AL [21], it is usually used to reduce data labeling between 1% and 10%. OpenAL's default is to label 1% of the data in 10 iterations on tabular and image classification tasks. We kickstarted the image classification models using transfer learning or self-supervision, following the industry best practices. **Reproducibility.** Our framework is open source, and all experiments results are made available and can be easily run again. We provide the accuracies and other AL metrics for the most common AL samplers, along with all train, test, and initial batch indices used for those experiments. **Online evaluation of sampling strategies.** Research works rely on the area under the accuracy curve of a left-out test set to evaluate the performance of AL strategies. 
This testing set is not available in real experiments making it hard to trust their behavior online [10]. OpenAL logs unsupervised metrics to improve the offline strategies' interpretability and be able to interpret their behavior online [1]. We first start by describing the setup of our tasks, the model selection methodology, and the evaluation criteria for sampling strategies. Then we present the results of our experiments per strategy across all tasks. We finish by focusing on the metrics observed and how they explain the performance of some strategies. Finally we open new perspectives on AL experiments and how this benchmark could be useful and extended in the future. ## 2 Evaluation framework OpenAL features eleven classification tasks on tabular datasets and four on image datasets. Tabular datasets come from OpenML [20; 7] and must be plausible enough _i.e._ having at least 10000 samples to justify the cost of setting up an AL pipeline and being non-trivial, or not solvable easily with 1% randomly selected samples. We were left with 11 tasks, which we deemed sufficient to obtain reliable results to compare AL strategies. **Cross-validation.** Each task is repeated ten times with different test sets and batch initialization. We use a stratified shuffle split with 20% of the data dedicated to the test. This amount of repetition is said to provide enough significance for method comparison [9]. Our accuracy plots display confidence intervals of \(10^{th}\) to \(90^{th}\) quantiles over the ten folds. **Active learning experimental setting.** We chose experimental parameters to be as close as possible to industrial use cases. Each experiment starts with 0.1% randomly selected labeled data with at least one sample from each class. Nine iterations follow it with batch size 0.1% to end up with a total of 1% of the data labeled. We do not use a specific stopping criterion and stop the experiment when this labeling budget is exhausted. In most experiments, this budget allows the best AL method to reach a performance plateau, as shown in experiments in Section 4. OpenAL includes seminal uncertainty-based strategies [17] (Margin, Confidence, and Entropy), weighted \begin{table} \begin{tabular}{l l l l l} \hline \hline Paper & Dataset & Init size & Batch size & nb iterations \\ \hline Active Learning for convolutional & CIFAR 10 & 10\% & 10\% & 3 \\ neural networks: a core set & CIFAR 100 & 10\% & 10\% & 3 \\ approach [16] & SVHN & 1\% & 8\% then 43\% & 3 \\ \hline Deep batch active learning by diverse, & SVHN & 100 & 100 & 350 \\ uncertain gradient lower bounds [3] & OpenML \#156 & 100 & 1000 & 4 \\ & CIFAR 10 & 100 & 10000 & 4 \\ \hline Variational Adversarial Active & CIFAR 100 & 10\% & 5\% & 6 \\ Learning [18] & Caltech-256 & 10\% & 5\% & 6 \\ & ImageNet & 10\% & 5\% & 6 \\ \hline BatchBALD: Efficient and Diverse & MNIST & 10 & 10 & 25 \\ Batch Acquisition for Deep Bayesian & EMNIST & 10 & 10 & 25 \\ Active Learning [8] & CINIC-10 & 200 & 10 & 120 \\ \hline \hline \end{tabular} \end{table} Table 1: AL experiment parameters KMeans (WKMeans) [23], incremental weighted KMeans (IWKMeans) [2], and k-center greedy (Kcenter) [16]. Note that what most literature works call core-sets use k-center greedy because of the latter's high computational cost. We call it by its original name to avoid any confusion. Since KCenter relies on the weights of the penultimate layer of a neural network for its computation, we used the embedding method proposed in scikit-learn for embedding tree models. 
It vectorizes the data using a PCA computed on the activation of the tree leaves. **Selection of the best model.** Models are usually selected using cross-validation, which is tricky to perform in Active Learning where labeled data is scarce [11]. We expect the practitioners to have prior knowledge of which models could perform well for the task at hand. For tabular datasets and MNIST, we simulate this prior knowledge by doing model selection over the whole dataset using a 5-fold cross-validation. We consider a multi-layer perceptron and two tree-based models, Random Forest and Gradient Boosting Tree, as they are known to excel on tabular data. For CIFAR-10 and CIFAR-100, we use embeddings precomputed on ImageNet and finetune the last layer. For CIFAR-10 only, we also consider embeddings precomputed on unlabeled data using contrastive learning [4]. **Experiment caching for easy comparison.** All the benchmark elements are seeded, which guarantees reproducibility at the machine level. Because seeded number generation may change from one machine to another, we also provide the indices of all train and test indices used in our benchmark. Once a strategy has run, all its corresponding metrics results are cached and can be used for plotting or method comparison. Running a new strategy is as simple as taking the dedicated notebook, wrapping the strategy in our sampler formalism, and running it. Submitting the results can then be done through a GitHub pull request. ## 3 Software OpenAL is coded in Python and available through the GitHub platform1. We also provide documentation explaining how to install, use, and publish results using our framework2. The repository contains all results of previous experiences. Running the benchmark on all reference samplers or on a new one is as simple as 3 lines of code that are contained in the main_run.py file: Footnote 1: [https://github.com/dataiku-research/OpenAL](https://github.com/dataiku-research/OpenAL) initial_conditions = load_initial_conditions(dataset_id) experimental_parameters = load_experiment(dataset_id, initial_conditions) run(experimental_parameters, methods) All experiments are modular and split in blocks for easy running and customization. _Initial conditions_ contain the samples initially labeled and the number of folds. _Experimental parameters_ include the batch size and the number of iterations. The _run_ function runs the experiment and generates accuracy and metrics results in a dedicated folder that the user can submit through a pull request for validation. After replicating the results on our side, we will integrate this new sampler into OpenAL and share the results with the community. ## 4 Experiments and results The tasks included in OpenAL are listed in Table 2. We report accuracy and the following set of metrics measured during our experiments: **Agreement.** Agreement ratio between the inductive model and a 1-nearest-neighbor (1-NN) classifier trained on labeled data. We expect a high agreement to be correlated with good exploration. **Contradictions.** Ratio of test samples where the models at the previous iteration and current iteration disagree. It is an upper bound on accuracy change from one iteration to the other. **Hard exploration.** Ratio of test samples where the 1-NN of the previous iteration and current iteration disagree. **Top exploration.** Mean difference of the distance between test samples and their nearest neighbour in the labeled pool from one iteration to the next. 
**Violations.** This unsupervised metric measures how many data compliance rules computed on the test set are violated in the labeled dataset [6]. Conformance rules are computed by extracting eigenvectors on the reference dataset and setting conformance boundaries based on the standard deviation of the projected reference data. Sample conformance is given by the number of times its projections fall outside the conformance boundaries. Overall, the highest the violation, the more the labeled samples deviate from the test set. **Best active learning strategy.** WKMeans and IWKMeans have similar performances and dominate the benchmark in terms of accuracy on all tasks, as observed in Table 3 and in previous work [1, 2]. One notable difference is that IWKMeans has fewer violations than WKMeans which means that its training set is more representative of the test set, as observed in Figure 1. We expected this result as IWKMeans is designed to sample data more uniformly than WKMeans. If this does not impact accuracy, we could expect a different generalization power between the two models which advocates for adding a domain adaptation task in the future. **Uncertainty-based AL strategies.** As all uncertainty metrics have the same rank in binary classification, we resort to multi-class tasks to compare them. The only non-binary tabular classification task of our benchmark is #42803. We observe that Confidence and Entropy strategies perform poorly, even worse than random. Looking at the metrics, we notice that most of the two strategies' values do not stand out except for higher violations for Confidence and Entropy as seen in Figure 2. This means that the training set is very different from the test set, which may be due to those samplers focusing on noisy samples [2]. Margin reaffirms its dominance which explains why it is preferably used in many studies [23]. \begin{table} \begin{tabular}{l l l l l} \hline \hline Name & \#samples & \#classes & \#features & class balance \\ \hline \#1461 Bank-marketing & 45221 & 2 & 7/9 & 0.88 / 0.12 \\ \#1471 Egg-eye-state & 14980 & 2 & 14/0 & 0.55 / 0.45 \\ \#1502 Skin-segmentation & 245057 & 2 & 3/0 & 0.21 / 0.79 \\ \#1590 Adult & 48842 & 2 & 6/8 & 0.76 / 0.24 \\ \#40922 Run or walk information & 88588 & 2 & 6/0 & 0.5 / 0.5 \\ \#41138 APSFailure & 76000 & 2 & 170/0 & 0.98 / 0.02 \\ \#41162 Kick & 72983 & 2 & 14/18 & 0.88 / 0.12 \\ \#42395 Santander Customer Satisfaction & 200000 & 2 & 200/0 & 0.9 / 0.1 \\ \#42803 Road-safety & 363243 & 3 & 61/5 & 0.66 / 0.29 / 0.05 \\ \#43439 Medical-Appointment-No-Shows & 110527 & 2 & 8/4 & 0.8 / 0.2 \\ \#43551 Employee-Turnover-at-TECHCO & 34452 & 2 & 9/1 & 0.02 / 0.98 \\ MNIST & 70000 & 10 & 28x28 & 0.1 each \\ CIFAR10 & 60000 & 10 & 32x32x3 & 0.1 each \\ CIFAR100 & 60000 & 100 & 32x32x3 & **0.01 each** \\ \hline \hline \end{tabular} \end{table} Table 2: OpenAL datasets’ characteristics. For tabular data, features correspond ton continuous/categorical features. For images, the shape of one image is given. Figure 1: Results for dataset #42803: Accuracy (left) and Violation (right). Note that the violation metrics is at 0 on the first iteration because the dataset is too small to compute them. Figure 3: Agreements for datasets: #42803 (left) and #43439 (right). 
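As a concrete illustration of the violations metric used throughout these comparisons, the sketch below follows the description above (principal axes of the reference set, with conformance boundaries set from the standard deviation of the projected reference data). It is a minimal NumPy version; the exact normalization and threshold used in OpenAL and in [6] are assumptions here.

```python
import numpy as np

def violations(X_reference, X_labeled, n_std=3.0):
    """Average number of conformance-rule violations per labeled sample:
    project both sets onto the principal axes of the reference (test) set and
    count projections of labeled samples falling outside the boundaries."""
    mu = X_reference.mean(axis=0)
    # Principal axes via SVD of the centered reference data (its eigenvectors).
    _, _, vt = np.linalg.svd(X_reference - mu, full_matrices=False)
    proj_ref = (X_reference - mu) @ vt.T
    bounds = n_std * proj_ref.std(axis=0)      # conformance boundary per axis
    proj_lab = (X_labeled - mu) @ vt.T
    return float((np.abs(proj_lab) > bounds).sum(axis=1).mean())
```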
\begin{table} \begin{tabular}{r r r r r r r} \hline \hline Dataset & Random & KMeans & Confidence & Margin & KCenter & WKmeans \\ \hline 1471 & 68.6 \(\pm 0.8\) & 68.7 \(\pm 1.2\) & 69.7 \(\pm 1.0\) & 69.7 \(\pm 1.0\) & 67.4 \(\pm 0.6\) & **71.2**\(\pm 1.1\) \\ 41138 & 98.4 \(\pm 0.1\) & 98.4 \(\pm 0.1\) & 98.9 \(\pm 0.1\) & 98.9 \(\pm 0.1\) & 98.7 \(\pm 0.1\) & **99.0**\(\pm 0.1\) \\ 1502 & 98.7 \(\pm 0.4\) & 99.3 \(\pm 0.0\) & 99.2 \(\pm 0.2\) & 99.2 \(\pm 0.2\) & **99.5**\(\pm 0.1\) & **99.5**\(\pm 0.1\) \\ 1590 & 81.8 \(\pm 1.0\) & 80.7 \(\pm 0.8\) & 81.5 \(\pm 1.0\) & 81.5 \(\pm 1.0\) & **82.0**\(\pm 0.6\) & **82.9**\(\pm 0.6\) \\ 41162 & 85.0 \(\pm 0.7\) & 84.3 \(\pm 0.9\) & 85.7 \(\pm 0.9\) & 85.7 \(\pm 0.9\) & **86.7**\(\pm 0.5\) & **86.3**\(\pm 0.8\) \\ 43439 & 76.6 \(\pm 0.3\) & 76.1 \(\pm 0.6\) & 76.5 \(\pm 0.4\) & 76.5 \(\pm 0.4\) & **77.3**\(\pm 0.9\) & **77.0**\(\pm 0.5\) \\ 40922 & 96.6 \(\pm 0.4\) & 96.0 \(\pm 0.6\) & **97.7**\(\pm 0.5\) & **97.7**\(\pm 0.5\) & **97.3**\(\pm 0.1\) & **97.7**\(\pm 0.5\) \\ 42395 & 89.7 \(\pm 0.1\) & 89.6 \(\pm 0.1\) & **89.8**\(\pm 0.1\) & **89.8**\(\pm 0.1\) & **89.8**\(\pm 0.1\) & **89.8**\(\pm 0.1\) \\ 43551 & 97.2 \(\pm 0.5\) & 95.2 \(\pm 1.8\) & **97.0**\(\pm 0.8\) & **97.0**\(\pm 0.8\) & **97.5**\(\pm 0.7\) & **97.5**\(\pm 0.9\) \\ 40922 & 96.6 \(\pm 0.4\) & 96.0 \(\pm 0.6\) & **97.7**\(\pm 0.5\) & **97.7**\(\pm 0.5\) & **97.3**\(\pm 0.1\) & **97.7**\(\pm 0.5\) \\ 40922 & 96.6 \(\pm 0.4\) & 86.0 \(\pm 0.6\) & **97.7**\(\pm 0.5\) & **97.7**\(\pm 0.5\) & **97.3**\(\pm 0.1\) & **97.7**\(\pm 0.5\) \\ 40922 & 96.6 \(\pm 0.4\) & 96.0 \(\pm 0.6\) & **97.7**\(\pm 0.5\) & **97.7**\(\pm 0.5\) & **97.3**\(\pm 0.1\) & **97.7**\(\pm 0.5\) \\ 1461 & 88.8 \(\pm 0.3\) & 88.8 \(\pm 0.2\) & **89.4**\(\pm 0.1\) & **89.4**\(\pm 0.1\) & 88.8 \(\pm 0.3\) & **89.4**\(\pm 0.1\) \\ 41138 & 98.4 \(\pm 0.1\) & 98.4 \(\pm 0.1\) & **98.9**\(\pm 0.1\) & 98.9 \(\pm 0.1\) & 98.7 \(\pm 0.1\) & **99.0**\(\pm 0.1\) \\ 42803 & 76.1 \(\pm 0.4\) & 75.6 \(\pm 0.5\) & 74.4 \(\pm 1.2\) & **76.9**\(\pm 0.4\) & 76.2 \(\pm 0.3\) & **76.9**\(\pm 0.4\) \\ CIFAR-10 & 70.2 \(\pm 0.7\) & 70.4 \(\pm 0.4\) & 68.9 \(\pm 1.0\) & **71.2**\(\pm 0.3\) & 66.9 \(\pm 1.2\) & **71.6**\(\pm 0.4\) \\ CIFAR-10S & 85.0 \(\pm 0.8\) & 84.3 \(\pm 0.5\) & 84.8 \(\pm 0.8\) & **86.3**\(\pm 0.5\) & 85.9 \(\pm 0.5\) & **86.4**\(\pm 0.6\) \\ MNIST & 82.6 \(\pm 0.8\) & 82.5 \(\pm 0.5\) & 82.0 \(\pm 0.9\) & 85.4 \(\pm 0.5\) & 80.5 \(\pm 1.2\) & **87.1**\(\pm 0.4\) \\ \hline \hline \end{tabular} \end{table} Table 3: Benchmark results per dataset and sampling strategy. We show the average accuracy over 10 folds. Entropy (_resp._ IWKMeans) has been omitted because their results were close to Confidence (_resp._ WKMeans). Datasets are ordered to display patterns of dominance for samplers. Figure 2: Results for dataset CIFAR-10: Accuracy (left) and Violation (right). The importance of data representation.IWKMeans and WKMeans are overall the best methods, but we observe a subset of methods on which uncertainty-based Margin is on par with them and another one where exploration-based KCenter reaches the same accuracy as displayed in Table 3. Surprisingly, the agreement metric seems to be a good indicator of which method is on par with the best. When all samples have a similar agreement score, KCenter manages to reach the best accuracy. Conversely, when the agreement score of Margin is significantly below Random or Kcenter, Margin reaches the best performance. 
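The agreement metric referenced in this discussion is equally simple to reproduce. The sketch below assumes scikit-learn and a fitted inductive model exposing a `predict` method; the OpenAL implementation may differ in details such as the distance metric.

```python
from sklearn.neighbors import KNeighborsClassifier

def agreement(model, X_labeled, y_labeled, X_test):
    """Fraction of test samples on which the inductive model and a 1-NN
    classifier trained on the current labeled pool predict the same class."""
    knn = KNeighborsClassifier(n_neighbors=1).fit(X_labeled, y_labeled)
    return float((model.predict(X_test) == knn.predict(X_test)).mean())
```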
Figure 3 illustrates the case of dominance of Margin on #42803 and dominance of KCenter on #43439. We hypothesize that the quality of the learned representation is responsible for this effect. When the distance in the representation space is inconsistent with the labels, diversity becomes ineffective or even counter-productive.

**The peculiar case of #1502.** Task #1502 is peculiar since we considered it tabular, but its three features are an image's red, green, and blue channels. It is the only task where KMeans has better accuracy than random. More than that, we observe two regimes in this experiment. KMeans and KCenter-Greedy, two purely exploratory techniques, dominate the first two iterations of the experiment. After that, they plateau at a suboptimal accuracy, while uncertainty-based methods take the lead. We hypothesize that at iteration 1, the training set is too small for the model to produce meaningful uncertainty scores. WKMeans, which combines uncertainty and exploration, manages to take the best of both worlds. This unusual behavior could be correlated with the unique pattern shown in the violations metric, where KMeans minimizes this score while WKMeans keeps it increasing, as displayed in Figure 4. Unfortunately, we cannot draw a conclusion from a single task, and we hope that adding further tasks could help us reproduce this behavior and understand it better.

Figure 4: Results for dataset #1502: Accuracy (left) and Violation (right).

## 5 Limitations

We have limited ourselves to OpenML datasets and the most common image ones for this proof of concept. In the future, we plan to explore other data sources, such as Kaggle, and other modalities or tasks, such as text and object detection. We could also explore other models and settings, such as different batch sizes, to observe their effect on overall performance. Given the high computational cost of the benchmark, we have set aside very costly methods such as BADGE [3] or BatchBALD, but we plan to add them in the near future.

## 6 Conclusion

This first version of OpenAL proves the value of comparing methods on fixed predefined tasks. Although the global outcome that the more sophisticated methods dominate the others was expected, the systematic monitoring and analysis of the metrics helped dig into the results. We believe that our benchmark is a first step towards helping practitioners be more confident in their choice of samplers. By ensuring complete reproducibility of the results, we also allow strategy developers to test their methods against our references quickly. Thanks to the metrics, they can also understand faster why their method may fail on a particular dataset and why other methods perform better. For example, we have highlighted that diversity in active learning strategies is only as good as the data representation on which it relies. In the end, we hope this benchmark will accelerate research in active learning and facilitate its adoption in industrial contexts.
2308.13929
TeleFMG: A Wearable Force-Myography Device for Natural Teleoperation of Multi-finger Robotic Hands
Teleoperation enables a user to perform dangerous tasks (e.g., work in disaster zones or in chemical plants) from a remote location. Nevertheless, common approaches often provide cumbersome and unnatural usage. In this letter, we propose TeleFMG, an approach for teleoperation of a multi-finger robotic hand through natural motions of the user's hand. By using a low-cost wearable Force-Myography (FMG) device, musculoskeletal activities on the user's forearm are mapped to hand poses which, in turn, are mimicked by a robotic hand. The mapping is performed by a spatio-temporal data-based model based on the Temporal Convolutional Network. The model considers spatial positions of the sensors on the forearm along with temporal dependencies of the FMG signals. A set of experiments show the ability of a teleoperator to control a multi-finger hand through intuitive and natural finger motion. A robot is shown to successfully mimic the user's hand in object grasping and gestures. Furthermore, transfer to a new user is evaluated while showing that fine-tuning with a limited amount of new data significantly improves accuracy.
Alon Mizrahi, Avishai Sintov
2023-08-26T18:08:32Z
http://arxiv.org/abs/2308.13929v2
# TeleFMG: A Wearable Force-Myography Device for Natural Teleoperation of Multi-finger Robotic Hands ###### Abstract Teleoperation enables a user to perform tasks from a remote location. Hence, the user can interact with a long-distance environment through the operation of a robotic system. Often, teleoperation is required in order to perform dangerous tasks (e.g., work in disaster zones or in chemical plants) while keeping the user out of harm's way. Nevertheless, common approaches often provide cumbersome and unnatural usage. In this letter, we propose _TeleFMG_, an approach for teleoperation of a multi-finger robotic hand through natural motions of the user's hand. By using a low-cost wearable Force-Myography (FMG) device, musculoskeletal activities on the user's forearm are mapped to hand poses which, in turn, are mimicked by a robotic hand. The mapping is performed by a data-based model that considers spatial positions of the sensors on the forearm along with temporal dependencies of the FMG signals. A set of experiments show the ability of a teleoperator to control a multi-finger hand through intuitive and natural finger motion. Furthermore, transfer to new users is demonstrated. ## I Introduction The global outbreak of the COVID-19 presented a great challenge to medical personnel. Medical doctors and nurses were required to balance between the need to provide life-saving treatment to quarantined patients and the necessity to protect themselves from being infected [1]. Personal protective equipment (PPE) cannot fully protect staff and they require to replace them often while shortage is quite common. In addition to infection risks, PPE such as latex gloves and masks is highly polluting with long term environmental impact [2]. These challenges also exist in various other hazardous domains, such as disaster zones, chemical factories, space [3] and deep water [4], where human operators must complete various tasks. Robotic system are the ideal solution to form the required interaction between medical staff and patients with no risk of exposure, or replace human operators in dangerous work. There have already been many robots working to disinfect hospitals or deliver food and medicine [5]. Similarly, robots are used to inspect hazardous environments [6]. Also, telepresence robots are used for remote meetings with doctors and loved ones [7]. However, active participation of robots, where they fully interact with patients in a clinical setting (e.g., nursing, usage of medical instruments and physical examination) still lacks while they potentially can do much more. Similarly, robots in other hazardous environments are mainly used for inspection and carry equipment, and are limited in actual interaction with the environment. While fully autonomous robots able to manage complex tasks in healthcare and hazardous environments are still some way in the future, human experts must always be in the decision process and have some control. Hence, teleoperation of robots by expert human operators and, particularly, telemedicine (i.e., remote delivery of medical care) [8] by the medical staff are an ideal solution to put them out of harm's way. In active teleoperation, the state of the human arm and hand are mapped to motion of a robot [9]. Hence, a human operator takes control over robot functions, can drive it through the environment and move a robotic arm in order to perform tasks. A common and simple approach is the use of a game-pad to move the robot [10]. 
Haptic teleoperation is the use of a specialized joystick or a robotic arm to control another robot along with force feedback in order to simulate the forces that the robot is experiencing [11]. These solutions for teleoperations are usually expensive, non-intuitive, bulky and require much training. More natural Fig. 1: Teleoperation demonstrations of a multi-finger robotic hand using the TeleFMG device in (a) pointing gesture, (b) whole hand grasping of a bottle and (c) pinch grasping of a small cube. approaches directly observe the pose of the arm and hand through the use of visual perception or a sensor glove. With vision, an RGB-D camera observes the motion of the user, estimates arm and finger poses through the use of human pose estimation models and moves a robot accordingly [12, 13]. Relying on continuous visual perception limits the performance when visual uncertainty (e.g., poor lighting or shadows) or occlusion may occur. The use of haptic gloves is an alternative through direct measurement of hand poses [14, 15]. While the use of sensor gloves can provide accurate finger pose estimations, they tend to be bulky and expensive while limiting the tactile sensation of the user. Hence, the user must remove them in order to perform other tasks in between teleoperation sessions. A widely researched approach is to acquire and learn neurological activities through Electro-Myography (EMG) [16]. EMG detects electrical signals generated by muscle tissue. For instance, an EMG was used along with an Inertial Measurement Unit (IMU) in order to teleoperate a robotic arm and anthropomorphic five-finger hand [17]. Similarly, an EMG wrist band was proposed for control of a non-anthropomorphic robotic gripper [18]. A different work combined EMG with an haptic device to control a mobile robot [19]. EMG, however, usually requires extensive training along with expensive and highly sizable equipment. Also, EMG accuracy may be compromised by electrode placement, sweat and crosstalk [20]. Force-Myography (FMG), on the other hand, is an easier alternative to sense the state of a human arm [21]. FMG signals were shown to be simple to acquire with a relatively high-accuracy. Consequently, FMG was used in data-based classification of hand gestures [22, 23] and rehabilitation studies [24]. Comparative studies have shown that FMG is less sensitive to positioning variations, does not require direct contact with the skin, and significantly outperforms EMG's accuracy [25, 26]. Previous work by the authors have shown that FMG can be used to recognize objects grasped by the human hand [27] and can generalize to various new users [28]. With such information, a robot in a Human-Robot Collaboration (HRC) scenario can identify a grasped tool, infer about the intended task and act to assist. In order to be appealing, teleoperation and interaction with a robotic system must be natural, intuitive and ergonomic. Wearable FMG offers such qualities with easy-to-use and low cost hardware. While FMG has been previously used in a wide variety of classification tasks, it was never explored in the context of finger pose estimation for teleoperation. In this work, we explore the ability of FMG to accurately map FMG signals to the physical state of the human hand, i.e., estimate finger joint angles. We introduce the _TeleFMG_ system. TeleFMG is a wearable FMG device worn on the user's forearm, similar to the one proposed in [27], and used alongside with a data-based model in order to estimate the pose of the human fingers. 
With the benchmarking of various neural-network models, we observe the required data for accurate finger pose estimation and teleoperation of a robotic system. We investigate the accuracy of a model trained with data collected from one user and the effort required to transfer it to new users. In addition to accuracy evaluation, we are also interested in the ability of the model to successfully transfer tasks from the user to the robotic hand. Such tasks include hand gestures, whole hand grasping and pinching (Figure 1). With such capability, a user can remotely control a multi-finger hand. While not in the scope of this work, the motion of a robotic arm equipped with the hand can be performed by including an IMU on the wearable system in order to track and approximate motion [29]. Nevertheless, the operation of a dexterous robotic hand with a multitude of degrees of freedom is more complex and is the focus of our work. ## II Method ### _FMG wearable device_ Previous work by the authors has proposed a low-cost wearable FMG device in the context of HRC [27]. Based on the design, an advanced prototype is developed and fabricated. The device, seen in Figure 2, is composed of 28 Force-Sensitive Resistors (FSR), short-tail model FSR-400 by Interlink Electronics. FSR sensors are composed of thin sheets of polymer that alter their electrical resistance in response to the amount of pressure applied to their surface. The sensors are arranged on two bands, upper and lower forearm bands, each having 14 FSR sensors equally spaced Fig. 2: The TeleFMG system including two bands with 14 FSR senors each, and the sensor glove for labeling FMG signals with hand poses in the data collection phase. in two rows. The bands are fabricated by 3D printing with an elastic polymer (Thermoplastic elastomer). Hence, they are flexible, light-weight and allow unrestricted movement of the entire arm. To ensure compliant skin press on the FSR senors, each sensor is covered by a push-button mechanism. The button includes an inner bulge that presses on the FSR even if the surface of the button is not parallel to the FSR. This enables adaptation on the uneven surface of the user's forearm with continuous contact. All FSR sensors are connected to a Teensy 4.1 micro-controller. Since the Teensy has only 18 analog channels, each two sensors of the 28 ones are connected to the same analog input in the Teensy through a voltage divider of \(4.7k\Omega\) resistor. Using a transistor-based (mini MOSFET) switching system, the Teensy is able cyclically sample different sets of sensors in a frequency of up to 40 Hz and transfer to a computing unit via cable. While not utilized in this work, a Bluetooth component was included and can handle wireless transfer of data in real-time. ### _Sensor Glove_ The wearable device measures FMG signals from the forearm in order to map them to finger poses. A labeling system is required for recording the state of the hand. Hence, a hand labeling device based on a glove was developed. The device is composed of a cloth glove with five 2.2" SEN-10264 ROHS flex sensors and five potentiometers model PT15NH05-103A2020-S by Amphenol Piher. The flex sensors bridge the back of the hand and fingers over the knuckles, and measure the angle \(\theta_{\text{MCP},i}\) of the Metacarpophamageal (MCP) joints where \(i=\{1,\ldots,5\}\) is the index of the finger. Similarly, the potentiometers measure the angle \(\theta_{\text{PP},i}\) of the Proximal Interphalangeal (PIP) joints. 
While the potentiometers straight-forwardly provide joint angle, the flex sensors were calibrated to map deflection to angles. In total, the sensors measure ten finger angles on the hand. In this work, we do not consider abduction and adduction motions of the fingers. The sensors are connected to the glove using 3D-printed flanges and stitches. During data collection, the labeling system is connected to a main computer along with the FMG device to allow synchronous stream of data. ### _Problem Formulation_ With the above wearable and labeling hardware, we aim to map FMG sensing to finger pose of the human hand. Let \(\mathbf{x}\in\mathbb{R}^{28}\) be the observable state of the musculoskeletal system measured by the \(28\) FSR sensors on the FMG device in contact with the forearm. Similarly, the state of the hand \(\mathbf{q}\in\mathbb{R}^{10}\) is the set of \(10\) finger joint angles where \(\mathbf{q}=\{\theta_{\text{MCP},1},\theta_{\text{PP},1},\ldots,\theta_{\text {MCP},5},\theta_{\text{HP},5}\}\). The angles are zero when the fingers are fully extended. We search for a model \(f\) which maps FMG signals to the pose of the hand. Since an analytical model for such map cannot be acquired, we search for a robust and multi-user data-based model. With such a model, a robotic hand can mimic the motion of the human hand in real-time. ### _Data Collection_ Training data is collected by recording FMG states through the wearable device and synchronously labeling them with hand states using the sensor glove. With a stream rate of 33 Hz, each FMG sample \(\mathbf{x}_{i}\) is recorded along with its corresponding hand state \(\mathbf{q}_{i}\). Data is collected on a single participant in \(n\) sessions. Before each session, the device is taken off of the forearm and re-positioned in order to collect data with positional uncertainty. In the beginning of the session and right after the FMG device was positioned on the forearm, a set of samples was taken while the participant relaxed arm muscles. The mean vector \(\mathbf{x}_{o}\) of these samples is considered as the session baseline and is subtracted from any sample recorded in the session, i.e., \(\tilde{\mathbf{x}}_{i}=\mathbf{x}_{i}-\mathbf{x}_{o}\). Such subtraction compensates for non-equal and non-uniform tightening of the device between sessions. During a session, which included \(m\) recorded samples, the participant conducted various random motions of the fingers. Since the musculoskeletal system can vary with the same finger pose but with motion of the arm and wrist, data is collected while also randomly manipulating the wrist and arm in the workspace. Sessions also included task performing such as gripping of various objects and common gestures. The resulting training data is a set of \(N=mn\) labeled FMG measurements \(\mathcal{P}=\{(\tilde{\mathbf{x}}_{1},\mathbf{q}_{1}),\ldots,(\tilde{\mathbf{ x}}_{N},\mathbf{q}_{N})\}\). A similar dataset was collected for testing trained models in independent collection sessions. ### _Data-based Model_ The presented problem requires supervised learning over dataset \(\mathcal{P}\) by means of regression. One may train a Fully-Connected Neural-Network (FC-NN) to directly map a single FMG signal \(\tilde{\mathbf{x}}_{i}\) to the corresponding pose \(\mathbf{q}_{i}\) of the hand. However, it is hypothesized that temporal sequence reading of FMG signals would provide more accurate pose estimations. 
Let \(\mathcal{S}_{H}\subset\mathbb{R}^{28}\times\ldots\times\mathbb{R}^{28}\) be the product space of the observable FMG space over \(H\) sequential measurements. In a pre-processing step, dataset \(\mathcal{P}\) is modified to include sequences of \(H\) FMG signals. That is, a temporal FMG sequence of length \(H\) at time \(t\) \[S_{t}=\{\tilde{\mathbf{x}}_{t-H},\ldots,\tilde{\mathbf{x}}_{t-1},\tilde{ \mathbf{x}}_{t}\}\in\mathcal{S}_{H} \tag{1}\] is extracted from \(\mathcal{P}\) and labeled with the corresponding hand pose \(\mathbf{q}_{t}\). Hence, a new dataset \(\mathcal{P}^{\prime}=\{(S_{1},\mathbf{q}_{1}),\ldots,(S_{N-H},\mathbf{q}_{N-H})\}\) is used to train a temporal-based model. We hypothesise that the system is govern by a map \(f:\mathcal{S}_{H}\rightarrow\mathbb{R}^{10}\). Hence, temporal measurements from the FMG device can be mapped to the state of the hand at time \(t\) through \[\mathbf{q}_{t}=f(S_{t}). \tag{2}\] Training a sequential model for (2) can be done using the Long Short-Term Memory (LSTM). LSTM is a class of Recurrent Neural-Networks (RNN) aimed to learn sequential data [30]. LSTM is able to selectively retain or discard information from previous time steps making it well-suited for long-term dependencies. However, LSTM models can be computationally expensive and slow to train, particularly for longer sequences [31]. Also, LSTM cannot preserve the relative positions between sensors on the FMG device. Convolutional Neural-Networks (CNN) are commonly used to learn image data and any tabular data. Hence, it is possible to formulate an array where the components are organized in the formation of the FSR sensors on the FMG device. A single channel array is then fed into the convolutional layers of the CNN. Nevertheless, such form cannot take temporal dependencies into account. In other words, it would be beneficial for a model to observe the spatial and temporal relations in the FMG signals, i.e., we require a spatio-temporal model [32]. Each FMG signal vector \(\tilde{\mathbf{x}}_{t-i}\in S_{t}\) is reshaped to a matrix \(U_{t-i}\) of size \(4\times 7\). In such form, a component \(u_{t-i}^{(a,b)}\) in \(U_{t-i}\) denotes the FMG signal in row \(a\) and column \(b\) of the FMG device. Hence, matrix \(U_{t-i}\) provides a spatial representation of the FMG measurement on the forearm. Consequently, a reformulated sequence \[S_{t}=\{U_{t-H},\ldots,U_{t-1},U_{t}\} \tag{3}\] is a spatio-temporal representation of the data. To formulate a spatio-temporal model, we use a Temporal Convolutional Network (TCN) [33]. TCN is a type of NN that uses convolutional layers to process sequential data. The convolutional layers extract significant features from the data. By using dilated convolutions, the TCN is able to capture long-term dependencies in a computationally efficient manner, making it a popular and efficient choice for temporal prediction tasks. Our proposed TCN architecture is illustrated in Figure 3. Each matrix \(U_{t-H}\in S_{t}\) undergoes three convolutional layers to extract meaningful features. Each convolution layer includes a ReLU activation function, batch normalization and a skip connection. After passing through the convolutional layers, the \(H\) outputted matrices of size \(7\times 7\) are flattened and treated as a sequence of \(H\) vectors. These vectors are then fed into three residual blocks which employ 1D dilated convolutions. 
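For concreteness, a minimal PyTorch-style sketch of such a spatio-temporal network is given below; it is an illustration of the idea, not the trained TeleFMG model. Channel widths, the kernel size of the temporal convolutions and the use of the last time step for regression are assumptions, and the skip connections of the proposed architecture are omitted for brevity (the exact kernel size and dilation rates are given in the text that follows).

```python
import torch
import torch.nn as nn

class SpatioTemporalFMGNet(nn.Module):
    """Sketch: 2D convolutions over each 4x7 FMG frame, then dilated 1D
    convolutions over the H frames, regressing the 10 finger joint angles."""

    def __init__(self, n_joints=10, feat_dim=64):
        super().__init__()
        # Spatial encoder applied frame-wise (channel widths are assumptions).
        self.spatial = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.BatchNorm2d(16),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.BatchNorm2d(32),
            nn.Conv2d(32, 1, kernel_size=3, padding=1), nn.ReLU(),
        )
        frame_feat = 4 * 7  # flattened per-frame feature map (padding preserves the 4x7 shape here)
        # Temporal encoder: dilated 1D convolutions over the sequence (d = 1, 2, 4).
        self.temporal = nn.Sequential(
            nn.Conv1d(frame_feat, feat_dim, kernel_size=3, dilation=1, padding=1), nn.ReLU(),
            nn.Conv1d(feat_dim, feat_dim, kernel_size=3, dilation=2, padding=2), nn.ReLU(),
            nn.Conv1d(feat_dim, feat_dim, kernel_size=3, dilation=4, padding=4), nn.ReLU(),
        )
        self.head = nn.Linear(feat_dim, n_joints)

    def forward(self, seq):                              # seq: (batch, H, 4, 7)
        b, h, rows, cols = seq.shape
        frames = seq.reshape(b * h, 1, rows, cols)       # one single-channel image per frame
        feats = self.spatial(frames).reshape(b, h, -1)   # (batch, H, 28)
        feats = self.temporal(feats.permute(0, 2, 1))    # (batch, feat_dim, H)
        return self.head(feats[:, :, -1])                # estimate q_t from the last time step
```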
A dilated convolutional layer allows a kernel of size \(k\) to observe a wider area of the input without increasing its size, enabling it to analyze changes in the features over time. This is done by skipping \(d-1\) elements between the kernel elements, where \(d\) is the dilation rate. The kernels of all three residual blocks have size \(k=10\), while the dilation rate increases as \(d=1,2,4\) across the blocks. The proposed model was trained with \(\mathcal{P}^{\prime}\) while adding Gaussian noise for robustness.

Fig. 3: An illustration of the TCN model which acquires spatio-temporal FMG signals and maps them to the pose of the human hand. The pose is mimicked by a robotic hand.

### _Real-Time Operation_

Given the trained TCN model, estimated poses \(\mathbf{q}_{t}\) of the human hand are to be mimicked by a multi-finger robotic hand in a teleoperation setup. Anthropomorphic robot hands usually have four fingers (e.g., the Allegro hand) or five fingers (e.g., the Shadow [34] and DLR [35] hands). In the case of a five-finger robotic hand, the finger joint angles are directly mapped to the joints of the robotic hand. When considering a four-finger robotic hand, the joint angle estimations of the little (pinky) finger are disregarded, as this only minimally degrades the functionality of the hand. For each finger of the user's hand, the MCP and PIP angles are estimated from FMG signals as described above. Then, they are directly mapped to the corresponding joints of the robotic hand. However, the Distal Interphalangeal (DIP) joints at the tips of the user's fingers are not measured. Yet, the human hand is known to have coupled movements termed _synergies_ [36]. A common representation of the synergy between the PIP and DIP of each finger \(j\) is \(\theta_{\text{DIP},j}=\frac{2}{3}\theta_{\text{PIP},j}\) [9, 37]. This ratio was used in our implementation to determine the DIP angles of the robotic hand based on the estimated PIP angles of the user's hand.

## III Experiments and Results

In this section, we test and analyze the accuracy of a trained model and the ability of a user to teleoperate a robotic hand through FMG. Videos of data collection and experiments can be seen in the supplementary material.

### _Database_

Dataset \(\mathcal{P}\) was collected from one participant as described in Section II-D using the FMG device and labeling glove. The collection included \(n=15\) sessions with \(m=25,530\) recorded samples each, yielding a total of \(N=382,950\) training samples. The FMG device was taken off between sessions and reapplied in slightly altered locations. In addition, a test set of \(105,000\) samples was recorded in \(5\) separate sessions not included in the training. Snapshots from a collection session are seen in Figure 4.

### _Model Evaluation_

We analyzed the performance of various deep learning models trained on dataset \(\mathcal{P}\). The use of the TCN as discussed in Section II-E is benchmarked against other models including: an FC-NN outputting ten joint angles; five FC-NNs (5\(\times\)FC-NN), one for each finger; an LSTM; and a CNN. For the FC-NN and 5\(\times\)FC-NN, the hand pose is estimated solely based on an instantaneous FMG measurement, i.e., \(\mathbf{q}_{t}=f(\tilde{\mathbf{x}}_{t})\). Similarly, the CNN considers instantaneous FMG measurements formulated as arrays based on the sensor locations on the device. On the other hand, the LSTM can consider temporal sequences of FMG measurements as in (1). Hence, a sequence \(S_{t}\) of flattened FMG signals at time \(t\) is directly fed into the model to predict \(\mathbf{q}_{t}\). For the TCN, the data was modified to formulate matrices as in (3) prior to training and testing. Hyper-parameters of all models were optimized to provide the best results.
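For the temporal models, the pre-processing that turns \(\mathcal{P}\) into sequences as in (1) and into stacks of \(4\times 7\) matrices as in (3) can be sketched as follows. This is a minimal NumPy illustration with hypothetical names; the row-major reshape is an assumption about how the 28 sensors map onto the \(4\times 7\) device grid.

```python
import numpy as np

def make_sequences(fmg: np.ndarray, poses: np.ndarray, H: int):
    """Build (S_t, q_t) pairs as in (1): each S_t collects the last H+1 baseline-subtracted samples."""
    # fmg: (N, 28) baseline-subtracted FMG samples, poses: (N, 10) joint angles
    S, Q = [], []
    for t in range(H, len(fmg)):
        S.append(fmg[t - H:t + 1])      # window {x_tilde_{t-H}, ..., x_tilde_t}
        Q.append(poses[t])
    return np.stack(S), np.stack(Q)     # shapes (N - H, H + 1, 28) and (N - H, 10)

def to_spatial(seq_batch: np.ndarray) -> np.ndarray:
    """Reshape each 28-vector into a 4x7 matrix U as in (3), following the sensor grid."""
    n, L, _ = seq_batch.shape
    return seq_batch.reshape(n, L, 4, 7)

# Example: build the TCN training set P' from a dataset P.
# X, Q = make_sequences(fmg_tilde, hand_poses, H=16)
# X_spatial = to_spatial(X)             # fed to the spatio-temporal TCN
```

In practice one would also avoid windows that span the boundary between two collection sessions, since the baseline changes when the device is re-positioned.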
Table I summarizes the mean angle error over the test data for all five models. First, momentary observation of FMG signals with the FC-NN exhibits poor results. Solely re-structuring the data to learn spatial dependencies with a CNN does not provide additional accuracy. Similarly, only observing temporal dependencies in the data with an LSTM does not provide an accuracy improvement. By adding a spatial representation of the FMG data to the temporal sequencing, the TCN exhibits superior results. Table II presents the mean errors when individually estimating each finger joint using the TCN model. The larger errors originate from the thumb, where the Carpometacarpal (CMC) joint is not modeled and introduces some uncertainty. Figure 5 shows an example of real-time estimation of finger motions using the FMG device. The results show the ability to estimate hand poses based on FMG signals.

Fig. 4: Snapshots of data collection using the FMG device and the labeling glove in various hand poses.

Fig. 5: Example of real-time estimation of finger joint angles based on FMG for (a) thumb, (b) index finger, (c) middle finger and (d) ring finger.

### _Teleoperation evaluation_

With the trained model, we wish to evaluate TeleFMG on a robotic hand. Hence, we experiment with the four-finger Allegro hand by Wonik Robotics. The Allegro is a fully-actuated hand comprising 16 actuators, four in each finger. Three actuators on each finger control the MCP, PIP and DIP joints, while the fourth actuates abduction and adduction motions. The latter is not used in this work and is manually set to a constant value. TeleFMG is first tested for teleoperation of several hand gestures including: open hand, closed hand, pointing with the index finger, thumbs-up, and a two-finger V-sign with the index and middle fingers. Then, we test the teleoperation for performing five tasks of interaction with objects including: whole-hand grasp of a ball and a bottle, grasp of a thin elongated object such as a screwdriver and a brush handle, pinch grasping of small objects such as an ATM card, a cube and a rubber duck, and lifting bags with four fingers. We first test the success rate of mimicking the gestures and actions of the participant who contributed the training data. Table III presents the success rate out of 20 attempts. Overall, all tasks were performed with a high success rate. Tasks that involve the thumb, such as thumbs-up and pinch grasping, failed slightly more often due to larger inaccuracies of the thumb as discussed previously. This can be addressed by modeling the CMC joint in future work. Teleoperation snapshots of real-time whole-hand grasping of a bottle and pinch grasping of an ATM card can be seen in Figures 6 and 7, respectively. Similarly, Figure 8 shows teleoperation demonstrations of various gestures and object grasping. We note that some lag between user and robot motions occurs due to communication limitations of the hardware.

Fig. 6: Snapshots of the 4-finger Allegro hand grasping a bottle using TeleFMG.

Fig. 7: Snapshots of the 4-finger Allegro hand pinch grasping an ATM card using TeleFMG.

Fig. 8: Snapshots of the 4-finger Allegro hand mimicking the motion of the user through TeleFMG in (top row) gestures and (bottom row) grasping various objects including (left to right) rubber duck, bag, ball and screwdriver.
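For concreteness, the joint retargeting used to command the Allegro hand in these experiments follows the rule described in the real-time operation subsection: the estimated MCP and PIP angles are mapped directly, the pinky estimates are dropped for the four-finger hand, the DIP angles are set to \(\frac{2}{3}\) of the PIP estimates, and the abduction/adduction actuators are held constant. The sketch below illustrates this mapping; the ordering of \(\mathbf{q}\) and of the robot's joint command vector is an assumption made for illustration only.

```python
import numpy as np

PINKY = 4  # index of the little finger (thumb=0, index=1, middle=2, ring=3, pinky=4)

def retarget_to_four_finger_hand(q: np.ndarray) -> np.ndarray:
    """Map the 10 estimated angles q = [MCP_1, PIP_1, ..., MCP_5, PIP_5] to a
    command of [MCP, PIP, DIP, abduction] per remaining finger."""
    mcp = q[0::2]                      # MCP angles of the five fingers
    pip = q[1::2]                      # PIP angles of the five fingers
    cmd = []
    for j in range(5):
        if j == PINKY:                 # the pinky is disregarded on a four-finger hand
            continue
        dip = (2.0 / 3.0) * pip[j]     # DIP synergy: theta_DIP = 2/3 * theta_PIP
        abd = 0.0                      # abduction/adduction actuator held constant
        cmd.extend([mcp[j], pip[j], dip, abd])
    return np.array(cmd)               # 16 joint commands for the four-finger hand

# q_hat = model(S_t) from the TCN; the command is then sent to the hand at the stream rate:
# hand_command = retarget_to_four_finger_hand(q_hat)
```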
### _TeleFMG for new users_

The TeleFMG evaluated above was trained and tested on a single participant. We now wish to test the ability of the trained model to generalize to novel users not included in the training. Four new participants with a variety of forearm dimensions tested the teleoperation. Table IV provides a list of anthropometric measures (i.e., forearm length (FL), lower forearm circumference (LFC) and upper forearm circumference (UFC)) for the four participants and the total success rate for performing the tasks listed in Table III. While relatively low, the results show an ability to transfer the model to different users. Note that the device did not fit well on User 3 due to a smaller forearm circumference, resulting in a lower success rate. To improve generalization abilities, one can collect training data from various users as in [28]. Another approach, tested here, is to fine-tune the TCN model with a limited amount of training data from the new user. Given a user, we fine-tune the model with a learning rate of \(10^{-5}\) for \(8\) epochs on data collected from the user using the sensor glove. Figure 9 shows success rate results after fine-tuning the model for users 1 and 2 with respect to the number of samples collected from the user. Results show that with new data of up to 5% of the size of the original training data, the model reached success rates of 91% and 86% for users 1 and 2, respectively. Such data collection for fine-tuning takes approximately 10 minutes. Figure 10 shows a demonstration of User 1 performing a pointing gesture after the fine-tuning of the model.

Fig. 9: Teleoperation total task success rate in model fine-tuning for new users 1 and 2 with respect to the number of new samples collected from the users.

Fig. 10: Demonstration of User 2 teleoperating a pointing gesture after fine-tuning the model.

### _Feature Importance_

We now explore the importance of the FSR sensors on joint angle prediction accuracy. Permutation feature importance is a common method to evaluate the impact of each feature in a model [38]. We measure the increase in the prediction error after permuting the values of each single sensor in the test data separately. The score is the error increase resulting from the permutation of a sensor's values and is computed according to

\[E_{i}=\frac{e_{i}-e}{e}\times 100\%, \tag{4}\]

where \(e\) is the mean error of the non-permuted model and \(e_{i}\) is the mean error when feature \(i\) is permuted. Sensor placements and the sensor importance heatmap are illustrated in Figure 11. Table V provides the numeric values of the feature importance. Similar to previous results in [28], the relative errors indicate a high dependence on the lower forearm sensors. Nevertheless, most sensors along the arm contribute to pose estimation accuracy.

Fig. 11: Illustration of the sensor locations and feature importance scores of the FSR sensors on the FMG device.
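The permutation procedure behind (4) can be sketched as follows. It assumes a trained model, a test set of FMG sequences with ground-truth poses, and a mean-angle-error function, all with hypothetical names; the row-major mapping from sensor index to grid position is likewise an assumption.

```python
import numpy as np

def permutation_importance(model, X_test, Q_test, error_fn, rng=np.random.default_rng(0)):
    """Relative error increase E_i (in %) when the values of sensor i are permuted, as in (4)."""
    # X_test: (n, H, 4, 7) FMG sequences; Q_test: (n, 10) ground-truth joint angles
    e = error_fn(model(X_test), Q_test)          # mean error of the non-permuted model
    scores = np.zeros(28)
    for i in range(28):
        a, b = divmod(i, 7)                      # row and column of sensor i (row-major assumption)
        perm = rng.permutation(len(X_test))      # shuffle sensor i across test samples
        X_perm = X_test.copy()
        X_perm[:, :, a, b] = X_test[perm][:, :, a, b]
        e_i = error_fn(model(X_perm), Q_test)    # mean error when feature i is permuted
        scores[i] = (e_i - e) / e * 100.0        # E_i from (4)
    return scores
```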
## IV Conclusions

In this paper, we have shown the ability of Force-Myography (FMG) on the forearm to estimate the pose of the human hand. Hence, the proposed TeleFMG enables teleoperation of robotic hands through natural motions of the human hand. In TeleFMG, a wearable FMG device is used to measure musculoskeletal activities on the forearm and map them, using a data-based model, to corresponding poses of the hand. It has been shown that a data-based model that maintains the relative spatial positions of the sensors on the forearm, along with consideration of temporal dependencies, provides the best accuracy. Furthermore, a set of teleoperation experiments shows the ability to naturally command a multi-finger robotic hand to mimic gestures and grasping tasks. Future work to advance TeleFMG may include an IMU for telemanipulation of the entire arm in addition to the hand. Furthermore, the addition of haptic actuators on the device can provide tactile sensation to the user upon contact of a robotic finger and applied forces. For a complete system, virtual reality goggles can be included for a sense of presence.